diff --git a/v1.26/ceake/PRODUCT.yaml b/v1.26/ceake/PRODUCT.yaml
new file mode 100644
index 0000000000..7be89c5d4e
--- /dev/null
+++ b/v1.26/ceake/PRODUCT.yaml
@@ -0,0 +1,8 @@
+vendor: Cecloud
+name: CeaKE
+version: 5.0.0
+website_url: https://cecloud.com/product/7037705179667369984.html
+documentation_url: https://cecloud.com/product/7037705179667369984.html
+type: Distribution
+description: 'CeaKE is dedicated to building a software infrastructure layer to enable the digital transformation of the cloud era.'
+contact_email_address: zhuyue@cestc.cn
\ No newline at end of file
diff --git a/v1.26/ceake/README.md b/v1.26/ceake/README.md
new file mode 100644
index 0000000000..5553fea793
--- /dev/null
+++ b/v1.26/ceake/README.md
@@ -0,0 +1,64 @@
+# Conformance testing for CeaKE
+
+## Create a cluster
+
+1. Prepare the nodes
+- CeaKE uses three types of nodes: bootstrap nodes, master nodes, and worker nodes. Bootstrap nodes drive cluster deployment and bring the cluster up to a working state. Master nodes serve as the cluster's control plane, while worker nodes run the cluster's workloads.
+- Once node roles are planned, install an operating system supported by CeaKE, such as Kylin OS, on every node.
+
+2. Configure the cluster
+- After the operating system is installed, SSH into the bootstrap node and execute `tar -zxvf Ceake_install*` to unpack the CeaKE deployment package.
+- On the bootstrap node, create a CeaKE configuration file named `config.json` and place it in the `cluster-installer/deploy` folder. The file looks like the following:
+```
+  {
+    "baseDomain": "ceake.kylin.cn",
+    "clusterName": "test0928",
+    "apiAddress": "192.168.0.101",
+    "ingressAddress": "192.168.0.102",
+    "serviceNetworkCIRD": "172.16.0.0/16",
+    "clusterNetworkCIRD": "172.17.0.0/16",
+    "clusterNode": {
+      "rootPassword": "Cestc_1!",
+      "masters": [
+        "192.168.0.6",
+        "192.168.0.7",
+        "192.168.0.8"
+      ],
+      "workers": [
+        "192.168.0.9",
+        "192.168.0.10"
+      ]
+    },
+    "ntp": [
+      "120.25.115.20"
+    ]
+  }
+```
+
+3. Deploy the cluster
+- With the deployment package unpacked and `config.json` in place, start the deployment by executing the script located in the `cluster-installer/deploy` directory:
+```
+sh bootstrap_init.sh kylin_v10sp2
+```
+- Wait for the installation and deployment to complete.
+
+## Run conformance tests
+
+1. Deploy a Sonobuoy pod to the CeaKE cluster with:
+
+```
+sonobuoy run --mode=certified-conformance --dns-namespace=ccos-kni-infra --dns-pod-labels app=kni-infra-mdns
+```
+
+2. Check the status of the conformance run (a scripted wait is sketched after this step):
+
+```
+sonobuoy status
+```
+
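+If you prefer to script the wait rather than re-running `sonobuoy status` by hand, a minimal sketch is shown below. It is not part of the CeaKE tooling: it assumes the run was started as above and that the `sonobuoy status` output contains the word `complete` once the e2e plugin has finished.
+
+```
+#!/usr/bin/env bash
+# Minimal polling loop (assumption: `sonobuoy status` output contains
+# "complete" once the conformance run has finished).
+until sonobuoy status 2>/dev/null | grep -q complete; do
+  echo "$(date): conformance run still in progress..."
+  sleep 120
+done
+echo "Conformance run finished; retrieve the results as shown in the next step."
+```
+
+3. The full run can take an hour or more. 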
Once conformance testing is completed, run: + +``` +sonobuoy retrieve +sonobuoy delete +``` \ No newline at end of file diff --git a/v1.26/ceake/e2e.log b/v1.26/ceake/e2e.log new file mode 100644 index 0000000000..c4fe6ace77 --- /dev/null +++ b/v1.26/ceake/e2e.log @@ -0,0 +1,38263 @@ +I1013 08:13:31.362274 23 e2e.go:126] Starting e2e run "bac244cc-4119-4800-a1cc-8eb31f68e1cb" on Ginkgo node 1 +Oct 13 08:13:31.384: INFO: Enabling in-tree volume drivers +Running Suite: Kubernetes e2e suite - /usr/local/bin +==================================================== +Random Seed: 1697184810 - will randomize all specs + +Will run 368 of 7069 specs +------------------------------ +[SynchronizedBeforeSuite] +test/e2e/e2e.go:77 +[SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 +Oct 13 08:13:31.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:13:31.552: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Oct 13 08:13:31.571: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Oct 13 08:13:31.593: INFO: 23 / 23 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Oct 13 08:13:31.593: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. +Oct 13 08:13:31.593: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Oct 13 08:13:31.596: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Oct 13 08:13:31.596: INFO: e2e test version: v1.26.5 +Oct 13 08:13:31.597: INFO: kube-apiserver version: v1.26.5 +[SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 +Oct 13 08:13:31.597: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:13:31.600: INFO: Cluster IP family: ipv4 +------------------------------ +[SynchronizedBeforeSuite] PASSED [0.050 seconds] +[SynchronizedBeforeSuite] +test/e2e/e2e.go:77 + + Begin Captured GinkgoWriter Output >> + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 + Oct 13 08:13:31.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:13:31.552: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable + Oct 13 08:13:31.571: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready + Oct 13 08:13:31.593: INFO: 23 / 23 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) + Oct 13 08:13:31.593: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
+ Oct 13 08:13:31.593: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start + Oct 13 08:13:31.596: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) + Oct 13 08:13:31.596: INFO: e2e test version: v1.26.5 + Oct 13 08:13:31.597: INFO: kube-apiserver version: v1.26.5 + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 + Oct 13 08:13:31.597: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:13:31.600: INFO: Cluster IP family: ipv4 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:13:31.631 +Oct 13 08:13:31.631: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 08:13:31.632 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:31.646 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:31.648 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-7971 10/13/23 08:13:31.651 +[It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 +STEP: Creating statefulset ss in namespace statefulset-7971 10/13/23 08:13:31.659 +Oct 13 08:13:31.667: INFO: Found 0 stateful pods, waiting for 1 +Oct 13 08:13:41.674: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label 10/13/23 08:13:41.683 +STEP: Getting /status 10/13/23 08:13:41.699 +Oct 13 08:13:41.706: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status 10/13/23 08:13:41.706 +Oct 13 08:13:41.717: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated 10/13/23 08:13:41.717 +Oct 13 08:13:41.720: INFO: Observed &StatefulSet event: ADDED +Oct 13 08:13:41.720: INFO: Found Statefulset ss in namespace statefulset-7971 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 13 08:13:41.720: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status 10/13/23 08:13:41.72 +Oct 13 08:13:41.720: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 13 08:13:41.729: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched 10/13/23 08:13:41.729 +Oct 13 08:13:41.731: INFO: Observed &StatefulSet event: ADDED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + 
test/e2e/apps/statefulset.go:124 +Oct 13 08:13:41.731: INFO: Deleting all statefulset in ns statefulset-7971 +Oct 13 08:13:41.735: INFO: Scaling statefulset ss to 0 +Oct 13 08:13:51.754: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 08:13:51.758: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:13:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-7971" for this suite. 10/13/23 08:13:51.774 +------------------------------ +• [SLOW TEST] [20.149 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:13:31.631 + Oct 13 08:13:31.631: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 08:13:31.632 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:31.646 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:31.648 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-7971 10/13/23 08:13:31.651 + [It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:977 + STEP: Creating statefulset ss in namespace statefulset-7971 10/13/23 08:13:31.659 + Oct 13 08:13:31.667: INFO: Found 0 stateful pods, waiting for 1 + Oct 13 08:13:41.674: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Patch Statefulset to include a label 10/13/23 08:13:41.683 + STEP: Getting /status 10/13/23 08:13:41.699 + Oct 13 08:13:41.706: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) + STEP: updating the StatefulSet Status 10/13/23 08:13:41.706 + Oct 13 08:13:41.717: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the statefulset status to be updated 10/13/23 08:13:41.717 + Oct 13 08:13:41.720: INFO: Observed &StatefulSet event: ADDED + Oct 13 08:13:41.720: INFO: Found Statefulset ss in namespace statefulset-7971 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Oct 13 08:13:41.720: INFO: Statefulset ss has an updated status + STEP: patching the Statefulset Status 10/13/23 08:13:41.72 + Oct 13 08:13:41.720: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Oct 13 08:13:41.729: INFO: Patched status conditions: 
[]v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Statefulset status to be patched 10/13/23 08:13:41.729 + Oct 13 08:13:41.731: INFO: Observed &StatefulSet event: ADDED + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 08:13:41.731: INFO: Deleting all statefulset in ns statefulset-7971 + Oct 13 08:13:41.735: INFO: Scaling statefulset ss to 0 + Oct 13 08:13:51.754: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 08:13:51.758: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:13:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-7971" for this suite. 10/13/23 08:13:51.774 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:13:51.781 +Oct 13 08:13:51.781: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:13:51.782 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:51.798 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:51.802 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +STEP: Creating a pod to test emptydir 0666 on node default medium 10/13/23 08:13:51.805 +Oct 13 08:13:51.813: INFO: Waiting up to 5m0s for pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad" in namespace "emptydir-7294" to be "Succeeded or Failed" +Oct 13 08:13:51.816: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014136ms +Oct 13 08:13:53.820: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007425843s +Oct 13 08:13:55.823: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010249903s +STEP: Saw pod success 10/13/23 08:13:55.823 +Oct 13 08:13:55.823: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad" satisfied condition "Succeeded or Failed" +Oct 13 08:13:55.828: INFO: Trying to get logs from node node2 pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad container test-container: +STEP: delete the pod 10/13/23 08:13:55.85 +Oct 13 08:13:55.865: INFO: Waiting for pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad to disappear +Oct 13 08:13:55.869: INFO: Pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:13:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-7294" for this suite. 10/13/23 08:13:55.874 +------------------------------ +• [4.100 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:13:51.781 + Oct 13 08:13:51.781: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:13:51.782 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:51.798 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:51.802 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 + STEP: Creating a pod to test emptydir 0666 on node default medium 10/13/23 08:13:51.805 + Oct 13 08:13:51.813: INFO: Waiting up to 5m0s for pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad" in namespace "emptydir-7294" to be "Succeeded or Failed" + Oct 13 08:13:51.816: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014136ms + Oct 13 08:13:53.820: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007425843s + Oct 13 08:13:55.823: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010249903s + STEP: Saw pod success 10/13/23 08:13:55.823 + Oct 13 08:13:55.823: INFO: Pod "pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad" satisfied condition "Succeeded or Failed" + Oct 13 08:13:55.828: INFO: Trying to get logs from node node2 pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad container test-container: + STEP: delete the pod 10/13/23 08:13:55.85 + Oct 13 08:13:55.865: INFO: Waiting for pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad to disappear + Oct 13 08:13:55.869: INFO: Pod pod-d611e3a5-fab2-41c0-bbe7-1cea9703d4ad no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:13:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-7294" for this suite. 10/13/23 08:13:55.874 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:13:55.884 +Oct 13 08:13:55.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 08:13:55.885 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:55.899 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:55.902 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:13:55.922 +STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 08:13:55.928 +Oct 13 08:13:55.936: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:13:55.936: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:13:56.944: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Oct 13 08:13:56.944: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:13:57.946: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:13:57.946: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 
10/13/23 08:13:57.949 +Oct 13 08:13:57.971: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:13:57.971: INFO: Node node2 is running 0 daemon pod, expected 1 +Oct 13 08:13:58.980: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:13:58.980: INFO: Node node2 is running 0 daemon pod, expected 1 +Oct 13 08:13:59.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:13:59.979: INFO: Node node2 is running 0 daemon pod, expected 1 +Oct 13 08:14:00.980: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:14:00.980: INFO: Node node2 is running 0 daemon pod, expected 1 +Oct 13 08:14:01.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:14:01.979: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:14:01.982 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3746, will wait for the garbage collector to delete the pods 10/13/23 08:14:01.982 +Oct 13 08:14:02.043: INFO: Deleting DaemonSet.extensions daemon-set took: 7.418406ms +Oct 13 08:14:02.144: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.805516ms +Oct 13 08:14:04.349: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:14:04.349: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Oct 13 08:14:04.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"9342"},"items":null} + +Oct 13 08:14:04.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9342"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-3746" for this suite. 
10/13/23 08:14:04.376 +------------------------------ +• [SLOW TEST] [8.499 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:13:55.884 + Oct 13 08:13:55.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 08:13:55.885 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:13:55.899 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:13:55.902 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 + STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:13:55.922 + STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 08:13:55.928 + Oct 13 08:13:55.936: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:13:55.936: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:13:56.944: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Oct 13 08:13:56.944: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:13:57.946: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:13:57.946: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Stop a daemon pod, check that the daemon pod is revived. 
10/13/23 08:13:57.949 + Oct 13 08:13:57.971: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:13:57.971: INFO: Node node2 is running 0 daemon pod, expected 1 + Oct 13 08:13:58.980: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:13:58.980: INFO: Node node2 is running 0 daemon pod, expected 1 + Oct 13 08:13:59.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:13:59.979: INFO: Node node2 is running 0 daemon pod, expected 1 + Oct 13 08:14:00.980: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:14:00.980: INFO: Node node2 is running 0 daemon pod, expected 1 + Oct 13 08:14:01.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:14:01.979: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:14:01.982 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3746, will wait for the garbage collector to delete the pods 10/13/23 08:14:01.982 + Oct 13 08:14:02.043: INFO: Deleting DaemonSet.extensions daemon-set took: 7.418406ms + Oct 13 08:14:02.144: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.805516ms + Oct 13 08:14:04.349: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:14:04.349: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Oct 13 08:14:04.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"9342"},"items":null} + + Oct 13 08:14:04.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9342"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-3746" for this suite. 
10/13/23 08:14:04.376 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:04.383 +Oct 13 08:14:04.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replication-controller 10/13/23 08:14:04.384 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:04.398 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:04.401 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +STEP: Given a Pod with a 'name' label pod-adoption is created 10/13/23 08:14:04.403 +Oct 13 08:14:04.410: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-4445" to be "running and ready" +Oct 13 08:14:04.412: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 2.777276ms +Oct 13 08:14:04.412: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:14:06.417: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007117573s +Oct 13 08:14:06.417: INFO: The phase of Pod pod-adoption is Running (Ready = true) +Oct 13 08:14:06.417: INFO: Pod "pod-adoption" satisfied condition "running and ready" +STEP: When a replication controller with a matching selector is created 10/13/23 08:14:06.419 +STEP: Then the orphan pod is adopted 10/13/23 08:14:06.424 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:07.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-4445" for this suite. 
10/13/23 08:14:07.439 +------------------------------ +• [3.063 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:04.383 + Oct 13 08:14:04.383: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replication-controller 10/13/23 08:14:04.384 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:04.398 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:04.401 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 + STEP: Given a Pod with a 'name' label pod-adoption is created 10/13/23 08:14:04.403 + Oct 13 08:14:04.410: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-4445" to be "running and ready" + Oct 13 08:14:04.412: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 2.777276ms + Oct 13 08:14:04.412: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:14:06.417: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007117573s + Oct 13 08:14:06.417: INFO: The phase of Pod pod-adoption is Running (Ready = true) + Oct 13 08:14:06.417: INFO: Pod "pod-adoption" satisfied condition "running and ready" + STEP: When a replication controller with a matching selector is created 10/13/23 08:14:06.419 + STEP: Then the orphan pod is adopted 10/13/23 08:14:06.424 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:07.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-4445" for this suite. 
10/13/23 08:14:07.439 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:07.448 +Oct 13 08:14:07.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 08:14:07.449 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:07.466 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:07.469 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 +STEP: creating the pod 10/13/23 08:14:07.472 +STEP: submitting the pod to kubernetes 10/13/23 08:14:07.472 +Oct 13 08:14:07.479: INFO: Waiting up to 5m0s for pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" in namespace "pods-1463" to be "running and ready" +Oct 13 08:14:07.483: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263936ms +Oct 13 08:14:07.483: INFO: The phase of Pod pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:14:09.487: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Running", Reason="", readiness=true. Elapsed: 2.007233647s +Oct 13 08:14:09.487: INFO: The phase of Pod pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f is Running (Ready = true) +Oct 13 08:14:09.487: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 10/13/23 08:14:09.489 +STEP: updating the pod 10/13/23 08:14:09.492 +Oct 13 08:14:10.007: INFO: Successfully updated pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" +Oct 13 08:14:10.007: INFO: Waiting up to 5m0s for pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" in namespace "pods-1463" to be "running" +Oct 13 08:14:10.010: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Running", Reason="", readiness=true. Elapsed: 3.230973ms +Oct 13 08:14:10.010: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" satisfied condition "running" +STEP: verifying the updated pod is in kubernetes 10/13/23 08:14:10.01 +Oct 13 08:14:10.014: INFO: Pod update OK +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-1463" for this suite. 
10/13/23 08:14:10.018 +------------------------------ +• [2.576 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:07.448 + Oct 13 08:14:07.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 08:14:07.449 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:07.466 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:07.469 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 + STEP: creating the pod 10/13/23 08:14:07.472 + STEP: submitting the pod to kubernetes 10/13/23 08:14:07.472 + Oct 13 08:14:07.479: INFO: Waiting up to 5m0s for pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" in namespace "pods-1463" to be "running and ready" + Oct 13 08:14:07.483: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263936ms + Oct 13 08:14:07.483: INFO: The phase of Pod pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:14:09.487: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Running", Reason="", readiness=true. Elapsed: 2.007233647s + Oct 13 08:14:09.487: INFO: The phase of Pod pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f is Running (Ready = true) + Oct 13 08:14:09.487: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 10/13/23 08:14:09.489 + STEP: updating the pod 10/13/23 08:14:09.492 + Oct 13 08:14:10.007: INFO: Successfully updated pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" + Oct 13 08:14:10.007: INFO: Waiting up to 5m0s for pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" in namespace "pods-1463" to be "running" + Oct 13 08:14:10.010: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f": Phase="Running", Reason="", readiness=true. Elapsed: 3.230973ms + Oct 13 08:14:10.010: INFO: Pod "pod-update-0f72cd3a-ebaf-497e-8c31-bb684458918f" satisfied condition "running" + STEP: verifying the updated pod is in kubernetes 10/13/23 08:14:10.01 + Oct 13 08:14:10.014: INFO: Pod update OK + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-1463" for this suite. 
10/13/23 08:14:10.018 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:10.026 +Oct 13 08:14:10.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:14:10.026 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:10.041 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:10.044 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +Oct 13 08:14:10.054: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c" in namespace "kubelet-test-7608" to be "running and ready" +Oct 13 08:14:10.057: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.036824ms +Oct 13 08:14:10.057: INFO: The phase of Pod busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:14:12.062: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c": Phase="Running", Reason="", readiness=true. Elapsed: 2.007483652s +Oct 13 08:14:12.062: INFO: The phase of Pod busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c is Running (Ready = true) +Oct 13 08:14:12.062: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-7608" for this suite. 
10/13/23 08:14:12.076 +------------------------------ +• [2.056 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a read only busybox container + test/e2e/common/node/kubelet.go:175 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:10.026 + Oct 13 08:14:10.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:14:10.026 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:10.041 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:10.044 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 + Oct 13 08:14:10.054: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c" in namespace "kubelet-test-7608" to be "running and ready" + Oct 13 08:14:10.057: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.036824ms + Oct 13 08:14:10.057: INFO: The phase of Pod busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:14:12.062: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c": Phase="Running", Reason="", readiness=true. Elapsed: 2.007483652s + Oct 13 08:14:12.062: INFO: The phase of Pod busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c is Running (Ready = true) + Oct 13 08:14:12.062: INFO: Pod "busybox-readonly-fs1df5e3f0-9753-42c5-9cd3-34c4a19c7d6c" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-7608" for this suite. 
10/13/23 08:14:12.076 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:12.082 +Oct 13 08:14:12.082: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:14:12.083 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:12.1 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:12.102 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +STEP: Creating a pod to test downward api env vars 10/13/23 08:14:12.105 +Oct 13 08:14:12.112: INFO: Waiting up to 5m0s for pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c" in namespace "downward-api-5199" to be "Succeeded or Failed" +Oct 13 08:14:12.115: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187429ms +Oct 13 08:14:14.120: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008212956s +Oct 13 08:14:16.121: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008843011s +STEP: Saw pod success 10/13/23 08:14:16.121 +Oct 13 08:14:16.121: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c" satisfied condition "Succeeded or Failed" +Oct 13 08:14:16.125: INFO: Trying to get logs from node node1 pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c container dapi-container: +STEP: delete the pod 10/13/23 08:14:16.14 +Oct 13 08:14:16.148: INFO: Waiting for pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c to disappear +Oct 13 08:14:16.151: INFO: Pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-5199" for this suite. 
10/13/23 08:14:16.155 +------------------------------ +• [4.077 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:12.082 + Oct 13 08:14:12.082: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:14:12.083 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:12.1 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:12.102 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 + STEP: Creating a pod to test downward api env vars 10/13/23 08:14:12.105 + Oct 13 08:14:12.112: INFO: Waiting up to 5m0s for pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c" in namespace "downward-api-5199" to be "Succeeded or Failed" + Oct 13 08:14:12.115: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187429ms + Oct 13 08:14:14.120: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008212956s + Oct 13 08:14:16.121: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008843011s + STEP: Saw pod success 10/13/23 08:14:16.121 + Oct 13 08:14:16.121: INFO: Pod "downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c" satisfied condition "Succeeded or Failed" + Oct 13 08:14:16.125: INFO: Trying to get logs from node node1 pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c container dapi-container: + STEP: delete the pod 10/13/23 08:14:16.14 + Oct 13 08:14:16.148: INFO: Waiting for pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c to disappear + Oct 13 08:14:16.151: INFO: Pod downward-api-df45e604-2b99-40e9-b25f-16aab3a4eb7c no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-5199" for this suite. 
10/13/23 08:14:16.155 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:16.16 +Oct 13 08:14:16.160: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:14:16.161 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:16.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:16.175 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +STEP: Creating secret with name secret-test-6a84f684-0ccf-4880-8501-5b10227668fb 10/13/23 08:14:16.177 +STEP: Creating a pod to test consume secrets 10/13/23 08:14:16.181 +Oct 13 08:14:16.187: INFO: Waiting up to 5m0s for pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b" in namespace "secrets-9443" to be "Succeeded or Failed" +Oct 13 08:14:16.189: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544837ms +Oct 13 08:14:18.193: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006391732s +Oct 13 08:14:20.194: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007020905s +STEP: Saw pod success 10/13/23 08:14:20.194 +Oct 13 08:14:20.194: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b" satisfied condition "Succeeded or Failed" +Oct 13 08:14:20.197: INFO: Trying to get logs from node node1 pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b container secret-env-test: +STEP: delete the pod 10/13/23 08:14:20.203 +Oct 13 08:14:20.215: INFO: Waiting for pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b to disappear +Oct 13 08:14:20.218: INFO: Pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9443" for this suite. 
10/13/23 08:14:20.221 +------------------------------ +• [4.067 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:16.16 + Oct 13 08:14:16.160: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:14:16.161 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:16.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:16.175 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 + STEP: Creating secret with name secret-test-6a84f684-0ccf-4880-8501-5b10227668fb 10/13/23 08:14:16.177 + STEP: Creating a pod to test consume secrets 10/13/23 08:14:16.181 + Oct 13 08:14:16.187: INFO: Waiting up to 5m0s for pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b" in namespace "secrets-9443" to be "Succeeded or Failed" + Oct 13 08:14:16.189: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544837ms + Oct 13 08:14:18.193: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006391732s + Oct 13 08:14:20.194: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007020905s + STEP: Saw pod success 10/13/23 08:14:20.194 + Oct 13 08:14:20.194: INFO: Pod "pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b" satisfied condition "Succeeded or Failed" + Oct 13 08:14:20.197: INFO: Trying to get logs from node node1 pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b container secret-env-test: + STEP: delete the pod 10/13/23 08:14:20.203 + Oct 13 08:14:20.215: INFO: Waiting for pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b to disappear + Oct 13 08:14:20.218: INFO: Pod pod-secrets-99af96d1-e49f-45f6-a20b-f9dfb265502b no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9443" for this suite. 
10/13/23 08:14:20.221 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +[BeforeEach] [sig-network] IngressClass API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:20.227 +Oct 13 08:14:20.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename ingressclass 10/13/23 08:14:20.228 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:20.243 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:20.246 +[BeforeEach] [sig-network] IngressClass API + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 +[It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +STEP: getting /apis 10/13/23 08:14:20.248 +STEP: getting /apis/networking.k8s.io 10/13/23 08:14:20.251 +STEP: getting /apis/networking.k8s.iov1 10/13/23 08:14:20.252 +STEP: creating 10/13/23 08:14:20.253 +STEP: getting 10/13/23 08:14:20.27 +STEP: listing 10/13/23 08:14:20.273 +STEP: watching 10/13/23 08:14:20.275 +Oct 13 08:14:20.275: INFO: starting watch +STEP: patching 10/13/23 08:14:20.276 +STEP: updating 10/13/23 08:14:20.282 +Oct 13 08:14:20.290: INFO: waiting for watch events with expected annotations +Oct 13 08:14:20.290: INFO: saw patched and updated annotations +STEP: deleting 10/13/23 08:14:20.29 +STEP: deleting a collection 10/13/23 08:14:20.301 +[AfterEach] [sig-network] IngressClass API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] IngressClass API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] IngressClass API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] IngressClass API + tear down framework | framework.go:193 +STEP: Destroying namespace "ingressclass-9319" for this suite. 
10/13/23 08:14:20.319 +------------------------------ +• [0.100 seconds] +[sig-network] IngressClass API +test/e2e/network/common/framework.go:23 + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] IngressClass API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:20.227 + Oct 13 08:14:20.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename ingressclass 10/13/23 08:14:20.228 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:20.243 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:20.246 + [BeforeEach] [sig-network] IngressClass API + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 + [It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 + STEP: getting /apis 10/13/23 08:14:20.248 + STEP: getting /apis/networking.k8s.io 10/13/23 08:14:20.251 + STEP: getting /apis/networking.k8s.iov1 10/13/23 08:14:20.252 + STEP: creating 10/13/23 08:14:20.253 + STEP: getting 10/13/23 08:14:20.27 + STEP: listing 10/13/23 08:14:20.273 + STEP: watching 10/13/23 08:14:20.275 + Oct 13 08:14:20.275: INFO: starting watch + STEP: patching 10/13/23 08:14:20.276 + STEP: updating 10/13/23 08:14:20.282 + Oct 13 08:14:20.290: INFO: waiting for watch events with expected annotations + Oct 13 08:14:20.290: INFO: saw patched and updated annotations + STEP: deleting 10/13/23 08:14:20.29 + STEP: deleting a collection 10/13/23 08:14:20.301 + [AfterEach] [sig-network] IngressClass API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] IngressClass API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] IngressClass API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] IngressClass API + tear down framework | framework.go:193 + STEP: Destroying namespace "ingressclass-9319" for this suite. 10/13/23 08:14:20.319 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:20.334 +Oct 13 08:14:20.334: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:14:20.335 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:20.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:20.353 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 
10/13/23 08:14:20.359 +Oct 13 08:14:20.366: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4494" to be "running and ready" +Oct 13 08:14:20.369: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977566ms +Oct 13 08:14:20.369: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:14:22.372: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006691486s +Oct 13 08:14:22.372: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Oct 13 08:14:22.372: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +STEP: create the pod with lifecycle hook 10/13/23 08:14:22.375 +Oct 13 08:14:22.381: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-4494" to be "running and ready" +Oct 13 08:14:22.385: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.408413ms +Oct 13 08:14:22.385: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:14:24.389: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.008022415s +Oct 13 08:14:24.389: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) +Oct 13 08:14:24.389: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 10/13/23 08:14:24.392 +Oct 13 08:14:24.396: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 13 08:14:24.399: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 13 08:14:26.399: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 13 08:14:26.403: INFO: Pod pod-with-prestop-exec-hook still exists +Oct 13 08:14:28.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Oct 13 08:14:28.403: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook 10/13/23 08:14:28.403 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:28.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-4494" for this suite. 
10/13/23 08:14:28.429 +------------------------------ +• [SLOW TEST] [8.101 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:20.334 + Oct 13 08:14:20.334: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:14:20.335 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:20.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:20.353 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 10/13/23 08:14:20.359 + Oct 13 08:14:20.366: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4494" to be "running and ready" + Oct 13 08:14:20.369: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977566ms + Oct 13 08:14:20.369: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:14:22.372: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006691486s + Oct 13 08:14:22.372: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Oct 13 08:14:22.372: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 + STEP: create the pod with lifecycle hook 10/13/23 08:14:22.375 + Oct 13 08:14:22.381: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-4494" to be "running and ready" + Oct 13 08:14:22.385: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.408413ms + Oct 13 08:14:22.385: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:14:24.389: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008022415s + Oct 13 08:14:24.389: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) + Oct 13 08:14:24.389: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 10/13/23 08:14:24.392 + Oct 13 08:14:24.396: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Oct 13 08:14:24.399: INFO: Pod pod-with-prestop-exec-hook still exists + Oct 13 08:14:26.399: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Oct 13 08:14:26.403: INFO: Pod pod-with-prestop-exec-hook still exists + Oct 13 08:14:28.400: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Oct 13 08:14:28.403: INFO: Pod pod-with-prestop-exec-hook no longer exists + STEP: check prestop hook 10/13/23 08:14:28.403 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:28.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-4494" for this suite. 10/13/23 08:14:28.429 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:28.437 +Oct 13 08:14:28.437: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename job 10/13/23 08:14:28.438 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:28.452 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:28.454 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +STEP: Creating a suspended job 10/13/23 08:14:28.46 +STEP: Patching the Job 10/13/23 08:14:28.465 +STEP: Watching for Job to be patched 10/13/23 08:14:28.476 +Oct 13 08:14:28.477: INFO: Event ADDED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd] and annotations: map[batch.kubernetes.io/job-tracking:] +Oct 13 08:14:28.477: INFO: Event MODIFIED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking:] +STEP: Updating the job 10/13/23 08:14:28.477 +STEP: Watching for Job to be updated 10/13/23 08:14:28.486 +Oct 13 08:14:28.488: INFO: Event MODIFIED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:28.488: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} +STEP: Listing all Jobs with LabelSelector 10/13/23 08:14:28.488 +Oct 13 08:14:28.491: INFO: Job: e2e-xfhmd as labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] +STEP: Waiting for job to complete 10/13/23 08:14:28.491 +STEP: Delete a job collection with a 
labelselector 10/13/23 08:14:36.495 +STEP: Watching for Job to be deleted 10/13/23 08:14:36.502 +Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Oct 13 08:14:36.503: INFO: Event DELETED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +STEP: Relist jobs to confirm deletion 10/13/23 08:14:36.504 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-6208" for this suite. 
10/13/23 08:14:36.51 +------------------------------ +• [SLOW TEST] [8.079 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:28.437 + Oct 13 08:14:28.437: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename job 10/13/23 08:14:28.438 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:28.452 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:28.454 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 + STEP: Creating a suspended job 10/13/23 08:14:28.46 + STEP: Patching the Job 10/13/23 08:14:28.465 + STEP: Watching for Job to be patched 10/13/23 08:14:28.476 + Oct 13 08:14:28.477: INFO: Event ADDED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd] and annotations: map[batch.kubernetes.io/job-tracking:] + Oct 13 08:14:28.477: INFO: Event MODIFIED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking:] + STEP: Updating the job 10/13/23 08:14:28.477 + STEP: Watching for Job to be updated 10/13/23 08:14:28.486 + Oct 13 08:14:28.488: INFO: Event MODIFIED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:28.488: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} + STEP: Listing all Jobs with LabelSelector 10/13/23 08:14:28.488 + Oct 13 08:14:28.491: INFO: Job: e2e-xfhmd as labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] + STEP: Waiting for job to complete 10/13/23 08:14:28.491 + STEP: Delete a job collection with a labelselector 10/13/23 08:14:36.495 + STEP: Watching for Job to be deleted 10/13/23 08:14:36.502 + Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:36.503: INFO: Event MODIFIED observed for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Oct 13 08:14:36.503: INFO: Event DELETED found for Job e2e-xfhmd in namespace job-6208 with labels: map[e2e-job-label:e2e-xfhmd e2e-xfhmd:patched] and annotations: 
map[batch.kubernetes.io/job-tracking: updated:true] + STEP: Relist jobs to confirm deletion 10/13/23 08:14:36.504 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-6208" for this suite. 10/13/23 08:14:36.51 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:36.516 +Oct 13 08:14:36.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:14:36.517 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:36.532 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:36.535 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +STEP: Creating a pod to test emptydir 0777 on tmpfs 10/13/23 08:14:36.537 +Oct 13 08:14:36.544: INFO: Waiting up to 5m0s for pod "pod-74bebe37-86be-4564-ac51-791b29ab9066" in namespace "emptydir-6426" to be "Succeeded or Failed" +Oct 13 08:14:36.546: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398324ms +Oct 13 08:14:38.550: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006473515s +Oct 13 08:14:40.551: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007140248s +STEP: Saw pod success 10/13/23 08:14:40.551 +Oct 13 08:14:40.551: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066" satisfied condition "Succeeded or Failed" +Oct 13 08:14:40.554: INFO: Trying to get logs from node node1 pod pod-74bebe37-86be-4564-ac51-791b29ab9066 container test-container: +STEP: delete the pod 10/13/23 08:14:40.561 +Oct 13 08:14:40.572: INFO: Waiting for pod pod-74bebe37-86be-4564-ac51-791b29ab9066 to disappear +Oct 13 08:14:40.574: INFO: Pod pod-74bebe37-86be-4564-ac51-791b29ab9066 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:40.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-6426" for this suite. 
10/13/23 08:14:40.581 +------------------------------ +• [4.072 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:36.516 + Oct 13 08:14:36.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:14:36.517 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:36.532 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:36.535 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 + STEP: Creating a pod to test emptydir 0777 on tmpfs 10/13/23 08:14:36.537 + Oct 13 08:14:36.544: INFO: Waiting up to 5m0s for pod "pod-74bebe37-86be-4564-ac51-791b29ab9066" in namespace "emptydir-6426" to be "Succeeded or Failed" + Oct 13 08:14:36.546: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398324ms + Oct 13 08:14:38.550: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006473515s + Oct 13 08:14:40.551: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007140248s + STEP: Saw pod success 10/13/23 08:14:40.551 + Oct 13 08:14:40.551: INFO: Pod "pod-74bebe37-86be-4564-ac51-791b29ab9066" satisfied condition "Succeeded or Failed" + Oct 13 08:14:40.554: INFO: Trying to get logs from node node1 pod pod-74bebe37-86be-4564-ac51-791b29ab9066 container test-container: + STEP: delete the pod 10/13/23 08:14:40.561 + Oct 13 08:14:40.572: INFO: Waiting for pod pod-74bebe37-86be-4564-ac51-791b29ab9066 to disappear + Oct 13 08:14:40.574: INFO: Pod pod-74bebe37-86be-4564-ac51-791b29ab9066 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:40.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-6426" for this suite. 
10/13/23 08:14:40.581 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +[BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:40.588 +Oct 13 08:14:40.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename events 10/13/23 08:14:40.589 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:40.602 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:40.605 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +STEP: Create set of events 10/13/23 08:14:40.608 +STEP: get a list of Events with a label in the current namespace 10/13/23 08:14:40.621 +STEP: delete a list of events 10/13/23 08:14:40.624 +Oct 13 08:14:40.624: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 10/13/23 08:14:40.64 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 +STEP: Destroying namespace "events-4555" for this suite. 
10/13/23 08:14:40.647 +------------------------------ +• [0.063 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:40.588 + Oct 13 08:14:40.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename events 10/13/23 08:14:40.589 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:40.602 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:40.605 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + STEP: Create set of events 10/13/23 08:14:40.608 + STEP: get a list of Events with a label in the current namespace 10/13/23 08:14:40.621 + STEP: delete a list of events 10/13/23 08:14:40.624 + Oct 13 08:14:40.624: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 10/13/23 08:14:40.64 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 + STEP: Destroying namespace "events-4555" for this suite. 10/13/23 08:14:40.647 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:40.652 +Oct 13 08:14:40.652: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename containers 10/13/23 08:14:40.653 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:40.667 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:40.67 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +Oct 13 08:14:40.679: INFO: Waiting up to 5m0s for pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8" in namespace "containers-6991" to be "running" +Oct 13 08:14:40.682: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799576ms +Oct 13 08:14:42.685: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006531066s +Oct 13 08:14:42.685: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8" satisfied condition "running" +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:42.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-6991" for this suite. 10/13/23 08:14:42.692 +------------------------------ +• [2.044 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:40.652 + Oct 13 08:14:40.652: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename containers 10/13/23 08:14:40.653 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:40.667 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:40.67 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 + Oct 13 08:14:40.679: INFO: Waiting up to 5m0s for pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8" in namespace "containers-6991" to be "running" + Oct 13 08:14:40.682: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799576ms + Oct 13 08:14:42.685: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8": Phase="Running", Reason="", readiness=true. Elapsed: 2.006531066s + Oct 13 08:14:42.685: INFO: Pod "client-containers-c0ed2900-2139-482c-9734-25c20928b9f8" satisfied condition "running" + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:42.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-6991" for this suite. 
10/13/23 08:14:42.692 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:42.697 +Oct 13 08:14:42.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:14:42.698 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:42.709 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:42.712 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:14:42.722 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:14:43.265 +STEP: Deploying the webhook pod 10/13/23 08:14:43.274 +STEP: Wait for the deployment to be ready 10/13/23 08:14:43.289 +Oct 13 08:14:43.293: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 08:14:45.302 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:14:45.314 +Oct 13 08:14:46.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +STEP: Registering the webhook via the AdmissionRegistration API 10/13/23 08:14:46.319 +STEP: create a pod that should be denied by the webhook 10/13/23 08:14:46.339 +STEP: create a pod that causes the webhook to hang 10/13/23 08:14:46.35 +STEP: create a configmap that should be denied by the webhook 10/13/23 08:14:56.36 +STEP: create a configmap that should be admitted by the webhook 10/13/23 08:14:56.44 +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 10/13/23 08:14:56.454 +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 10/13/23 08:14:56.462 +STEP: create a namespace that bypass the webhook 10/13/23 08:14:56.469 +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 10/13/23 08:14:56.476 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:56.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-5933" for this suite. 10/13/23 08:14:56.537 +STEP: Destroying namespace "webhook-5933-markers" for this suite. 
10/13/23 08:14:56.544 +------------------------------ +• [SLOW TEST] [13.855 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:42.697 + Oct 13 08:14:42.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:14:42.698 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:42.709 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:42.712 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:14:42.722 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:14:43.265 + STEP: Deploying the webhook pod 10/13/23 08:14:43.274 + STEP: Wait for the deployment to be ready 10/13/23 08:14:43.289 + Oct 13 08:14:43.293: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 08:14:45.302 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:14:45.314 + Oct 13 08:14:46.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 + STEP: Registering the webhook via the AdmissionRegistration API 10/13/23 08:14:46.319 + STEP: create a pod that should be denied by the webhook 10/13/23 08:14:46.339 + STEP: create a pod that causes the webhook to hang 10/13/23 08:14:46.35 + STEP: create a configmap that should be denied by the webhook 10/13/23 08:14:56.36 + STEP: create a configmap that should be admitted by the webhook 10/13/23 08:14:56.44 + STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 10/13/23 08:14:56.454 + STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 10/13/23 08:14:56.462 + STEP: create a namespace that bypass the webhook 10/13/23 08:14:56.469 + STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 10/13/23 08:14:56.476 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:56.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-5933" for this suite. 10/13/23 08:14:56.537 + STEP: Destroying namespace "webhook-5933-markers" for this suite. 
10/13/23 08:14:56.544 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:56.553 +Oct 13 08:14:56.553: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:14:56.554 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:56.571 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:56.574 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +Oct 13 08:14:56.577: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:14:59.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-4476" for this suite. 
10/13/23 08:14:59.734 +------------------------------ +• [3.186 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:56.553 + Oct 13 08:14:56.553: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:14:56.554 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:56.571 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:56.574 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 + Oct 13 08:14:56.577: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:14:59.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-4476" for this suite. 10/13/23 08:14:59.734 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:14:59.739 +Oct 13 08:14:59.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename subpath 10/13/23 08:14:59.74 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:59.753 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:59.756 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 10/13/23 08:14:59.758 +[It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +STEP: Creating pod pod-subpath-test-projected-df55 10/13/23 08:14:59.765 +STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:14:59.765 +Oct 13 08:14:59.772: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-df55" in namespace "subpath-7243" to be "Succeeded or Failed" +Oct 13 08:14:59.791: INFO: Pod "pod-subpath-test-projected-df55": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.387981ms +Oct 13 08:15:01.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 2.023588821s +Oct 13 08:15:03.794: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 4.022478557s +Oct 13 08:15:05.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 6.025015605s +Oct 13 08:15:07.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 8.024082425s +Oct 13 08:15:09.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 10.024953302s +Oct 13 08:15:11.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 12.024890414s +Oct 13 08:15:13.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 14.023994262s +Oct 13 08:15:15.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 16.024586424s +Oct 13 08:15:17.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 18.024781024s +Oct 13 08:15:19.798: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 20.026158242s +Oct 13 08:15:21.795: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=false. Elapsed: 22.023301053s +Oct 13 08:15:23.795: INFO: Pod "pod-subpath-test-projected-df55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.02354249s +STEP: Saw pod success 10/13/23 08:15:23.796 +Oct 13 08:15:23.796: INFO: Pod "pod-subpath-test-projected-df55" satisfied condition "Succeeded or Failed" +Oct 13 08:15:23.799: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-df55 container test-container-subpath-projected-df55: +STEP: delete the pod 10/13/23 08:15:23.808 +Oct 13 08:15:23.822: INFO: Waiting for pod pod-subpath-test-projected-df55 to disappear +Oct 13 08:15:23.824: INFO: Pod pod-subpath-test-projected-df55 no longer exists +STEP: Deleting pod pod-subpath-test-projected-df55 10/13/23 08:15:23.824 +Oct 13 08:15:23.824: INFO: Deleting pod "pod-subpath-test-projected-df55" in namespace "subpath-7243" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:23.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-7243" for this suite. 
10/13/23 08:15:23.831 +------------------------------ +• [SLOW TEST] [24.098 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:14:59.739 + Oct 13 08:14:59.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename subpath 10/13/23 08:14:59.74 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:14:59.753 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:14:59.756 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 10/13/23 08:14:59.758 + [It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 + STEP: Creating pod pod-subpath-test-projected-df55 10/13/23 08:14:59.765 + STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:14:59.765 + Oct 13 08:14:59.772: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-df55" in namespace "subpath-7243" to be "Succeeded or Failed" + Oct 13 08:14:59.791: INFO: Pod "pod-subpath-test-projected-df55": Phase="Pending", Reason="", readiness=false. Elapsed: 19.387981ms + Oct 13 08:15:01.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 2.023588821s + Oct 13 08:15:03.794: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 4.022478557s + Oct 13 08:15:05.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 6.025015605s + Oct 13 08:15:07.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 8.024082425s + Oct 13 08:15:09.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 10.024953302s + Oct 13 08:15:11.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 12.024890414s + Oct 13 08:15:13.796: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 14.023994262s + Oct 13 08:15:15.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 16.024586424s + Oct 13 08:15:17.797: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 18.024781024s + Oct 13 08:15:19.798: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=true. Elapsed: 20.026158242s + Oct 13 08:15:21.795: INFO: Pod "pod-subpath-test-projected-df55": Phase="Running", Reason="", readiness=false. Elapsed: 22.023301053s + Oct 13 08:15:23.795: INFO: Pod "pod-subpath-test-projected-df55": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.02354249s + STEP: Saw pod success 10/13/23 08:15:23.796 + Oct 13 08:15:23.796: INFO: Pod "pod-subpath-test-projected-df55" satisfied condition "Succeeded or Failed" + Oct 13 08:15:23.799: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-df55 container test-container-subpath-projected-df55: + STEP: delete the pod 10/13/23 08:15:23.808 + Oct 13 08:15:23.822: INFO: Waiting for pod pod-subpath-test-projected-df55 to disappear + Oct 13 08:15:23.824: INFO: Pod pod-subpath-test-projected-df55 no longer exists + STEP: Deleting pod pod-subpath-test-projected-df55 10/13/23 08:15:23.824 + Oct 13 08:15:23.824: INFO: Deleting pod "pod-subpath-test-projected-df55" in namespace "subpath-7243" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:23.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-7243" for this suite. 10/13/23 08:15:23.831 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:23.839 +Oct 13 08:15:23.839: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:23.84 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:23.854 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:23.857 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 +STEP: creating a ServiceAccount 10/13/23 08:15:23.859 +STEP: watching for the ServiceAccount to be added 10/13/23 08:15:23.867 +STEP: patching the ServiceAccount 10/13/23 08:15:23.868 +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 10/13/23 08:15:23.874 +STEP: deleting the ServiceAccount 10/13/23 08:15:23.876 +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:23.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-2490" for this suite. 
10/13/23 08:15:23.889 +------------------------------ +• [0.055 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:23.839 + Oct 13 08:15:23.839: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:23.84 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:23.854 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:23.857 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:649 + STEP: creating a ServiceAccount 10/13/23 08:15:23.859 + STEP: watching for the ServiceAccount to be added 10/13/23 08:15:23.867 + STEP: patching the ServiceAccount 10/13/23 08:15:23.868 + STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 10/13/23 08:15:23.874 + STEP: deleting the ServiceAccount 10/13/23 08:15:23.876 + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:23.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-2490" for this suite. 10/13/23 08:15:23.889 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:23.895 +Oct 13 08:15:23.895: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:15:23.896 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:23.909 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:23.912 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 10/13/23 08:15:23.918 +Oct 13 08:15:23.925: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-3062" to be "running and ready" +Oct 13 08:15:23.928: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248214ms +Oct 13 08:15:23.928: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:15:25.933: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008365635s +Oct 13 08:15:25.934: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Oct 13 08:15:25.934: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +STEP: create the pod with lifecycle hook 10/13/23 08:15:25.937 +Oct 13 08:15:25.943: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-3062" to be "running and ready" +Oct 13 08:15:25.946: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.202647ms +Oct 13 08:15:25.946: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:15:27.952: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.009280141s +Oct 13 08:15:27.952: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) +Oct 13 08:15:27.952: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" +STEP: check poststart hook 10/13/23 08:15:27.957 +STEP: delete the pod with lifecycle hook 10/13/23 08:15:27.968 +Oct 13 08:15:27.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 13 08:15:27.982: INFO: Pod pod-with-poststart-http-hook still exists +Oct 13 08:15:29.983: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 13 08:15:29.988: INFO: Pod pod-with-poststart-http-hook still exists +Oct 13 08:15:31.983: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Oct 13 08:15:31.988: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:31.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-3062" for this suite. 10/13/23 08:15:31.992 +------------------------------ +• [SLOW TEST] [8.104 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:23.895 + Oct 13 08:15:23.895: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:15:23.896 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:23.909 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:23.912 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 
10/13/23 08:15:23.918 + Oct 13 08:15:23.925: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-3062" to be "running and ready" + Oct 13 08:15:23.928: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248214ms + Oct 13 08:15:23.928: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:15:25.933: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.008365635s + Oct 13 08:15:25.934: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Oct 13 08:15:25.934: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 + STEP: create the pod with lifecycle hook 10/13/23 08:15:25.937 + Oct 13 08:15:25.943: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-3062" to be "running and ready" + Oct 13 08:15:25.946: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.202647ms + Oct 13 08:15:25.946: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:15:27.952: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.009280141s + Oct 13 08:15:27.952: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) + Oct 13 08:15:27.952: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" + STEP: check poststart hook 10/13/23 08:15:27.957 + STEP: delete the pod with lifecycle hook 10/13/23 08:15:27.968 + Oct 13 08:15:27.976: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Oct 13 08:15:27.982: INFO: Pod pod-with-poststart-http-hook still exists + Oct 13 08:15:29.983: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Oct 13 08:15:29.988: INFO: Pod pod-with-poststart-http-hook still exists + Oct 13 08:15:31.983: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Oct 13 08:15:31.988: INFO: Pod pod-with-poststart-http-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:31.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-3062" for this suite. 
10/13/23 08:15:31.992 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:32 +Oct 13 08:15:32.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename certificates 10/13/23 08:15:32.001 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:32.017 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:32.02 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +STEP: getting /apis 10/13/23 08:15:32.855 +STEP: getting /apis/certificates.k8s.io 10/13/23 08:15:32.858 +STEP: getting /apis/certificates.k8s.io/v1 10/13/23 08:15:32.86 +STEP: creating 10/13/23 08:15:32.861 +STEP: getting 10/13/23 08:15:32.877 +STEP: listing 10/13/23 08:15:32.88 +STEP: watching 10/13/23 08:15:32.884 +Oct 13 08:15:32.884: INFO: starting watch +STEP: patching 10/13/23 08:15:32.885 +STEP: updating 10/13/23 08:15:32.892 +Oct 13 08:15:32.897: INFO: waiting for watch events with expected annotations +Oct 13 08:15:32.897: INFO: saw patched and updated annotations +STEP: getting /approval 10/13/23 08:15:32.897 +STEP: patching /approval 10/13/23 08:15:32.9 +STEP: updating /approval 10/13/23 08:15:32.907 +STEP: getting /status 10/13/23 08:15:32.912 +STEP: patching /status 10/13/23 08:15:32.914 +STEP: updating /status 10/13/23 08:15:32.92 +STEP: deleting 10/13/23 08:15:32.925 +STEP: deleting a collection 10/13/23 08:15:32.933 +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:32.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "certificates-9998" for this suite. 
10/13/23 08:15:32.947 +------------------------------ +• [0.952 seconds] +[sig-auth] Certificates API [Privileged:ClusterAdmin] +test/e2e/auth/framework.go:23 + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:32 + Oct 13 08:15:32.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename certificates 10/13/23 08:15:32.001 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:32.017 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:32.02 + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + STEP: getting /apis 10/13/23 08:15:32.855 + STEP: getting /apis/certificates.k8s.io 10/13/23 08:15:32.858 + STEP: getting /apis/certificates.k8s.io/v1 10/13/23 08:15:32.86 + STEP: creating 10/13/23 08:15:32.861 + STEP: getting 10/13/23 08:15:32.877 + STEP: listing 10/13/23 08:15:32.88 + STEP: watching 10/13/23 08:15:32.884 + Oct 13 08:15:32.884: INFO: starting watch + STEP: patching 10/13/23 08:15:32.885 + STEP: updating 10/13/23 08:15:32.892 + Oct 13 08:15:32.897: INFO: waiting for watch events with expected annotations + Oct 13 08:15:32.897: INFO: saw patched and updated annotations + STEP: getting /approval 10/13/23 08:15:32.897 + STEP: patching /approval 10/13/23 08:15:32.9 + STEP: updating /approval 10/13/23 08:15:32.907 + STEP: getting /status 10/13/23 08:15:32.912 + STEP: patching /status 10/13/23 08:15:32.914 + STEP: updating /status 10/13/23 08:15:32.92 + STEP: deleting 10/13/23 08:15:32.925 + STEP: deleting a collection 10/13/23 08:15:32.933 + [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:32.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "certificates-9998" for this suite. 
10/13/23 08:15:32.947 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:32.953 +Oct 13 08:15:32.953: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:32.957 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:32.972 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:32.974 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +Oct 13 08:15:32.986: INFO: Waiting up to 5m0s for pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05" in namespace "svcaccounts-5457" to be "running" +Oct 13 08:15:32.989: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88325ms +Oct 13 08:15:34.994: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05": Phase="Running", Reason="", readiness=true. Elapsed: 2.00807169s +Oct 13 08:15:34.994: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05" satisfied condition "running" +STEP: reading a file in the container 10/13/23 08:15:34.994 +Oct 13 08:15:34.995: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container 10/13/23 08:15:35.309 +Oct 13 08:15:35.310: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container 10/13/23 08:15:35.474 +Oct 13 08:15:35.474: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +Oct 13 08:15:35.601: INFO: Got root ca configmap in namespace "svcaccounts-5457" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:35.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-5457" for this suite. 
10/13/23 08:15:35.606 +------------------------------ +• [2.658 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:32.953 + Oct 13 08:15:32.953: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:32.957 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:32.972 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:32.974 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 + Oct 13 08:15:32.986: INFO: Waiting up to 5m0s for pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05" in namespace "svcaccounts-5457" to be "running" + Oct 13 08:15:32.989: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88325ms + Oct 13 08:15:34.994: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05": Phase="Running", Reason="", readiness=true. Elapsed: 2.00807169s + Oct 13 08:15:34.994: INFO: Pod "pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05" satisfied condition "running" + STEP: reading a file in the container 10/13/23 08:15:34.994 + Oct 13 08:15:34.995: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' + STEP: reading a file in the container 10/13/23 08:15:35.309 + Oct 13 08:15:35.310: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' + STEP: reading a file in the container 10/13/23 08:15:35.474 + Oct 13 08:15:35.474: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5457 pod-service-account-3ebbc11a-a66c-4b4c-98c9-4608a9814c05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' + Oct 13 08:15:35.601: INFO: Got root ca configmap in namespace "svcaccounts-5457" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:35.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-5457" for this suite. 
10/13/23 08:15:35.606 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:35.612 +Oct 13 08:15:35.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:15:35.613 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:35.626 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:35.629 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +STEP: Creating a pod to test emptydir 0777 on tmpfs 10/13/23 08:15:35.631 +Oct 13 08:15:35.638: INFO: Waiting up to 5m0s for pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6" in namespace "emptydir-5112" to be "Succeeded or Failed" +Oct 13 08:15:35.640: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454007ms +Oct 13 08:15:37.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007044237s +Oct 13 08:15:39.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007159939s +STEP: Saw pod success 10/13/23 08:15:39.645 +Oct 13 08:15:39.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6" satisfied condition "Succeeded or Failed" +Oct 13 08:15:39.647: INFO: Trying to get logs from node node1 pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 container test-container: +STEP: delete the pod 10/13/23 08:15:39.652 +Oct 13 08:15:39.659: INFO: Waiting for pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 to disappear +Oct 13 08:15:39.661: INFO: Pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:15:39.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-5112" for this suite. 
10/13/23 08:15:39.664 +------------------------------ +• [4.057 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:35.612 + Oct 13 08:15:35.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:15:35.613 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:35.626 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:35.629 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 + STEP: Creating a pod to test emptydir 0777 on tmpfs 10/13/23 08:15:35.631 + Oct 13 08:15:35.638: INFO: Waiting up to 5m0s for pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6" in namespace "emptydir-5112" to be "Succeeded or Failed" + Oct 13 08:15:35.640: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454007ms + Oct 13 08:15:37.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007044237s + Oct 13 08:15:39.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007159939s + STEP: Saw pod success 10/13/23 08:15:39.645 + Oct 13 08:15:39.645: INFO: Pod "pod-65991ae4-d786-42ef-860f-58f8f7414ed6" satisfied condition "Succeeded or Failed" + Oct 13 08:15:39.647: INFO: Trying to get logs from node node1 pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 container test-container: + STEP: delete the pod 10/13/23 08:15:39.652 + Oct 13 08:15:39.659: INFO: Waiting for pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 to disappear + Oct 13 08:15:39.661: INFO: Pod pod-65991ae4-d786-42ef-860f-58f8f7414ed6 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:15:39.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-5112" for this suite. 
10/13/23 08:15:39.664 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:15:39.669 +Oct 13 08:15:39.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:39.67 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:39.693 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:39.696 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 +Oct 13 08:15:39.710: INFO: created pod +Oct 13 08:15:39.710: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-5663" to be "Succeeded or Failed" +Oct 13 08:15:39.713: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.873959ms +Oct 13 08:15:41.717: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131893s +Oct 13 08:15:43.717: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007262675s +STEP: Saw pod success 10/13/23 08:15:43.717 +Oct 13 08:15:43.717: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Oct 13 08:16:13.718: INFO: polling logs +Oct 13 08:16:13.728: INFO: Pod logs: +I1013 08:15:41.135156 1 log.go:198] OK: Got token +I1013 08:15:41.135213 1 log.go:198] validating with in-cluster discovery +I1013 08:15:41.135474 1 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local +I1013 08:15:41.135497 1 log.go:198] Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5663:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1697185540, NotBefore:1697184940, IssuedAt:1697184940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5663", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"64584f7c-9e02-40c6-9fe3-bc2c2d64706b"}}} +I1013 08:15:41.144460 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local +I1013 08:15:41.152830 1 log.go:198] OK: Validated signature on JWT +I1013 08:15:41.152910 1 log.go:198] OK: Got valid claims from token! 
+I1013 08:15:41.152935 1 log.go:198] Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5663:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1697185540, NotBefore:1697184940, IssuedAt:1697184940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5663", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"64584f7c-9e02-40c6-9fe3-bc2c2d64706b"}}} + +Oct 13 08:16:13.728: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:13.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-5663" for this suite. 10/13/23 08:16:13.737 +------------------------------ +• [SLOW TEST] [34.074 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:15:39.669 + Oct 13 08:15:39.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:15:39.67 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:15:39.693 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:15:39.696 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 + Oct 13 08:15:39.710: INFO: created pod + Oct 13 08:15:39.710: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-5663" to be "Succeeded or Failed" + Oct 13 08:15:39.713: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.873959ms + Oct 13 08:15:41.717: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131893s + Oct 13 08:15:43.717: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007262675s + STEP: Saw pod success 10/13/23 08:15:43.717 + Oct 13 08:15:43.717: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" + Oct 13 08:16:13.718: INFO: polling logs + Oct 13 08:16:13.728: INFO: Pod logs: + I1013 08:15:41.135156 1 log.go:198] OK: Got token + I1013 08:15:41.135213 1 log.go:198] validating with in-cluster discovery + I1013 08:15:41.135474 1 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local + I1013 08:15:41.135497 1 log.go:198] Full, not-validated claims: + openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5663:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1697185540, NotBefore:1697184940, IssuedAt:1697184940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5663", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"64584f7c-9e02-40c6-9fe3-bc2c2d64706b"}}} + I1013 08:15:41.144460 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local + I1013 08:15:41.152830 1 log.go:198] OK: Validated signature on JWT + I1013 08:15:41.152910 1 log.go:198] OK: Got valid claims from token! + I1013 08:15:41.152935 1 log.go:198] Full, validated claims: + &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5663:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1697185540, NotBefore:1697184940, IssuedAt:1697184940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5663", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"64584f7c-9e02-40c6-9fe3-bc2c2d64706b"}}} + + Oct 13 08:16:13.728: INFO: completed pod + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:13.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-5663" for this suite. 
10/13/23 08:16:13.737 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:13.744 +Oct 13 08:16:13.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename tables 10/13/23 08:16:13.745 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:13.761 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:13.764 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:13.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + tear down framework | framework.go:193 +STEP: Destroying namespace "tables-2074" for this suite. 
10/13/23 08:16:13.773 +------------------------------ +• [0.035 seconds] +[sig-api-machinery] Servers with support for Table transformation +test/e2e/apimachinery/framework.go:23 + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:13.744 + Oct 13 08:16:13.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename tables 10/13/23 08:16:13.745 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:13.761 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:13.764 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 + [It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + [AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:13.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + tear down framework | framework.go:193 + STEP: Destroying namespace "tables-2074" for this suite. 
10/13/23 08:16:13.773 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:13.779 +Oct 13 08:16:13.779: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename init-container 10/13/23 08:16:13.78 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:13.795 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:13.798 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 +STEP: creating the pod 10/13/23 08:16:13.801 +Oct 13 08:16:13.801: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:18.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-9243" for this suite. 
10/13/23 08:16:18.67 +------------------------------ +• [4.897 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:13.779 + Oct 13 08:16:13.779: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename init-container 10/13/23 08:16:13.78 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:13.795 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:13.798 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 + STEP: creating the pod 10/13/23 08:16:13.801 + Oct 13 08:16:13.801: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:18.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-9243" for this suite. 10/13/23 08:16:18.67 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:18.677 +Oct 13 08:16:18.677: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:16:18.678 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:18.691 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:18.694 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +STEP: Creating a pod to test downward api env vars 10/13/23 08:16:18.696 +Oct 13 08:16:18.703: INFO: Waiting up to 5m0s for pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919" in namespace "downward-api-8327" to be "Succeeded or Failed" +Oct 13 08:16:18.706: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889661ms +Oct 13 08:16:20.715: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01197029s +Oct 13 08:16:22.710: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006600143s +STEP: Saw pod success 10/13/23 08:16:22.71 +Oct 13 08:16:22.710: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919" satisfied condition "Succeeded or Failed" +Oct 13 08:16:22.713: INFO: Trying to get logs from node node2 pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 container dapi-container: +STEP: delete the pod 10/13/23 08:16:22.719 +Oct 13 08:16:22.733: INFO: Waiting for pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 to disappear +Oct 13 08:16:22.736: INFO: Pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:22.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-8327" for this suite. 10/13/23 08:16:22.739 +------------------------------ +• [4.067 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:18.677 + Oct 13 08:16:18.677: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:16:18.678 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:18.691 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:18.694 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 + STEP: Creating a pod to test downward api env vars 10/13/23 08:16:18.696 + Oct 13 08:16:18.703: INFO: Waiting up to 5m0s for pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919" in namespace "downward-api-8327" to be "Succeeded or Failed" + Oct 13 08:16:18.706: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889661ms + Oct 13 08:16:20.715: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01197029s + Oct 13 08:16:22.710: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006600143s + STEP: Saw pod success 10/13/23 08:16:22.71 + Oct 13 08:16:22.710: INFO: Pod "downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919" satisfied condition "Succeeded or Failed" + Oct 13 08:16:22.713: INFO: Trying to get logs from node node2 pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 container dapi-container: + STEP: delete the pod 10/13/23 08:16:22.719 + Oct 13 08:16:22.733: INFO: Waiting for pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 to disappear + Oct 13 08:16:22.736: INFO: Pod downward-api-6b2544a7-5354-48a5-bce5-afaec1dcb919 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:22.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-8327" for this suite. 10/13/23 08:16:22.739 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:22.744 +Oct 13 08:16:22.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 08:16:22.745 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:22.759 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:22.762 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +Oct 13 08:16:22.780: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. 10/13/23 08:16:22.784 +Oct 13 08:16:22.787: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:22.787: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. 
10/13/23 08:16:22.787 +Oct 13 08:16:22.803: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:22.803: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:16:23.807: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Oct 13 08:16:23.807: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled 10/13/23 08:16:23.81 +Oct 13 08:16:23.823: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Oct 13 08:16:23.823: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set +Oct 13 08:16:24.829: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:24.829: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 10/13/23 08:16:24.829 +Oct 13 08:16:24.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:24.847: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:16:25.853: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:25.853: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:16:26.852: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:26.852: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:16:27.850: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Oct 13 08:16:27.850: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:16:27.856 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8578, will wait for the garbage collector to delete the pods 10/13/23 08:16:27.856 +Oct 13 08:16:27.915: INFO: Deleting DaemonSet.extensions daemon-set took: 5.503158ms +Oct 13 08:16:28.015: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.239733ms +Oct 13 08:16:30.819: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:16:30.819: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Oct 13 08:16:30.821: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"10401"},"items":null} + +Oct 13 08:16:30.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10401"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:30.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-8578" for this suite. 
10/13/23 08:16:30.848 +------------------------------ +• [SLOW TEST] [8.109 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:22.744 + Oct 13 08:16:22.744: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 08:16:22.745 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:22.759 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:22.762 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 + Oct 13 08:16:22.780: INFO: Creating daemon "daemon-set" with a node selector + STEP: Initially, daemon pods should not be running on any nodes. 10/13/23 08:16:22.784 + Oct 13 08:16:22.787: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:22.787: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Change node label to blue, check that daemon pod is launched. 10/13/23 08:16:22.787 + Oct 13 08:16:22.803: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:22.803: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:16:23.807: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Oct 13 08:16:23.807: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + STEP: Update the node label to green, and wait for daemons to be unscheduled 10/13/23 08:16:23.81 + Oct 13 08:16:23.823: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Oct 13 08:16:23.823: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set + Oct 13 08:16:24.829: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:24.829: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 10/13/23 08:16:24.829 + Oct 13 08:16:24.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:24.847: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:16:25.853: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:25.853: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:16:26.852: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:26.852: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:16:27.850: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Oct 13 08:16:27.850: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:16:27.856 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8578, will wait for the garbage collector to delete the pods 
10/13/23 08:16:27.856 + Oct 13 08:16:27.915: INFO: Deleting DaemonSet.extensions daemon-set took: 5.503158ms + Oct 13 08:16:28.015: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.239733ms + Oct 13 08:16:30.819: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:16:30.819: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Oct 13 08:16:30.821: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"10401"},"items":null} + + Oct 13 08:16:30.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10401"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:30.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-8578" for this suite. 10/13/23 08:16:30.848 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:30.854 +Oct 13 08:16:30.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-webhook 10/13/23 08:16:30.855 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:30.871 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:30.874 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 10/13/23 08:16:30.877 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 10/13/23 08:16:31.131 +STEP: Deploying the custom resource conversion webhook pod 10/13/23 08:16:31.14 +STEP: Wait for the deployment to be ready 10/13/23 08:16:31.15 +Oct 13 08:16:31.156: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 08:16:33.166 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:16:33.175 +Oct 13 08:16:34.176: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +Oct 13 08:16:34.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Creating a v1 custom resource 10/13/23 08:16:36.764 +STEP: v2 custom resource should be converted 10/13/23 08:16:36.769 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:37.287: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-webhook-561" for this suite. 10/13/23 08:16:37.332 +------------------------------ +• [SLOW TEST] [6.484 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:30.854 + Oct 13 08:16:30.854: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-webhook 10/13/23 08:16:30.855 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:30.871 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:30.874 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 10/13/23 08:16:30.877 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 10/13/23 08:16:31.131 + STEP: Deploying the custom resource conversion webhook pod 10/13/23 08:16:31.14 + STEP: Wait for the deployment to be ready 10/13/23 08:16:31.15 + Oct 13 08:16:31.156: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 08:16:33.166 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:16:33.175 + Oct 13 08:16:34.176: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + Oct 13 08:16:34.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Creating a v1 custom resource 10/13/23 08:16:36.764 + STEP: v2 custom resource should be converted 10/13/23 08:16:36.769 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:37.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-webhook-561" for this suite. 10/13/23 08:16:37.332 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:37.339 +Oct 13 08:16:37.339: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:16:37.34 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:37.357 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:37.361 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +STEP: Creating secret with name secret-test-611238eb-120c-40ba-b404-99d2bb739113 10/13/23 08:16:37.364 +STEP: Creating a pod to test consume secrets 10/13/23 08:16:37.369 +Oct 13 08:16:37.378: INFO: Waiting up to 5m0s for pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2" in namespace "secrets-5572" to be "Succeeded or Failed" +Oct 13 08:16:37.382: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910609ms +Oct 13 08:16:39.386: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007214791s +Oct 13 08:16:41.387: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009082754s +STEP: Saw pod success 10/13/23 08:16:41.388 +Oct 13 08:16:41.388: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2" satisfied condition "Succeeded or Failed" +Oct 13 08:16:41.391: INFO: Trying to get logs from node node2 pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 container secret-volume-test: +STEP: delete the pod 10/13/23 08:16:41.396 +Oct 13 08:16:41.405: INFO: Waiting for pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 to disappear +Oct 13 08:16:41.407: INFO: Pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:41.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-5572" for this suite. 
10/13/23 08:16:41.411 +------------------------------ +• [4.078 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:37.339 + Oct 13 08:16:37.339: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:16:37.34 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:37.357 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:37.361 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 + STEP: Creating secret with name secret-test-611238eb-120c-40ba-b404-99d2bb739113 10/13/23 08:16:37.364 + STEP: Creating a pod to test consume secrets 10/13/23 08:16:37.369 + Oct 13 08:16:37.378: INFO: Waiting up to 5m0s for pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2" in namespace "secrets-5572" to be "Succeeded or Failed" + Oct 13 08:16:37.382: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910609ms + Oct 13 08:16:39.386: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007214791s + Oct 13 08:16:41.387: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009082754s + STEP: Saw pod success 10/13/23 08:16:41.388 + Oct 13 08:16:41.388: INFO: Pod "pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2" satisfied condition "Succeeded or Failed" + Oct 13 08:16:41.391: INFO: Trying to get logs from node node2 pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 container secret-volume-test: + STEP: delete the pod 10/13/23 08:16:41.396 + Oct 13 08:16:41.405: INFO: Waiting for pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 to disappear + Oct 13 08:16:41.407: INFO: Pod pod-secrets-b0785e2f-c196-4235-9f41-b1f66d8a91d2 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:41.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-5572" for this suite. 
10/13/23 08:16:41.411 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:41.417 +Oct 13 08:16:41.417: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:16:41.419 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:41.434 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:41.436 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +STEP: validating api versions 10/13/23 08:16:41.439 +Oct 13 08:16:41.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-2593 api-versions' +Oct 13 08:16:41.526: INFO: stderr: "" +Oct 13 08:16:41.526: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nnetworking.k8s.io/v1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:41.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-2593" for this suite. 
10/13/23 08:16:41.53 +------------------------------ +• [0.119 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl api-versions + test/e2e/kubectl/kubectl.go:818 + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:41.417 + Oct 13 08:16:41.417: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:16:41.419 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:41.434 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:41.436 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 + STEP: validating api versions 10/13/23 08:16:41.439 + Oct 13 08:16:41.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-2593 api-versions' + Oct 13 08:16:41.526: INFO: stderr: "" + Oct 13 08:16:41.526: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nnetworking.k8s.io/v1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:41.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-2593" for this suite. 
10/13/23 08:16:41.53 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:41.536 +Oct 13 08:16:41.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:16:41.537 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:41.552 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:41.555 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +STEP: Creating a pod to test downward api env vars 10/13/23 08:16:41.557 +Oct 13 08:16:41.564: INFO: Waiting up to 5m0s for pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933" in namespace "downward-api-6754" to be "Succeeded or Failed" +Oct 13 08:16:41.567: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803458ms +Oct 13 08:16:43.572: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007160492s +Oct 13 08:16:45.573: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00834318s +STEP: Saw pod success 10/13/23 08:16:45.573 +Oct 13 08:16:45.573: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933" satisfied condition "Succeeded or Failed" +Oct 13 08:16:45.576: INFO: Trying to get logs from node node2 pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 container dapi-container: +STEP: delete the pod 10/13/23 08:16:45.581 +Oct 13 08:16:45.589: INFO: Waiting for pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 to disappear +Oct 13 08:16:45.592: INFO: Pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:45.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-6754" for this suite. 
10/13/23 08:16:45.595 +------------------------------ +• [4.063 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:41.536 + Oct 13 08:16:41.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:16:41.537 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:41.552 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:41.555 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 + STEP: Creating a pod to test downward api env vars 10/13/23 08:16:41.557 + Oct 13 08:16:41.564: INFO: Waiting up to 5m0s for pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933" in namespace "downward-api-6754" to be "Succeeded or Failed" + Oct 13 08:16:41.567: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803458ms + Oct 13 08:16:43.572: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007160492s + Oct 13 08:16:45.573: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00834318s + STEP: Saw pod success 10/13/23 08:16:45.573 + Oct 13 08:16:45.573: INFO: Pod "downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933" satisfied condition "Succeeded or Failed" + Oct 13 08:16:45.576: INFO: Trying to get logs from node node2 pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 container dapi-container: + STEP: delete the pod 10/13/23 08:16:45.581 + Oct 13 08:16:45.589: INFO: Waiting for pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 to disappear + Oct 13 08:16:45.592: INFO: Pod downward-api-d9ab4422-eb2b-4450-8d27-7c6286cdf933 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:45.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-6754" for this suite. 
10/13/23 08:16:45.595 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:45.6 +Oct 13 08:16:45.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:16:45.6 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:45.614 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:45.617 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +STEP: Creating secret with name secret-test-map-1fc9e503-c511-4a00-8809-eaecb87224ef 10/13/23 08:16:45.619 +STEP: Creating a pod to test consume secrets 10/13/23 08:16:45.623 +Oct 13 08:16:45.629: INFO: Waiting up to 5m0s for pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c" in namespace "secrets-9619" to be "Succeeded or Failed" +Oct 13 08:16:45.632: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61753ms +Oct 13 08:16:47.635: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005857397s +Oct 13 08:16:49.636: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00672746s +STEP: Saw pod success 10/13/23 08:16:49.636 +Oct 13 08:16:49.636: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c" satisfied condition "Succeeded or Failed" +Oct 13 08:16:49.642: INFO: Trying to get logs from node node2 pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c container secret-volume-test: +STEP: delete the pod 10/13/23 08:16:49.648 +Oct 13 08:16:49.659: INFO: Waiting for pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c to disappear +Oct 13 08:16:49.662: INFO: Pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9619" for this suite. 
10/13/23 08:16:49.666 +------------------------------ +• [4.072 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:45.6 + Oct 13 08:16:45.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:16:45.6 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:45.614 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:45.617 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 + STEP: Creating secret with name secret-test-map-1fc9e503-c511-4a00-8809-eaecb87224ef 10/13/23 08:16:45.619 + STEP: Creating a pod to test consume secrets 10/13/23 08:16:45.623 + Oct 13 08:16:45.629: INFO: Waiting up to 5m0s for pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c" in namespace "secrets-9619" to be "Succeeded or Failed" + Oct 13 08:16:45.632: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61753ms + Oct 13 08:16:47.635: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005857397s + Oct 13 08:16:49.636: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00672746s + STEP: Saw pod success 10/13/23 08:16:49.636 + Oct 13 08:16:49.636: INFO: Pod "pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c" satisfied condition "Succeeded or Failed" + Oct 13 08:16:49.642: INFO: Trying to get logs from node node2 pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c container secret-volume-test: + STEP: delete the pod 10/13/23 08:16:49.648 + Oct 13 08:16:49.659: INFO: Waiting for pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c to disappear + Oct 13 08:16:49.662: INFO: Pod pod-secrets-7ec7d7e6-2a60-43d8-beae-a81e5376f15c no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9619" for this suite. 
10/13/23 08:16:49.666 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:49.672 +Oct 13 08:16:49.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:16:49.673 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:49.686 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:49.689 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +Oct 13 08:16:49.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4786 version' +Oct 13 08:16:49.753: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" +Oct 13 08:16:49.753: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.5\", GitCommit:\"890a139214b4de1f01543d15003b5bda71aae9c7\", GitTreeState:\"clean\", BuildDate:\"2023-05-17T14:14:46Z\", GoVersion:\"go1.19.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.5\", GitCommit:\"890a139214b4de1f01543d15003b5bda71aae9c7\", GitTreeState:\"clean\", BuildDate:\"2023-05-17T14:08:49Z\", GoVersion:\"go1.19.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:49.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-4786" for this suite. 
10/13/23 08:16:49.757 +------------------------------ +• [0.090 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl version + test/e2e/kubectl/kubectl.go:1679 + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:49.672 + Oct 13 08:16:49.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:16:49.673 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:49.686 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:49.689 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 + Oct 13 08:16:49.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4786 version' + Oct 13 08:16:49.753: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" + Oct 13 08:16:49.753: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.5\", GitCommit:\"890a139214b4de1f01543d15003b5bda71aae9c7\", GitTreeState:\"clean\", BuildDate:\"2023-05-17T14:14:46Z\", GoVersion:\"go1.19.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.5\", GitCommit:\"890a139214b4de1f01543d15003b5bda71aae9c7\", GitTreeState:\"clean\", BuildDate:\"2023-05-17T14:08:49Z\", GoVersion:\"go1.19.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:49.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-4786" for this suite. 10/13/23 08:16:49.757 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:49.763 +Oct 13 08:16:49.763: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:16:49.763 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:49.778 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:49.78 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 
10/13/23 08:16:49.786 +Oct 13 08:16:49.793: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-6807" to be "running and ready" +Oct 13 08:16:49.797: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.066008ms +Oct 13 08:16:49.797: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:16:51.800: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006451107s +Oct 13 08:16:51.800: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Oct 13 08:16:51.800: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +STEP: create the pod with lifecycle hook 10/13/23 08:16:51.803 +Oct 13 08:16:51.811: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-6807" to be "running and ready" +Oct 13 08:16:51.814: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.834058ms +Oct 13 08:16:51.814: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:16:53.817: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.006611151s +Oct 13 08:16:53.817: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) +Oct 13 08:16:53.817: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" +STEP: check poststart hook 10/13/23 08:16:53.82 +STEP: delete the pod with lifecycle hook 10/13/23 08:16:53.825 +Oct 13 08:16:53.830: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 13 08:16:53.833: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 13 08:16:55.834: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 13 08:16:55.839: INFO: Pod pod-with-poststart-exec-hook still exists +Oct 13 08:16:57.834: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Oct 13 08:16:57.839: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Oct 13 08:16:57.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-6807" for this suite. 
10/13/23 08:16:57.844 +------------------------------ +• [SLOW TEST] [8.088 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:49.763 + Oct 13 08:16:49.763: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:16:49.763 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:49.778 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:49.78 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 10/13/23 08:16:49.786 + Oct 13 08:16:49.793: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-6807" to be "running and ready" + Oct 13 08:16:49.797: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.066008ms + Oct 13 08:16:49.797: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:16:51.800: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006451107s + Oct 13 08:16:51.800: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Oct 13 08:16:51.800: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 + STEP: create the pod with lifecycle hook 10/13/23 08:16:51.803 + Oct 13 08:16:51.811: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-6807" to be "running and ready" + Oct 13 08:16:51.814: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.834058ms + Oct 13 08:16:51.814: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:16:53.817: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006611151s + Oct 13 08:16:53.817: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) + Oct 13 08:16:53.817: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" + STEP: check poststart hook 10/13/23 08:16:53.82 + STEP: delete the pod with lifecycle hook 10/13/23 08:16:53.825 + Oct 13 08:16:53.830: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Oct 13 08:16:53.833: INFO: Pod pod-with-poststart-exec-hook still exists + Oct 13 08:16:55.834: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Oct 13 08:16:55.839: INFO: Pod pod-with-poststart-exec-hook still exists + Oct 13 08:16:57.834: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Oct 13 08:16:57.839: INFO: Pod pod-with-poststart-exec-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Oct 13 08:16:57.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-6807" for this suite. 10/13/23 08:16:57.844 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:16:57.851 +Oct 13 08:16:57.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:16:57.853 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:57.867 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:57.87 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +STEP: Creating a pod to test emptydir 0644 on node default medium 10/13/23 08:16:57.872 +Oct 13 08:16:57.879: INFO: Waiting up to 5m0s for pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a" in namespace "emptydir-8162" to be "Succeeded or Failed" +Oct 13 08:16:57.881: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722286ms +Oct 13 08:16:59.887: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008730505s +Oct 13 08:17:01.886: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006892915s +STEP: Saw pod success 10/13/23 08:17:01.886 +Oct 13 08:17:01.886: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a" satisfied condition "Succeeded or Failed" +Oct 13 08:17:01.889: INFO: Trying to get logs from node node1 pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a container test-container: +STEP: delete the pod 10/13/23 08:17:01.894 +Oct 13 08:17:01.907: INFO: Waiting for pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a to disappear +Oct 13 08:17:01.915: INFO: Pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:17:01.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-8162" for this suite. 10/13/23 08:17:01.92 +------------------------------ +• [4.074 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:16:57.851 + Oct 13 08:16:57.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:16:57.853 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:16:57.867 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:16:57.87 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 + STEP: Creating a pod to test emptydir 0644 on node default medium 10/13/23 08:16:57.872 + Oct 13 08:16:57.879: INFO: Waiting up to 5m0s for pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a" in namespace "emptydir-8162" to be "Succeeded or Failed" + Oct 13 08:16:57.881: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722286ms + Oct 13 08:16:59.887: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008730505s + Oct 13 08:17:01.886: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006892915s + STEP: Saw pod success 10/13/23 08:17:01.886 + Oct 13 08:17:01.886: INFO: Pod "pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a" satisfied condition "Succeeded or Failed" + Oct 13 08:17:01.889: INFO: Trying to get logs from node node1 pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a container test-container: + STEP: delete the pod 10/13/23 08:17:01.894 + Oct 13 08:17:01.907: INFO: Waiting for pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a to disappear + Oct 13 08:17:01.915: INFO: Pod pod-5501a8f1-cdcb-4ae6-8deb-49f776b8742a no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:17:01.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-8162" for this suite. 10/13/23 08:17:01.92 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:17:01.927 +Oct 13 08:17:01.927: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename subpath 10/13/23 08:17:01.928 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:01.943 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:01.945 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 10/13/23 08:17:01.948 +[It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +STEP: Creating pod pod-subpath-test-downwardapi-zk4v 10/13/23 08:17:01.957 +STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:17:01.957 +Oct 13 08:17:01.964: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zk4v" in namespace "subpath-307" to be "Succeeded or Failed" +Oct 13 08:17:01.968: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.597743ms +Oct 13 08:17:03.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 2.009354891s +Oct 13 08:17:05.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 4.009552511s +Oct 13 08:17:07.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 6.008239012s +Oct 13 08:17:09.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 8.008290329s +Oct 13 08:17:11.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 10.008107174s +Oct 13 08:17:13.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 12.009300224s +Oct 13 08:17:15.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.009055579s +Oct 13 08:17:17.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 16.009775455s +Oct 13 08:17:19.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 18.008138125s +Oct 13 08:17:21.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 20.008948058s +Oct 13 08:17:23.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=false. Elapsed: 22.009290337s +Oct 13 08:17:25.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009538741s +STEP: Saw pod success 10/13/23 08:17:25.974 +Oct 13 08:17:25.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v" satisfied condition "Succeeded or Failed" +Oct 13 08:17:25.978: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-zk4v container test-container-subpath-downwardapi-zk4v: +STEP: delete the pod 10/13/23 08:17:25.988 +Oct 13 08:17:26.002: INFO: Waiting for pod pod-subpath-test-downwardapi-zk4v to disappear +Oct 13 08:17:26.005: INFO: Pod pod-subpath-test-downwardapi-zk4v no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-zk4v 10/13/23 08:17:26.005 +Oct 13 08:17:26.005: INFO: Deleting pod "pod-subpath-test-downwardapi-zk4v" in namespace "subpath-307" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Oct 13 08:17:26.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-307" for this suite. 10/13/23 08:17:26.012 +------------------------------ +• [SLOW TEST] [24.091 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:17:01.927 + Oct 13 08:17:01.927: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename subpath 10/13/23 08:17:01.928 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:01.943 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:01.945 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 10/13/23 08:17:01.948 + [It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + STEP: Creating pod pod-subpath-test-downwardapi-zk4v 10/13/23 08:17:01.957 + STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:17:01.957 + Oct 13 08:17:01.964: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zk4v" in namespace "subpath-307" to be "Succeeded or Failed" + Oct 13 08:17:01.968: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.597743ms + Oct 13 08:17:03.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009354891s + Oct 13 08:17:05.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 4.009552511s + Oct 13 08:17:07.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 6.008239012s + Oct 13 08:17:09.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 8.008290329s + Oct 13 08:17:11.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 10.008107174s + Oct 13 08:17:13.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 12.009300224s + Oct 13 08:17:15.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 14.009055579s + Oct 13 08:17:17.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 16.009775455s + Oct 13 08:17:19.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 18.008138125s + Oct 13 08:17:21.973: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=true. Elapsed: 20.008948058s + Oct 13 08:17:23.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Running", Reason="", readiness=false. Elapsed: 22.009290337s + Oct 13 08:17:25.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009538741s + STEP: Saw pod success 10/13/23 08:17:25.974 + Oct 13 08:17:25.974: INFO: Pod "pod-subpath-test-downwardapi-zk4v" satisfied condition "Succeeded or Failed" + Oct 13 08:17:25.978: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-zk4v container test-container-subpath-downwardapi-zk4v: + STEP: delete the pod 10/13/23 08:17:25.988 + Oct 13 08:17:26.002: INFO: Waiting for pod pod-subpath-test-downwardapi-zk4v to disappear + Oct 13 08:17:26.005: INFO: Pod pod-subpath-test-downwardapi-zk4v no longer exists + STEP: Deleting pod pod-subpath-test-downwardapi-zk4v 10/13/23 08:17:26.005 + Oct 13 08:17:26.005: INFO: Deleting pod "pod-subpath-test-downwardapi-zk4v" in namespace "subpath-307" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Oct 13 08:17:26.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-307" for this suite. 
10/13/23 08:17:26.012 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:17:26.018 +Oct 13 08:17:26.018: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 08:17:26.019 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:26.036 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:26.039 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-7812 10/13/23 08:17:26.042 +[It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 +STEP: Looking for a node to schedule stateful set and pod 10/13/23 08:17:26.046 +STEP: Creating pod with conflicting port in namespace statefulset-7812 10/13/23 08:17:26.05 +STEP: Waiting until pod test-pod will start running in namespace statefulset-7812 10/13/23 08:17:26.057 +Oct 13 08:17:26.057: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-7812" to be "running" +Oct 13 08:17:26.060: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746279ms +Oct 13 08:17:28.066: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008375739s +Oct 13 08:17:28.066: INFO: Pod "test-pod" satisfied condition "running" +STEP: Creating statefulset with conflicting port in namespace statefulset-7812 10/13/23 08:17:28.066 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7812 10/13/23 08:17:28.072 +Oct 13 08:17:28.086: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Pending. Waiting for statefulset controller to delete. +Oct 13 08:17:28.098: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Failed. Waiting for statefulset controller to delete. +Oct 13 08:17:28.105: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Failed. Waiting for statefulset controller to delete. 
+Oct 13 08:17:28.109: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7812 +STEP: Removing pod with conflicting port in namespace statefulset-7812 10/13/23 08:17:28.109 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7812 and will be in running state 10/13/23 08:17:28.118 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 08:17:30.126: INFO: Deleting all statefulset in ns statefulset-7812 +Oct 13 08:17:30.129: INFO: Scaling statefulset ss to 0 +Oct 13 08:17:40.148: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 08:17:40.152: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:17:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-7812" for this suite. 10/13/23 08:17:40.168 +------------------------------ +• [SLOW TEST] [14.155 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:17:26.018 + Oct 13 08:17:26.018: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 08:17:26.019 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:26.036 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:26.039 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-7812 10/13/23 08:17:26.042 + [It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 + STEP: Looking for a node to schedule stateful set and pod 10/13/23 08:17:26.046 + STEP: Creating pod with conflicting port in namespace statefulset-7812 10/13/23 08:17:26.05 + STEP: Waiting until pod test-pod will start running in namespace statefulset-7812 10/13/23 08:17:26.057 + Oct 13 08:17:26.057: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-7812" to be "running" + Oct 13 08:17:26.060: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746279ms + Oct 13 08:17:28.066: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008375739s + Oct 13 08:17:28.066: INFO: Pod "test-pod" satisfied condition "running" + STEP: Creating statefulset with conflicting port in namespace statefulset-7812 10/13/23 08:17:28.066 + STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7812 10/13/23 08:17:28.072 + Oct 13 08:17:28.086: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Pending. 
Waiting for statefulset controller to delete. + Oct 13 08:17:28.098: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Failed. Waiting for statefulset controller to delete. + Oct 13 08:17:28.105: INFO: Observed stateful pod in namespace: statefulset-7812, name: ss-0, uid: 76fcb7ef-3e05-4cbe-b306-336e1f704d9c, status phase: Failed. Waiting for statefulset controller to delete. + Oct 13 08:17:28.109: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7812 + STEP: Removing pod with conflicting port in namespace statefulset-7812 10/13/23 08:17:28.109 + STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7812 and will be in running state 10/13/23 08:17:28.118 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 08:17:30.126: INFO: Deleting all statefulset in ns statefulset-7812 + Oct 13 08:17:30.129: INFO: Scaling statefulset ss to 0 + Oct 13 08:17:40.148: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 08:17:40.152: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:17:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-7812" for this suite. 10/13/23 08:17:40.168 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:17:40.175 +Oct 13 08:17:40.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:17:40.176 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:40.191 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:40.194 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:17:40.207 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:17:40.45 +STEP: Deploying the webhook pod 10/13/23 08:17:40.46 +STEP: Wait for the deployment to be ready 10/13/23 08:17:40.469 +Oct 13 08:17:40.480: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:17:42.49 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:17:42.499 +Oct 13 08:17:43.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 +STEP: fetching the /apis discovery document 10/13/23 08:17:43.503 +STEP: finding the admissionregistration.k8s.io 
API group in the /apis discovery document 10/13/23 08:17:43.505 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 10/13/23 08:17:43.505 +STEP: fetching the /apis/admissionregistration.k8s.io discovery document 10/13/23 08:17:43.505 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 10/13/23 08:17:43.506 +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 10/13/23 08:17:43.506 +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 10/13/23 08:17:43.508 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:17:43.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-4198" for this suite. 10/13/23 08:17:43.544 +STEP: Destroying namespace "webhook-4198-markers" for this suite. 10/13/23 08:17:43.55 +------------------------------ +• [3.381 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:17:40.175 + Oct 13 08:17:40.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:17:40.176 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:40.191 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:40.194 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:17:40.207 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:17:40.45 + STEP: Deploying the webhook pod 10/13/23 08:17:40.46 + STEP: Wait for the deployment to be ready 10/13/23 08:17:40.469 + Oct 13 08:17:40.480: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:17:42.49 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:17:42.499 + Oct 13 08:17:43.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 + STEP: fetching the /apis discovery document 10/13/23 08:17:43.503 + STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 10/13/23 08:17:43.505 + 
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 10/13/23 08:17:43.505 + STEP: fetching the /apis/admissionregistration.k8s.io discovery document 10/13/23 08:17:43.505 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 10/13/23 08:17:43.506 + STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 10/13/23 08:17:43.506 + STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 10/13/23 08:17:43.508 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:17:43.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-4198" for this suite. 10/13/23 08:17:43.544 + STEP: Destroying namespace "webhook-4198-markers" for this suite. 10/13/23 08:17:43.55 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:17:43.557 +Oct 13 08:17:43.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename init-container 10/13/23 08:17:43.558 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:43.58 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:43.583 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 +STEP: creating the pod 10/13/23 08:17:43.586 +Oct 13 08:17:43.586: INFO: PodSpec: initContainers in spec.initContainers +Oct 13 08:18:27.082: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dc3fe8be-5ff0-44bb-85d1-41e9b7614a28", GenerateName:"", Namespace:"init-container-4276", SelfLink:"", UID:"5c25efe0-932c-4baa-b9fa-d051184c1b37", ResourceVersion:"11068", Generation:0, CreationTimestamp:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"586854653"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", 
APIVersion:"v1", Time:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001199d88), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.October, 13, 8, 18, 27, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001199db8), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-hvjlq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000d7a4a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0049bb8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f09ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049bb980)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049bb9a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0049bb9a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0049bb9ac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004ac6140), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with 
unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 43, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.253.8.111", PodIP:"10.244.1.27", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.27"}}, StartTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f09dc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f09e30)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"sha256:d59c675982d8692814ec9e1486d4c645cd86ad825ef33975a5db196cf2801592", ContainerID:"containerd://fa83f6e2cac562ed6854d58b2076b04d1a536ce1aa2dd1e7af84d69810f115b3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d7a520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d7a500), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc0049bba2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:18:27.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-4276" for this suite. 
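The dump above is the expected steady state for this spec: init1 keeps failing and is restarted in place (its RestartCount climbs under restartPolicy Always), so init2 stays Waiting and the app container run1 is never created. For readers reproducing the behavior outside the suite, a minimal pod along these lines shows the same thing (the pod name is illustrative; the images and commands match the dump):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: registry.k8s.io/e2e-test-images/busybox:1.29-4
    command: ["/bin/false"]   # exits non-zero, so init2 and run1 never start
  - name: init2
    image: registry.k8s.io/e2e-test-images/busybox:1.29-4
    command: ["/bin/true"]
  containers:
  - name: run1
    image: registry.k8s.io/pause:3.9
EOF
```

`kubectl get pod pod-init-demo` then stays at Init:0/2 (cycling through Init:Error and Init:CrashLoopBackOff) and the Pod phase remains Pending, exactly as asserted here.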
10/13/23 08:18:27.086 +------------------------------ +• [SLOW TEST] [43.535 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:17:43.557 + Oct 13 08:17:43.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename init-container 10/13/23 08:17:43.558 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:17:43.58 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:17:43.583 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:334 + STEP: creating the pod 10/13/23 08:17:43.586 + Oct 13 08:17:43.586: INFO: PodSpec: initContainers in spec.initContainers + Oct 13 08:18:27.082: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dc3fe8be-5ff0-44bb-85d1-41e9b7614a28", GenerateName:"", Namespace:"init-container-4276", SelfLink:"", UID:"5c25efe0-932c-4baa-b9fa-d051184c1b37", ResourceVersion:"11068", Generation:0, CreationTimestamp:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"586854653"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001199d88), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.October, 13, 8, 18, 27, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001199db8), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-hvjlq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000d7a4a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hvjlq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0049bb8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f09ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049bb980)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0049bb9a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0049bb9a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0049bb9ac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004ac6140), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.October, 13, 8, 17, 43, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.253.8.111", PodIP:"10.244.1.27", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.27"}}, StartTime:time.Date(2023, time.October, 13, 8, 17, 44, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f09dc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f09e30)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"sha256:d59c675982d8692814ec9e1486d4c645cd86ad825ef33975a5db196cf2801592", ContainerID:"containerd://fa83f6e2cac562ed6854d58b2076b04d1a536ce1aa2dd1e7af84d69810f115b3", 
Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d7a520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d7a500), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc0049bba2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:18:27.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-4276" for this suite. 10/13/23 08:18:27.086 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:18:27.094 +Oct 13 08:18:27.094: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:18:27.095 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:27.108 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:27.11 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:18:27.113 +Oct 13 08:18:27.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a" in namespace "projected-6206" to be "Succeeded or Failed" +Oct 13 08:18:27.123: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818955ms +Oct 13 08:18:29.129: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008038137s +Oct 13 08:18:31.128: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007785579s +STEP: Saw pod success 10/13/23 08:18:31.128 +Oct 13 08:18:31.129: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a" satisfied condition "Succeeded or Failed" +Oct 13 08:18:31.133: INFO: Trying to get logs from node node1 pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a container client-container: +STEP: delete the pod 10/13/23 08:18:31.14 +Oct 13 08:18:31.153: INFO: Waiting for pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a to disappear +Oct 13 08:18:31.156: INFO: Pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:18:31.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-6206" for this suite. 10/13/23 08:18:31.16 +------------------------------ +• [4.072 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:18:27.094 + Oct 13 08:18:27.094: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:18:27.095 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:27.108 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:27.11 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:221 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:18:27.113 + Oct 13 08:18:27.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a" in namespace "projected-6206" to be "Succeeded or Failed" + Oct 13 08:18:27.123: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818955ms + Oct 13 08:18:29.129: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008038137s + Oct 13 08:18:31.128: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007785579s + STEP: Saw pod success 10/13/23 08:18:31.128 + Oct 13 08:18:31.129: INFO: Pod "downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a" satisfied condition "Succeeded or Failed" + Oct 13 08:18:31.133: INFO: Trying to get logs from node node1 pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a container client-container: + STEP: delete the pod 10/13/23 08:18:31.14 + Oct 13 08:18:31.153: INFO: Waiting for pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a to disappear + Oct 13 08:18:31.156: INFO: Pod downwardapi-volume-91564299-a742-426f-b48f-bacf4b485d3a no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:18:31.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-6206" for this suite. 10/13/23 08:18:31.16 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:18:31.167 +Oct 13 08:18:31.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:18:31.168 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:31.18 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:31.183 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +STEP: create deployment with httpd image 10/13/23 08:18:31.187 +Oct 13 08:18:31.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 create -f -' +Oct 13 08:18:31.981: INFO: stderr: "" +Oct 13 08:18:31.981: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image 10/13/23 08:18:31.981 +Oct 13 08:18:31.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 diff -f -' +Oct 13 08:18:32.212: INFO: rc: 1 +Oct 13 08:18:32.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 delete -f -' +Oct 13 08:18:32.293: INFO: stderr: "" +Oct 13 08:18:32.293: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:18:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-6884" for this suite. 
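The rc: 1 above is the signal this spec looks for: kubectl diff exits 0 when the live and declared objects match, 1 when it finds a difference, and greater than 1 on a real error. What the spec does is roughly equivalent to the following sketch (image tags are illustrative):

```
kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
# Re-declare the deployment with a different image and diff it against the live object.
kubectl get deployment httpd-deployment -o yaml \
  | sed 's/httpd:2.4.38-alpine/httpd:2.4.39-alpine/' \
  | kubectl diff -f -
echo "kubectl diff exit code: $?"   # 1 here means "difference found", not failure
```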
10/13/23 08:18:32.298 +------------------------------ +• [1.138 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl diff + test/e2e/kubectl/kubectl.go:925 + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:18:31.167 + Oct 13 08:18:31.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:18:31.168 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:31.18 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:31.183 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 + STEP: create deployment with httpd image 10/13/23 08:18:31.187 + Oct 13 08:18:31.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 create -f -' + Oct 13 08:18:31.981: INFO: stderr: "" + Oct 13 08:18:31.981: INFO: stdout: "deployment.apps/httpd-deployment created\n" + STEP: verify diff finds difference between live and declared image 10/13/23 08:18:31.981 + Oct 13 08:18:31.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 diff -f -' + Oct 13 08:18:32.212: INFO: rc: 1 + Oct 13 08:18:32.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-6884 delete -f -' + Oct 13 08:18:32.293: INFO: stderr: "" + Oct 13 08:18:32.293: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:18:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-6884" for this suite. 
10/13/23 08:18:32.298 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:18:32.306 +Oct 13 08:18:32.306: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename taint-single-pod 10/13/23 08:18:32.307 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:32.323 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:32.326 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 +Oct 13 08:18:32.328: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 08:19:32.358: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +Oct 13 08:19:32.362: INFO: Starting informer... +STEP: Starting pod... 10/13/23 08:19:32.362 +Oct 13 08:19:32.575: INFO: Pod is running on node2. Tainting Node +STEP: Trying to apply a taint on the Node 10/13/23 08:19:32.575 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 08:19:32.586 +STEP: Waiting short time to make sure Pod is queued for deletion 10/13/23 08:19:32.589 +Oct 13 08:19:32.589: INFO: Pod wasn't evicted. Proceeding +Oct 13 08:19:32.589: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 08:19:32.601 +STEP: Waiting some time to make sure that toleration time passed. 10/13/23 08:19:32.605 +Oct 13 08:20:47.605: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:20:47.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "taint-single-pod-810" for this suite. 
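This spec exercises the NoExecute taint manager end to end: tainting the node queues the test pod for eviction (its toleration carries a tolerationSeconds window), and removing the taint before that window elapses cancels the pending eviction, hence "Pod wasn't evicted. Test successful". The taint half of the dance is the same as doing it by hand, with the node name and taint key taken from the log above:

```
# Apply the NoExecute taint: pods without a matching toleration are evicted immediately;
# pods tolerating it with tolerationSeconds are only queued for later eviction.
kubectl taint nodes node2 kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
# The trailing '-' removes the taint, which cancels any eviction still queued.
kubectl taint nodes node2 kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute-
```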
10/13/23 08:20:47.611 +------------------------------ +• [SLOW TEST] [135.314 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:18:32.306 + Oct 13 08:18:32.306: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename taint-single-pod 10/13/23 08:18:32.307 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:18:32.323 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:18:32.326 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 + Oct 13 08:18:32.328: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 08:19:32.358: INFO: Waiting for terminating namespaces to be deleted... + [It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 + Oct 13 08:19:32.362: INFO: Starting informer... + STEP: Starting pod... 10/13/23 08:19:32.362 + Oct 13 08:19:32.575: INFO: Pod is running on node2. Tainting Node + STEP: Trying to apply a taint on the Node 10/13/23 08:19:32.575 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 08:19:32.586 + STEP: Waiting short time to make sure Pod is queued for deletion 10/13/23 08:19:32.589 + Oct 13 08:19:32.589: INFO: Pod wasn't evicted. Proceeding + Oct 13 08:19:32.589: INFO: Removing taint from Node + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 08:19:32.601 + STEP: Waiting some time to make sure that toleration time passed. 10/13/23 08:19:32.605 + Oct 13 08:20:47.605: INFO: Pod wasn't evicted. Test successful + [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:20:47.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "taint-single-pod-810" for this suite. 
10/13/23 08:20:47.611 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:20:47.62 +Oct 13 08:20:47.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir-wrapper 10/13/23 08:20:47.621 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:20:47.639 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:20:47.642 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +STEP: Creating 50 configmaps 10/13/23 08:20:47.644 +STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:20:47.878 +Oct 13 08:20:47.978: INFO: Pod name wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213: Found 3 pods out of 5 +Oct 13 08:20:52.987: INFO: Pod name wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213: Found 5 pods out of 5 +STEP: Ensuring each pod is running 10/13/23 08:20:52.987 +Oct 13 08:20:52.987: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:20:52.991: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700327ms +Oct 13 08:20:54.999: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012071065s +Oct 13 08:20:56.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007806479s +Oct 13 08:20:58.996: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00862368s +Oct 13 08:21:00.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008176778s +Oct 13 08:21:02.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Running", Reason="", readiness=true. Elapsed: 10.008233423s +Oct 13 08:21:02.996: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn" satisfied condition "running" +Oct 13 08:21:02.996: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:02.998: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49": Phase="Running", Reason="", readiness=true. Elapsed: 2.750483ms +Oct 13 08:21:02.998: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49" satisfied condition "running" +Oct 13 08:21:02.998: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:03.001: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.52717ms +Oct 13 08:21:03.001: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x" satisfied condition "running" +Oct 13 08:21:03.001: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:03.004: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp": Phase="Running", Reason="", readiness=true. Elapsed: 2.719409ms +Oct 13 08:21:03.004: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp" satisfied condition "running" +Oct 13 08:21:03.004: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:03.006: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6": Phase="Running", Reason="", readiness=true. Elapsed: 2.530877ms +Oct 13 08:21:03.006: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:03.006 +Oct 13 08:21:03.065: INFO: Deleting ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 took: 5.357603ms +Oct 13 08:21:03.166: INFO: Terminating ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 pods took: 100.637484ms +STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:21:05.67 +Oct 13 08:21:05.682: INFO: Pod name wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8: Found 0 pods out of 5 +Oct 13 08:21:10.692: INFO: Pod name wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8: Found 5 pods out of 5 +STEP: Ensuring each pod is running 10/13/23 08:21:10.692 +Oct 13 08:21:10.692: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:10.696: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.81066ms +Oct 13 08:21:12.700: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008409276s +Oct 13 08:21:14.700: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00835967s +Oct 13 08:21:16.701: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008685571s +Oct 13 08:21:18.701: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008815662s +Oct 13 08:21:20.704: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Running", Reason="", readiness=true. Elapsed: 10.011757628s +Oct 13 08:21:20.704: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6" satisfied condition "running" +Oct 13 08:21:20.704: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:20.708: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.208921ms +Oct 13 08:21:22.716: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw": Phase="Running", Reason="", readiness=true. Elapsed: 2.01175476s +Oct 13 08:21:22.716: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw" satisfied condition "running" +Oct 13 08:21:22.716: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:22.721: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z": Phase="Running", Reason="", readiness=true. Elapsed: 5.310071ms +Oct 13 08:21:22.721: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z" satisfied condition "running" +Oct 13 08:21:22.721: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:22.725: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b": Phase="Running", Reason="", readiness=true. Elapsed: 4.276045ms +Oct 13 08:21:22.725: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b" satisfied condition "running" +Oct 13 08:21:22.725: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:22.730: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd": Phase="Running", Reason="", readiness=true. Elapsed: 4.428727ms +Oct 13 08:21:22.730: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:22.73 +Oct 13 08:21:22.792: INFO: Deleting ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 took: 7.293071ms +Oct 13 08:21:22.893: INFO: Terminating ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 pods took: 100.133689ms +STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:21:25.597 +Oct 13 08:21:25.609: INFO: Pod name wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39: Found 0 pods out of 5 +Oct 13 08:21:30.620: INFO: Pod name wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39: Found 5 pods out of 5 +STEP: Ensuring each pod is running 10/13/23 08:21:30.62 +Oct 13 08:21:30.620: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:30.623: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625121ms +Oct 13 08:21:32.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010576505s +Oct 13 08:21:34.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01032274s +Oct 13 08:21:36.628: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007896966s +Oct 13 08:21:38.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.010207127s +Oct 13 08:21:40.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Running", Reason="", readiness=true. Elapsed: 10.010086985s +Oct 13 08:21:40.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4" satisfied condition "running" +Oct 13 08:21:40.630: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:40.635: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d": Phase="Running", Reason="", readiness=true. Elapsed: 5.015866ms +Oct 13 08:21:40.635: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d" satisfied condition "running" +Oct 13 08:21:40.635: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:40.639: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs": Phase="Running", Reason="", readiness=true. Elapsed: 4.101757ms +Oct 13 08:21:40.639: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs" satisfied condition "running" +Oct 13 08:21:40.639: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:40.643: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5": Phase="Running", Reason="", readiness=true. Elapsed: 3.53216ms +Oct 13 08:21:40.643: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5" satisfied condition "running" +Oct 13 08:21:40.643: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x" in namespace "emptydir-wrapper-460" to be "running" +Oct 13 08:21:40.646: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x": Phase="Running", Reason="", readiness=true. Elapsed: 3.614181ms +Oct 13 08:21:40.646: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:40.647 +Oct 13 08:21:40.707: INFO: Deleting ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 took: 6.57005ms +Oct 13 08:21:40.808: INFO: Terminating ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 pods took: 100.189344ms +STEP: Cleaning up the configMaps 10/13/23 08:21:44.009 +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:21:44.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-wrapper-460" for this suite. 
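This spec guards against a historical race in concurrent volume setup: dozens of configmap volumes mounted into several pods at once while the ReplicationController is created and torn down three times over. The "Creating 50 configmaps" step at the top of the run is the only part with a stable shape; by hand it would look roughly like this (the names are illustrative, the upstream suite generates its own):

```
# 50 small configmaps for the race, mirroring the "Creating 50 configmaps" step above.
for i in $(seq 0 49); do
  kubectl create configmap "racey-configmap-${i}" --from-literal=data-1=value-1
done
```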
10/13/23 08:21:44.274 +------------------------------ +• [SLOW TEST] [56.658 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:20:47.62 + Oct 13 08:20:47.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir-wrapper 10/13/23 08:20:47.621 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:20:47.639 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:20:47.642 + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + STEP: Creating 50 configmaps 10/13/23 08:20:47.644 + STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:20:47.878 + Oct 13 08:20:47.978: INFO: Pod name wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213: Found 3 pods out of 5 + Oct 13 08:20:52.987: INFO: Pod name wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213: Found 5 pods out of 5 + STEP: Ensuring each pod is running 10/13/23 08:20:52.987 + Oct 13 08:20:52.987: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:20:52.991: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700327ms + Oct 13 08:20:54.999: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012071065s + Oct 13 08:20:56.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007806479s + Oct 13 08:20:58.996: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00862368s + Oct 13 08:21:00.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008176778s + Oct 13 08:21:02.995: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn": Phase="Running", Reason="", readiness=true. Elapsed: 10.008233423s + Oct 13 08:21:02.996: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-6sfcn" satisfied condition "running" + Oct 13 08:21:02.996: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:02.998: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.750483ms + Oct 13 08:21:02.998: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-7zn49" satisfied condition "running" + Oct 13 08:21:02.998: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:03.001: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x": Phase="Running", Reason="", readiness=true. Elapsed: 2.52717ms + Oct 13 08:21:03.001: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-ftk6x" satisfied condition "running" + Oct 13 08:21:03.001: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:03.004: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp": Phase="Running", Reason="", readiness=true. Elapsed: 2.719409ms + Oct 13 08:21:03.004: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-l4zjp" satisfied condition "running" + Oct 13 08:21:03.004: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:03.006: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6": Phase="Running", Reason="", readiness=true. Elapsed: 2.530877ms + Oct 13 08:21:03.006: INFO: Pod "wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213-v55g6" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:03.006 + Oct 13 08:21:03.065: INFO: Deleting ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 took: 5.357603ms + Oct 13 08:21:03.166: INFO: Terminating ReplicationController wrapped-volume-race-3495ed51-a8ee-4f94-939e-71ce3e9fd213 pods took: 100.637484ms + STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:21:05.67 + Oct 13 08:21:05.682: INFO: Pod name wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8: Found 0 pods out of 5 + Oct 13 08:21:10.692: INFO: Pod name wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8: Found 5 pods out of 5 + STEP: Ensuring each pod is running 10/13/23 08:21:10.692 + Oct 13 08:21:10.692: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:10.696: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.81066ms + Oct 13 08:21:12.700: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008409276s + Oct 13 08:21:14.700: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00835967s + Oct 13 08:21:16.701: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008685571s + Oct 13 08:21:18.701: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008815662s + Oct 13 08:21:20.704: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.011757628s + Oct 13 08:21:20.704: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-bqqt6" satisfied condition "running" + Oct 13 08:21:20.704: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:20.708: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208921ms + Oct 13 08:21:22.716: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw": Phase="Running", Reason="", readiness=true. Elapsed: 2.01175476s + Oct 13 08:21:22.716: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-jrvkw" satisfied condition "running" + Oct 13 08:21:22.716: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:22.721: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z": Phase="Running", Reason="", readiness=true. Elapsed: 5.310071ms + Oct 13 08:21:22.721: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-sdp6z" satisfied condition "running" + Oct 13 08:21:22.721: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:22.725: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b": Phase="Running", Reason="", readiness=true. Elapsed: 4.276045ms + Oct 13 08:21:22.725: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-vf89b" satisfied condition "running" + Oct 13 08:21:22.725: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:22.730: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd": Phase="Running", Reason="", readiness=true. Elapsed: 4.428727ms + Oct 13 08:21:22.730: INFO: Pod "wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8-xn5nd" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:22.73 + Oct 13 08:21:22.792: INFO: Deleting ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 took: 7.293071ms + Oct 13 08:21:22.893: INFO: Terminating ReplicationController wrapped-volume-race-b0efa78c-5be2-4abe-9d50-0b2e095eb8c8 pods took: 100.133689ms + STEP: Creating RC which spawns configmap-volume pods 10/13/23 08:21:25.597 + Oct 13 08:21:25.609: INFO: Pod name wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39: Found 0 pods out of 5 + Oct 13 08:21:30.620: INFO: Pod name wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39: Found 5 pods out of 5 + STEP: Ensuring each pod is running 10/13/23 08:21:30.62 + Oct 13 08:21:30.620: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:30.623: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625121ms + Oct 13 08:21:32.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010576505s + Oct 13 08:21:34.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01032274s + Oct 13 08:21:36.628: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007896966s + Oct 13 08:21:38.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010207127s + Oct 13 08:21:40.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4": Phase="Running", Reason="", readiness=true. Elapsed: 10.010086985s + Oct 13 08:21:40.630: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-ctgm4" satisfied condition "running" + Oct 13 08:21:40.630: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:40.635: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d": Phase="Running", Reason="", readiness=true. Elapsed: 5.015866ms + Oct 13 08:21:40.635: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-mv26d" satisfied condition "running" + Oct 13 08:21:40.635: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:40.639: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs": Phase="Running", Reason="", readiness=true. Elapsed: 4.101757ms + Oct 13 08:21:40.639: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nhhzs" satisfied condition "running" + Oct 13 08:21:40.639: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:40.643: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5": Phase="Running", Reason="", readiness=true. Elapsed: 3.53216ms + Oct 13 08:21:40.643: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-nmhx5" satisfied condition "running" + Oct 13 08:21:40.643: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x" in namespace "emptydir-wrapper-460" to be "running" + Oct 13 08:21:40.646: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.614181ms + Oct 13 08:21:40.646: INFO: Pod "wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39-v6j8x" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 in namespace emptydir-wrapper-460, will wait for the garbage collector to delete the pods 10/13/23 08:21:40.647 + Oct 13 08:21:40.707: INFO: Deleting ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 took: 6.57005ms + Oct 13 08:21:40.808: INFO: Terminating ReplicationController wrapped-volume-race-03430138-6889-42d6-bbb6-0a7caa31ca39 pods took: 100.189344ms + STEP: Cleaning up the configMaps 10/13/23 08:21:44.009 + [AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:21:44.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-wrapper-460" for this suite. 10/13/23 08:21:44.274 + << End Captured GinkgoWriter Output +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:21:44.278 +Oct 13 08:21:44.279: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 08:21:44.279 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:44.295 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:44.298 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 +STEP: Counting existing ResourceQuota 10/13/23 08:21:44.3 +STEP: Creating a ResourceQuota 10/13/23 08:21:49.306 +STEP: Ensuring resource quota status is calculated 10/13/23 08:21:49.313 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 08:21:51.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-5252" for this suite. 10/13/23 08:21:51.319 +------------------------------ +• [SLOW TEST] [7.045 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:75 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:21:44.278 + Oct 13 08:21:44.279: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 08:21:44.279 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:44.295 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:44.298 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 + STEP: Counting existing ResourceQuota 10/13/23 08:21:44.3 + STEP: Creating a ResourceQuota 10/13/23 08:21:49.306 + STEP: Ensuring resource quota status is calculated 10/13/23 08:21:49.313 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 08:21:51.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-5252" for this suite. 10/13/23 08:21:51.319 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:21:51.324 +Oct 13 08:21:51.325: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:21:51.325 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:51.338 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:51.34 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:21:51.351 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:21:51.809 +STEP: Deploying the webhook pod 10/13/23 08:21:51.819 +STEP: Wait for the deployment to be ready 10/13/23 08:21:51.831 +Oct 13 08:21:51.837: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 08:21:53.845 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:21:53.854 +Oct 13 08:21:54.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 +Oct 13 08:21:54.859: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3506-crds.webhook.example.com via the 
AdmissionRegistration API 10/13/23 08:21:55.373 +STEP: Creating a custom resource while v1 is storage version 10/13/23 08:21:55.388 +STEP: Patching Custom Resource Definition to set v2 as storage 10/13/23 08:21:57.436 +STEP: Patching the custom resource while v2 is storage version 10/13/23 08:21:57.449 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:21:58.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-5692" for this suite. 10/13/23 08:21:58.05 +STEP: Destroying namespace "webhook-5692-markers" for this suite. 10/13/23 08:21:58.057 +------------------------------ +• [SLOW TEST] [6.740 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:21:51.324 + Oct 13 08:21:51.325: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:21:51.325 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:51.338 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:51.34 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:21:51.351 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:21:51.809 + STEP: Deploying the webhook pod 10/13/23 08:21:51.819 + STEP: Wait for the deployment to be ready 10/13/23 08:21:51.831 + Oct 13 08:21:51.837: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 08:21:53.845 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:21:53.854 + Oct 13 08:21:54.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:323 + Oct 13 08:21:54.859: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3506-crds.webhook.example.com via the AdmissionRegistration API 10/13/23 08:21:55.373 + STEP: Creating a custom resource while v1 is storage version 10/13/23 08:21:55.388 + STEP: Patching Custom Resource Definition to set v2 as storage 10/13/23 08:21:57.436 + STEP: Patching the custom resource while v2 is storage version 10/13/23 08:21:57.449 + [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:21:58.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-5692" for this suite. 10/13/23 08:21:58.05 + STEP: Destroying namespace "webhook-5692-markers" for this suite. 10/13/23 08:21:58.057 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[BeforeEach] [sig-node] Lease + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:21:58.068 +Oct 13 08:21:58.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename lease-test 10/13/23 08:21:58.069 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:58.086 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:58.089 +[BeforeEach] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:31 +[It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[AfterEach] [sig-node] Lease + test/e2e/framework/node/init/init.go:32 +Oct 13 08:21:58.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Lease + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Lease + tear down framework | framework.go:193 +STEP: Destroying namespace "lease-test-2198" for this suite. 
10/13/23 08:21:58.154 +------------------------------ +• [0.091 seconds] +[sig-node] Lease +test/e2e/common/node/framework.go:23 + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Lease + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:21:58.068 + Oct 13 08:21:58.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename lease-test 10/13/23 08:21:58.069 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:58.086 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:58.089 + [BeforeEach] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:31 + [It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + [AfterEach] [sig-node] Lease + test/e2e/framework/node/init/init.go:32 + Oct 13 08:21:58.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Lease + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Lease + tear down framework | framework.go:193 + STEP: Destroying namespace "lease-test-2198" for this suite. 10/13/23 08:21:58.154 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:21:58.159 +Oct 13 08:21:58.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename watch 10/13/23 08:21:58.16 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:58.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:58.174 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +STEP: creating a watch on configmaps with a certain label 10/13/23 08:21:58.177 +STEP: creating a new configmap 10/13/23 08:21:58.178 +STEP: modifying the configmap once 10/13/23 08:21:58.182 +STEP: changing the label value of the configmap 10/13/23 08:21:58.188 +STEP: Expecting to observe a delete notification for the watched object 10/13/23 08:21:58.194 +Oct 13 08:21:58.194: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12563 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:21:58.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12564 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:21:58.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12565 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time 10/13/23 08:21:58.195 +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 10/13/23 08:21:58.202 +STEP: changing the label value of the configmap back 10/13/23 08:22:08.203 +STEP: modifying the configmap a third time 10/13/23 08:22:08.213 +STEP: deleting the configmap 10/13/23 08:22:08.221 +STEP: Expecting to observe an add notification for the watched object when the label value was restored 10/13/23 08:22:08.229 +Oct 13 08:22:08.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12605 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:22:08.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12606 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:22:08.229: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12607 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-1986" for this suite. 
10/13/23 08:22:08.233 +------------------------------ +• [SLOW TEST] [10.080 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:21:58.159 + Oct 13 08:21:58.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename watch 10/13/23 08:21:58.16 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:21:58.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:21:58.174 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + STEP: creating a watch on configmaps with a certain label 10/13/23 08:21:58.177 + STEP: creating a new configmap 10/13/23 08:21:58.178 + STEP: modifying the configmap once 10/13/23 08:21:58.182 + STEP: changing the label value of the configmap 10/13/23 08:21:58.188 + STEP: Expecting to observe a delete notification for the watched object 10/13/23 08:21:58.194 + Oct 13 08:21:58.194: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12563 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:21:58.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12564 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:21:58.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12565 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:21:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time 10/13/23 08:21:58.195 + STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 10/13/23 08:21:58.202 + STEP: changing the label value of the configmap back 10/13/23 08:22:08.203 + STEP: modifying the configmap a third time 10/13/23 08:22:08.213 + STEP: deleting the configmap 10/13/23 08:22:08.221 + STEP: Expecting to observe an add notification for the watched object when the label value was restored 10/13/23 08:22:08.229 + Oct 13 08:22:08.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12605 0 2023-10-13 08:21:58 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:22:08.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12606 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:22:08.229: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1986 96625999-872c-4cb6-8968-4496b8976a8d 12607 0 2023-10-13 08:21:58 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-10-13 08:22:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-1986" for this suite. 10/13/23 08:22:08.233 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:08.239 +Oct 13 08:22:08.239: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:22:08.24 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:08.254 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:08.257 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 +STEP: creating service endpoint-test2 in namespace services-3062 10/13/23 08:22:08.259 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[] 10/13/23 08:22:08.269 +Oct 13 08:22:08.272: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Oct 13 08:22:09.281: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3062 10/13/23 08:22:09.281 +Oct 13 08:22:09.288: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3062" to be "running and ready" +Oct 13 08:22:09.291: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.232486ms +Oct 13 08:22:09.291: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:22:11.296: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007911697s +Oct 13 08:22:11.296: INFO: The phase of Pod pod1 is Running (Ready = true) +Oct 13 08:22:11.296: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod1:[80]] 10/13/23 08:22:11.299 +Oct 13 08:22:11.309: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 10/13/23 08:22:11.309 +Oct 13 08:22:11.309: INFO: Creating new exec pod +Oct 13 08:22:11.319: INFO: Waiting up to 5m0s for pod "execpodnw62s" in namespace "services-3062" to be "running" +Oct 13 08:22:11.323: INFO: Pod "execpodnw62s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648859ms +Oct 13 08:22:13.327: INFO: Pod "execpodnw62s": Phase="Running", Reason="", readiness=true. Elapsed: 2.008264749s +Oct 13 08:22:13.327: INFO: Pod "execpodnw62s" satisfied condition "running" +Oct 13 08:22:14.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Oct 13 08:22:14.653: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:14.653: INFO: stdout: "" +Oct 13 08:22:14.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' +Oct 13 08:22:14.794: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:14.794: INFO: stdout: "" +STEP: Creating pod pod2 in namespace services-3062 10/13/23 08:22:14.794 +Oct 13 08:22:14.799: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3062" to be "running and ready" +Oct 13 08:22:14.802: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303682ms +Oct 13 08:22:14.802: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:22:16.808: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008922122s +Oct 13 08:22:16.808: INFO: The phase of Pod pod2 is Running (Ready = true) +Oct 13 08:22:16.808: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod1:[80] pod2:[80]] 10/13/23 08:22:16.812 +Oct 13 08:22:16.823: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 10/13/23 08:22:16.823 +Oct 13 08:22:17.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Oct 13 08:22:17.969: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:17.969: INFO: stdout: "" +Oct 13 08:22:17.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' +Oct 13 08:22:18.113: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:18.113: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-3062 10/13/23 08:22:18.113 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod2:[80]] 10/13/23 08:22:18.124 +Oct 13 08:22:20.144: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 10/13/23 08:22:20.144 +Oct 13 08:22:21.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Oct 13 08:22:21.305: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:21.305: INFO: stdout: "" +Oct 13 08:22:21.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' +Oct 13 08:22:21.445: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" +Oct 13 08:22:21.445: INFO: stdout: "" +STEP: Deleting pod pod2 in namespace services-3062 10/13/23 08:22:21.445 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[] 10/13/23 08:22:21.454 +Oct 13 08:22:22.468: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:22.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3062" for this suite. 
10/13/23 08:22:22.49 +------------------------------ +• [SLOW TEST] [14.257 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:08.239 + Oct 13 08:22:08.239: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:22:08.24 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:08.254 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:08.257 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 + STEP: creating service endpoint-test2 in namespace services-3062 10/13/23 08:22:08.259 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[] 10/13/23 08:22:08.269 + Oct 13 08:22:08.272: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found + Oct 13 08:22:09.281: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-3062 10/13/23 08:22:09.281 + Oct 13 08:22:09.288: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3062" to be "running and ready" + Oct 13 08:22:09.291: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232486ms + Oct 13 08:22:09.291: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:22:11.296: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007911697s + Oct 13 08:22:11.296: INFO: The phase of Pod pod1 is Running (Ready = true) + Oct 13 08:22:11.296: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod1:[80]] 10/13/23 08:22:11.299 + Oct 13 08:22:11.309: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod1:[80]] + STEP: Checking if the Service forwards traffic to pod1 10/13/23 08:22:11.309 + Oct 13 08:22:11.309: INFO: Creating new exec pod + Oct 13 08:22:11.319: INFO: Waiting up to 5m0s for pod "execpodnw62s" in namespace "services-3062" to be "running" + Oct 13 08:22:11.323: INFO: Pod "execpodnw62s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648859ms + Oct 13 08:22:13.327: INFO: Pod "execpodnw62s": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008264749s + Oct 13 08:22:13.327: INFO: Pod "execpodnw62s" satisfied condition "running" + Oct 13 08:22:14.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Oct 13 08:22:14.653: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:14.653: INFO: stdout: "" + Oct 13 08:22:14.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' + Oct 13 08:22:14.794: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:14.794: INFO: stdout: "" + STEP: Creating pod pod2 in namespace services-3062 10/13/23 08:22:14.794 + Oct 13 08:22:14.799: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3062" to be "running and ready" + Oct 13 08:22:14.802: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303682ms + Oct 13 08:22:14.802: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:22:16.808: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.008922122s + Oct 13 08:22:16.808: INFO: The phase of Pod pod2 is Running (Ready = true) + Oct 13 08:22:16.808: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod1:[80] pod2:[80]] 10/13/23 08:22:16.812 + Oct 13 08:22:16.823: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod1:[80] pod2:[80]] + STEP: Checking if the Service forwards traffic to pod1 and pod2 10/13/23 08:22:16.823 + Oct 13 08:22:17.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Oct 13 08:22:17.969: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:17.969: INFO: stdout: "" + Oct 13 08:22:17.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' + Oct 13 08:22:18.113: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:18.113: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-3062 10/13/23 08:22:18.113 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[pod2:[80]] 10/13/23 08:22:18.124 + Oct 13 08:22:20.144: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[pod2:[80]] + STEP: Checking if the Service forwards traffic to pod2 10/13/23 08:22:20.144 + Oct 13 08:22:21.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Oct 13 08:22:21.305: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:21.305: INFO: stdout: "" + Oct 13 08:22:21.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 
--namespace=services-3062 exec execpodnw62s -- /bin/sh -x -c nc -v -z -w 2 10.100.100.149 80' + Oct 13 08:22:21.445: INFO: stderr: "+ nc -v -z -w 2 10.100.100.149 80\nConnection to 10.100.100.149 80 port [tcp/http] succeeded!\n" + Oct 13 08:22:21.445: INFO: stdout: "" + STEP: Deleting pod pod2 in namespace services-3062 10/13/23 08:22:21.445 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3062 to expose endpoints map[] 10/13/23 08:22:21.454 + Oct 13 08:22:22.468: INFO: successfully validated that service endpoint-test2 in namespace services-3062 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:22.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3062" for this suite. 10/13/23 08:22:22.49 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:22.498 +Oct 13 08:22:22.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:22:22.499 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:22.517 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:22.521 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +Oct 13 08:22:22.524: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 08:22:24.471 +Oct 13 08:22:24.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 create -f -' +Oct 13 08:22:25.124: INFO: stderr: "" +Oct 13 08:22:25.124: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 13 08:22:25.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-7082-crds test-cr' +Oct 13 08:22:25.237: INFO: stderr: "" +Oct 13 08:22:25.237: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Oct 13 08:22:25.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 apply -f -' +Oct 13 08:22:25.481: INFO: stderr: "" +Oct 13 08:22:25.481: INFO: stdout: 
"e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Oct 13 08:22:25.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-7082-crds test-cr' +Oct 13 08:22:25.572: INFO: stderr: "" +Oct 13 08:22:25.572: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 10/13/23 08:22:25.572 +Oct 13 08:22:25.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 explain e2e-test-crd-publish-openapi-7082-crds' +Oct 13 08:22:26.085: INFO: stderr: "" +Oct 13 08:22:26.085: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7082-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-3944" for this suite. 10/13/23 08:22:28.025 +------------------------------ +• [SLOW TEST] [5.537 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:22.498 + Oct 13 08:22:22.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:22:22.499 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:22.517 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:22.521 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 + Oct 13 08:22:22.524: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 08:22:24.471 + Oct 13 08:22:24.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 create -f -' + Oct 13 08:22:25.124: INFO: stderr: "" + Oct 13 08:22:25.124: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Oct 13 08:22:25.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 
--namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-7082-crds test-cr' + Oct 13 08:22:25.237: INFO: stderr: "" + Oct 13 08:22:25.237: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + Oct 13 08:22:25.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 apply -f -' + Oct 13 08:22:25.481: INFO: stderr: "" + Oct 13 08:22:25.481: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Oct 13 08:22:25.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 --namespace=crd-publish-openapi-3944 delete e2e-test-crd-publish-openapi-7082-crds test-cr' + Oct 13 08:22:25.572: INFO: stderr: "" + Oct 13 08:22:25.572: INFO: stdout: "e2e-test-crd-publish-openapi-7082-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 10/13/23 08:22:25.572 + Oct 13 08:22:25.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-3944 explain e2e-test-crd-publish-openapi-7082-crds' + Oct 13 08:22:26.085: INFO: stderr: "" + Oct 13 08:22:26.085: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7082-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-3944" for this suite. 
10/13/23 08:22:28.025 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:28.036 +Oct 13 08:22:28.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-runtime 10/13/23 08:22:28.037 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:28.053 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:28.056 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +STEP: create the container 10/13/23 08:22:28.058 +STEP: wait for the container to reach Failed 10/13/23 08:22:28.064 +STEP: get the container status 10/13/23 08:22:32.086 +STEP: the container should be terminated 10/13/23 08:22:32.09 +STEP: the termination message should be set 10/13/23 08:22:32.09 +Oct 13 08:22:32.090: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 10/13/23 08:22:32.09 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-4620" for this suite. 
10/13/23 08:22:32.108 +------------------------------ +• [4.078 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:28.036 + Oct 13 08:22:28.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-runtime 10/13/23 08:22:28.037 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:28.053 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:28.056 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 + STEP: create the container 10/13/23 08:22:28.058 + STEP: wait for the container to reach Failed 10/13/23 08:22:28.064 + STEP: get the container status 10/13/23 08:22:32.086 + STEP: the container should be terminated 10/13/23 08:22:32.09 + STEP: the termination message should be set 10/13/23 08:22:32.09 + Oct 13 08:22:32.090: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 10/13/23 08:22:32.09 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-4620" for this suite. 
10/13/23 08:22:32.108 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:32.114 +Oct 13 08:22:32.114: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context-test 10/13/23 08:22:32.115 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:32.13 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:32.132 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +Oct 13 08:22:32.142: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3" in namespace "security-context-test-2753" to be "Succeeded or Failed" +Oct 13 08:22:32.147: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90008ms +Oct 13 08:22:34.152: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009872879s +Oct 13 08:22:36.153: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011162542s +Oct 13 08:22:36.153: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-2753" for this suite. 
10/13/23 08:22:36.158 +------------------------------ +• [4.050 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with readOnlyRootFilesystem + test/e2e/common/node/security_context.go:430 + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:32.114 + Oct 13 08:22:32.114: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context-test 10/13/23 08:22:32.115 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:32.13 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:32.132 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 + Oct 13 08:22:32.142: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3" in namespace "security-context-test-2753" to be "Succeeded or Failed" + Oct 13 08:22:32.147: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90008ms + Oct 13 08:22:34.152: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009872879s + Oct 13 08:22:36.153: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011162542s + Oct 13 08:22:36.153: INFO: Pod "busybox-readonly-false-8cd15368-33ce-4968-882e-800d719860f3" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-2753" for this suite. 
10/13/23 08:22:36.158 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:36.166 +Oct 13 08:22:36.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:22:36.168 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:36.183 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:36.186 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:22:36.188 +Oct 13 08:22:36.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf" in namespace "projected-1388" to be "Succeeded or Failed" +Oct 13 08:22:36.199: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923362ms +Oct 13 08:22:38.203: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007268587s +Oct 13 08:22:40.205: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008751799s +STEP: Saw pod success 10/13/23 08:22:40.205 +Oct 13 08:22:40.205: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf" satisfied condition "Succeeded or Failed" +Oct 13 08:22:40.209: INFO: Trying to get logs from node node2 pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf container client-container: +STEP: delete the pod 10/13/23 08:22:40.235 +Oct 13 08:22:40.247: INFO: Waiting for pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf to disappear +Oct 13 08:22:40.251: INFO: Pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:40.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1388" for this suite. 
10/13/23 08:22:40.255 +------------------------------ +• [4.096 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:36.166 + Oct 13 08:22:36.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:22:36.168 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:36.183 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:36.186 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:22:36.188 + Oct 13 08:22:36.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf" in namespace "projected-1388" to be "Succeeded or Failed" + Oct 13 08:22:36.199: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923362ms + Oct 13 08:22:38.203: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007268587s + Oct 13 08:22:40.205: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008751799s + STEP: Saw pod success 10/13/23 08:22:40.205 + Oct 13 08:22:40.205: INFO: Pod "downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf" satisfied condition "Succeeded or Failed" + Oct 13 08:22:40.209: INFO: Trying to get logs from node node2 pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf container client-container: + STEP: delete the pod 10/13/23 08:22:40.235 + Oct 13 08:22:40.247: INFO: Waiting for pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf to disappear + Oct 13 08:22:40.251: INFO: Pod downwardapi-volume-e435f1e2-bd93-4933-b62b-ac4f71e08bcf no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:40.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1388" for this suite. 
10/13/23 08:22:40.255 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:40.263 +Oct 13 08:22:40.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:22:40.264 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:40.279 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:40.281 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +STEP: Creating configMap with name configmap-test-volume-4ff3bf79-6be0-4127-ac89-85de955845b9 10/13/23 08:22:40.283 +STEP: Creating a pod to test consume configMaps 10/13/23 08:22:40.288 +Oct 13 08:22:40.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3" in namespace "configmap-8595" to be "Succeeded or Failed" +Oct 13 08:22:40.298: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.305865ms +Oct 13 08:22:42.302: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007363405s +Oct 13 08:22:44.305: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009873911s +STEP: Saw pod success 10/13/23 08:22:44.305 +Oct 13 08:22:44.305: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3" satisfied condition "Succeeded or Failed" +Oct 13 08:22:44.309: INFO: Trying to get logs from node node2 pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 container agnhost-container: +STEP: delete the pod 10/13/23 08:22:44.315 +Oct 13 08:22:44.324: INFO: Waiting for pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 to disappear +Oct 13 08:22:44.327: INFO: Pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:44.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-8595" for this suite. 
10/13/23 08:22:44.33 +------------------------------ +• [4.072 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:40.263 + Oct 13 08:22:40.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:22:40.264 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:40.279 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:40.281 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 + STEP: Creating configMap with name configmap-test-volume-4ff3bf79-6be0-4127-ac89-85de955845b9 10/13/23 08:22:40.283 + STEP: Creating a pod to test consume configMaps 10/13/23 08:22:40.288 + Oct 13 08:22:40.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3" in namespace "configmap-8595" to be "Succeeded or Failed" + Oct 13 08:22:40.298: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.305865ms + Oct 13 08:22:42.302: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007363405s + Oct 13 08:22:44.305: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009873911s + STEP: Saw pod success 10/13/23 08:22:44.305 + Oct 13 08:22:44.305: INFO: Pod "pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3" satisfied condition "Succeeded or Failed" + Oct 13 08:22:44.309: INFO: Trying to get logs from node node2 pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 container agnhost-container: + STEP: delete the pod 10/13/23 08:22:44.315 + Oct 13 08:22:44.324: INFO: Waiting for pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 to disappear + Oct 13 08:22:44.327: INFO: Pod pod-configmaps-2f7097bf-b22c-4f08-9963-d6b6958ca2f3 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:44.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-8595" for this suite. 
10/13/23 08:22:44.33 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:44.335 +Oct 13 08:22:44.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:22:44.336 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:44.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:44.352 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +STEP: Creating projection with secret that has name projected-secret-test-44a2c6a2-ae51-435f-8145-f0d233787e87 10/13/23 08:22:44.354 +STEP: Creating a pod to test consume secrets 10/13/23 08:22:44.358 +Oct 13 08:22:44.366: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429" in namespace "projected-8899" to be "Succeeded or Failed" +Oct 13 08:22:44.369: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912386ms +Oct 13 08:22:46.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008121729s +Oct 13 08:22:48.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007650917s +STEP: Saw pod success 10/13/23 08:22:48.374 +Oct 13 08:22:48.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429" satisfied condition "Succeeded or Failed" +Oct 13 08:22:48.377: INFO: Trying to get logs from node node2 pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 container projected-secret-volume-test: +STEP: delete the pod 10/13/23 08:22:48.383 +Oct 13 08:22:48.395: INFO: Waiting for pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 to disappear +Oct 13 08:22:48.398: INFO: Pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:48.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8899" for this suite. 
10/13/23 08:22:48.401 +------------------------------ +• [4.071 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:44.335 + Oct 13 08:22:44.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:22:44.336 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:44.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:44.352 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 + STEP: Creating projection with secret that has name projected-secret-test-44a2c6a2-ae51-435f-8145-f0d233787e87 10/13/23 08:22:44.354 + STEP: Creating a pod to test consume secrets 10/13/23 08:22:44.358 + Oct 13 08:22:44.366: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429" in namespace "projected-8899" to be "Succeeded or Failed" + Oct 13 08:22:44.369: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912386ms + Oct 13 08:22:46.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008121729s + Oct 13 08:22:48.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007650917s + STEP: Saw pod success 10/13/23 08:22:48.374 + Oct 13 08:22:48.374: INFO: Pod "pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429" satisfied condition "Succeeded or Failed" + Oct 13 08:22:48.377: INFO: Trying to get logs from node node2 pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 container projected-secret-volume-test: + STEP: delete the pod 10/13/23 08:22:48.383 + Oct 13 08:22:48.395: INFO: Waiting for pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 to disappear + Oct 13 08:22:48.398: INFO: Pod pod-projected-secrets-c1640d23-0c7f-47a4-90ce-882983344429 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:48.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8899" for this suite. 
10/13/23 08:22:48.401 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +[BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:48.408 +Oct 13 08:22:48.409: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename events 10/13/23 08:22:48.409 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:48.424 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:48.426 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +STEP: creating a test event 10/13/23 08:22:48.428 +STEP: listing events in all namespaces 10/13/23 08:22:48.434 +STEP: listing events in test namespace 10/13/23 08:22:48.442 +STEP: listing events with field selection filtering on source 10/13/23 08:22:48.445 +STEP: listing events with field selection filtering on reportingController 10/13/23 08:22:48.447 +STEP: getting the test event 10/13/23 08:22:48.45 +STEP: patching the test event 10/13/23 08:22:48.452 +STEP: getting the test event 10/13/23 08:22:48.458 +STEP: updating the test event 10/13/23 08:22:48.461 +STEP: getting the test event 10/13/23 08:22:48.466 +STEP: deleting the test event 10/13/23 08:22:48.469 +STEP: listing events in all namespaces 10/13/23 08:22:48.474 +STEP: listing events in test namespace 10/13/23 08:22:48.481 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:22:48.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 +STEP: Destroying namespace "events-1956" for this suite. 
10/13/23 08:22:48.486 +------------------------------ +• [0.082 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:48.408 + Oct 13 08:22:48.409: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename events 10/13/23 08:22:48.409 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:48.424 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:48.426 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + STEP: creating a test event 10/13/23 08:22:48.428 + STEP: listing events in all namespaces 10/13/23 08:22:48.434 + STEP: listing events in test namespace 10/13/23 08:22:48.442 + STEP: listing events with field selection filtering on source 10/13/23 08:22:48.445 + STEP: listing events with field selection filtering on reportingController 10/13/23 08:22:48.447 + STEP: getting the test event 10/13/23 08:22:48.45 + STEP: patching the test event 10/13/23 08:22:48.452 + STEP: getting the test event 10/13/23 08:22:48.458 + STEP: updating the test event 10/13/23 08:22:48.461 + STEP: getting the test event 10/13/23 08:22:48.466 + STEP: deleting the test event 10/13/23 08:22:48.469 + STEP: listing events in all namespaces 10/13/23 08:22:48.474 + STEP: listing events in test namespace 10/13/23 08:22:48.481 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:22:48.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 + STEP: Destroying namespace "events-1956" for this suite. 
10/13/23 08:22:48.486 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:22:48.491 +Oct 13 08:22:48.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-watch 10/13/23 08:22:48.492 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:48.506 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:48.508 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +Oct 13 08:22:48.510: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Creating first CR 10/13/23 08:22:51.07 +Oct 13 08:22:51.075: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:22:51Z]] name:name1 resourceVersion:12924 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR 10/13/23 08:23:01.078 +Oct 13 08:23:01.086: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:01Z]] name:name2 resourceVersion:12957 uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR 10/13/23 08:23:11.091 +Oct 13 08:23:11.100: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:11Z]] name:name1 resourceVersion:12977 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR 10/13/23 08:23:21.105 +Oct 13 08:23:21.116: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:21Z]] name:name2 resourceVersion:12997 
uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR 10/13/23 08:23:31.121 +Oct 13 08:23:31.135: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:11Z]] name:name1 resourceVersion:13017 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR 10/13/23 08:23:41.139 +Oct 13 08:23:41.149: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:21Z]] name:name2 resourceVersion:13037 uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:23:51.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-watch-9347" for this suite. 
10/13/23 08:23:51.676 +------------------------------ +• [SLOW TEST] [63.193 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + test/e2e/apimachinery/crd_watch.go:44 + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:22:48.491 + Oct 13 08:22:48.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-watch 10/13/23 08:22:48.492 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:22:48.506 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:22:48.508 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + Oct 13 08:22:48.510: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Creating first CR 10/13/23 08:22:51.07 + Oct 13 08:22:51.075: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:22:51Z]] name:name1 resourceVersion:12924 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Creating second CR 10/13/23 08:23:01.078 + Oct 13 08:23:01.086: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:01Z]] name:name2 resourceVersion:12957 uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying first CR 10/13/23 08:23:11.091 + Oct 13 08:23:11.100: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:11Z]] name:name1 resourceVersion:12977 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying second CR 10/13/23 08:23:21.105 + Oct 13 08:23:21.116: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] 
manager:e2e.test operation:Update time:2023-10-13T08:23:21Z]] name:name2 resourceVersion:12997 uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting first CR 10/13/23 08:23:31.121 + Oct 13 08:23:31.135: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:22:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:11Z]] name:name1 resourceVersion:13017 uid:df512e66-3619-4c4e-9a96-c1bdc823186d] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting second CR 10/13/23 08:23:41.139 + Oct 13 08:23:41.149: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-10-13T08:23:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-10-13T08:23:21Z]] name:name2 resourceVersion:13037 uid:bdc36221-37e9-46a3-a666-1560563fe25f] num:map[num1:9223372036854775807 num2:1000000]]} + [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:23:51.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-watch-9347" for this suite. 
10/13/23 08:23:51.676 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:23:51.685 +Oct 13 08:23:51.685: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:23:51.686 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:51.709 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:51.712 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:23:51.726 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:23:52.22 +STEP: Deploying the webhook pod 10/13/23 08:23:52.229 +STEP: Wait for the deployment to be ready 10/13/23 08:23:52.245 +Oct 13 08:23:52.252: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:23:54.266 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:23:54.277 +Oct 13 08:23:55.277: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +Oct 13 08:23:55.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-388-crds.webhook.example.com via the AdmissionRegistration API 10/13/23 08:23:55.794 +STEP: Creating a custom resource that should be mutated by the webhook 10/13/23 08:23:55.814 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:23:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-8151" for this suite. 10/13/23 08:23:58.436 +STEP: Destroying namespace "webhook-8151-markers" for this suite. 
10/13/23 08:23:58.443 +------------------------------ +• [SLOW TEST] [6.767 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:23:51.685 + Oct 13 08:23:51.685: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:23:51.686 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:51.709 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:51.712 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:23:51.726 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:23:52.22 + STEP: Deploying the webhook pod 10/13/23 08:23:52.229 + STEP: Wait for the deployment to be ready 10/13/23 08:23:52.245 + Oct 13 08:23:52.252: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:23:54.266 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:23:54.277 + Oct 13 08:23:55.277: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 + Oct 13 08:23:55.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-388-crds.webhook.example.com via the AdmissionRegistration API 10/13/23 08:23:55.794 + STEP: Creating a custom resource that should be mutated by the webhook 10/13/23 08:23:55.814 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:23:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-8151" for this suite. 10/13/23 08:23:58.436 + STEP: Destroying namespace "webhook-8151-markers" for this suite. 
10/13/23 08:23:58.443 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:23:58.454 +Oct 13 08:23:58.454: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 08:23:58.455 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:58.477 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:58.48 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +STEP: create the deployment 10/13/23 08:23:58.482 +STEP: Wait for the Deployment to create new ReplicaSet 10/13/23 08:23:58.488 +STEP: delete the deployment 10/13/23 08:23:59.003 +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 10/13/23 08:23:59.01 +STEP: Gathering metrics 10/13/23 08:23:59.528 +Oct 13 08:23:59.552: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 08:23:59.555: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 3.070864ms +Oct 13 08:23:59.555: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 08:23:59.555: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 08:23:59.634: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 08:23:59.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-8318" for this suite. 
10/13/23 08:23:59.64 +------------------------------ +• [1.192 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:23:58.454 + Oct 13 08:23:58.454: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 08:23:58.455 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:58.477 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:58.48 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + STEP: create the deployment 10/13/23 08:23:58.482 + STEP: Wait for the Deployment to create new ReplicaSet 10/13/23 08:23:58.488 + STEP: delete the deployment 10/13/23 08:23:59.003 + STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 10/13/23 08:23:59.01 + STEP: Gathering metrics 10/13/23 08:23:59.528 + Oct 13 08:23:59.552: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 08:23:59.555: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 3.070864ms + Oct 13 08:23:59.555: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 08:23:59.555: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 08:23:59.634: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 08:23:59.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-8318" for this suite. 
10/13/23 08:23:59.64 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:23:59.646 +Oct 13 08:23:59.646: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:23:59.647 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:59.67 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:59.673 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +STEP: Creating the pod 10/13/23 08:23:59.675 +Oct 13 08:23:59.683: INFO: Waiting up to 5m0s for pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" in namespace "downward-api-1382" to be "running and ready" +Oct 13 08:23:59.686: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855325ms +Oct 13 08:23:59.686: INFO: The phase of Pod labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:24:01.691: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008455965s +Oct 13 08:24:01.691: INFO: The phase of Pod labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9 is Running (Ready = true) +Oct 13 08:24:01.691: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" satisfied condition "running and ready" +Oct 13 08:24:02.217: INFO: Successfully updated pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:06.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-1382" for this suite. 
10/13/23 08:24:06.244 +------------------------------ +• [SLOW TEST] [6.604 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:23:59.646 + Oct 13 08:23:59.646: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:23:59.647 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:23:59.67 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:23:59.673 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 + STEP: Creating the pod 10/13/23 08:23:59.675 + Oct 13 08:23:59.683: INFO: Waiting up to 5m0s for pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" in namespace "downward-api-1382" to be "running and ready" + Oct 13 08:23:59.686: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855325ms + Oct 13 08:23:59.686: INFO: The phase of Pod labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:24:01.691: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008455965s + Oct 13 08:24:01.691: INFO: The phase of Pod labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9 is Running (Ready = true) + Oct 13 08:24:01.691: INFO: Pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" satisfied condition "running and ready" + Oct 13 08:24:02.217: INFO: Successfully updated pod "labelsupdatea0b8981e-2132-4e43-a663-5402a508d9b9" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:06.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-1382" for this suite. 
10/13/23 08:24:06.244 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:06.251 +Oct 13 08:24:06.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 08:24:06.252 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:06.267 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:06.27 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +Oct 13 08:24:06.279: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Oct 13 08:24:11.283: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 10/13/23 08:24:11.283 +Oct 13 08:24:11.283: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 10/13/23 08:24:11.296 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 08:24:11.308: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-4745 06ea7e17-3ce8-402c-8ddd-e31cdc840f74 13267 1 2023-10-13 08:24:11 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-10-13 08:24:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00586e2d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 13 08:24:11.312: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Oct 13 08:24:11.312: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Oct 13 08:24:11.312: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4745 bbb71a9a-6b7d-4ab7-91ab-87d301a562ec 13269 1 2023-10-13 08:24:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 06ea7e17-3ce8-402c-8ddd-e31cdc840f74 0xc005daf7a7 0xc005daf7a8}] [] [{e2e.test Update apps/v1 2023-10-13 08:24:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:24:07 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:24:11 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"06ea7e17-3ce8-402c-8ddd-e31cdc840f74\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005daf868 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:24:11.317: INFO: Pod "test-cleanup-controller-7vxrt" is available: +&Pod{ObjectMeta:{test-cleanup-controller-7vxrt test-cleanup-controller- deployment-4745 0d1a7110-c69c-4e76-84c7-a1ec2d7100b8 13254 0 2023-10-13 08:24:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller bbb71a9a-6b7d-4ab7-91ab-87d301a562ec 0xc003859b77 0xc003859b78}] [] [{kube-controller-manager Update v1 2023-10-13 08:24:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbb71a9a-6b7d-4ab7-91ab-87d301a562ec\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:24:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zklgj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zklgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,Seccomp
Profile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.40,StartTime:2023-10-13 08:24:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:24:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://865b98f8af810814d191b0ce161b8c881629efa97e44952e5e4547d8d284b036,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:11.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-4745" for this suite. 
10/13/23 08:24:11.322 +------------------------------ +• [SLOW TEST] [5.079 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:06.251 + Oct 13 08:24:06.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 08:24:06.252 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:06.267 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:06.27 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + Oct 13 08:24:06.279: INFO: Pod name cleanup-pod: Found 0 pods out of 1 + Oct 13 08:24:11.283: INFO: Pod name cleanup-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 10/13/23 08:24:11.283 + Oct 13 08:24:11.283: INFO: Creating deployment test-cleanup-deployment + STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 10/13/23 08:24:11.296 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 08:24:11.308: INFO: Deployment "test-cleanup-deployment": + &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4745 06ea7e17-3ce8-402c-8ddd-e31cdc840f74 13267 1 2023-10-13 08:24:11 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-10-13 08:24:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00586e2d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + + Oct 13 08:24:11.312: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. + Oct 13 08:24:11.312: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": + Oct 13 08:24:11.312: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4745 bbb71a9a-6b7d-4ab7-91ab-87d301a562ec 13269 1 2023-10-13 08:24:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 06ea7e17-3ce8-402c-8ddd-e31cdc840f74 0xc005daf7a7 0xc005daf7a8}] [] [{e2e.test Update apps/v1 2023-10-13 08:24:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:24:07 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:24:11 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"06ea7e17-3ce8-402c-8ddd-e31cdc840f74\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005daf868 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:24:11.317: INFO: Pod "test-cleanup-controller-7vxrt" is available: + &Pod{ObjectMeta:{test-cleanup-controller-7vxrt test-cleanup-controller- deployment-4745 0d1a7110-c69c-4e76-84c7-a1ec2d7100b8 13254 0 2023-10-13 08:24:06 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller bbb71a9a-6b7d-4ab7-91ab-87d301a562ec 0xc003859b77 0xc003859b78}] [] [{kube-controller-manager Update v1 2023-10-13 08:24:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbb71a9a-6b7d-4ab7-91ab-87d301a562ec\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:24:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zklgj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zklgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,Seccomp
Profile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:24:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.40,StartTime:2023-10-13 08:24:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:24:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://865b98f8af810814d191b0ce161b8c881629efa97e44952e5e4547d8d284b036,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:11.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-4745" for this suite. 
10/13/23 08:24:11.322 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:11.331 +Oct 13 08:24:11.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:24:11.333 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:11.353 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:11.356 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +STEP: creating all guestbook components 10/13/23 08:24:11.358 +Oct 13 08:24:11.358: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Oct 13 08:24:11.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:12.169: INFO: stderr: "" +Oct 13 08:24:12.169: INFO: stdout: "service/agnhost-replica created\n" +Oct 13 08:24:12.169: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Oct 13 08:24:12.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:12.857: INFO: stderr: "" +Oct 13 08:24:12.857: INFO: stdout: "service/agnhost-primary created\n" +Oct 13 08:24:12.857: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Oct 13 08:24:12.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:13.563: INFO: stderr: "" +Oct 13 08:24:13.563: INFO: stdout: "service/frontend created\n" +Oct 13 08:24:13.564: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Oct 13 08:24:13.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:13.776: INFO: stderr: "" +Oct 13 08:24:13.776: INFO: stdout: "deployment.apps/frontend created\n" +Oct 13 08:24:13.776: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 13 08:24:13.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:13.978: INFO: stderr: "" +Oct 13 08:24:13.978: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Oct 13 08:24:13.978: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Oct 13 08:24:13.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' +Oct 13 08:24:14.175: INFO: stderr: "" +Oct 13 08:24:14.175: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app 10/13/23 08:24:14.175 +Oct 13 08:24:14.175: INFO: Waiting for all frontend pods to be Running. +Oct 13 08:24:19.228: INFO: Waiting for frontend to serve content. +Oct 13 08:24:19.239: INFO: Trying to add a new entry to the guestbook. +Oct 13 08:24:19.250: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources 10/13/23 08:24:19.259 +Oct 13 08:24:19.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.363: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.363: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources 10/13/23 08:24:19.363 +Oct 13 08:24:19.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.466: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.466: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 10/13/23 08:24:19.466 +Oct 13 08:24:19.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.553: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.553: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources 10/13/23 08:24:19.553 +Oct 13 08:24:19.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.627: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.627: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources 10/13/23 08:24:19.627 +Oct 13 08:24:19.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.709: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.709: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 10/13/23 08:24:19.709 +Oct 13 08:24:19.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' +Oct 13 08:24:19.791: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:24:19.791: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-902" for this suite. 
10/13/23 08:24:19.797 +------------------------------ +• [SLOW TEST] [8.475 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Guestbook application + test/e2e/kubectl/kubectl.go:369 + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:11.331 + Oct 13 08:24:11.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:24:11.333 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:11.353 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:11.356 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 + STEP: creating all guestbook components 10/13/23 08:24:11.358 + Oct 13 08:24:11.358: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend + spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + + Oct 13 08:24:11.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:12.169: INFO: stderr: "" + Oct 13 08:24:12.169: INFO: stdout: "service/agnhost-replica created\n" + Oct 13 08:24:12.169: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend + spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + + Oct 13 08:24:12.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:12.857: INFO: stderr: "" + Oct 13 08:24:12.857: INFO: stdout: "service/agnhost-primary created\n" + Oct 13 08:24:12.857: INFO: apiVersion: v1 + kind: Service + metadata: + name: frontend + labels: + app: guestbook + tier: frontend + spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + + Oct 13 08:24:12.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:13.563: INFO: stderr: "" + Oct 13 08:24:13.563: INFO: stdout: "service/frontend created\n" + Oct 13 08:24:13.564: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + + Oct 13 08:24:13.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:13.776: INFO: stderr: "" + Oct 13 08:24:13.776: INFO: stdout: "deployment.apps/frontend created\n" + Oct 13 08:24:13.776: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-primary + spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Oct 13 08:24:13.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:13.978: INFO: stderr: "" + Oct 13 08:24:13.978: INFO: stdout: "deployment.apps/agnhost-primary created\n" + Oct 13 08:24:13.978: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-replica + spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Oct 13 08:24:13.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 create -f -' + Oct 13 08:24:14.175: INFO: stderr: "" + Oct 13 08:24:14.175: INFO: stdout: "deployment.apps/agnhost-replica created\n" + STEP: validating guestbook app 10/13/23 08:24:14.175 + Oct 13 08:24:14.175: INFO: Waiting for all frontend pods to be Running. + Oct 13 08:24:19.228: INFO: Waiting for frontend to serve content. + Oct 13 08:24:19.239: INFO: Trying to add a new entry to the guestbook. + Oct 13 08:24:19.250: INFO: Verifying that added entry can be retrieved. + STEP: using delete to clean up resources 10/13/23 08:24:19.259 + Oct 13 08:24:19.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.363: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.363: INFO: stdout: "service \"agnhost-replica\" force deleted\n" + STEP: using delete to clean up resources 10/13/23 08:24:19.363 + Oct 13 08:24:19.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.466: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.466: INFO: stdout: "service \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 10/13/23 08:24:19.466 + Oct 13 08:24:19.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.553: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.553: INFO: stdout: "service \"frontend\" force deleted\n" + STEP: using delete to clean up resources 10/13/23 08:24:19.553 + Oct 13 08:24:19.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.627: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.627: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" + STEP: using delete to clean up resources 10/13/23 08:24:19.627 + Oct 13 08:24:19.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.709: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.709: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 10/13/23 08:24:19.709 + Oct 13 08:24:19.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-902 delete --grace-period=0 --force -f -' + Oct 13 08:24:19.791: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:24:19.791: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-902" for this suite. 
10/13/23 08:24:19.797 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:19.806 +Oct 13 08:24:19.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:24:19.807 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:19.831 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:19.835 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:23.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-1153" for this suite. 10/13/23 08:24:23.859 +------------------------------ +• [4.058 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:19.806 + Oct 13 08:24:19.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:24:19.807 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:19.831 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:19.835 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:23.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-1153" for this suite. 
10/13/23 08:24:23.859 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:23.865 +Oct 13 08:24:23.865: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename subpath 10/13/23 08:24:23.867 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:23.885 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:23.888 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 10/13/23 08:24:23.89 +[It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +STEP: Creating pod pod-subpath-test-configmap-b54m 10/13/23 08:24:23.899 +STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:24:23.899 +Oct 13 08:24:23.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b54m" in namespace "subpath-1646" to be "Succeeded or Failed" +Oct 13 08:24:23.909: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439124ms +Oct 13 08:24:25.912: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 2.005999405s +Oct 13 08:24:27.917: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 4.010281471s +Oct 13 08:24:29.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 6.006582242s +Oct 13 08:24:31.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 8.006806624s +Oct 13 08:24:33.918: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 10.012113239s +Oct 13 08:24:35.916: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 12.009458219s +Oct 13 08:24:37.914: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 14.008011036s +Oct 13 08:24:39.916: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 16.009967794s +Oct 13 08:24:41.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 18.00709304s +Oct 13 08:24:43.917: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 20.010418102s +Oct 13 08:24:45.915: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=false. Elapsed: 22.008295463s +Oct 13 08:24:47.914: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.007466412s +STEP: Saw pod success 10/13/23 08:24:47.914 +Oct 13 08:24:47.914: INFO: Pod "pod-subpath-test-configmap-b54m" satisfied condition "Succeeded or Failed" +Oct 13 08:24:47.918: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-b54m container test-container-subpath-configmap-b54m: +STEP: delete the pod 10/13/23 08:24:47.927 +Oct 13 08:24:47.943: INFO: Waiting for pod pod-subpath-test-configmap-b54m to disappear +Oct 13 08:24:47.946: INFO: Pod pod-subpath-test-configmap-b54m no longer exists +STEP: Deleting pod pod-subpath-test-configmap-b54m 10/13/23 08:24:47.946 +Oct 13 08:24:47.946: INFO: Deleting pod "pod-subpath-test-configmap-b54m" in namespace "subpath-1646" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-1646" for this suite. 10/13/23 08:24:47.951 +------------------------------ +• [SLOW TEST] [24.091 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:23.865 + Oct 13 08:24:23.865: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename subpath 10/13/23 08:24:23.867 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:23.885 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:23.888 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 10/13/23 08:24:23.89 + [It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + STEP: Creating pod pod-subpath-test-configmap-b54m 10/13/23 08:24:23.899 + STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:24:23.899 + Oct 13 08:24:23.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b54m" in namespace "subpath-1646" to be "Succeeded or Failed" + Oct 13 08:24:23.909: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439124ms + Oct 13 08:24:25.912: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 2.005999405s + Oct 13 08:24:27.917: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 4.010281471s + Oct 13 08:24:29.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 6.006582242s + Oct 13 08:24:31.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 8.006806624s + Oct 13 08:24:33.918: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 10.012113239s + Oct 13 08:24:35.916: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.009458219s + Oct 13 08:24:37.914: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 14.008011036s + Oct 13 08:24:39.916: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 16.009967794s + Oct 13 08:24:41.913: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 18.00709304s + Oct 13 08:24:43.917: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=true. Elapsed: 20.010418102s + Oct 13 08:24:45.915: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Running", Reason="", readiness=false. Elapsed: 22.008295463s + Oct 13 08:24:47.914: INFO: Pod "pod-subpath-test-configmap-b54m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.007466412s + STEP: Saw pod success 10/13/23 08:24:47.914 + Oct 13 08:24:47.914: INFO: Pod "pod-subpath-test-configmap-b54m" satisfied condition "Succeeded or Failed" + Oct 13 08:24:47.918: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-b54m container test-container-subpath-configmap-b54m: + STEP: delete the pod 10/13/23 08:24:47.927 + Oct 13 08:24:47.943: INFO: Waiting for pod pod-subpath-test-configmap-b54m to disappear + Oct 13 08:24:47.946: INFO: Pod pod-subpath-test-configmap-b54m no longer exists + STEP: Deleting pod pod-subpath-test-configmap-b54m 10/13/23 08:24:47.946 + Oct 13 08:24:47.946: INFO: Deleting pod "pod-subpath-test-configmap-b54m" in namespace "subpath-1646" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-1646" for this suite. 
10/13/23 08:24:47.951 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:47.957 +Oct 13 08:24:47.957: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:24:47.959 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:47.975 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:47.977 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 +STEP: Creating configMap with name cm-test-opt-del-0bc392a0-4eca-493b-a644-823b7b8b1696 10/13/23 08:24:47.982 +STEP: Creating configMap with name cm-test-opt-upd-567f2019-5981-477b-bfde-4fae8b8b06cc 10/13/23 08:24:47.986 +STEP: Creating the pod 10/13/23 08:24:47.99 +Oct 13 08:24:47.999: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0" in namespace "configmap-2313" to be "running and ready" +Oct 13 08:24:48.003: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997981ms +Oct 13 08:24:48.003: INFO: The phase of Pod pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:24:50.008: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0": Phase="Running", Reason="", readiness=true. Elapsed: 2.008822483s +Oct 13 08:24:50.008: INFO: The phase of Pod pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0 is Running (Ready = true) +Oct 13 08:24:50.008: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-0bc392a0-4eca-493b-a644-823b7b8b1696 10/13/23 08:24:50.036 +STEP: Updating configmap cm-test-opt-upd-567f2019-5981-477b-bfde-4fae8b8b06cc 10/13/23 08:24:50.044 +STEP: Creating configMap with name cm-test-opt-create-e92aa62f-a0a4-45b1-b16a-9300f45c9d8d 10/13/23 08:24:50.048 +STEP: waiting to observe update in volume 10/13/23 08:24:50.055 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:52.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-2313" for this suite. 
10/13/23 08:24:52.088 +------------------------------ +• [4.138 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:47.957 + Oct 13 08:24:47.957: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:24:47.959 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:47.975 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:47.977 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:240 + STEP: Creating configMap with name cm-test-opt-del-0bc392a0-4eca-493b-a644-823b7b8b1696 10/13/23 08:24:47.982 + STEP: Creating configMap with name cm-test-opt-upd-567f2019-5981-477b-bfde-4fae8b8b06cc 10/13/23 08:24:47.986 + STEP: Creating the pod 10/13/23 08:24:47.99 + Oct 13 08:24:47.999: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0" in namespace "configmap-2313" to be "running and ready" + Oct 13 08:24:48.003: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997981ms + Oct 13 08:24:48.003: INFO: The phase of Pod pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:24:50.008: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0": Phase="Running", Reason="", readiness=true. Elapsed: 2.008822483s + Oct 13 08:24:50.008: INFO: The phase of Pod pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0 is Running (Ready = true) + Oct 13 08:24:50.008: INFO: Pod "pod-configmaps-bc095dba-b6ad-420a-8e7c-4ec6e82ae5a0" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-0bc392a0-4eca-493b-a644-823b7b8b1696 10/13/23 08:24:50.036 + STEP: Updating configmap cm-test-opt-upd-567f2019-5981-477b-bfde-4fae8b8b06cc 10/13/23 08:24:50.044 + STEP: Creating configMap with name cm-test-opt-create-e92aa62f-a0a4-45b1-b16a-9300f45c9d8d 10/13/23 08:24:50.048 + STEP: waiting to observe update in volume 10/13/23 08:24:50.055 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:52.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-2313" for this suite. 
10/13/23 08:24:52.088 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 +[BeforeEach] [sig-storage] Projected combined + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:52.099 +Oct 13 08:24:52.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:24:52.1 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:52.118 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:52.12 +[BeforeEach] [sig-storage] Projected combined + test/e2e/framework/metrics/init/init.go:31 +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 +STEP: Creating configMap with name configmap-projected-all-test-volume-ecc275d6-5d20-436f-8645-875016c810cc 10/13/23 08:24:52.122 +STEP: Creating secret with name secret-projected-all-test-volume-e7c98b03-3d2d-40ab-8da4-32e711ff62ba 10/13/23 08:24:52.127 +STEP: Creating a pod to test Check all projections for projected volume plugin 10/13/23 08:24:52.131 +Oct 13 08:24:52.139: INFO: Waiting up to 5m0s for pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d" in namespace "projected-258" to be "Succeeded or Failed" +Oct 13 08:24:52.142: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944996ms +Oct 13 08:24:54.146: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007404307s +Oct 13 08:24:56.148: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009681136s +STEP: Saw pod success 10/13/23 08:24:56.148 +Oct 13 08:24:56.148: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d" satisfied condition "Succeeded or Failed" +Oct 13 08:24:56.153: INFO: Trying to get logs from node node1 pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d container projected-all-volume-test: +STEP: delete the pod 10/13/23 08:24:56.175 +Oct 13 08:24:56.193: INFO: Waiting for pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d to disappear +Oct 13 08:24:56.196: INFO: Pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d no longer exists +[AfterEach] [sig-storage] Projected combined + test/e2e/framework/node/init/init.go:32 +Oct 13 08:24:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected combined + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected combined + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected combined + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-258" for this suite. 
10/13/23 08:24:56.2 +------------------------------ +• [4.107 seconds] +[sig-storage] Projected combined +test/e2e/common/storage/framework.go:23 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected combined + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:52.099 + Oct 13 08:24:52.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:24:52.1 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:52.118 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:52.12 + [BeforeEach] [sig-storage] Projected combined + test/e2e/framework/metrics/init/init.go:31 + [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:44 + STEP: Creating configMap with name configmap-projected-all-test-volume-ecc275d6-5d20-436f-8645-875016c810cc 10/13/23 08:24:52.122 + STEP: Creating secret with name secret-projected-all-test-volume-e7c98b03-3d2d-40ab-8da4-32e711ff62ba 10/13/23 08:24:52.127 + STEP: Creating a pod to test Check all projections for projected volume plugin 10/13/23 08:24:52.131 + Oct 13 08:24:52.139: INFO: Waiting up to 5m0s for pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d" in namespace "projected-258" to be "Succeeded or Failed" + Oct 13 08:24:52.142: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944996ms + Oct 13 08:24:54.146: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007404307s + Oct 13 08:24:56.148: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009681136s + STEP: Saw pod success 10/13/23 08:24:56.148 + Oct 13 08:24:56.148: INFO: Pod "projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d" satisfied condition "Succeeded or Failed" + Oct 13 08:24:56.153: INFO: Trying to get logs from node node1 pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d container projected-all-volume-test: + STEP: delete the pod 10/13/23 08:24:56.175 + Oct 13 08:24:56.193: INFO: Waiting for pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d to disappear + Oct 13 08:24:56.196: INFO: Pod projected-volume-205ec2b0-59ed-4daa-bc92-7c393618834d no longer exists + [AfterEach] [sig-storage] Projected combined + test/e2e/framework/node/init/init.go:32 + Oct 13 08:24:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected combined + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected combined + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected combined + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-258" for this suite. 
10/13/23 08:24:56.2 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:24:56.206 +Oct 13 08:24:56.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:24:56.207 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:56.222 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:56.225 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 +STEP: creating an pod 10/13/23 08:24:56.227 +Oct 13 08:24:56.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Oct 13 08:24:56.315: INFO: stderr: "" +Oct 13 08:24:56.315: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +STEP: Waiting for log generator to start. 10/13/23 08:24:56.315 +Oct 13 08:24:56.315: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Oct 13 08:24:56.315: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-272" to be "running and ready, or succeeded" +Oct 13 08:24:56.319: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191525ms +Oct 13 08:24:56.319: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'node1' to be 'Running' but was 'Pending' +Oct 13 08:24:58.324: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.008975291s +Oct 13 08:24:58.324: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Oct 13 08:24:58.324: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings 10/13/23 08:24:58.324 +Oct 13 08:24:58.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator' +Oct 13 08:24:58.420: INFO: stderr: "" +Oct 13 08:24:58.420: INFO: stdout: "I1013 08:24:57.390770 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/dlz 235\nI1013 08:24:57.591273 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n72 512\nI1013 08:24:57.791820 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/79rx 429\nI1013 08:24:57.991180 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/zrs 491\nI1013 08:24:58.191681 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/d49 418\nI1013 08:24:58.391121 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/2r2 501\nI1013 08:24:58.591528 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/ndq 365\n" +STEP: limiting log lines 10/13/23 08:24:58.42 +Oct 13 08:24:58.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --tail=1' +Oct 13 08:24:58.517: INFO: stderr: "" +Oct 13 08:24:58.517: INFO: stdout: "I1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\n" +Oct 13 08:24:58.517: INFO: got output "I1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\n" +STEP: limiting log bytes 10/13/23 08:24:58.517 +Oct 13 08:24:58.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --limit-bytes=1' +Oct 13 08:24:58.611: INFO: stderr: "" +Oct 13 08:24:58.611: INFO: stdout: "I" +Oct 13 08:24:58.611: INFO: got output "I" +STEP: exposing timestamps 10/13/23 08:24:58.611 +Oct 13 08:24:58.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --tail=1 --timestamps' +Oct 13 08:24:58.707: INFO: stderr: "" +Oct 13 08:24:58.707: INFO: stdout: "2023-10-13T04:24:58.991449133-04:00 I1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\n" +Oct 13 08:24:58.707: INFO: got output "2023-10-13T04:24:58.991449133-04:00 I1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\n" +STEP: restricting to a time range 10/13/23 08:24:58.707 +Oct 13 08:25:01.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --since=1s' +Oct 13 08:25:01.298: INFO: stderr: "" +Oct 13 08:25:01.298: INFO: stdout: "I1013 08:25:00.791263 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tttj 259\nI1013 08:25:00.991624 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/5xf 582\nI1013 08:25:01.191028 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/kkp2 203\nI1013 08:25:01.391480 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/qs5 431\nI1013 08:25:01.591800 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/jg4 316\n" +Oct 13 08:25:01.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --since=24h' +Oct 13 08:25:01.380: INFO: stderr: "" +Oct 13 08:25:01.380: INFO: stdout: "I1013 08:24:57.390770 1 logs_generator.go:76] 0 POST 
/api/v1/namespaces/kube-system/pods/dlz 235\nI1013 08:24:57.591273 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n72 512\nI1013 08:24:57.791820 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/79rx 429\nI1013 08:24:57.991180 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/zrs 491\nI1013 08:24:58.191681 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/d49 418\nI1013 08:24:58.391121 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/2r2 501\nI1013 08:24:58.591528 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/ndq 365\nI1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\nI1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\nI1013 08:24:59.191768 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/xkq 438\nI1013 08:24:59.391195 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/hrk 283\nI1013 08:24:59.591652 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/r7f 512\nI1013 08:24:59.790944 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/lbrk 423\nI1013 08:24:59.991490 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/l6f 255\nI1013 08:25:00.191797 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/l7v 501\nI1013 08:25:00.391318 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/zvk 245\nI1013 08:25:00.591880 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/q25h 295\nI1013 08:25:00.791263 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tttj 259\nI1013 08:25:00.991624 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/5xf 582\nI1013 08:25:01.191028 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/kkp2 203\nI1013 08:25:01.391480 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/qs5 431\nI1013 08:25:01.591800 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/jg4 316\n" +[AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 +Oct 13 08:25:01.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 delete pod logs-generator' +Oct 13 08:25:02.345: INFO: stderr: "" +Oct 13 08:25:02.345: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:02.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-272" for this suite. 
10/13/23 08:25:02.349 +------------------------------ +• [SLOW TEST] [6.149 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl logs + test/e2e/kubectl/kubectl.go:1569 + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:24:56.206 + Oct 13 08:24:56.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:24:56.207 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:24:56.222 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:24:56.225 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 + STEP: creating an pod 10/13/23 08:24:56.227 + Oct 13 08:24:56.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' + Oct 13 08:24:56.315: INFO: stderr: "" + Oct 13 08:24:56.315: INFO: stdout: "pod/logs-generator created\n" + [It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 + STEP: Waiting for log generator to start. 10/13/23 08:24:56.315 + Oct 13 08:24:56.315: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] + Oct 13 08:24:56.315: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-272" to be "running and ready, or succeeded" + Oct 13 08:24:56.319: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191525ms + Oct 13 08:24:56.319: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'node1' to be 'Running' but was 'Pending' + Oct 13 08:24:58.324: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.008975291s + Oct 13 08:24:58.324: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" + Oct 13 08:24:58.324: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] + STEP: checking for a matching strings 10/13/23 08:24:58.324 + Oct 13 08:24:58.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator' + Oct 13 08:24:58.420: INFO: stderr: "" + Oct 13 08:24:58.420: INFO: stdout: "I1013 08:24:57.390770 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/dlz 235\nI1013 08:24:57.591273 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n72 512\nI1013 08:24:57.791820 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/79rx 429\nI1013 08:24:57.991180 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/zrs 491\nI1013 08:24:58.191681 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/d49 418\nI1013 08:24:58.391121 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/2r2 501\nI1013 08:24:58.591528 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/ndq 365\n" + STEP: limiting log lines 10/13/23 08:24:58.42 + Oct 13 08:24:58.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --tail=1' + Oct 13 08:24:58.517: INFO: stderr: "" + Oct 13 08:24:58.517: INFO: stdout: "I1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\n" + Oct 13 08:24:58.517: INFO: got output "I1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\n" + STEP: limiting log bytes 10/13/23 08:24:58.517 + Oct 13 08:24:58.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --limit-bytes=1' + Oct 13 08:24:58.611: INFO: stderr: "" + Oct 13 08:24:58.611: INFO: stdout: "I" + Oct 13 08:24:58.611: INFO: got output "I" + STEP: exposing timestamps 10/13/23 08:24:58.611 + Oct 13 08:24:58.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --tail=1 --timestamps' + Oct 13 08:24:58.707: INFO: stderr: "" + Oct 13 08:24:58.707: INFO: stdout: "2023-10-13T04:24:58.991449133-04:00 I1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\n" + Oct 13 08:24:58.707: INFO: got output "2023-10-13T04:24:58.991449133-04:00 I1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\n" + STEP: restricting to a time range 10/13/23 08:24:58.707 + Oct 13 08:25:01.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --since=1s' + Oct 13 08:25:01.298: INFO: stderr: "" + Oct 13 08:25:01.298: INFO: stdout: "I1013 08:25:00.791263 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tttj 259\nI1013 08:25:00.991624 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/5xf 582\nI1013 08:25:01.191028 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/kkp2 203\nI1013 08:25:01.391480 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/qs5 431\nI1013 08:25:01.591800 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/jg4 316\n" + Oct 13 08:25:01.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 logs logs-generator logs-generator --since=24h' + Oct 13 08:25:01.380: INFO: stderr: "" + Oct 13 08:25:01.380: INFO: stdout: "I1013 08:24:57.390770 1 logs_generator.go:76] 0 POST 
/api/v1/namespaces/kube-system/pods/dlz 235\nI1013 08:24:57.591273 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/n72 512\nI1013 08:24:57.791820 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/79rx 429\nI1013 08:24:57.991180 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/zrs 491\nI1013 08:24:58.191681 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/d49 418\nI1013 08:24:58.391121 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/2r2 501\nI1013 08:24:58.591528 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/ndq 365\nI1013 08:24:58.790859 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/vxx 576\nI1013 08:24:58.991281 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vlkk 493\nI1013 08:24:59.191768 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/xkq 438\nI1013 08:24:59.391195 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/hrk 283\nI1013 08:24:59.591652 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/r7f 512\nI1013 08:24:59.790944 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/lbrk 423\nI1013 08:24:59.991490 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/l6f 255\nI1013 08:25:00.191797 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/l7v 501\nI1013 08:25:00.391318 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/zvk 245\nI1013 08:25:00.591880 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/q25h 295\nI1013 08:25:00.791263 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tttj 259\nI1013 08:25:00.991624 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/5xf 582\nI1013 08:25:01.191028 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/kkp2 203\nI1013 08:25:01.391480 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/qs5 431\nI1013 08:25:01.591800 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/jg4 316\n" + [AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 + Oct 13 08:25:01.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-272 delete pod logs-generator' + Oct 13 08:25:02.345: INFO: stderr: "" + Oct 13 08:25:02.345: INFO: stdout: "pod \"logs-generator\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:02.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-272" for this suite. 
10/13/23 08:25:02.349 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:02.357 +Oct 13 08:25:02.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:25:02.358 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:02.373 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:02.375 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:25:02.378 +Oct 13 08:25:02.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a" in namespace "downward-api-9702" to be "Succeeded or Failed" +Oct 13 08:25:02.389: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113343ms +Oct 13 08:25:04.393: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006897425s +Oct 13 08:25:06.400: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014122502s +STEP: Saw pod success 10/13/23 08:25:06.4 +Oct 13 08:25:06.400: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a" satisfied condition "Succeeded or Failed" +Oct 13 08:25:06.406: INFO: Trying to get logs from node node2 pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a container client-container: +STEP: delete the pod 10/13/23 08:25:06.411 +Oct 13 08:25:06.422: INFO: Waiting for pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a to disappear +Oct 13 08:25:06.425: INFO: Pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9702" for this suite. 
10/13/23 08:25:06.428 +------------------------------ +• [4.076 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:02.357 + Oct 13 08:25:02.357: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:25:02.358 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:02.373 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:02.375 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:25:02.378 + Oct 13 08:25:02.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a" in namespace "downward-api-9702" to be "Succeeded or Failed" + Oct 13 08:25:02.389: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113343ms + Oct 13 08:25:04.393: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006897425s + Oct 13 08:25:06.400: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014122502s + STEP: Saw pod success 10/13/23 08:25:06.4 + Oct 13 08:25:06.400: INFO: Pod "downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a" satisfied condition "Succeeded or Failed" + Oct 13 08:25:06.406: INFO: Trying to get logs from node node2 pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a container client-container: + STEP: delete the pod 10/13/23 08:25:06.411 + Oct 13 08:25:06.422: INFO: Waiting for pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a to disappear + Oct 13 08:25:06.425: INFO: Pod downwardapi-volume-d40308bf-570f-41b5-981b-e4d912c9f42a no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9702" for this suite. 
10/13/23 08:25:06.428 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:06.434 +Oct 13 08:25:06.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption 10/13/23 08:25:06.434 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:06.449 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:06.451 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +STEP: Waiting for the pdb to be processed 10/13/23 08:25:06.458 +STEP: Updating PodDisruptionBudget status 10/13/23 08:25:08.466 +STEP: Waiting for all pods to be running 10/13/23 08:25:08.477 +Oct 13 08:25:08.482: INFO: running pods: 0 < 1 +STEP: locating a running pod 10/13/23 08:25:10.488 +STEP: Waiting for the pdb to be processed 10/13/23 08:25:10.499 +STEP: Patching PodDisruptionBudget status 10/13/23 08:25:10.506 +STEP: Waiting for the pdb to be processed 10/13/23 08:25:10.515 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-4311" for this suite. 
10/13/23 08:25:10.523 +------------------------------ +• [4.095 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:06.434 + Oct 13 08:25:06.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption 10/13/23 08:25:06.434 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:06.449 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:06.451 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 + STEP: Waiting for the pdb to be processed 10/13/23 08:25:06.458 + STEP: Updating PodDisruptionBudget status 10/13/23 08:25:08.466 + STEP: Waiting for all pods to be running 10/13/23 08:25:08.477 + Oct 13 08:25:08.482: INFO: running pods: 0 < 1 + STEP: locating a running pod 10/13/23 08:25:10.488 + STEP: Waiting for the pdb to be processed 10/13/23 08:25:10.499 + STEP: Patching PodDisruptionBudget status 10/13/23 08:25:10.506 + STEP: Waiting for the pdb to be processed 10/13/23 08:25:10.515 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-4311" for this suite. 10/13/23 08:25:10.523 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:10.529 +Oct 13 08:25:10.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:25:10.53 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:10.545 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:10.547 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +STEP: Creating secret with name secret-test-91202439-c009-45fd-b537-af3d122f835f 10/13/23 08:25:10.549 +STEP: Creating a pod to test consume secrets 10/13/23 08:25:10.554 +Oct 13 08:25:10.561: INFO: Waiting up to 5m0s for pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448" in namespace "secrets-5678" to be "Succeeded or Failed" +Oct 13 08:25:10.564: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.127259ms +Oct 13 08:25:12.568: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Running", Reason="", readiness=false. Elapsed: 2.007184819s +Oct 13 08:25:14.571: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010558909s +STEP: Saw pod success 10/13/23 08:25:14.571 +Oct 13 08:25:14.572: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448" satisfied condition "Succeeded or Failed" +Oct 13 08:25:14.576: INFO: Trying to get logs from node node2 pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 container secret-volume-test: +STEP: delete the pod 10/13/23 08:25:14.582 +Oct 13 08:25:14.593: INFO: Waiting for pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 to disappear +Oct 13 08:25:14.596: INFO: Pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:14.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-5678" for this suite. 10/13/23 08:25:14.599 +------------------------------ +• [4.076 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:10.529 + Oct 13 08:25:10.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:25:10.53 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:10.545 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:10.547 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 + STEP: Creating secret with name secret-test-91202439-c009-45fd-b537-af3d122f835f 10/13/23 08:25:10.549 + STEP: Creating a pod to test consume secrets 10/13/23 08:25:10.554 + Oct 13 08:25:10.561: INFO: Waiting up to 5m0s for pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448" in namespace "secrets-5678" to be "Succeeded or Failed" + Oct 13 08:25:10.564: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Pending", Reason="", readiness=false. Elapsed: 3.127259ms + Oct 13 08:25:12.568: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Running", Reason="", readiness=false. Elapsed: 2.007184819s + Oct 13 08:25:14.571: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010558909s + STEP: Saw pod success 10/13/23 08:25:14.571 + Oct 13 08:25:14.572: INFO: Pod "pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448" satisfied condition "Succeeded or Failed" + Oct 13 08:25:14.576: INFO: Trying to get logs from node node2 pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 container secret-volume-test: + STEP: delete the pod 10/13/23 08:25:14.582 + Oct 13 08:25:14.593: INFO: Waiting for pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 to disappear + Oct 13 08:25:14.596: INFO: Pod pod-secrets-f66da3a2-a74c-472b-a1fc-f551d676d448 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:14.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-5678" for this suite. 10/13/23 08:25:14.599 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:14.606 +Oct 13 08:25:14.607: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:25:14.608 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:14.624 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:14.626 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:25:14.628 +Oct 13 08:25:14.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3" in namespace "projected-9722" to be "Succeeded or Failed" +Oct 13 08:25:14.640: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744956ms +Oct 13 08:25:16.646: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Running", Reason="", readiness=false. Elapsed: 2.010258139s +Oct 13 08:25:18.645: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00926295s +STEP: Saw pod success 10/13/23 08:25:18.645 +Oct 13 08:25:18.645: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3" satisfied condition "Succeeded or Failed" +Oct 13 08:25:18.648: INFO: Trying to get logs from node node2 pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 container client-container: +STEP: delete the pod 10/13/23 08:25:18.654 +Oct 13 08:25:18.664: INFO: Waiting for pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 to disappear +Oct 13 08:25:18.667: INFO: Pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:18.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9722" for this suite. 10/13/23 08:25:18.67 +------------------------------ +• [4.069 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:14.606 + Oct 13 08:25:14.607: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:25:14.608 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:14.624 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:14.626 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:25:14.628 + Oct 13 08:25:14.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3" in namespace "projected-9722" to be "Succeeded or Failed" + Oct 13 08:25:14.640: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744956ms + Oct 13 08:25:16.646: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Running", Reason="", readiness=false. Elapsed: 2.010258139s + Oct 13 08:25:18.645: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00926295s + STEP: Saw pod success 10/13/23 08:25:18.645 + Oct 13 08:25:18.645: INFO: Pod "downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3" satisfied condition "Succeeded or Failed" + Oct 13 08:25:18.648: INFO: Trying to get logs from node node2 pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 container client-container: + STEP: delete the pod 10/13/23 08:25:18.654 + Oct 13 08:25:18.664: INFO: Waiting for pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 to disappear + Oct 13 08:25:18.667: INFO: Pod downwardapi-volume-9194f082-24a7-4bd3-89cc-bfde4242e9e3 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:18.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9722" for this suite. 10/13/23 08:25:18.67 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:18.676 +Oct 13 08:25:18.676: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:25:18.677 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:18.691 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:18.693 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +STEP: Creating the pod 10/13/23 08:25:18.695 +Oct 13 08:25:18.703: INFO: Waiting up to 5m0s for pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" in namespace "projected-5172" to be "running and ready" +Oct 13 08:25:18.706: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027426ms +Oct 13 08:25:18.706: INFO: The phase of Pod annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:25:20.710: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007683597s +Oct 13 08:25:20.710: INFO: The phase of Pod annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71 is Running (Ready = true) +Oct 13 08:25:20.710: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" satisfied condition "running and ready" +Oct 13 08:25:21.233: INFO: Successfully updated pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:23.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-5172" for this suite. 10/13/23 08:25:23.254 +------------------------------ +• [4.587 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:18.676 + Oct 13 08:25:18.676: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:25:18.677 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:18.691 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:18.693 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 + STEP: Creating the pod 10/13/23 08:25:18.695 + Oct 13 08:25:18.703: INFO: Waiting up to 5m0s for pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" in namespace "projected-5172" to be "running and ready" + Oct 13 08:25:18.706: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027426ms + Oct 13 08:25:18.706: INFO: The phase of Pod annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:25:20.710: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007683597s + Oct 13 08:25:20.710: INFO: The phase of Pod annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71 is Running (Ready = true) + Oct 13 08:25:20.710: INFO: Pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" satisfied condition "running and ready" + Oct 13 08:25:21.233: INFO: Successfully updated pod "annotationupdate068fa5ef-daa9-4367-9ab6-541f35b54c71" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:25:23.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-5172" for this suite. 10/13/23 08:25:23.254 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:25:23.263 +Oct 13 08:25:23.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename endpointslice 10/13/23 08:25:23.264 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:23.283 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:23.286 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +STEP: referencing a single matching pod 10/13/23 08:25:28.36 +STEP: referencing matching pods with named port 10/13/23 08:25:33.369 +STEP: creating empty Endpoints and EndpointSlices for no matching Pods 10/13/23 08:25:38.38 +STEP: recreating EndpointSlices after they've been deleted 10/13/23 08:25:43.391 +Oct 13 08:25:43.420: INFO: EndpointSlice for Service endpointslice-4511/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Oct 13 08:25:53.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-4511" for this suite. 
10/13/23 08:25:53.436
+------------------------------
+• [SLOW TEST] [30.180 seconds]
+[sig-network] EndpointSlice
+test/e2e/network/common/framework.go:23
+  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
+  test/e2e/network/endpointslice.go:205
+------------------------------
+S
+------------------------------
+[sig-cli] Kubectl client Update Demo
+  should create and stop a replication controller [Conformance]
+  test/e2e/kubectl/kubectl.go:339
+[BeforeEach] [sig-cli] Kubectl client
+  set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 08:25:53.444
+Oct 13 08:25:53.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename kubectl 10/13/23 08:25:53.445
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:53.464
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:53.467
+[BeforeEach] [sig-cli] Kubectl client
+  test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-cli] Kubectl client
+  test/e2e/kubectl/kubectl.go:274
+[BeforeEach] Update Demo
+  test/e2e/kubectl/kubectl.go:326
+[It] should create and stop a replication controller [Conformance]
+  test/e2e/kubectl/kubectl.go:339
+STEP: creating a replication controller 10/13/23 08:25:53.469
+Oct 13 08:25:53.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 create -f -'
+Oct 13 08:25:53.706: INFO: stderr: ""
+Oct 13 08:25:53.706: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
10/13/23 08:25:53.706 +Oct 13 08:25:53.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:25:53.802: INFO: stderr: "" +Oct 13 08:25:53.802: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:25:53.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:25:53.890: INFO: stderr: "" +Oct 13 08:25:53.890: INFO: stdout: "" +Oct 13 08:25:53.890: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:25:58.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:25:58.974: INFO: stderr: "" +Oct 13 08:25:58.974: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:25:58.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:25:59.064: INFO: stderr: "" +Oct 13 08:25:59.064: INFO: stdout: "" +Oct 13 08:25:59.064: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:04.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:04.161: INFO: stderr: "" +Oct 13 08:26:04.162: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:04.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:04.242: INFO: stderr: "" +Oct 13 08:26:04.242: INFO: stdout: "" +Oct 13 08:26:04.242: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:09.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:09.338: INFO: stderr: "" +Oct 13 08:26:09.338: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:09.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:09.431: INFO: stderr: "" +Oct 13 08:26:09.431: INFO: stdout: "" +Oct 13 08:26:09.431: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:14.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:14.519: INFO: stderr: "" +Oct 13 08:26:14.519: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:14.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:14.607: INFO: stderr: "" +Oct 13 08:26:14.607: INFO: stdout: "" +Oct 13 08:26:14.607: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:19.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:19.710: INFO: stderr: "" +Oct 13 08:26:19.710: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:19.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:19.799: INFO: stderr: "" +Oct 13 08:26:19.799: INFO: stdout: "" +Oct 13 08:26:19.799: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:24.886: INFO: stderr: "" +Oct 13 08:26:24.886: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:24.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:24.975: INFO: stderr: "" +Oct 13 08:26:24.975: INFO: stdout: "" +Oct 13 08:26:24.975: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:29.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:30.073: INFO: stderr: "" +Oct 13 08:26:30.073: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:30.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:30.167: INFO: stderr: "" +Oct 13 08:26:30.167: INFO: stdout: "" +Oct 13 08:26:30.167: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:35.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:35.258: INFO: stderr: "" +Oct 13 08:26:35.258: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:35.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:35.351: INFO: stderr: "" +Oct 13 08:26:35.351: INFO: stdout: "" +Oct 13 08:26:35.351: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:40.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:40.465: INFO: stderr: "" +Oct 13 08:26:40.465: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:40.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:40.560: INFO: stderr: "" +Oct 13 08:26:40.560: INFO: stdout: "" +Oct 13 08:26:40.560: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:45.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:45.652: INFO: stderr: "" +Oct 13 08:26:45.652: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:45.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:45.729: INFO: stderr: "" +Oct 13 08:26:45.729: INFO: stdout: "" +Oct 13 08:26:45.729: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:50.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:50.842: INFO: stderr: "" +Oct 13 08:26:50.842: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:50.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:50.925: INFO: stderr: "" +Oct 13 08:26:50.925: INFO: stdout: "" +Oct 13 08:26:50.925: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:26:55.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:26:56.002: INFO: stderr: "" +Oct 13 08:26:56.002: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:26:56.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:26:56.086: INFO: stderr: "" +Oct 13 08:26:56.086: INFO: stdout: "" +Oct 13 08:26:56.086: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:01.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:01.163: INFO: stderr: "" +Oct 13 08:27:01.163: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:01.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:01.235: INFO: stderr: "" +Oct 13 08:27:01.235: INFO: stdout: "" +Oct 13 08:27:01.235: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:06.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:06.310: INFO: stderr: "" +Oct 13 08:27:06.310: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:06.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:06.384: INFO: stderr: "" +Oct 13 08:27:06.384: INFO: stdout: "" +Oct 13 08:27:06.384: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:11.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:11.483: INFO: stderr: "" +Oct 13 08:27:11.483: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:11.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:11.575: INFO: stderr: "" +Oct 13 08:27:11.575: INFO: stdout: "" +Oct 13 08:27:11.575: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:16.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:16.663: INFO: stderr: "" +Oct 13 08:27:16.663: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:16.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:16.754: INFO: stderr: "" +Oct 13 08:27:16.754: INFO: stdout: "" +Oct 13 08:27:16.754: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:21.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:21.836: INFO: stderr: "" +Oct 13 08:27:21.836: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:21.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:21.918: INFO: stderr: "" +Oct 13 08:27:21.918: INFO: stdout: "" +Oct 13 08:27:21.918: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:26.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:27.007: INFO: stderr: "" +Oct 13 08:27:27.007: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:27.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:27.101: INFO: stderr: "" +Oct 13 08:27:27.101: INFO: stdout: "" +Oct 13 08:27:27.101: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:32.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:32.196: INFO: stderr: "" +Oct 13 08:27:32.196: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:32.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:32.288: INFO: stderr: "" +Oct 13 08:27:32.288: INFO: stdout: "" +Oct 13 08:27:32.288: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:37.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:37.372: INFO: stderr: "" +Oct 13 08:27:37.372: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:37.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:37.454: INFO: stderr: "" +Oct 13 08:27:37.454: INFO: stdout: "" +Oct 13 08:27:37.454: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:42.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:42.529: INFO: stderr: "" +Oct 13 08:27:42.529: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:42.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:42.609: INFO: stderr: "" +Oct 13 08:27:42.609: INFO: stdout: "" +Oct 13 08:27:42.609: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:47.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:47.700: INFO: stderr: "" +Oct 13 08:27:47.700: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:47.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:47.790: INFO: stderr: "" +Oct 13 08:27:47.790: INFO: stdout: "" +Oct 13 08:27:47.790: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:52.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:52.874: INFO: stderr: "" +Oct 13 08:27:52.874: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:52.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:52.967: INFO: stderr: "" +Oct 13 08:27:52.967: INFO: stdout: "" +Oct 13 08:27:52.967: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:27:57.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:27:58.063: INFO: stderr: "" +Oct 13 08:27:58.063: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:27:58.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:27:58.154: INFO: stderr: "" +Oct 13 08:27:58.154: INFO: stdout: "" +Oct 13 08:27:58.154: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:03.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:03.256: INFO: stderr: "" +Oct 13 08:28:03.256: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:03.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:03.338: INFO: stderr: "" +Oct 13 08:28:03.338: INFO: stdout: "" +Oct 13 08:28:03.338: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:08.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:08.427: INFO: stderr: "" +Oct 13 08:28:08.427: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:08.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:08.517: INFO: stderr: "" +Oct 13 08:28:08.517: INFO: stdout: "" +Oct 13 08:28:08.517: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:13.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:13.608: INFO: stderr: "" +Oct 13 08:28:13.608: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:13.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:13.688: INFO: stderr: "" +Oct 13 08:28:13.688: INFO: stdout: "" +Oct 13 08:28:13.688: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:18.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:18.775: INFO: stderr: "" +Oct 13 08:28:18.775: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:18.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:18.877: INFO: stderr: "" +Oct 13 08:28:18.877: INFO: stdout: "" +Oct 13 08:28:18.877: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:23.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:23.973: INFO: stderr: "" +Oct 13 08:28:23.973: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:23.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:24.061: INFO: stderr: "" +Oct 13 08:28:24.062: INFO: stdout: "" +Oct 13 08:28:24.062: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:29.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:29.169: INFO: stderr: "" +Oct 13 08:28:29.169: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:29.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:29.256: INFO: stderr: "" +Oct 13 08:28:29.256: INFO: stdout: "" +Oct 13 08:28:29.256: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:34.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:34.352: INFO: stderr: "" +Oct 13 08:28:34.352: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:34.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:34.446: INFO: stderr: "" +Oct 13 08:28:34.446: INFO: stdout: "" +Oct 13 08:28:34.446: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:39.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:39.541: INFO: stderr: "" +Oct 13 08:28:39.541: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:39.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:39.624: INFO: stderr: "" +Oct 13 08:28:39.624: INFO: stdout: "" +Oct 13 08:28:39.624: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:44.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:44.715: INFO: stderr: "" +Oct 13 08:28:44.715: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:44.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:44.805: INFO: stderr: "" +Oct 13 08:28:44.805: INFO: stdout: "" +Oct 13 08:28:44.805: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:49.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:49.904: INFO: stderr: "" +Oct 13 08:28:49.904: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:49.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:49.997: INFO: stderr: "" +Oct 13 08:28:49.997: INFO: stdout: "" +Oct 13 08:28:49.997: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:28:54.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:28:55.087: INFO: stderr: "" +Oct 13 08:28:55.087: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:28:55.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:28:55.166: INFO: stderr: "" +Oct 13 08:28:55.166: INFO: stdout: "" +Oct 13 08:28:55.166: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:00.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:00.244: INFO: stderr: "" +Oct 13 08:29:00.244: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:00.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:00.314: INFO: stderr: "" +Oct 13 08:29:00.314: INFO: stdout: "" +Oct 13 08:29:00.314: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:05.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:05.389: INFO: stderr: "" +Oct 13 08:29:05.389: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:05.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:05.467: INFO: stderr: "" +Oct 13 08:29:05.467: INFO: stdout: "" +Oct 13 08:29:05.467: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:10.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:10.539: INFO: stderr: "" +Oct 13 08:29:10.539: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:10.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:10.611: INFO: stderr: "" +Oct 13 08:29:10.611: INFO: stdout: "" +Oct 13 08:29:10.611: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:15.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:15.672: INFO: stderr: "" +Oct 13 08:29:15.672: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:15.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:15.733: INFO: stderr: "" +Oct 13 08:29:15.733: INFO: stdout: "" +Oct 13 08:29:15.733: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:20.827: INFO: stderr: "" +Oct 13 08:29:20.827: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:20.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:20.891: INFO: stderr: "" +Oct 13 08:29:20.891: INFO: stdout: "" +Oct 13 08:29:20.891: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:25.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:25.995: INFO: stderr: "" +Oct 13 08:29:25.995: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:25.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:26.061: INFO: stderr: "" +Oct 13 08:29:26.061: INFO: stdout: "" +Oct 13 08:29:26.061: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:31.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:31.149: INFO: stderr: "" +Oct 13 08:29:31.149: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:31.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:31.229: INFO: stderr: "" +Oct 13 08:29:31.229: INFO: stdout: "" +Oct 13 08:29:31.229: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:36.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:36.314: INFO: stderr: "" +Oct 13 08:29:36.314: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:36.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:36.388: INFO: stderr: "" +Oct 13 08:29:36.388: INFO: stdout: "" +Oct 13 08:29:36.388: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:41.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:41.481: INFO: stderr: "" +Oct 13 08:29:41.481: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:41.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:41.563: INFO: stderr: "" +Oct 13 08:29:41.563: INFO: stdout: "" +Oct 13 08:29:41.563: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:46.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:46.659: INFO: stderr: "" +Oct 13 08:29:46.659: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:46.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:46.739: INFO: stderr: "" +Oct 13 08:29:46.739: INFO: stdout: "" +Oct 13 08:29:46.739: INFO: update-demo-nautilus-5bsg8 is created but not running +Oct 13 08:29:51.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:29:51.840: INFO: stderr: "" +Oct 13 08:29:51.840: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " +Oct 13 08:29:51.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:51.922: INFO: stderr: "" +Oct 13 08:29:51.922: INFO: stdout: "true" +Oct 13 08:29:51.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:29:52.008: INFO: stderr: "" +Oct 13 08:29:52.008: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:29:52.008: INFO: validating pod update-demo-nautilus-5bsg8 +Oct 13 08:29:52.058: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:29:52.058: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 13 08:29:52.058: INFO: update-demo-nautilus-5bsg8 is verified up and running +Oct 13 08:29:52.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-bcd8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:29:52.150: INFO: stderr: "" +Oct 13 08:29:52.150: INFO: stdout: "true" +Oct 13 08:29:52.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-bcd8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:29:52.244: INFO: stderr: "" +Oct 13 08:29:52.244: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:29:52.244: INFO: validating pod update-demo-nautilus-bcd8m +Oct 13 08:29:52.272: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:29:52.272: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 13 08:29:52.272: INFO: update-demo-nautilus-bcd8m is verified up and running +STEP: using delete to clean up resources 10/13/23 08:29:52.272 +Oct 13 08:29:52.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 delete --grace-period=0 --force -f -' +Oct 13 08:29:52.363: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:29:52.363: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 13 08:29:52.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get rc,svc -l name=update-demo --no-headers' +Oct 13 08:29:52.451: INFO: stderr: "No resources found in kubectl-1064 namespace.\n" +Oct 13 08:29:52.451: INFO: stdout: "" +Oct 13 08:29:52.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 13 08:29:52.538: INFO: stderr: "" +Oct 13 08:29:52.538: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:29:52.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-1064" for this suite. 
10/13/23 08:29:52.543 +------------------------------ +• [SLOW TEST] [239.105 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:324 + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:25:53.444 + Oct 13 08:25:53.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:25:53.445 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:25:53.464 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:25:53.467 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 + [It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + STEP: creating a replication controller 10/13/23 08:25:53.469 + Oct 13 08:25:53.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 create -f -' + Oct 13 08:25:53.706: INFO: stderr: "" + Oct 13 08:25:53.706: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 10/13/23 08:25:53.706 + Oct 13 08:25:53.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:25:53.802: INFO: stderr: "" + Oct 13 08:25:53.802: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:25:53.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:25:53.890: INFO: stderr: "" + Oct 13 08:25:53.890: INFO: stdout: "" + Oct 13 08:25:53.890: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:25:58.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:25:58.974: INFO: stderr: "" + Oct 13 08:25:58.974: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:25:58.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:25:59.064: INFO: stderr: "" + Oct 13 08:25:59.064: INFO: stdout: "" + Oct 13 08:25:59.064: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:04.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:04.161: INFO: stderr: "" + Oct 13 08:26:04.162: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:04.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:04.242: INFO: stderr: "" + Oct 13 08:26:04.242: INFO: stdout: "" + Oct 13 08:26:04.242: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:09.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:09.338: INFO: stderr: "" + Oct 13 08:26:09.338: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:09.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:09.431: INFO: stderr: "" + Oct 13 08:26:09.431: INFO: stdout: "" + Oct 13 08:26:09.431: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:14.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:14.519: INFO: stderr: "" + Oct 13 08:26:14.519: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:14.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:14.607: INFO: stderr: "" + Oct 13 08:26:14.607: INFO: stdout: "" + Oct 13 08:26:14.607: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:19.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:19.710: INFO: stderr: "" + Oct 13 08:26:19.710: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:19.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:19.799: INFO: stderr: "" + Oct 13 08:26:19.799: INFO: stdout: "" + Oct 13 08:26:19.799: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:24.886: INFO: stderr: "" + Oct 13 08:26:24.886: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:24.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:24.975: INFO: stderr: "" + Oct 13 08:26:24.975: INFO: stdout: "" + Oct 13 08:26:24.975: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:29.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:30.073: INFO: stderr: "" + Oct 13 08:26:30.073: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:30.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:30.167: INFO: stderr: "" + Oct 13 08:26:30.167: INFO: stdout: "" + Oct 13 08:26:30.167: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:35.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:35.258: INFO: stderr: "" + Oct 13 08:26:35.258: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:35.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:35.351: INFO: stderr: "" + Oct 13 08:26:35.351: INFO: stdout: "" + Oct 13 08:26:35.351: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:40.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:40.465: INFO: stderr: "" + Oct 13 08:26:40.465: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:40.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:40.560: INFO: stderr: "" + Oct 13 08:26:40.560: INFO: stdout: "" + Oct 13 08:26:40.560: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:45.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:45.652: INFO: stderr: "" + Oct 13 08:26:45.652: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:45.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:45.729: INFO: stderr: "" + Oct 13 08:26:45.729: INFO: stdout: "" + Oct 13 08:26:45.729: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:50.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:50.842: INFO: stderr: "" + Oct 13 08:26:50.842: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:50.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:50.925: INFO: stderr: "" + Oct 13 08:26:50.925: INFO: stdout: "" + Oct 13 08:26:50.925: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:26:55.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:26:56.002: INFO: stderr: "" + Oct 13 08:26:56.002: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:26:56.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:26:56.086: INFO: stderr: "" + Oct 13 08:26:56.086: INFO: stdout: "" + Oct 13 08:26:56.086: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:01.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:01.163: INFO: stderr: "" + Oct 13 08:27:01.163: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:01.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:01.235: INFO: stderr: "" + Oct 13 08:27:01.235: INFO: stdout: "" + Oct 13 08:27:01.235: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:06.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:06.310: INFO: stderr: "" + Oct 13 08:27:06.310: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:06.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:06.384: INFO: stderr: "" + Oct 13 08:27:06.384: INFO: stdout: "" + Oct 13 08:27:06.384: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:11.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:11.483: INFO: stderr: "" + Oct 13 08:27:11.483: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:11.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:11.575: INFO: stderr: "" + Oct 13 08:27:11.575: INFO: stdout: "" + Oct 13 08:27:11.575: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:16.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:16.663: INFO: stderr: "" + Oct 13 08:27:16.663: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:16.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:16.754: INFO: stderr: "" + Oct 13 08:27:16.754: INFO: stdout: "" + Oct 13 08:27:16.754: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:21.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:21.836: INFO: stderr: "" + Oct 13 08:27:21.836: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:21.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:21.918: INFO: stderr: "" + Oct 13 08:27:21.918: INFO: stdout: "" + Oct 13 08:27:21.918: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:26.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:27.007: INFO: stderr: "" + Oct 13 08:27:27.007: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:27.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:27.101: INFO: stderr: "" + Oct 13 08:27:27.101: INFO: stdout: "" + Oct 13 08:27:27.101: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:32.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:32.196: INFO: stderr: "" + Oct 13 08:27:32.196: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:32.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:32.288: INFO: stderr: "" + Oct 13 08:27:32.288: INFO: stdout: "" + Oct 13 08:27:32.288: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:37.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:37.372: INFO: stderr: "" + Oct 13 08:27:37.372: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:37.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:37.454: INFO: stderr: "" + Oct 13 08:27:37.454: INFO: stdout: "" + Oct 13 08:27:37.454: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:42.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:42.529: INFO: stderr: "" + Oct 13 08:27:42.529: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:42.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:42.609: INFO: stderr: "" + Oct 13 08:27:42.609: INFO: stdout: "" + Oct 13 08:27:42.609: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:47.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:47.700: INFO: stderr: "" + Oct 13 08:27:47.700: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:47.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:47.790: INFO: stderr: "" + Oct 13 08:27:47.790: INFO: stdout: "" + Oct 13 08:27:47.790: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:52.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:52.874: INFO: stderr: "" + Oct 13 08:27:52.874: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:52.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:52.967: INFO: stderr: "" + Oct 13 08:27:52.967: INFO: stdout: "" + Oct 13 08:27:52.967: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:27:57.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:27:58.063: INFO: stderr: "" + Oct 13 08:27:58.063: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:27:58.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:27:58.154: INFO: stderr: "" + Oct 13 08:27:58.154: INFO: stdout: "" + Oct 13 08:27:58.154: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:03.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:03.256: INFO: stderr: "" + Oct 13 08:28:03.256: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:03.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:03.338: INFO: stderr: "" + Oct 13 08:28:03.338: INFO: stdout: "" + Oct 13 08:28:03.338: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:08.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:08.427: INFO: stderr: "" + Oct 13 08:28:08.427: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:08.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:08.517: INFO: stderr: "" + Oct 13 08:28:08.517: INFO: stdout: "" + Oct 13 08:28:08.517: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:13.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:13.608: INFO: stderr: "" + Oct 13 08:28:13.608: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:13.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:13.688: INFO: stderr: "" + Oct 13 08:28:13.688: INFO: stdout: "" + Oct 13 08:28:13.688: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:18.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:18.775: INFO: stderr: "" + Oct 13 08:28:18.775: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:18.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:18.877: INFO: stderr: "" + Oct 13 08:28:18.877: INFO: stdout: "" + Oct 13 08:28:18.877: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:23.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:23.973: INFO: stderr: "" + Oct 13 08:28:23.973: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:23.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:24.061: INFO: stderr: "" + Oct 13 08:28:24.062: INFO: stdout: "" + Oct 13 08:28:24.062: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:29.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:29.169: INFO: stderr: "" + Oct 13 08:28:29.169: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:29.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:29.256: INFO: stderr: "" + Oct 13 08:28:29.256: INFO: stdout: "" + Oct 13 08:28:29.256: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:34.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:34.352: INFO: stderr: "" + Oct 13 08:28:34.352: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:34.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:34.446: INFO: stderr: "" + Oct 13 08:28:34.446: INFO: stdout: "" + Oct 13 08:28:34.446: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:39.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:39.541: INFO: stderr: "" + Oct 13 08:28:39.541: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:39.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:39.624: INFO: stderr: "" + Oct 13 08:28:39.624: INFO: stdout: "" + Oct 13 08:28:39.624: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:44.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:44.715: INFO: stderr: "" + Oct 13 08:28:44.715: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:44.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:44.805: INFO: stderr: "" + Oct 13 08:28:44.805: INFO: stdout: "" + Oct 13 08:28:44.805: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:49.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:49.904: INFO: stderr: "" + Oct 13 08:28:49.904: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:49.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:49.997: INFO: stderr: "" + Oct 13 08:28:49.997: INFO: stdout: "" + Oct 13 08:28:49.997: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:28:54.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:28:55.087: INFO: stderr: "" + Oct 13 08:28:55.087: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:28:55.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:28:55.166: INFO: stderr: "" + Oct 13 08:28:55.166: INFO: stdout: "" + Oct 13 08:28:55.166: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:00.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:00.244: INFO: stderr: "" + Oct 13 08:29:00.244: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:00.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:00.314: INFO: stderr: "" + Oct 13 08:29:00.314: INFO: stdout: "" + Oct 13 08:29:00.314: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:05.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:05.389: INFO: stderr: "" + Oct 13 08:29:05.389: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:05.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:05.467: INFO: stderr: "" + Oct 13 08:29:05.467: INFO: stdout: "" + Oct 13 08:29:05.467: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:10.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:10.539: INFO: stderr: "" + Oct 13 08:29:10.539: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:10.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:10.611: INFO: stderr: "" + Oct 13 08:29:10.611: INFO: stdout: "" + Oct 13 08:29:10.611: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:15.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:15.672: INFO: stderr: "" + Oct 13 08:29:15.672: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:15.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:15.733: INFO: stderr: "" + Oct 13 08:29:15.733: INFO: stdout: "" + Oct 13 08:29:15.733: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:20.827: INFO: stderr: "" + Oct 13 08:29:20.827: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:20.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:20.891: INFO: stderr: "" + Oct 13 08:29:20.891: INFO: stdout: "" + Oct 13 08:29:20.891: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:25.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:25.995: INFO: stderr: "" + Oct 13 08:29:25.995: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:25.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:26.061: INFO: stderr: "" + Oct 13 08:29:26.061: INFO: stdout: "" + Oct 13 08:29:26.061: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:31.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:31.149: INFO: stderr: "" + Oct 13 08:29:31.149: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:31.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:31.229: INFO: stderr: "" + Oct 13 08:29:31.229: INFO: stdout: "" + Oct 13 08:29:31.229: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:36.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:36.314: INFO: stderr: "" + Oct 13 08:29:36.314: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:36.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:36.388: INFO: stderr: "" + Oct 13 08:29:36.388: INFO: stdout: "" + Oct 13 08:29:36.388: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:41.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:41.481: INFO: stderr: "" + Oct 13 08:29:41.481: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:41.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:41.563: INFO: stderr: "" + Oct 13 08:29:41.563: INFO: stdout: "" + Oct 13 08:29:41.563: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:46.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:46.659: INFO: stderr: "" + Oct 13 08:29:46.659: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:46.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:46.739: INFO: stderr: "" + Oct 13 08:29:46.739: INFO: stdout: "" + Oct 13 08:29:46.739: INFO: update-demo-nautilus-5bsg8 is created but not running + Oct 13 08:29:51.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:29:51.840: INFO: stderr: "" + Oct 13 08:29:51.840: INFO: stdout: "update-demo-nautilus-5bsg8 update-demo-nautilus-bcd8m " + Oct 13 08:29:51.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:51.922: INFO: stderr: "" + Oct 13 08:29:51.922: INFO: stdout: "true" + Oct 13 08:29:51.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-5bsg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:29:52.008: INFO: stderr: "" + Oct 13 08:29:52.008: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:29:52.008: INFO: validating pod update-demo-nautilus-5bsg8 + Oct 13 08:29:52.058: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:29:52.058: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:29:52.058: INFO: update-demo-nautilus-5bsg8 is verified up and running + Oct 13 08:29:52.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-bcd8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:29:52.150: INFO: stderr: "" + Oct 13 08:29:52.150: INFO: stdout: "true" + Oct 13 08:29:52.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods update-demo-nautilus-bcd8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:29:52.244: INFO: stderr: "" + Oct 13 08:29:52.244: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:29:52.244: INFO: validating pod update-demo-nautilus-bcd8m + Oct 13 08:29:52.272: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:29:52.272: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:29:52.272: INFO: update-demo-nautilus-bcd8m is verified up and running + STEP: using delete to clean up resources 10/13/23 08:29:52.272 + Oct 13 08:29:52.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 delete --grace-period=0 --force -f -' + Oct 13 08:29:52.363: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:29:52.363: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Oct 13 08:29:52.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get rc,svc -l name=update-demo --no-headers' + Oct 13 08:29:52.451: INFO: stderr: "No resources found in kubectl-1064 namespace.\n" + Oct 13 08:29:52.451: INFO: stdout: "" + Oct 13 08:29:52.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Oct 13 08:29:52.538: INFO: stderr: "" + Oct 13 08:29:52.538: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:29:52.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-1064" for this suite. 10/13/23 08:29:52.543 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +[BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:29:52.55 +Oct 13 08:29:52.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename limitrange 10/13/23 08:29:52.551 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:52.57 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:52.573 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 +[It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +STEP: Creating LimitRange "e2e-limitrange-rkskx" in namespace "limitrange-378" 10/13/23 08:29:52.575 +STEP: Creating another limitRange in another namespace 10/13/23 08:29:52.58 +Oct 13 08:29:52.597: INFO: Namespace "e2e-limitrange-rkskx-2996" created +Oct 13 08:29:52.597: INFO: Creating LimitRange "e2e-limitrange-rkskx" in namespace "e2e-limitrange-rkskx-2996" +STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-rkskx" 10/13/23 08:29:52.603 +Oct 13 08:29:52.606: INFO: Found 2 limitRanges +STEP: Patching LimitRange "e2e-limitrange-rkskx" in "limitrange-378" namespace 10/13/23 08:29:52.606 +Oct 13 08:29:52.612: INFO: LimitRange "e2e-limitrange-rkskx" has been patched +STEP: Delete LimitRange "e2e-limitrange-rkskx" by Collection with labelSelector: "e2e-limitrange-rkskx=patched" 10/13/23 08:29:52.612 +STEP: Confirm that the limitRange "e2e-limitrange-rkskx" has been deleted 10/13/23 08:29:52.62 +Oct 13 08:29:52.620: INFO: Requesting list of LimitRange to confirm quantity +Oct 13 08:29:52.624: INFO: Found 0 LimitRange with label "e2e-limitrange-rkskx=patched" +Oct 13 08:29:52.624: INFO: LimitRange "e2e-limitrange-rkskx" has been deleted. 
+STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-rkskx" 10/13/23 08:29:52.624 +Oct 13 08:29:52.627: INFO: Found 1 limitRange +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 +Oct 13 08:29:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 +STEP: Destroying namespace "limitrange-378" for this suite. 10/13/23 08:29:52.63 +STEP: Destroying namespace "e2e-limitrange-rkskx-2996" for this suite. 10/13/23 08:29:52.635 +------------------------------ +• [0.091 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:29:52.55 + Oct 13 08:29:52.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename limitrange 10/13/23 08:29:52.551 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:52.57 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:52.573 + [BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 + [It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 + STEP: Creating LimitRange "e2e-limitrange-rkskx" in namespace "limitrange-378" 10/13/23 08:29:52.575 + STEP: Creating another limitRange in another namespace 10/13/23 08:29:52.58 + Oct 13 08:29:52.597: INFO: Namespace "e2e-limitrange-rkskx-2996" created + Oct 13 08:29:52.597: INFO: Creating LimitRange "e2e-limitrange-rkskx" in namespace "e2e-limitrange-rkskx-2996" + STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-rkskx" 10/13/23 08:29:52.603 + Oct 13 08:29:52.606: INFO: Found 2 limitRanges + STEP: Patching LimitRange "e2e-limitrange-rkskx" in "limitrange-378" namespace 10/13/23 08:29:52.606 + Oct 13 08:29:52.612: INFO: LimitRange "e2e-limitrange-rkskx" has been patched + STEP: Delete LimitRange "e2e-limitrange-rkskx" by Collection with labelSelector: "e2e-limitrange-rkskx=patched" 10/13/23 08:29:52.612 + STEP: Confirm that the limitRange "e2e-limitrange-rkskx" has been deleted 10/13/23 08:29:52.62 + Oct 13 08:29:52.620: INFO: Requesting list of LimitRange to confirm quantity + Oct 13 08:29:52.624: INFO: Found 0 LimitRange with label "e2e-limitrange-rkskx=patched" + Oct 13 08:29:52.624: INFO: LimitRange "e2e-limitrange-rkskx" has been deleted. 
+ STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-rkskx" 10/13/23 08:29:52.624 + Oct 13 08:29:52.627: INFO: Found 1 limitRange + [AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 + Oct 13 08:29:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 + STEP: Destroying namespace "limitrange-378" for this suite. 10/13/23 08:29:52.63 + STEP: Destroying namespace "e2e-limitrange-rkskx-2996" for this suite. 10/13/23 08:29:52.635 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:29:52.642 +Oct 13 08:29:52.642: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:29:52.643 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:52.658 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:52.66 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 +STEP: creating service nodeport-test with type=NodePort in namespace services-8040 10/13/23 08:29:52.662 +STEP: creating replication controller nodeport-test in namespace services-8040 10/13/23 08:29:52.678 +I1013 08:29:52.684316 23 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-8040, replica count: 2 +I1013 08:29:55.736838 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:29:55.736: INFO: Creating new exec pod +Oct 13 08:29:55.744: INFO: Waiting up to 5m0s for pod "execpodsn2l9" in namespace "services-8040" to be "running" +Oct 13 08:29:55.747: INFO: Pod "execpodsn2l9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.029729ms +Oct 13 08:29:57.752: INFO: Pod "execpodsn2l9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00757151s +Oct 13 08:29:57.752: INFO: Pod "execpodsn2l9" satisfied condition "running" +Oct 13 08:29:58.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' +Oct 13 08:29:58.891: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Oct 13 08:29:58.891: INFO: stdout: "" +Oct 13 08:29:58.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.109.176.55 80' +Oct 13 08:29:59.037: INFO: stderr: "+ nc -v -z -w 2 10.109.176.55 80\nConnection to 10.109.176.55 80 port [tcp/http] succeeded!\n" +Oct 13 08:29:59.037: INFO: stdout: "" +Oct 13 08:29:59.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 31345' +Oct 13 08:29:59.166: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 31345\nConnection to 10.253.8.111 31345 port [tcp/*] succeeded!\n" +Oct 13 08:29:59.166: INFO: stdout: "" +Oct 13 08:29:59.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31345' +Oct 13 08:29:59.287: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31345\nConnection to 10.253.8.110 31345 port [tcp/*] succeeded!\n" +Oct 13 08:29:59.287: INFO: stdout: "" +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:29:59.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-8040" for this suite. 
10/13/23 08:29:59.292 +------------------------------ +• [SLOW TEST] [6.657 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:29:52.642 + Oct 13 08:29:52.642: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:29:52.643 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:52.658 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:52.66 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1302 + STEP: creating service nodeport-test with type=NodePort in namespace services-8040 10/13/23 08:29:52.662 + STEP: creating replication controller nodeport-test in namespace services-8040 10/13/23 08:29:52.678 + I1013 08:29:52.684316 23 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-8040, replica count: 2 + I1013 08:29:55.736838 23 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:29:55.736: INFO: Creating new exec pod + Oct 13 08:29:55.744: INFO: Waiting up to 5m0s for pod "execpodsn2l9" in namespace "services-8040" to be "running" + Oct 13 08:29:55.747: INFO: Pod "execpodsn2l9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.029729ms + Oct 13 08:29:57.752: INFO: Pod "execpodsn2l9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00757151s + Oct 13 08:29:57.752: INFO: Pod "execpodsn2l9" satisfied condition "running" + Oct 13 08:29:58.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' + Oct 13 08:29:58.891: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" + Oct 13 08:29:58.891: INFO: stdout: "" + Oct 13 08:29:58.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.109.176.55 80' + Oct 13 08:29:59.037: INFO: stderr: "+ nc -v -z -w 2 10.109.176.55 80\nConnection to 10.109.176.55 80 port [tcp/http] succeeded!\n" + Oct 13 08:29:59.037: INFO: stdout: "" + Oct 13 08:29:59.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 31345' + Oct 13 08:29:59.166: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 31345\nConnection to 10.253.8.111 31345 port [tcp/*] succeeded!\n" + Oct 13 08:29:59.166: INFO: stdout: "" + Oct 13 08:29:59.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8040 exec execpodsn2l9 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31345' + Oct 13 08:29:59.287: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31345\nConnection to 10.253.8.110 31345 port [tcp/*] succeeded!\n" + Oct 13 08:29:59.287: INFO: stdout: "" + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:29:59.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-8040" for this suite. 
10/13/23 08:29:59.292 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:29:59.299 +Oct 13 08:29:59.299: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:29:59.3 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:59.316 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:59.318 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1494 +STEP: creating the pod 10/13/23 08:29:59.32 +Oct 13 08:29:59.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 create -f -' +Oct 13 08:29:59.497: INFO: stderr: "" +Oct 13 08:29:59.497: INFO: stdout: "pod/pause created\n" +Oct 13 08:29:59.497: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Oct 13 08:29:59.497: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9450" to be "running and ready" +Oct 13 08:29:59.500: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121208ms +Oct 13 08:29:59.500: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'node1' to be 'Running' but was 'Pending' +Oct 13 08:30:01.506: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.008874604s +Oct 13 08:30:01.506: INFO: Pod "pause" satisfied condition "running and ready" +Oct 13 08:30:01.506: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 +STEP: adding the label testing-label with value testing-label-value to a pod 10/13/23 08:30:01.506 +Oct 13 08:30:01.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 label pods pause testing-label=testing-label-value' +Oct 13 08:30:01.602: INFO: stderr: "" +Oct 13 08:30:01.602: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value 10/13/23 08:30:01.602 +Oct 13 08:30:01.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pod pause -L testing-label' +Oct 13 08:30:01.691: INFO: stderr: "" +Oct 13 08:30:01.691: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod 10/13/23 08:30:01.691 +Oct 13 08:30:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 label pods pause testing-label-' +Oct 13 08:30:01.784: INFO: stderr: "" +Oct 13 08:30:01.785: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label 10/13/23 08:30:01.785 +Oct 13 08:30:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pod pause -L testing-label' +Oct 13 08:30:01.863: INFO: stderr: "" +Oct 13 08:30:01.863: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1500 +STEP: using delete to clean up resources 10/13/23 08:30:01.864 +Oct 13 08:30:01.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 delete --grace-period=0 --force -f -' +Oct 13 08:30:01.953: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:30:01.953: INFO: stdout: "pod \"pause\" force deleted\n" +Oct 13 08:30:01.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get rc,svc -l name=pause --no-headers' +Oct 13 08:30:02.030: INFO: stderr: "No resources found in kubectl-9450 namespace.\n" +Oct 13 08:30:02.030: INFO: stdout: "" +Oct 13 08:30:02.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 13 08:30:02.116: INFO: stderr: "" +Oct 13 08:30:02.116: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:02.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-9450" for this suite. 
10/13/23 08:30:02.122 +------------------------------ +• [2.831 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl label + test/e2e/kubectl/kubectl.go:1492 + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:29:59.299 + Oct 13 08:29:59.299: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:29:59.3 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:29:59.316 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:29:59.318 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1494 + STEP: creating the pod 10/13/23 08:29:59.32 + Oct 13 08:29:59.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 create -f -' + Oct 13 08:29:59.497: INFO: stderr: "" + Oct 13 08:29:59.497: INFO: stdout: "pod/pause created\n" + Oct 13 08:29:59.497: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] + Oct 13 08:29:59.497: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9450" to be "running and ready" + Oct 13 08:29:59.500: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121208ms + Oct 13 08:29:59.500: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'node1' to be 'Running' but was 'Pending' + Oct 13 08:30:01.506: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.008874604s + Oct 13 08:30:01.506: INFO: Pod "pause" satisfied condition "running and ready" + Oct 13 08:30:01.506: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] + [It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1509 + STEP: adding the label testing-label with value testing-label-value to a pod 10/13/23 08:30:01.506 + Oct 13 08:30:01.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 label pods pause testing-label=testing-label-value' + Oct 13 08:30:01.602: INFO: stderr: "" + Oct 13 08:30:01.602: INFO: stdout: "pod/pause labeled\n" + STEP: verifying the pod has the label testing-label with the value testing-label-value 10/13/23 08:30:01.602 + Oct 13 08:30:01.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pod pause -L testing-label' + Oct 13 08:30:01.691: INFO: stderr: "" + Oct 13 08:30:01.691: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" + STEP: removing the label testing-label of a pod 10/13/23 08:30:01.691 + Oct 13 08:30:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 label pods pause testing-label-' + Oct 13 08:30:01.784: INFO: stderr: "" + Oct 13 08:30:01.785: INFO: stdout: "pod/pause unlabeled\n" + STEP: verifying the pod doesn't have the label testing-label 10/13/23 08:30:01.785 + Oct 13 08:30:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pod pause -L testing-label' + Oct 13 08:30:01.863: INFO: stderr: "" + Oct 13 08:30:01.863: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" + [AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1500 + STEP: using delete to clean up resources 10/13/23 08:30:01.864 + Oct 13 08:30:01.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 delete --grace-period=0 --force -f -' + Oct 13 08:30:01.953: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:30:01.953: INFO: stdout: "pod \"pause\" force deleted\n" + Oct 13 08:30:01.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get rc,svc -l name=pause --no-headers' + Oct 13 08:30:02.030: INFO: stderr: "No resources found in kubectl-9450 namespace.\n" + Oct 13 08:30:02.030: INFO: stdout: "" + Oct 13 08:30:02.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9450 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Oct 13 08:30:02.116: INFO: stderr: "" + Oct 13 08:30:02.116: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:02.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-9450" for this suite. 
10/13/23 08:30:02.122 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:02.13 +Oct 13 08:30:02.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:30:02.132 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:02.156 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:02.159 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +STEP: creating a secret 10/13/23 08:30:02.162 +STEP: listing secrets in all namespaces to ensure that there are more than zero 10/13/23 08:30:02.167 +STEP: patching the secret 10/13/23 08:30:02.17 +STEP: deleting the secret using a LabelSelector 10/13/23 08:30:02.179 +STEP: listing secrets in all namespaces, searching for label name and value in patch 10/13/23 08:30:02.187 +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:02.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-4897" for this suite. 10/13/23 08:30:02.193 +------------------------------ +• [0.069 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:02.13 + Oct 13 08:30:02.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:30:02.132 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:02.156 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:02.159 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 + STEP: creating a secret 10/13/23 08:30:02.162 + STEP: listing secrets in all namespaces to ensure that there are more than zero 10/13/23 08:30:02.167 + STEP: patching the secret 10/13/23 08:30:02.17 + STEP: deleting the secret using a LabelSelector 10/13/23 08:30:02.179 + STEP: listing secrets in all namespaces, searching for label name and value in patch 10/13/23 08:30:02.187 + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:02.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-4897" for this suite. 
10/13/23 08:30:02.193 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:02.2 +Oct 13 08:30:02.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 08:30:02.201 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:02.218 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:02.22 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +Oct 13 08:30:02.270: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"feee7928-22de-42de-986f-155c22e257a9", Controller:(*bool)(0xc0037d8dfa), BlockOwnerDeletion:(*bool)(0xc0037d8dfb)}} +Oct 13 08:30:02.277: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d5d5d8ed-ac14-4465-becc-e141c0242264", Controller:(*bool)(0xc0037d907e), BlockOwnerDeletion:(*bool)(0xc0037d907f)}} +Oct 13 08:30:02.283: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bd27896e-2592-4e81-ae2c-62c13d777d25", Controller:(*bool)(0xc0037d92fe), BlockOwnerDeletion:(*bool)(0xc0037d92ff)}} +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:07.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-4979" for this suite. 
10/13/23 08:30:07.296 +------------------------------ +• [SLOW TEST] [5.102 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:02.2 + Oct 13 08:30:02.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 08:30:02.201 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:02.218 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:02.22 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + Oct 13 08:30:02.270: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"feee7928-22de-42de-986f-155c22e257a9", Controller:(*bool)(0xc0037d8dfa), BlockOwnerDeletion:(*bool)(0xc0037d8dfb)}} + Oct 13 08:30:02.277: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d5d5d8ed-ac14-4465-becc-e141c0242264", Controller:(*bool)(0xc0037d907e), BlockOwnerDeletion:(*bool)(0xc0037d907f)}} + Oct 13 08:30:02.283: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bd27896e-2592-4e81-ae2c-62c13d777d25", Controller:(*bool)(0xc0037d92fe), BlockOwnerDeletion:(*bool)(0xc0037d92ff)}} + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:07.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-4979" for this suite. 
10/13/23 08:30:07.296 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:07.303 +Oct 13 08:30:07.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename endpointslice 10/13/23 08:30:07.305 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:07.318 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:07.321 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:09.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-115" for this suite. 10/13/23 08:30:09.375 +------------------------------ +• [2.078 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:07.303 + Oct 13 08:30:07.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename endpointslice 10/13/23 08:30:07.305 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:07.318 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:07.321 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:09.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-115" for this suite. 
10/13/23 08:30:09.375 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:09.382 +Oct 13 08:30:09.382: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:30:09.383 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:09.4 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:09.403 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 +STEP: Creating secret with name secret-test-map-3b482676-f9c5-41fb-ad8e-e9dc813d2b8b 10/13/23 08:30:09.405 +STEP: Creating a pod to test consume secrets 10/13/23 08:30:09.409 +Oct 13 08:30:09.419: INFO: Waiting up to 5m0s for pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2" in namespace "secrets-9217" to be "Succeeded or Failed" +Oct 13 08:30:09.423: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702096ms +Oct 13 08:30:11.427: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Running", Reason="", readiness=false. Elapsed: 2.007671353s +Oct 13 08:30:13.428: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009086726s +STEP: Saw pod success 10/13/23 08:30:13.428 +Oct 13 08:30:13.428: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2" satisfied condition "Succeeded or Failed" +Oct 13 08:30:13.431: INFO: Trying to get logs from node node2 pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 container secret-volume-test: +STEP: delete the pod 10/13/23 08:30:13.445 +Oct 13 08:30:13.463: INFO: Waiting for pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 to disappear +Oct 13 08:30:13.467: INFO: Pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:13.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9217" for this suite. 
10/13/23 08:30:13.47 +------------------------------ +• [4.093 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:09.382 + Oct 13 08:30:09.382: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:30:09.383 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:09.4 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:09.403 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 + STEP: Creating secret with name secret-test-map-3b482676-f9c5-41fb-ad8e-e9dc813d2b8b 10/13/23 08:30:09.405 + STEP: Creating a pod to test consume secrets 10/13/23 08:30:09.409 + Oct 13 08:30:09.419: INFO: Waiting up to 5m0s for pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2" in namespace "secrets-9217" to be "Succeeded or Failed" + Oct 13 08:30:09.423: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702096ms + Oct 13 08:30:11.427: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Running", Reason="", readiness=false. Elapsed: 2.007671353s + Oct 13 08:30:13.428: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009086726s + STEP: Saw pod success 10/13/23 08:30:13.428 + Oct 13 08:30:13.428: INFO: Pod "pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2" satisfied condition "Succeeded or Failed" + Oct 13 08:30:13.431: INFO: Trying to get logs from node node2 pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 container secret-volume-test: + STEP: delete the pod 10/13/23 08:30:13.445 + Oct 13 08:30:13.463: INFO: Waiting for pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 to disappear + Oct 13 08:30:13.467: INFO: Pod pod-secrets-893744a9-3e34-4778-8c36-28f18fb16ef2 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:13.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9217" for this suite. 
10/13/23 08:30:13.47 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:13.478 +Oct 13 08:30:13.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename podtemplate 10/13/23 08:30:13.479 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:13.492 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:13.495 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 +[It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 +STEP: Destroying namespace "podtemplate-1342" for this suite. 10/13/23 08:30:13.522 +------------------------------ +• [0.050 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:13.478 + Oct 13 08:30:13.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename podtemplate 10/13/23 08:30:13.479 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:13.492 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:13.495 + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 + [It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 + STEP: Destroying namespace "podtemplate-1342" for this suite. 
10/13/23 08:30:13.522 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:13.528 +Oct 13 08:30:13.528: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 08:30:13.529 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:13.544 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:13.546 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +STEP: create the rc 10/13/23 08:30:13.552 +STEP: delete the rc 10/13/23 08:30:18.561 +STEP: wait for the rc to be deleted 10/13/23 08:30:18.569 +Oct 13 08:30:19.581: INFO: 80 pods remaining +Oct 13 08:30:19.581: INFO: 80 pods has nil DeletionTimestamp +Oct 13 08:30:19.581: INFO: +Oct 13 08:30:20.582: INFO: 71 pods remaining +Oct 13 08:30:20.582: INFO: 71 pods has nil DeletionTimestamp +Oct 13 08:30:20.582: INFO: +Oct 13 08:30:21.578: INFO: 60 pods remaining +Oct 13 08:30:21.578: INFO: 60 pods has nil DeletionTimestamp +Oct 13 08:30:21.578: INFO: +Oct 13 08:30:22.579: INFO: 40 pods remaining +Oct 13 08:30:22.579: INFO: 40 pods has nil DeletionTimestamp +Oct 13 08:30:22.579: INFO: +Oct 13 08:30:23.581: INFO: 31 pods remaining +Oct 13 08:30:23.581: INFO: 31 pods has nil DeletionTimestamp +Oct 13 08:30:23.581: INFO: +Oct 13 08:30:24.577: INFO: 20 pods remaining +Oct 13 08:30:24.577: INFO: 20 pods has nil DeletionTimestamp +Oct 13 08:30:24.577: INFO: +STEP: Gathering metrics 10/13/23 08:30:25.576 +Oct 13 08:30:26.185: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 08:30:26.189: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.154456ms +Oct 13 08:30:26.189: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 08:30:26.189: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 08:30:26.808: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:26.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-7590" for this suite. 10/13/23 08:30:26.812 +------------------------------ +• [SLOW TEST] [13.290 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:13.528 + Oct 13 08:30:13.528: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 08:30:13.529 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:13.544 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:13.546 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 + STEP: create the rc 10/13/23 08:30:13.552 + STEP: delete the rc 10/13/23 08:30:18.561 + STEP: wait for the rc to be deleted 10/13/23 08:30:18.569 + Oct 13 08:30:19.581: INFO: 80 pods remaining + Oct 13 08:30:19.581: INFO: 80 pods has nil DeletionTimestamp + Oct 13 08:30:19.581: INFO: + Oct 13 08:30:20.582: INFO: 71 pods remaining + Oct 13 08:30:20.582: INFO: 71 pods has nil DeletionTimestamp + Oct 13 08:30:20.582: INFO: + Oct 13 08:30:21.578: INFO: 60 pods remaining + Oct 13 08:30:21.578: INFO: 60 pods has nil DeletionTimestamp + Oct 13 08:30:21.578: INFO: + Oct 13 08:30:22.579: INFO: 40 pods remaining + Oct 13 08:30:22.579: INFO: 40 pods has nil DeletionTimestamp + Oct 13 08:30:22.579: INFO: + Oct 13 08:30:23.581: INFO: 31 pods remaining 
+ Oct 13 08:30:23.581: INFO: 31 pods has nil DeletionTimestamp + Oct 13 08:30:23.581: INFO: + Oct 13 08:30:24.577: INFO: 20 pods remaining + Oct 13 08:30:24.577: INFO: 20 pods has nil DeletionTimestamp + Oct 13 08:30:24.577: INFO: + STEP: Gathering metrics 10/13/23 08:30:25.576 + Oct 13 08:30:26.185: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 08:30:26.189: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 3.154456ms + Oct 13 08:30:26.189: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 08:30:26.189: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 08:30:26.808: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:26.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-7590" for this suite. 
10/13/23 08:30:26.812 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:26.818 +Oct 13 08:30:26.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:30:26.819 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:26.832 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:26.834 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 +STEP: Creating a pod to test emptydir 0644 on tmpfs 10/13/23 08:30:26.836 +Oct 13 08:30:26.843: INFO: Waiting up to 5m0s for pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99" in namespace "emptydir-3262" to be "Succeeded or Failed" +Oct 13 08:30:26.846: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.008297ms +Oct 13 08:30:28.849: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006337069s +Oct 13 08:30:30.851: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007666243s +Oct 13 08:30:32.850: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.007539605s +STEP: Saw pod success 10/13/23 08:30:32.85 +Oct 13 08:30:32.851: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99" satisfied condition "Succeeded or Failed" +Oct 13 08:30:32.853: INFO: Trying to get logs from node node2 pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 container test-container: +STEP: delete the pod 10/13/23 08:30:32.859 +Oct 13 08:30:32.870: INFO: Waiting for pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 to disappear +Oct 13 08:30:32.873: INFO: Pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:32.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-3262" for this suite. 
10/13/23 08:30:32.876 +------------------------------ +• [SLOW TEST] [6.063 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:26.818 + Oct 13 08:30:26.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:30:26.819 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:26.832 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:26.834 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:127 + STEP: Creating a pod to test emptydir 0644 on tmpfs 10/13/23 08:30:26.836 + Oct 13 08:30:26.843: INFO: Waiting up to 5m0s for pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99" in namespace "emptydir-3262" to be "Succeeded or Failed" + Oct 13 08:30:26.846: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.008297ms + Oct 13 08:30:28.849: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006337069s + Oct 13 08:30:30.851: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007666243s + Oct 13 08:30:32.850: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.007539605s + STEP: Saw pod success 10/13/23 08:30:32.85 + Oct 13 08:30:32.851: INFO: Pod "pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99" satisfied condition "Succeeded or Failed" + Oct 13 08:30:32.853: INFO: Trying to get logs from node node2 pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 container test-container: + STEP: delete the pod 10/13/23 08:30:32.859 + Oct 13 08:30:32.870: INFO: Waiting for pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 to disappear + Oct 13 08:30:32.873: INFO: Pod pod-34d665c3-d6c6-4292-b103-8a52f2aa9d99 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:32.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-3262" for this suite. 
10/13/23 08:30:32.876 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:32.881 +Oct 13 08:30:32.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:30:32.882 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:32.896 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:32.898 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +STEP: creating service in namespace services-2794 10/13/23 08:30:32.9 +STEP: creating service affinity-clusterip-transition in namespace services-2794 10/13/23 08:30:32.9 +STEP: creating replication controller affinity-clusterip-transition in namespace services-2794 10/13/23 08:30:32.91 +I1013 08:30:32.915867 23 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-2794, replica count: 3 +I1013 08:30:35.967630 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:30:35.972: INFO: Creating new exec pod +Oct 13 08:30:35.979: INFO: Waiting up to 5m0s for pod "execpod-affinity64z7h" in namespace "services-2794" to be "running" +Oct 13 08:30:35.982: INFO: Pod "execpod-affinity64z7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.640703ms +Oct 13 08:30:37.985: INFO: Pod "execpod-affinity64z7h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005736195s +Oct 13 08:30:37.985: INFO: Pod "execpod-affinity64z7h" satisfied condition "running" +Oct 13 08:30:38.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' +Oct 13 08:30:39.105: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Oct 13 08:30:39.105: INFO: stdout: "" +Oct 13 08:30:39.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c nc -v -z -w 2 10.102.36.136 80' +Oct 13 08:30:39.223: INFO: stderr: "+ nc -v -z -w 2 10.102.36.136 80\nConnection to 10.102.36.136 80 port [tcp/http] succeeded!\n" +Oct 13 08:30:39.223: INFO: stdout: "" +Oct 13 08:30:39.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.36.136:80/ ; done' +Oct 13 08:30:39.567: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n" +Oct 13 08:30:39.567: INFO: stdout: "\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-pv89p" +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.567: INFO: Received response from host: 
affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p +Oct 13 08:30:39.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.36.136:80/ ; done' +Oct 13 08:30:39.754: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n" +Oct 13 08:30:39.754: INFO: stdout: "\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm" +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 
08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm +Oct 13 08:30:39.754: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2794, will wait for the garbage collector to delete the pods 10/13/23 08:30:39.766 +Oct 13 08:30:39.829: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.592167ms +Oct 13 08:30:39.930: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.820375ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:41.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-2794" for this suite. 10/13/23 08:30:41.847 +------------------------------ +• [SLOW TEST] [8.971 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:32.881 + Oct 13 08:30:32.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:30:32.882 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:32.896 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:32.898 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 + STEP: creating service in namespace services-2794 10/13/23 08:30:32.9 + STEP: creating service affinity-clusterip-transition in namespace services-2794 10/13/23 08:30:32.9 + STEP: creating replication controller affinity-clusterip-transition in namespace services-2794 10/13/23 08:30:32.91 + I1013 08:30:32.915867 23 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-2794, replica count: 3 + I1013 08:30:35.967630 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:30:35.972: INFO: Creating new exec pod + 
Oct 13 08:30:35.979: INFO: Waiting up to 5m0s for pod "execpod-affinity64z7h" in namespace "services-2794" to be "running" + Oct 13 08:30:35.982: INFO: Pod "execpod-affinity64z7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.640703ms + Oct 13 08:30:37.985: INFO: Pod "execpod-affinity64z7h": Phase="Running", Reason="", readiness=true. Elapsed: 2.005736195s + Oct 13 08:30:37.985: INFO: Pod "execpod-affinity64z7h" satisfied condition "running" + Oct 13 08:30:38.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' + Oct 13 08:30:39.105: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" + Oct 13 08:30:39.105: INFO: stdout: "" + Oct 13 08:30:39.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c nc -v -z -w 2 10.102.36.136 80' + Oct 13 08:30:39.223: INFO: stderr: "+ nc -v -z -w 2 10.102.36.136 80\nConnection to 10.102.36.136 80 port [tcp/http] succeeded!\n" + Oct 13 08:30:39.223: INFO: stdout: "" + Oct 13 08:30:39.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.36.136:80/ ; done' + Oct 13 08:30:39.567: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n" + Oct 13 08:30:39.567: INFO: stdout: "\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-54nbn\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-pv89p\naffinity-clusterip-transition-pv89p" + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn + Oct 13 08:30:39.567: INFO: Received response from host: 
affinity-clusterip-transition-54nbn + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-54nbn + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.567: INFO: Received response from host: affinity-clusterip-transition-pv89p + Oct 13 08:30:39.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-2794 exec execpod-affinity64z7h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.36.136:80/ ; done' + Oct 13 08:30:39.754: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.36.136:80/\n" + Oct 13 08:30:39.754: INFO: stdout: "\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm\naffinity-clusterip-transition-xmkpm" + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: 
affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Received response from host: affinity-clusterip-transition-xmkpm + Oct 13 08:30:39.754: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2794, will wait for the garbage collector to delete the pods 10/13/23 08:30:39.766 + Oct 13 08:30:39.829: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.592167ms + Oct 13 08:30:39.930: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.820375ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:41.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-2794" for this suite. 
10/13/23 08:30:41.847 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:41.852 +Oct 13 08:30:41.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:30:41.853 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:41.867 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:41.869 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +STEP: Creating configMap with name cm-test-opt-del-e140da4f-1873-4d90-bd80-3e267959a0dd 10/13/23 08:30:41.873 +STEP: Creating configMap with name cm-test-opt-upd-454961ff-e5f8-4357-8ba5-aabb241a3be3 10/13/23 08:30:41.877 +STEP: Creating the pod 10/13/23 08:30:41.881 +Oct 13 08:30:41.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2" in namespace "projected-9338" to be "running and ready" +Oct 13 08:30:41.894: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.42244ms +Oct 13 08:30:41.894: INFO: The phase of Pod pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:30:43.898: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007281529s +Oct 13 08:30:43.898: INFO: The phase of Pod pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2 is Running (Ready = true) +Oct 13 08:30:43.898: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-e140da4f-1873-4d90-bd80-3e267959a0dd 10/13/23 08:30:43.917 +STEP: Updating configmap cm-test-opt-upd-454961ff-e5f8-4357-8ba5-aabb241a3be3 10/13/23 08:30:43.922 +STEP: Creating configMap with name cm-test-opt-create-32848879-8fef-45fe-995c-bbf42ae48c6c 10/13/23 08:30:43.926 +STEP: waiting to observe update in volume 10/13/23 08:30:43.931 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:45.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9338" for this suite. 
10/13/23 08:30:45.958 +------------------------------ +• [4.111 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:41.852 + Oct 13 08:30:41.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:30:41.853 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:41.867 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:41.869 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 + STEP: Creating configMap with name cm-test-opt-del-e140da4f-1873-4d90-bd80-3e267959a0dd 10/13/23 08:30:41.873 + STEP: Creating configMap with name cm-test-opt-upd-454961ff-e5f8-4357-8ba5-aabb241a3be3 10/13/23 08:30:41.877 + STEP: Creating the pod 10/13/23 08:30:41.881 + Oct 13 08:30:41.890: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2" in namespace "projected-9338" to be "running and ready" + Oct 13 08:30:41.894: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.42244ms + Oct 13 08:30:41.894: INFO: The phase of Pod pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:30:43.898: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007281529s + Oct 13 08:30:43.898: INFO: The phase of Pod pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2 is Running (Ready = true) + Oct 13 08:30:43.898: INFO: Pod "pod-projected-configmaps-1da752e2-c90b-4a4d-9f1d-e38ffaf583d2" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-e140da4f-1873-4d90-bd80-3e267959a0dd 10/13/23 08:30:43.917 + STEP: Updating configmap cm-test-opt-upd-454961ff-e5f8-4357-8ba5-aabb241a3be3 10/13/23 08:30:43.922 + STEP: Creating configMap with name cm-test-opt-create-32848879-8fef-45fe-995c-bbf42ae48c6c 10/13/23 08:30:43.926 + STEP: waiting to observe update in volume 10/13/23 08:30:43.931 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:45.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9338" for this suite. 
10/13/23 08:30:45.958 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:45.963 +Oct 13 08:30:45.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:30:45.964 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:45.979 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:45.982 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:30:45.984 +Oct 13 08:30:45.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144" in namespace "downward-api-7402" to be "Succeeded or Failed" +Oct 13 08:30:45.995: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999808ms +Oct 13 08:30:47.999: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006826924s +Oct 13 08:30:50.000: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007543388s +STEP: Saw pod success 10/13/23 08:30:50 +Oct 13 08:30:50.000: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144" satisfied condition "Succeeded or Failed" +Oct 13 08:30:50.002: INFO: Trying to get logs from node node1 pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 container client-container: +STEP: delete the pod 10/13/23 08:30:50.017 +Oct 13 08:30:50.032: INFO: Waiting for pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 to disappear +Oct 13 08:30:50.034: INFO: Pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:30:50.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-7402" for this suite. 
10/13/23 08:30:50.038 +------------------------------ +• [4.079 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:45.963 + Oct 13 08:30:45.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:30:45.964 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:45.979 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:45.982 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:30:45.984 + Oct 13 08:30:45.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144" in namespace "downward-api-7402" to be "Succeeded or Failed" + Oct 13 08:30:45.995: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999808ms + Oct 13 08:30:47.999: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006826924s + Oct 13 08:30:50.000: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007543388s + STEP: Saw pod success 10/13/23 08:30:50 + Oct 13 08:30:50.000: INFO: Pod "downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144" satisfied condition "Succeeded or Failed" + Oct 13 08:30:50.002: INFO: Trying to get logs from node node1 pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 container client-container: + STEP: delete the pod 10/13/23 08:30:50.017 + Oct 13 08:30:50.032: INFO: Waiting for pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 to disappear + Oct 13 08:30:50.034: INFO: Pod downwardapi-volume-b70d0611-0743-4379-9828-aa18dc800144 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:30:50.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-7402" for this suite. 
10/13/23 08:30:50.038 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + test/e2e/apps/job.go:481 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:30:50.043 +Oct 13 08:30:50.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename job 10/13/23 08:30:50.044 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:50.058 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:50.06 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a job [Conformance] + test/e2e/apps/job.go:481 +STEP: Creating a job 10/13/23 08:30:50.062 +STEP: Ensuring active pods == parallelism 10/13/23 08:30:50.068 +STEP: delete a job 10/13/23 08:30:52.074 +STEP: deleting Job.batch foo in namespace job-1186, will wait for the garbage collector to delete the pods 10/13/23 08:30:52.074 +Oct 13 08:30:52.138: INFO: Deleting Job.batch foo took: 10.18003ms +Oct 13 08:30:52.239: INFO: Terminating Job.batch foo pods took: 100.746833ms +STEP: Ensuring job was deleted 10/13/23 08:31:24.74 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:24.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-1186" for this suite. 10/13/23 08:31:24.749 +------------------------------ +• [SLOW TEST] [34.712 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should delete a job [Conformance] + test/e2e/apps/job.go:481 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:30:50.043 + Oct 13 08:30:50.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename job 10/13/23 08:30:50.044 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:30:50.058 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:30:50.06 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a job [Conformance] + test/e2e/apps/job.go:481 + STEP: Creating a job 10/13/23 08:30:50.062 + STEP: Ensuring active pods == parallelism 10/13/23 08:30:50.068 + STEP: delete a job 10/13/23 08:30:52.074 + STEP: deleting Job.batch foo in namespace job-1186, will wait for the garbage collector to delete the pods 10/13/23 08:30:52.074 + Oct 13 08:30:52.138: INFO: Deleting Job.batch foo took: 10.18003ms + Oct 13 08:30:52.239: INFO: Terminating Job.batch foo pods took: 100.746833ms + STEP: Ensuring job was deleted 10/13/23 08:31:24.74 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:24.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-1186" for this suite. 
10/13/23 08:31:24.749 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:24.755 +Oct 13 08:31:24.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 08:31:24.756 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:24.771 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:24.773 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +STEP: Creating a pod to test env composition 10/13/23 08:31:24.775 +Oct 13 08:31:24.782: INFO: Waiting up to 5m0s for pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c" in namespace "var-expansion-8857" to be "Succeeded or Failed" +Oct 13 08:31:24.785: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.284099ms +Oct 13 08:31:26.791: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009118617s +Oct 13 08:31:28.792: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009568711s +STEP: Saw pod success 10/13/23 08:31:28.792 +Oct 13 08:31:28.792: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c" satisfied condition "Succeeded or Failed" +Oct 13 08:31:28.797: INFO: Trying to get logs from node node2 pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c container dapi-container: +STEP: delete the pod 10/13/23 08:31:28.806 +Oct 13 08:31:28.818: INFO: Waiting for pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c to disappear +Oct 13 08:31:28.821: INFO: Pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:28.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-8857" for this suite. 
10/13/23 08:31:28.826 +------------------------------ +• [4.076 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:24.755 + Oct 13 08:31:24.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 08:31:24.756 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:24.771 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:24.773 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 + STEP: Creating a pod to test env composition 10/13/23 08:31:24.775 + Oct 13 08:31:24.782: INFO: Waiting up to 5m0s for pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c" in namespace "var-expansion-8857" to be "Succeeded or Failed" + Oct 13 08:31:24.785: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.284099ms + Oct 13 08:31:26.791: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009118617s + Oct 13 08:31:28.792: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009568711s + STEP: Saw pod success 10/13/23 08:31:28.792 + Oct 13 08:31:28.792: INFO: Pod "var-expansion-90baee4a-7f97-433a-9166-8021d628f05c" satisfied condition "Succeeded or Failed" + Oct 13 08:31:28.797: INFO: Trying to get logs from node node2 pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c container dapi-container: + STEP: delete the pod 10/13/23 08:31:28.806 + Oct 13 08:31:28.818: INFO: Waiting for pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c to disappear + Oct 13 08:31:28.821: INFO: Pod var-expansion-90baee4a-7f97-433a-9166-8021d628f05c no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:28.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-8857" for this suite. 
10/13/23 08:31:28.826 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:28.832 +Oct 13 08:31:28.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-runtime 10/13/23 08:31:28.833 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:28.848 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:28.85 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +STEP: create the container 10/13/23 08:31:28.853 +STEP: wait for the container to reach Succeeded 10/13/23 08:31:28.861 +STEP: get the container status 10/13/23 08:31:32.885 +STEP: the container should be terminated 10/13/23 08:31:32.889 +STEP: the termination message should be set 10/13/23 08:31:32.889 +Oct 13 08:31:32.889: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 10/13/23 08:31:32.889 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:32.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-6585" for this suite. 
10/13/23 08:31:32.905 +------------------------------ +• [4.078 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:28.832 + Oct 13 08:31:28.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-runtime 10/13/23 08:31:28.833 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:28.848 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:28.85 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 + STEP: create the container 10/13/23 08:31:28.853 + STEP: wait for the container to reach Succeeded 10/13/23 08:31:28.861 + STEP: get the container status 10/13/23 08:31:32.885 + STEP: the container should be terminated 10/13/23 08:31:32.889 + STEP: the termination message should be set 10/13/23 08:31:32.889 + Oct 13 08:31:32.889: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 10/13/23 08:31:32.889 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:32.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-6585" for this suite. 
10/13/23 08:31:32.905 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:32.911 +Oct 13 08:31:32.911: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename runtimeclass 10/13/23 08:31:32.912 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:32.926 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:32.928 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +STEP: getting /apis 10/13/23 08:31:32.93 +STEP: getting /apis/node.k8s.io 10/13/23 08:31:32.932 +STEP: getting /apis/node.k8s.io/v1 10/13/23 08:31:32.933 +STEP: creating 10/13/23 08:31:32.934 +STEP: watching 10/13/23 08:31:32.947 +Oct 13 08:31:32.947: INFO: starting watch +STEP: getting 10/13/23 08:31:32.952 +STEP: listing 10/13/23 08:31:32.954 +STEP: patching 10/13/23 08:31:32.957 +STEP: updating 10/13/23 08:31:32.961 +Oct 13 08:31:32.966: INFO: waiting for watch events with expected annotations +STEP: deleting 10/13/23 08:31:32.966 +STEP: deleting a collection 10/13/23 08:31:32.975 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-1890" for this suite. 
10/13/23 08:31:32.991 +------------------------------ +• [0.086 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:32.911 + Oct 13 08:31:32.911: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename runtimeclass 10/13/23 08:31:32.912 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:32.926 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:32.928 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + STEP: getting /apis 10/13/23 08:31:32.93 + STEP: getting /apis/node.k8s.io 10/13/23 08:31:32.932 + STEP: getting /apis/node.k8s.io/v1 10/13/23 08:31:32.933 + STEP: creating 10/13/23 08:31:32.934 + STEP: watching 10/13/23 08:31:32.947 + Oct 13 08:31:32.947: INFO: starting watch + STEP: getting 10/13/23 08:31:32.952 + STEP: listing 10/13/23 08:31:32.954 + STEP: patching 10/13/23 08:31:32.957 + STEP: updating 10/13/23 08:31:32.961 + Oct 13 08:31:32.966: INFO: waiting for watch events with expected annotations + STEP: deleting 10/13/23 08:31:32.966 + STEP: deleting a collection 10/13/23 08:31:32.975 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-1890" for this suite. 
10/13/23 08:31:32.991 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:32.997 +Oct 13 08:31:32.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replicaset 10/13/23 08:31:32.998 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:33.011 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:33.014 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +Oct 13 08:31:33.016: INFO: Creating ReplicaSet my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7 +Oct 13 08:31:33.024: INFO: Pod name my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Found 0 pods out of 1 +Oct 13 08:31:38.029: INFO: Pod name my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Found 1 pods out of 1 +Oct 13 08:31:38.029: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7" is running +Oct 13 08:31:38.029: INFO: Waiting up to 5m0s for pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" in namespace "replicaset-2233" to be "running" +Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd": Phase="Running", Reason="", readiness=true. Elapsed: 3.429566ms +Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" satisfied condition "running" +Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:33 +0000 UTC Reason: Message:}]) +Oct 13 08:31:38.033: INFO: Trying to dial the pod +Oct 13 08:31:43.048: INFO: Controller my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Got expected result from replica 1 [my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd]: "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:43.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-2233" for this suite. 
10/13/23 08:31:43.053 +------------------------------ +• [SLOW TEST] [10.062 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:32.997 + Oct 13 08:31:32.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replicaset 10/13/23 08:31:32.998 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:33.011 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:33.014 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + Oct 13 08:31:33.016: INFO: Creating ReplicaSet my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7 + Oct 13 08:31:33.024: INFO: Pod name my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Found 0 pods out of 1 + Oct 13 08:31:38.029: INFO: Pod name my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Found 1 pods out of 1 + Oct 13 08:31:38.029: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7" is running + Oct 13 08:31:38.029: INFO: Waiting up to 5m0s for pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" in namespace "replicaset-2233" to be "running" + Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd": Phase="Running", Reason="", readiness=true. Elapsed: 3.429566ms + Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" satisfied condition "running" + Oct 13 08:31:38.033: INFO: Pod "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 08:31:33 +0000 UTC Reason: Message:}]) + Oct 13 08:31:38.033: INFO: Trying to dial the pod + Oct 13 08:31:43.048: INFO: Controller my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7: Got expected result from replica 1 [my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd]: "my-hostname-basic-67885034-3449-40b9-b0d9-e847a71ccfb7-vk4zd", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:43.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-2233" for this suite. 
10/13/23 08:31:43.053 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] ControllerRevision [Serial] + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:43.059 +Oct 13 08:31:43.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename controllerrevisions 10/13/23 08:31:43.06 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:43.077 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:43.079 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 +[It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +STEP: Creating DaemonSet "e2e-r74kn-daemon-set" 10/13/23 08:31:43.098 +STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 08:31:43.103 +Oct 13 08:31:43.110: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 0 +Oct 13 08:31:43.110: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:31:44.120: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 1 +Oct 13 08:31:44.120: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:31:45.118: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 3 +Oct 13 08:31:45.118: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-r74kn-daemon-set +STEP: Confirm DaemonSet "e2e-r74kn-daemon-set" successfully created with "daemonset-name=e2e-r74kn-daemon-set" label 10/13/23 08:31:45.121 +STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-r74kn-daemon-set" 10/13/23 08:31:45.126 +Oct 13 08:31:45.129: INFO: Located ControllerRevision: "e2e-r74kn-daemon-set-db5588dbf" +STEP: Patching ControllerRevision "e2e-r74kn-daemon-set-db5588dbf" 10/13/23 08:31:45.132 +Oct 13 08:31:45.138: INFO: e2e-r74kn-daemon-set-db5588dbf has been patched +STEP: Create a new ControllerRevision 10/13/23 08:31:45.138 +Oct 13 08:31:45.142: INFO: Created ControllerRevision: e2e-r74kn-daemon-set-6d8bfc694f +STEP: Confirm that there are two ControllerRevisions 10/13/23 08:31:45.142 +Oct 13 08:31:45.143: INFO: Requesting list of ControllerRevisions to confirm quantity +Oct 13 08:31:45.146: INFO: Found 2 ControllerRevisions +STEP: Deleting ControllerRevision "e2e-r74kn-daemon-set-db5588dbf" 10/13/23 08:31:45.146 +STEP: Confirm that there is only one ControllerRevision 10/13/23 08:31:45.153 +Oct 13 08:31:45.153: INFO: Requesting list of ControllerRevisions to confirm quantity +Oct 13 08:31:45.155: INFO: Found 1 ControllerRevisions +STEP: Updating ControllerRevision "e2e-r74kn-daemon-set-6d8bfc694f" 10/13/23 08:31:45.158 +Oct 13 08:31:45.165: INFO: e2e-r74kn-daemon-set-6d8bfc694f has been updated +STEP: Generate another ControllerRevision by patching the Daemonset 10/13/23 08:31:45.165 +W1013 08:31:45.174211 23 warnings.go:70] unknown field "updateStrategy" +STEP: Confirm that there are two ControllerRevisions 10/13/23 08:31:45.174 +Oct 13 08:31:45.174: INFO: Requesting list of ControllerRevisions to confirm quantity +Oct 13 08:31:46.177: 
INFO: Requesting list of ControllerRevisions to confirm quantity +Oct 13 08:31:46.181: INFO: Found 2 ControllerRevisions +STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-r74kn-daemon-set-6d8bfc694f=updated" 10/13/23 08:31:46.181 +STEP: Confirm that there is only one ControllerRevision 10/13/23 08:31:46.19 +Oct 13 08:31:46.190: INFO: Requesting list of ControllerRevisions to confirm quantity +Oct 13 08:31:46.193: INFO: Found 1 ControllerRevisions +Oct 13 08:31:46.195: INFO: ControllerRevision "e2e-r74kn-daemon-set-58fc85d648" has revision 3 +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 +STEP: Deleting DaemonSet "e2e-r74kn-daemon-set" 10/13/23 08:31:46.198 +STEP: deleting DaemonSet.extensions e2e-r74kn-daemon-set in namespace controllerrevisions-2129, will wait for the garbage collector to delete the pods 10/13/23 08:31:46.198 +Oct 13 08:31:46.269: INFO: Deleting DaemonSet.extensions e2e-r74kn-daemon-set took: 16.538092ms +Oct 13 08:31:46.370: INFO: Terminating DaemonSet.extensions e2e-r74kn-daemon-set pods took: 100.950307ms +Oct 13 08:31:47.773: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 0 +Oct 13 08:31:47.773: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-r74kn-daemon-set +Oct 13 08:31:47.775: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"16717"},"items":null} + +Oct 13 08:31:47.778: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"16717"},"items":null} + +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:47.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "controllerrevisions-2129" for this suite. 10/13/23 08:31:47.796 +------------------------------ +• [4.742 seconds] +[sig-apps] ControllerRevision [Serial] +test/e2e/apps/framework.go:23 + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ControllerRevision [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:43.059 + Oct 13 08:31:43.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename controllerrevisions 10/13/23 08:31:43.06 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:43.077 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:43.079 + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 + [It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + STEP: Creating DaemonSet "e2e-r74kn-daemon-set" 10/13/23 08:31:43.098 + STEP: Check that daemon pods launch on every node of the cluster. 
10/13/23 08:31:43.103 + Oct 13 08:31:43.110: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 0 + Oct 13 08:31:43.110: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:31:44.120: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 1 + Oct 13 08:31:44.120: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:31:45.118: INFO: Number of nodes with available pods controlled by daemonset e2e-r74kn-daemon-set: 3 + Oct 13 08:31:45.118: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-r74kn-daemon-set + STEP: Confirm DaemonSet "e2e-r74kn-daemon-set" successfully created with "daemonset-name=e2e-r74kn-daemon-set" label 10/13/23 08:31:45.121 + STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-r74kn-daemon-set" 10/13/23 08:31:45.126 + Oct 13 08:31:45.129: INFO: Located ControllerRevision: "e2e-r74kn-daemon-set-db5588dbf" + STEP: Patching ControllerRevision "e2e-r74kn-daemon-set-db5588dbf" 10/13/23 08:31:45.132 + Oct 13 08:31:45.138: INFO: e2e-r74kn-daemon-set-db5588dbf has been patched + STEP: Create a new ControllerRevision 10/13/23 08:31:45.138 + Oct 13 08:31:45.142: INFO: Created ControllerRevision: e2e-r74kn-daemon-set-6d8bfc694f + STEP: Confirm that there are two ControllerRevisions 10/13/23 08:31:45.142 + Oct 13 08:31:45.143: INFO: Requesting list of ControllerRevisions to confirm quantity + Oct 13 08:31:45.146: INFO: Found 2 ControllerRevisions + STEP: Deleting ControllerRevision "e2e-r74kn-daemon-set-db5588dbf" 10/13/23 08:31:45.146 + STEP: Confirm that there is only one ControllerRevision 10/13/23 08:31:45.153 + Oct 13 08:31:45.153: INFO: Requesting list of ControllerRevisions to confirm quantity + Oct 13 08:31:45.155: INFO: Found 1 ControllerRevisions + STEP: Updating ControllerRevision "e2e-r74kn-daemon-set-6d8bfc694f" 10/13/23 08:31:45.158 + Oct 13 08:31:45.165: INFO: e2e-r74kn-daemon-set-6d8bfc694f has been updated + STEP: Generate another ControllerRevision by patching the Daemonset 10/13/23 08:31:45.165 + W1013 08:31:45.174211 23 warnings.go:70] unknown field "updateStrategy" + STEP: Confirm that there are two ControllerRevisions 10/13/23 08:31:45.174 + Oct 13 08:31:45.174: INFO: Requesting list of ControllerRevisions to confirm quantity + Oct 13 08:31:46.177: INFO: Requesting list of ControllerRevisions to confirm quantity + Oct 13 08:31:46.181: INFO: Found 2 ControllerRevisions + STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-r74kn-daemon-set-6d8bfc694f=updated" 10/13/23 08:31:46.181 + STEP: Confirm that there is only one ControllerRevision 10/13/23 08:31:46.19 + Oct 13 08:31:46.190: INFO: Requesting list of ControllerRevisions to confirm quantity + Oct 13 08:31:46.193: INFO: Found 1 ControllerRevisions + Oct 13 08:31:46.195: INFO: ControllerRevision "e2e-r74kn-daemon-set-58fc85d648" has revision 3 + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 + STEP: Deleting DaemonSet "e2e-r74kn-daemon-set" 10/13/23 08:31:46.198 + STEP: deleting DaemonSet.extensions e2e-r74kn-daemon-set in namespace controllerrevisions-2129, will wait for the garbage collector to delete the pods 10/13/23 08:31:46.198 + Oct 13 08:31:46.269: INFO: Deleting DaemonSet.extensions e2e-r74kn-daemon-set took: 16.538092ms + Oct 13 08:31:46.370: INFO: Terminating DaemonSet.extensions e2e-r74kn-daemon-set pods took: 100.950307ms + Oct 13 08:31:47.773: INFO: Number of nodes with available pods 
controlled by daemonset e2e-r74kn-daemon-set: 0 + Oct 13 08:31:47.773: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-r74kn-daemon-set + Oct 13 08:31:47.775: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"16717"},"items":null} + + Oct 13 08:31:47.778: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"16717"},"items":null} + + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:47.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "controllerrevisions-2129" for this suite. 10/13/23 08:31:47.796 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:47.803 +Oct 13 08:31:47.803: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:31:47.804 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:47.82 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:47.823 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +STEP: Creating configMap with name projected-configmap-test-volume-73cf7748-bbb0-42bb-a467-71911c71ffba 10/13/23 08:31:47.825 +STEP: Creating a pod to test consume configMaps 10/13/23 08:31:47.829 +Oct 13 08:31:47.838: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d" in namespace "projected-9632" to be "Succeeded or Failed" +Oct 13 08:31:47.841: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.145519ms +Oct 13 08:31:49.845: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007452813s +Oct 13 08:31:51.847: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008990738s +STEP: Saw pod success 10/13/23 08:31:51.847 +Oct 13 08:31:51.847: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d" satisfied condition "Succeeded or Failed" +Oct 13 08:31:51.851: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d container agnhost-container: +STEP: delete the pod 10/13/23 08:31:51.859 +Oct 13 08:31:51.869: INFO: Waiting for pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d to disappear +Oct 13 08:31:51.873: INFO: Pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:51.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9632" for this suite. 10/13/23 08:31:51.876 +------------------------------ +• [4.080 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:47.803 + Oct 13 08:31:47.803: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:31:47.804 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:47.82 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:47.823 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 + STEP: Creating configMap with name projected-configmap-test-volume-73cf7748-bbb0-42bb-a467-71911c71ffba 10/13/23 08:31:47.825 + STEP: Creating a pod to test consume configMaps 10/13/23 08:31:47.829 + Oct 13 08:31:47.838: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d" in namespace "projected-9632" to be "Succeeded or Failed" + Oct 13 08:31:47.841: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.145519ms + Oct 13 08:31:49.845: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007452813s + Oct 13 08:31:51.847: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008990738s + STEP: Saw pod success 10/13/23 08:31:51.847 + Oct 13 08:31:51.847: INFO: Pod "pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d" satisfied condition "Succeeded or Failed" + Oct 13 08:31:51.851: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d container agnhost-container: + STEP: delete the pod 10/13/23 08:31:51.859 + Oct 13 08:31:51.869: INFO: Waiting for pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d to disappear + Oct 13 08:31:51.873: INFO: Pod pod-projected-configmaps-0b0d4489-4604-4284-9a74-10d1819bf79d no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:51.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9632" for this suite. 10/13/23 08:31:51.876 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:51.884 +Oct 13 08:31:51.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:31:51.885 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:51.904 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:51.906 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +STEP: Creating secret with name projected-secret-test-132c85f4-bfd5-4ebe-b382-2f8a440215af 10/13/23 08:31:51.908 +STEP: Creating a pod to test consume secrets 10/13/23 08:31:51.913 +Oct 13 08:31:51.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e" in namespace "projected-5903" to be "Succeeded or Failed" +Oct 13 08:31:51.924: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205185ms +Oct 13 08:31:53.927: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006589832s +Oct 13 08:31:55.929: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008083691s +STEP: Saw pod success 10/13/23 08:31:55.929 +Oct 13 08:31:55.929: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e" satisfied condition "Succeeded or Failed" +Oct 13 08:31:55.932: INFO: Trying to get logs from node node2 pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e container secret-volume-test: +STEP: delete the pod 10/13/23 08:31:55.939 +Oct 13 08:31:55.950: INFO: Waiting for pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e to disappear +Oct 13 08:31:55.952: INFO: Pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 08:31:55.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-5903" for this suite. 10/13/23 08:31:55.956 +------------------------------ +• [4.077 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:51.884 + Oct 13 08:31:51.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:31:51.885 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:51.904 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:51.906 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 + STEP: Creating secret with name projected-secret-test-132c85f4-bfd5-4ebe-b382-2f8a440215af 10/13/23 08:31:51.908 + STEP: Creating a pod to test consume secrets 10/13/23 08:31:51.913 + Oct 13 08:31:51.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e" in namespace "projected-5903" to be "Succeeded or Failed" + Oct 13 08:31:51.924: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205185ms + Oct 13 08:31:53.927: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006589832s + Oct 13 08:31:55.929: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008083691s + STEP: Saw pod success 10/13/23 08:31:55.929 + Oct 13 08:31:55.929: INFO: Pod "pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e" satisfied condition "Succeeded or Failed" + Oct 13 08:31:55.932: INFO: Trying to get logs from node node2 pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e container secret-volume-test: + STEP: delete the pod 10/13/23 08:31:55.939 + Oct 13 08:31:55.950: INFO: Waiting for pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e to disappear + Oct 13 08:31:55.952: INFO: Pod pod-projected-secrets-9c04f6fe-1d7d-491e-a3dd-e42df311538e no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 08:31:55.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-5903" for this suite. 10/13/23 08:31:55.956 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +[BeforeEach] [sig-network] HostPort + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:31:55.961 +Oct 13 08:31:55.961: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename hostport 10/13/23 08:31:55.962 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:55.976 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:55.979 +[BeforeEach] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 10/13/23 08:31:55.985 +Oct 13 08:31:55.993: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-9764" to be "running and ready" +Oct 13 08:31:55.997: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.899595ms +Oct 13 08:31:55.998: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:31:58.003: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.010836338s +Oct 13 08:31:58.003: INFO: The phase of Pod pod1 is Running (Ready = true) +Oct 13 08:31:58.003: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.253.8.112 on the node which pod1 resides and expect scheduled 10/13/23 08:31:58.004 +Oct 13 08:31:58.011: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-9764" to be "running and ready" +Oct 13 08:31:58.014: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.94052ms +Oct 13 08:31:58.014: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:32:00.019: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.008366966s +Oct 13 08:32:00.019: INFO: The phase of Pod pod2 is Running (Ready = false) +Oct 13 08:32:02.019: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.00797205s +Oct 13 08:32:02.019: INFO: The phase of Pod pod2 is Running (Ready = true) +Oct 13 08:32:02.019: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.253.8.112 but use UDP protocol on the node which pod2 resides 10/13/23 08:32:02.019 +Oct 13 08:32:02.029: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-9764" to be "running and ready" +Oct 13 08:32:02.033: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.276391ms +Oct 13 08:32:02.033: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:32:04.039: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.009699022s +Oct 13 08:32:04.039: INFO: The phase of Pod pod3 is Running (Ready = true) +Oct 13 08:32:04.039: INFO: Pod "pod3" satisfied condition "running and ready" +Oct 13 08:32:04.046: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-9764" to be "running and ready" +Oct 13 08:32:04.050: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.806262ms +Oct 13 08:32:04.050: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:32:06.054: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.007727841s +Oct 13 08:32:06.054: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) +Oct 13 08:32:06.054: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 10/13/23 08:32:06.057 +Oct 13 08:32:06.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.253.8.112 http://127.0.0.1:54323/hostname] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:32:06.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:32:06.058: INFO: ExecWithOptions: Clientset creation +Oct 13 08:32:06.058: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.253.8.112+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.253.8.112, port: 54323 10/13/23 08:32:06.314 +Oct 13 08:32:06.314: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.253.8.112:54323/hostname] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:32:06.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:32:06.315: INFO: ExecWithOptions: Clientset creation +Oct 13 08:32:06.315: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.253.8.112%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod 
e2e-host-exec to serverIP: 10.253.8.112, port: 54323 UDP 10/13/23 08:32:06.379 +Oct 13 08:32:06.379: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.253.8.112 54323] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:32:06.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:32:06.380: INFO: ExecWithOptions: Clientset creation +Oct 13 08:32:06.380: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.253.8.112+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +[AfterEach] [sig-network] HostPort + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] HostPort + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] HostPort + tear down framework | framework.go:193 +STEP: Destroying namespace "hostport-9764" for this suite. 10/13/23 08:32:11.471 +------------------------------ +• [SLOW TEST] [15.516 seconds] +[sig-network] HostPort +test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] HostPort + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:31:55.961 + Oct 13 08:31:55.961: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename hostport 10/13/23 08:31:55.962 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:31:55.976 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:31:55.979 + [BeforeEach] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 + [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 10/13/23 08:31:55.985 + Oct 13 08:31:55.993: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-9764" to be "running and ready" + Oct 13 08:31:55.997: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.899595ms + Oct 13 08:31:55.998: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:31:58.003: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.010836338s + Oct 13 08:31:58.003: INFO: The phase of Pod pod1 is Running (Ready = true) + Oct 13 08:31:58.003: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.253.8.112 on the node which pod1 resides and expect scheduled 10/13/23 08:31:58.004 + Oct 13 08:31:58.011: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-9764" to be "running and ready" + Oct 13 08:31:58.014: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.94052ms + Oct 13 08:31:58.014: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:32:00.019: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 2.008366966s + Oct 13 08:32:00.019: INFO: The phase of Pod pod2 is Running (Ready = false) + Oct 13 08:32:02.019: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.00797205s + Oct 13 08:32:02.019: INFO: The phase of Pod pod2 is Running (Ready = true) + Oct 13 08:32:02.019: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.253.8.112 but use UDP protocol on the node which pod2 resides 10/13/23 08:32:02.019 + Oct 13 08:32:02.029: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-9764" to be "running and ready" + Oct 13 08:32:02.033: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.276391ms + Oct 13 08:32:02.033: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:32:04.039: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.009699022s + Oct 13 08:32:04.039: INFO: The phase of Pod pod3 is Running (Ready = true) + Oct 13 08:32:04.039: INFO: Pod "pod3" satisfied condition "running and ready" + Oct 13 08:32:04.046: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-9764" to be "running and ready" + Oct 13 08:32:04.050: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.806262ms + Oct 13 08:32:04.050: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:32:06.054: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007727841s + Oct 13 08:32:06.054: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) + Oct 13 08:32:06.054: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" + STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 10/13/23 08:32:06.057 + Oct 13 08:32:06.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.253.8.112 http://127.0.0.1:54323/hostname] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:32:06.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:32:06.058: INFO: ExecWithOptions: Clientset creation + Oct 13 08:32:06.058: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.253.8.112+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.253.8.112, port: 54323 10/13/23 08:32:06.314 + Oct 13 08:32:06.314: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.253.8.112:54323/hostname] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:32:06.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:32:06.315: INFO: ExecWithOptions: Clientset creation + Oct 13 08:32:06.315: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.253.8.112%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.253.8.112, port: 54323 UDP 10/13/23 08:32:06.379 + Oct 13 08:32:06.379: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.253.8.112 54323] Namespace:hostport-9764 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:32:06.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:32:06.380: INFO: ExecWithOptions: Clientset creation + Oct 13 08:32:06.380: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-9764/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.253.8.112+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + [AfterEach] [sig-network] HostPort + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] HostPort + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] HostPort + tear down framework | framework.go:193 + STEP: Destroying namespace "hostport-9764" for this suite. 
10/13/23 08:32:11.471 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:11.478 +Oct 13 08:32:11.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:32:11.479 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:11.496 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:11.499 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 +STEP: Creating projection with secret that has name projected-secret-test-map-e345b306-0712-4bf2-979f-578330316f8c 10/13/23 08:32:11.501 +STEP: Creating a pod to test consume secrets 10/13/23 08:32:11.505 +Oct 13 08:32:11.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee" in namespace "projected-4689" to be "Succeeded or Failed" +Oct 13 08:32:11.517: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146011ms +Oct 13 08:32:13.520: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00778281s +Oct 13 08:32:15.523: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010832786s +STEP: Saw pod success 10/13/23 08:32:15.523 +Oct 13 08:32:15.524: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee" satisfied condition "Succeeded or Failed" +Oct 13 08:32:15.528: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee container projected-secret-volume-test: +STEP: delete the pod 10/13/23 08:32:15.534 +Oct 13 08:32:15.549: INFO: Waiting for pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee to disappear +Oct 13 08:32:15.551: INFO: Pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:15.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-4689" for this suite. 
10/13/23 08:32:15.555 +------------------------------ +• [4.083 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:11.478 + Oct 13 08:32:11.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:32:11.479 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:11.496 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:11.499 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 + STEP: Creating projection with secret that has name projected-secret-test-map-e345b306-0712-4bf2-979f-578330316f8c 10/13/23 08:32:11.501 + STEP: Creating a pod to test consume secrets 10/13/23 08:32:11.505 + Oct 13 08:32:11.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee" in namespace "projected-4689" to be "Succeeded or Failed" + Oct 13 08:32:11.517: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146011ms + Oct 13 08:32:13.520: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00778281s + Oct 13 08:32:15.523: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010832786s + STEP: Saw pod success 10/13/23 08:32:15.523 + Oct 13 08:32:15.524: INFO: Pod "pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee" satisfied condition "Succeeded or Failed" + Oct 13 08:32:15.528: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee container projected-secret-volume-test: + STEP: delete the pod 10/13/23 08:32:15.534 + Oct 13 08:32:15.549: INFO: Waiting for pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee to disappear + Oct 13 08:32:15.551: INFO: Pod pod-projected-secrets-f93589fb-3b59-40ea-9e15-4bb3c3b900ee no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:15.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-4689" for this suite. 
10/13/23 08:32:15.555 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:15.562 +Oct 13 08:32:15.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:15.563 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:15.577 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:15.579 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:15.582 +Oct 13 08:32:15.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b" in namespace "downward-api-3820" to be "Succeeded or Failed" +Oct 13 08:32:15.596: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173409ms +Oct 13 08:32:17.601: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008084795s +Oct 13 08:32:19.603: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009676068s +STEP: Saw pod success 10/13/23 08:32:19.603 +Oct 13 08:32:19.603: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b" satisfied condition "Succeeded or Failed" +Oct 13 08:32:19.607: INFO: Trying to get logs from node node2 pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b container client-container: +STEP: delete the pod 10/13/23 08:32:19.614 +Oct 13 08:32:19.626: INFO: Waiting for pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b to disappear +Oct 13 08:32:19.629: INFO: Pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:19.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-3820" for this suite. 
10/13/23 08:32:19.632 +------------------------------ +• [4.075 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:15.562 + Oct 13 08:32:15.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:15.563 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:15.577 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:15.579 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:15.582 + Oct 13 08:32:15.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b" in namespace "downward-api-3820" to be "Succeeded or Failed" + Oct 13 08:32:15.596: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173409ms + Oct 13 08:32:17.601: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008084795s + Oct 13 08:32:19.603: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009676068s + STEP: Saw pod success 10/13/23 08:32:19.603 + Oct 13 08:32:19.603: INFO: Pod "downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b" satisfied condition "Succeeded or Failed" + Oct 13 08:32:19.607: INFO: Trying to get logs from node node2 pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b container client-container: + STEP: delete the pod 10/13/23 08:32:19.614 + Oct 13 08:32:19.626: INFO: Waiting for pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b to disappear + Oct 13 08:32:19.629: INFO: Pod downwardapi-volume-29160ffc-e5bb-4c2a-b9c5-957697f21e0b no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:19.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-3820" for this suite. 
10/13/23 08:32:19.632 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +[BeforeEach] [sig-node] PreStop + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:19.639 +Oct 13 08:32:19.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename prestop 10/13/23 08:32:19.64 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:19.654 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:19.657 +[BeforeEach] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 +[It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +STEP: Creating server pod server in namespace prestop-2455 10/13/23 08:32:19.66 +STEP: Waiting for pods to come up. 10/13/23 08:32:19.667 +Oct 13 08:32:19.667: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-2455" to be "running" +Oct 13 08:32:19.670: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739736ms +Oct 13 08:32:21.674: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 2.006801395s +Oct 13 08:32:21.674: INFO: Pod "server" satisfied condition "running" +STEP: Creating tester pod tester in namespace prestop-2455 10/13/23 08:32:21.676 +Oct 13 08:32:21.681: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-2455" to be "running" +Oct 13 08:32:21.684: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129018ms +Oct 13 08:32:23.688: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.006804663s +Oct 13 08:32:23.688: INFO: Pod "tester" satisfied condition "running" +STEP: Deleting pre-stop pod 10/13/23 08:32:23.688 +Oct 13 08:32:28.701: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod 10/13/23 08:32:28.701 +[AfterEach] [sig-node] PreStop + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:28.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PreStop + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PreStop + tear down framework | framework.go:193 +STEP: Destroying namespace "prestop-2455" for this suite. 
10/13/23 08:32:28.72 +------------------------------ +• [SLOW TEST] [9.088 seconds] +[sig-node] PreStop +test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PreStop + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:19.639 + Oct 13 08:32:19.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename prestop 10/13/23 08:32:19.64 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:19.654 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:19.657 + [BeforeEach] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 + [It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + STEP: Creating server pod server in namespace prestop-2455 10/13/23 08:32:19.66 + STEP: Waiting for pods to come up. 10/13/23 08:32:19.667 + Oct 13 08:32:19.667: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-2455" to be "running" + Oct 13 08:32:19.670: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739736ms + Oct 13 08:32:21.674: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 2.006801395s + Oct 13 08:32:21.674: INFO: Pod "server" satisfied condition "running" + STEP: Creating tester pod tester in namespace prestop-2455 10/13/23 08:32:21.676 + Oct 13 08:32:21.681: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-2455" to be "running" + Oct 13 08:32:21.684: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129018ms + Oct 13 08:32:23.688: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.006804663s + Oct 13 08:32:23.688: INFO: Pod "tester" satisfied condition "running" + STEP: Deleting pre-stop pod 10/13/23 08:32:23.688 + Oct 13 08:32:28.701: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true + } + STEP: Deleting the server pod 10/13/23 08:32:28.701 + [AfterEach] [sig-node] PreStop + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:28.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PreStop + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PreStop + tear down framework | framework.go:193 + STEP: Destroying namespace "prestop-2455" for this suite. 
10/13/23 08:32:28.72 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:28.727 +Oct 13 08:32:28.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:32:28.728 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:28.743 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:28.745 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:32:28.76 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:32:29.361 +STEP: Deploying the webhook pod 10/13/23 08:32:29.371 +STEP: Wait for the deployment to be ready 10/13/23 08:32:29.387 +Oct 13 08:32:29.394: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:32:31.408 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:32:31.423 +Oct 13 08:32:32.424: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +STEP: Listing all of the created validation webhooks 10/13/23 08:32:32.493 +STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 08:32:32.523 +STEP: Deleting the collection of validation webhooks 10/13/23 08:32:32.545 +STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 08:32:32.586 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:32.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9461" for this suite. 10/13/23 08:32:32.64 +STEP: Destroying namespace "webhook-9461-markers" for this suite. 
10/13/23 08:32:32.647 +------------------------------ +• [3.928 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:28.727 + Oct 13 08:32:28.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:32:28.728 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:28.743 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:28.745 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:32:28.76 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:32:29.361 + STEP: Deploying the webhook pod 10/13/23 08:32:29.371 + STEP: Wait for the deployment to be ready 10/13/23 08:32:29.387 + Oct 13 08:32:29.394: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:32:31.408 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:32:31.423 + Oct 13 08:32:32.424: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 + STEP: Listing all of the created validation webhooks 10/13/23 08:32:32.493 + STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 08:32:32.523 + STEP: Deleting the collection of validation webhooks 10/13/23 08:32:32.545 + STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 08:32:32.586 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:32.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9461" for this suite. 10/13/23 08:32:32.64 + STEP: Destroying namespace "webhook-9461-markers" for this suite. 
10/13/23 08:32:32.647 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:32.655 +Oct 13 08:32:32.655: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replication-controller 10/13/23 08:32:32.656 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:32.677 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:32.68 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +STEP: Given a ReplicationController is created 10/13/23 08:32:32.683 +STEP: When the matched label of one of its pods change 10/13/23 08:32:32.688 +Oct 13 08:32:32.692: INFO: Pod name pod-release: Found 0 pods out of 1 +Oct 13 08:32:37.697: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released 10/13/23 08:32:37.709 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:38.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-904" for this suite. 
10/13/23 08:32:38.719 +------------------------------ +• [SLOW TEST] [6.070 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:32.655 + Oct 13 08:32:32.655: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replication-controller 10/13/23 08:32:32.656 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:32.677 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:32.68 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 + STEP: Given a ReplicationController is created 10/13/23 08:32:32.683 + STEP: When the matched label of one of its pods change 10/13/23 08:32:32.688 + Oct 13 08:32:32.692: INFO: Pod name pod-release: Found 0 pods out of 1 + Oct 13 08:32:37.697: INFO: Pod name pod-release: Found 1 pods out of 1 + STEP: Then the pod is released 10/13/23 08:32:37.709 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:38.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-904" for this suite. 10/13/23 08:32:38.719 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:38.725 +Oct 13 08:32:38.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:38.726 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:38.74 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:38.742 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:38.744 +Oct 13 08:32:38.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf" in namespace "downward-api-2045" to be "Succeeded or Failed" +Oct 13 08:32:38.755: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.082776ms +Oct 13 08:32:40.762: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009772956s +Oct 13 08:32:42.760: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008072009s +STEP: Saw pod success 10/13/23 08:32:42.76 +Oct 13 08:32:42.760: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf" satisfied condition "Succeeded or Failed" +Oct 13 08:32:42.763: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf container client-container: +STEP: delete the pod 10/13/23 08:32:42.769 +Oct 13 08:32:42.779: INFO: Waiting for pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf to disappear +Oct 13 08:32:42.782: INFO: Pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:42.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2045" for this suite. 10/13/23 08:32:42.785 +------------------------------ +• [4.065 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:38.725 + Oct 13 08:32:38.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:38.726 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:38.74 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:38.742 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:38.744 + Oct 13 08:32:38.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf" in namespace "downward-api-2045" to be "Succeeded or Failed" + Oct 13 08:32:38.755: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.082776ms + Oct 13 08:32:40.762: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009772956s + Oct 13 08:32:42.760: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008072009s + STEP: Saw pod success 10/13/23 08:32:42.76 + Oct 13 08:32:42.760: INFO: Pod "downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf" satisfied condition "Succeeded or Failed" + Oct 13 08:32:42.763: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf container client-container: + STEP: delete the pod 10/13/23 08:32:42.769 + Oct 13 08:32:42.779: INFO: Waiting for pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf to disappear + Oct 13 08:32:42.782: INFO: Pod downwardapi-volume-7c63ee31-dff3-4749-a9f8-362561a1a4cf no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:42.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2045" for this suite. 10/13/23 08:32:42.785 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:42.79 +Oct 13 08:32:42.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename init-container 10/13/23 08:32:42.791 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:42.805 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:42.808 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +STEP: creating the pod 10/13/23 08:32:42.809 +Oct 13 08:32:42.810: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:46.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-5193" for this suite. 
10/13/23 08:32:46.72 +------------------------------ +• [3.937 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:42.79 + Oct 13 08:32:42.790: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename init-container 10/13/23 08:32:42.791 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:42.805 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:42.808 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + STEP: creating the pod 10/13/23 08:32:42.809 + Oct 13 08:32:42.810: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:46.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-5193" for this suite. 10/13/23 08:32:46.72 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:46.727 +Oct 13 08:32:46.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:46.728 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:46.741 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:46.744 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:46.746 +Oct 13 08:32:46.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c" in namespace "downward-api-500" to be "Succeeded or Failed" +Oct 13 08:32:46.756: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.08183ms +Oct 13 08:32:48.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007328648s +Oct 13 08:32:50.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007624584s +STEP: Saw pod success 10/13/23 08:32:50.761 +Oct 13 08:32:50.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c" satisfied condition "Succeeded or Failed" +Oct 13 08:32:50.764: INFO: Trying to get logs from node node2 pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c container client-container: +STEP: delete the pod 10/13/23 08:32:50.769 +Oct 13 08:32:50.780: INFO: Waiting for pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c to disappear +Oct 13 08:32:50.783: INFO: Pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:32:50.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-500" for this suite. 10/13/23 08:32:50.786 +------------------------------ +• [4.064 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:46.727 + Oct 13 08:32:46.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:32:46.728 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:46.741 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:46.744 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:261 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:32:46.746 + Oct 13 08:32:46.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c" in namespace "downward-api-500" to be "Succeeded or Failed" + Oct 13 08:32:46.756: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08183ms + Oct 13 08:32:48.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007328648s + Oct 13 08:32:50.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007624584s + STEP: Saw pod success 10/13/23 08:32:50.761 + Oct 13 08:32:50.761: INFO: Pod "downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c" satisfied condition "Succeeded or Failed" + Oct 13 08:32:50.764: INFO: Trying to get logs from node node2 pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c container client-container: + STEP: delete the pod 10/13/23 08:32:50.769 + Oct 13 08:32:50.780: INFO: Waiting for pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c to disappear + Oct 13 08:32:50.783: INFO: Pod downwardapi-volume-80241ce7-6ba3-4788-8b25-88b8412bd82c no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:32:50.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-500" for this suite. 10/13/23 08:32:50.786 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:32:50.792 +Oct 13 08:32:50.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 08:32:50.794 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:50.807 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:50.81 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 +Oct 13 08:32:50.819: INFO: Waiting up to 5m0s for pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1" in namespace "container-probe-4395" to be "running and ready" +Oct 13 08:32:50.822: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855141ms +Oct 13 08:32:50.822: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:32:52.826: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007342969s +Oct 13 08:32:52.826: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:32:54.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 4.009663636s +Oct 13 08:32:54.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:32:56.826: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.00707741s +Oct 13 08:32:56.826: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:32:58.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 8.007759553s +Oct 13 08:32:58.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:00.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 10.008299686s +Oct 13 08:33:00.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:02.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 12.009992819s +Oct 13 08:33:02.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:04.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 14.010492535s +Oct 13 08:33:04.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:06.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 16.007749613s +Oct 13 08:33:06.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:08.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 18.010306238s +Oct 13 08:33:08.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:10.828: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 20.008871875s +Oct 13 08:33:10.828: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) +Oct 13 08:33:12.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=true. Elapsed: 22.009635941s +Oct 13 08:33:12.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = true) +Oct 13 08:33:12.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1" satisfied condition "running and ready" +Oct 13 08:33:12.835: INFO: Container started at 2023-10-13 08:32:52 +0000 UTC, pod became ready at 2023-10-13 08:33:11 +0000 UTC +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 08:33:12.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-4395" for this suite. 
10/13/23 08:33:12.84 +------------------------------ +• [SLOW TEST] [22.054 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:32:50.792 + Oct 13 08:32:50.793: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 08:32:50.794 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:32:50.807 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:32:50.81 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 + Oct 13 08:32:50.819: INFO: Waiting up to 5m0s for pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1" in namespace "container-probe-4395" to be "running and ready" + Oct 13 08:32:50.822: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855141ms + Oct 13 08:32:50.822: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:32:52.826: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007342969s + Oct 13 08:32:52.826: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:32:54.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 4.009663636s + Oct 13 08:32:54.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:32:56.826: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 6.00707741s + Oct 13 08:32:56.826: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:32:58.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 8.007759553s + Oct 13 08:32:58.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:00.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 10.008299686s + Oct 13 08:33:00.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:02.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 12.009992819s + Oct 13 08:33:02.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:04.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.010492535s + Oct 13 08:33:04.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:06.827: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 16.007749613s + Oct 13 08:33:06.827: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:08.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 18.010306238s + Oct 13 08:33:08.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:10.828: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=false. Elapsed: 20.008871875s + Oct 13 08:33:10.828: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = false) + Oct 13 08:33:12.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1": Phase="Running", Reason="", readiness=true. Elapsed: 22.009635941s + Oct 13 08:33:12.829: INFO: The phase of Pod test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1 is Running (Ready = true) + Oct 13 08:33:12.829: INFO: Pod "test-webserver-1feb1783-8056-4fe5-b7e4-5e25b90a98f1" satisfied condition "running and ready" + Oct 13 08:33:12.835: INFO: Container started at 2023-10-13 08:32:52 +0000 UTC, pod became ready at 2023-10-13 08:33:11 +0000 UTC + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 08:33:12.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-4395" for this suite. 
10/13/23 08:33:12.84 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:33:12.847 +Oct 13 08:33:12.847: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:33:12.848 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:12.863 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:12.866 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +Oct 13 08:33:12.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:33:13.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-4524" for this suite. 
10/13/23 08:33:13.415 +------------------------------ +• [0.574 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:33:12.847 + Oct 13 08:33:12.847: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:33:12.848 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:12.863 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:12.866 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + Oct 13 08:33:12.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:33:13.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-4524" for this suite. 
10/13/23 08:33:13.415 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:33:13.422 +Oct 13 08:33:13.422: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:33:13.423 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:13.437 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:13.44 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +STEP: creating service in namespace services-7701 10/13/23 08:33:13.442 +STEP: creating service affinity-nodeport-transition in namespace services-7701 10/13/23 08:33:13.442 +STEP: creating replication controller affinity-nodeport-transition in namespace services-7701 10/13/23 08:33:13.46 +I1013 08:33:13.467977 23 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-7701, replica count: 3 +I1013 08:33:16.519355 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:33:16.530: INFO: Creating new exec pod +Oct 13 08:33:16.539: INFO: Waiting up to 5m0s for pod "execpod-affinityj67f7" in namespace "services-7701" to be "running" +Oct 13 08:33:16.543: INFO: Pod "execpod-affinityj67f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682228ms +Oct 13 08:33:18.547: INFO: Pod "execpod-affinityj67f7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.0071656s +Oct 13 08:33:18.547: INFO: Pod "execpod-affinityj67f7" satisfied condition "running" +Oct 13 08:33:19.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' +Oct 13 08:33:19.697: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Oct 13 08:33:19.697: INFO: stdout: "" +Oct 13 08:33:19.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.108.179.245 80' +Oct 13 08:33:19.850: INFO: stderr: "+ nc -v -z -w 2 10.108.179.245 80\nConnection to 10.108.179.245 80 port [tcp/http] succeeded!\n" +Oct 13 08:33:19.850: INFO: stdout: "" +Oct 13 08:33:19.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31300' +Oct 13 08:33:20.003: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31300\nConnection to 10.253.8.110 31300 port [tcp/*] succeeded!\n" +Oct 13 08:33:20.003: INFO: stdout: "" +Oct 13 08:33:20.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.112 31300' +Oct 13 08:33:20.137: INFO: stderr: "+ nc -v -z -w 2 10.253.8.112 31300\nConnection to 10.253.8.112 31300 port [tcp/*] succeeded!\n" +Oct 13 08:33:20.137: INFO: stdout: "" +Oct 13 08:33:20.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31300/ ; done' +Oct 13 08:33:20.375: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n" +Oct 13 08:33:20.375: INFO: stdout: 
"\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-2vbbd" +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 +Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31300/ ; done' +Oct 13 08:33:20.586: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n" +Oct 13 08:33:20.586: INFO: stdout: 
"\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd" +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd +Oct 13 08:33:20.586: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7701, will wait for the garbage collector to delete the pods 10/13/23 08:33:20.596 +Oct 13 08:33:20.656: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.035706ms +Oct 13 08:33:20.756: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.535334ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:33:22.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-7701" for this suite. 
10/13/23 08:33:22.88 +------------------------------ +• [SLOW TEST] [9.463 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:33:13.422 + Oct 13 08:33:13.422: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:33:13.423 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:13.437 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:13.44 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + STEP: creating service in namespace services-7701 10/13/23 08:33:13.442 + STEP: creating service affinity-nodeport-transition in namespace services-7701 10/13/23 08:33:13.442 + STEP: creating replication controller affinity-nodeport-transition in namespace services-7701 10/13/23 08:33:13.46 + I1013 08:33:13.467977 23 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-7701, replica count: 3 + I1013 08:33:16.519355 23 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:33:16.530: INFO: Creating new exec pod + Oct 13 08:33:16.539: INFO: Waiting up to 5m0s for pod "execpod-affinityj67f7" in namespace "services-7701" to be "running" + Oct 13 08:33:16.543: INFO: Pod "execpod-affinityj67f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682228ms + Oct 13 08:33:18.547: INFO: Pod "execpod-affinityj67f7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.0071656s + Oct 13 08:33:18.547: INFO: Pod "execpod-affinityj67f7" satisfied condition "running" + Oct 13 08:33:19.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' + Oct 13 08:33:19.697: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" + Oct 13 08:33:19.697: INFO: stdout: "" + Oct 13 08:33:19.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.108.179.245 80' + Oct 13 08:33:19.850: INFO: stderr: "+ nc -v -z -w 2 10.108.179.245 80\nConnection to 10.108.179.245 80 port [tcp/http] succeeded!\n" + Oct 13 08:33:19.850: INFO: stdout: "" + Oct 13 08:33:19.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31300' + Oct 13 08:33:20.003: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31300\nConnection to 10.253.8.110 31300 port [tcp/*] succeeded!\n" + Oct 13 08:33:20.003: INFO: stdout: "" + Oct 13 08:33:20.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c nc -v -z -w 2 10.253.8.112 31300' + Oct 13 08:33:20.137: INFO: stderr: "+ nc -v -z -w 2 10.253.8.112 31300\nConnection to 10.253.8.112 31300 port [tcp/*] succeeded!\n" + Oct 13 08:33:20.137: INFO: stdout: "" + Oct 13 08:33:20.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31300/ ; done' + Oct 13 08:33:20.375: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n" + Oct 13 08:33:20.375: INFO: stdout: 
"\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-tcw8w\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-j56r5\naffinity-nodeport-transition-2vbbd" + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-tcw8w + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-j56r5 + Oct 13 08:33:20.375: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7701 exec execpod-affinityj67f7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31300/ ; done' + Oct 13 08:33:20.586: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31300/\n" + Oct 13 08:33:20.586: INFO: stdout: 
"\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd\naffinity-nodeport-transition-2vbbd" + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Received response from host: affinity-nodeport-transition-2vbbd + Oct 13 08:33:20.586: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7701, will wait for the garbage collector to delete the pods 10/13/23 08:33:20.596 + Oct 13 08:33:20.656: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.035706ms + Oct 13 08:33:20.756: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.535334ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:33:22.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-7701" for this suite. 
10/13/23 08:33:22.88 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:33:22.886 +Oct 13 08:33:22.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir-wrapper 10/13/23 08:33:22.887 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:22.902 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:22.904 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +Oct 13 08:33:22.920: INFO: Waiting up to 5m0s for pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42" in namespace "emptydir-wrapper-2957" to be "running and ready" +Oct 13 08:33:22.923: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735451ms +Oct 13 08:33:22.923: INFO: The phase of Pod pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:33:24.928: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42": Phase="Running", Reason="", readiness=true. Elapsed: 2.007777119s +Oct 13 08:33:24.928: INFO: The phase of Pod pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42 is Running (Ready = true) +Oct 13 08:33:24.928: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42" satisfied condition "running and ready" +STEP: Cleaning up the secret 10/13/23 08:33:24.932 +STEP: Cleaning up the configmap 10/13/23 08:33:24.938 +STEP: Cleaning up the pod 10/13/23 08:33:24.948 +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:33:24.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-wrapper-2957" for this suite. 
10/13/23 08:33:24.96 +------------------------------ +• [2.079 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:33:22.886 + Oct 13 08:33:22.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir-wrapper 10/13/23 08:33:22.887 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:22.902 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:22.904 + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 + Oct 13 08:33:22.920: INFO: Waiting up to 5m0s for pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42" in namespace "emptydir-wrapper-2957" to be "running and ready" + Oct 13 08:33:22.923: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735451ms + Oct 13 08:33:22.923: INFO: The phase of Pod pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:33:24.928: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42": Phase="Running", Reason="", readiness=true. Elapsed: 2.007777119s + Oct 13 08:33:24.928: INFO: The phase of Pod pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42 is Running (Ready = true) + Oct 13 08:33:24.928: INFO: Pod "pod-secrets-29ee0fb7-6284-4cbd-9f6d-43535e48ac42" satisfied condition "running and ready" + STEP: Cleaning up the secret 10/13/23 08:33:24.932 + STEP: Cleaning up the configmap 10/13/23 08:33:24.938 + STEP: Cleaning up the pod 10/13/23 08:33:24.948 + [AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:33:24.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-wrapper-2957" for this suite. 
10/13/23 08:33:24.96 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:33:24.966 +Oct 13 08:33:24.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:33:24.967 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:24.98 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:24.982 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +STEP: Creating a test headless service 10/13/23 08:33:24.984 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.42.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_tcp@PTR;sleep 1; done + 10/13/23 08:33:25.005 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_tcp@PTR;sleep 1; done + 10/13/23 08:33:25.005 +STEP: creating a pod to probe DNS 10/13/23 08:33:25.005 +STEP: submitting the pod to kubernetes 10/13/23 08:33:25.005 +Oct 13 08:33:25.016: INFO: Waiting up to 15m0s for pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539" in namespace "dns-6879" to be "running" +Oct 13 08:33:25.020: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539": Phase="Pending", Reason="", readiness=false. Elapsed: 3.986426ms +Oct 13 08:33:27.024: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008264362s +Oct 13 08:33:27.024: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:33:27.024 +STEP: looking for the results for each expected name from probers 10/13/23 08:33:27.027 +Oct 13 08:33:27.030: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.033: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.036: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.038: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.050: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.053: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.056: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:27.068: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:32.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods 
dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.079: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.082: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.096: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.102: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.105: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:32.116: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:37.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.078: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.086: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.102: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod 
dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:37.126: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:42.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.083: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.087: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.103: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.113: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:42.128: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:47.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.084: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.088: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.110: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:47.137: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:52.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.086: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.102: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) +Oct 13 08:33:52.126: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + +Oct 13 08:33:57.132: INFO: DNS probes using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 succeeded + +STEP: deleting the pod 10/13/23 08:33:57.132 +STEP: deleting the test service 10/13/23 08:33:57.146 +STEP: deleting the test headless service 10/13/23 08:33:57.177 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:33:57.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] 
[sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-6879" for this suite. 10/13/23 08:33:57.198 +------------------------------ +• [SLOW TEST] [32.241 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:33:24.966 + Oct 13 08:33:24.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:33:24.967 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:24.98 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:24.982 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + STEP: Creating a test headless service 10/13/23 08:33:24.984 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.42.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_tcp@PTR;sleep 1; done + 10/13/23 08:33:25.005 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6879.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6879.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6879.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.42.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.42.43_tcp@PTR;sleep 1; done + 10/13/23 08:33:25.005 + STEP: creating a pod to probe DNS 10/13/23 08:33:25.005 + STEP: submitting the pod to kubernetes 10/13/23 08:33:25.005 + Oct 13 08:33:25.016: INFO: Waiting up to 15m0s for pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539" in namespace "dns-6879" to be "running" + Oct 13 08:33:25.020: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539": Phase="Pending", Reason="", readiness=false. Elapsed: 3.986426ms + Oct 13 08:33:27.024: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008264362s + Oct 13 08:33:27.024: INFO: Pod "dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:33:27.024 + STEP: looking for the results for each expected name from probers 10/13/23 08:33:27.027 + Oct 13 08:33:27.030: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.033: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.036: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.038: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.050: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.053: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.056: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:27.068: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:32.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested 
resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.079: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.082: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.096: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.102: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.105: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:32.116: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:37.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.078: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.086: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.102: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod 
dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:37.126: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:42.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.083: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.087: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.103: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.113: INFO: Unable to 
read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:42.128: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:47.075: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.084: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.088: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.110: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:47.137: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:52.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.086: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.102: INFO: Unable to read jessie_udp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local from pod dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539: the server could not find the requested resource (get pods dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539) + Oct 13 08:33:52.126: INFO: Lookups using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 failed for: [wheezy_udp@dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@dns-test-service.dns-6879.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_udp@dns-test-service.dns-6879.svc.cluster.local jessie_tcp@dns-test-service.dns-6879.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6879.svc.cluster.local] + + Oct 13 08:33:57.132: INFO: DNS probes using dns-6879/dns-test-27553fdb-e3d0-4ef1-b2fc-68489aa34539 succeeded + + STEP: deleting the pod 10/13/23 08:33:57.132 + STEP: deleting the test service 10/13/23 08:33:57.146 + STEP: deleting the test headless service 10/13/23 08:33:57.177 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:33:57.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + 
[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-6879" for this suite. 10/13/23 08:33:57.198 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:33:57.207 +Oct 13 08:33:57.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-webhook 10/13/23 08:33:57.208 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:57.228 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:57.231 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 10/13/23 08:33:57.233 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 10/13/23 08:33:57.691 +STEP: Deploying the custom resource conversion webhook pod 10/13/23 08:33:57.697 +STEP: Wait for the deployment to be ready 10/13/23 08:33:57.708 +Oct 13 08:33:57.715: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 08:33:59.725 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:33:59.735 +Oct 13 08:34:00.736: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +Oct 13 08:34:00.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Creating a v1 custom resource 10/13/23 08:34:03.326 +STEP: Create a v2 custom resource 10/13/23 08:34:03.342 +STEP: List CRs in v1 10/13/23 08:34:03.389 +STEP: List CRs in v2 10/13/23 08:34:03.394 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:34:03.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-webhook-5113" for this suite. 
10/13/23 08:34:03.969 +------------------------------ +• [SLOW TEST] [6.769 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:33:57.207 + Oct 13 08:33:57.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-webhook 10/13/23 08:33:57.208 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:33:57.228 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:33:57.231 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 10/13/23 08:33:57.233 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 10/13/23 08:33:57.691 + STEP: Deploying the custom resource conversion webhook pod 10/13/23 08:33:57.697 + STEP: Wait for the deployment to be ready 10/13/23 08:33:57.708 + Oct 13 08:33:57.715: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 08:33:59.725 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:33:59.735 + Oct 13 08:34:00.736: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + Oct 13 08:34:00.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Creating a v1 custom resource 10/13/23 08:34:03.326 + STEP: Create a v2 custom resource 10/13/23 08:34:03.342 + STEP: List CRs in v1 10/13/23 08:34:03.389 + STEP: List CRs in v2 10/13/23 08:34:03.394 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:34:03.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-webhook-5113" for this suite. 
10/13/23 08:34:03.969 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:34:03.977 +Oct 13 08:34:03.977: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename watch 10/13/23 08:34:03.978 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:34:04 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:34:04.003 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +STEP: creating a watch on configmaps with label A 10/13/23 08:34:04.006 +STEP: creating a watch on configmaps with label B 10/13/23 08:34:04.008 +STEP: creating a watch on configmaps with label A or B 10/13/23 08:34:04.009 +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.01 +Oct 13 08:34:04.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17756 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:04.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17756 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.015 +Oct 13 08:34:04.025: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17757 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:04.025: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17757 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification 10/13/23 08:34:04.025 +Oct 13 08:34:04.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17758 0 2023-10-13 
08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:04.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17758 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.035 +Oct 13 08:34:04.045: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17759 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:04.045: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17759 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 10/13/23 08:34:04.045 +Oct 13 08:34:04.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17760 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:04.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17760 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification 10/13/23 08:34:14.051 +Oct 13 08:34:14.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17790 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:34:14.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 
0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17790 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:34:24.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-6640" for this suite. 10/13/23 08:34:24.072 +------------------------------ +• [SLOW TEST] [20.107 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:34:03.977 + Oct 13 08:34:03.977: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename watch 10/13/23 08:34:03.978 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:34:04 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:34:04.003 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + STEP: creating a watch on configmaps with label A 10/13/23 08:34:04.006 + STEP: creating a watch on configmaps with label B 10/13/23 08:34:04.008 + STEP: creating a watch on configmaps with label A or B 10/13/23 08:34:04.009 + STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.01 + Oct 13 08:34:04.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17756 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:04.015: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17756 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.015 + Oct 13 08:34:04.025: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17757 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:04.025: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17757 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A again and ensuring the correct watchers observe the notification 10/13/23 08:34:04.025 + Oct 13 08:34:04.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17758 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:04.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17758 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap A and ensuring the correct watchers observe the notification 10/13/23 08:34:04.035 + Oct 13 08:34:04.045: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17759 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:04.045: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6640 5e1d169d-9f0a-49d3-a784-d18d5029295b 17759 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 10/13/23 08:34:04.045 + Oct 13 08:34:04.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17760 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:04.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17760 0 2023-10-13 
08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap B and ensuring the correct watchers observe the notification 10/13/23 08:34:14.051 + Oct 13 08:34:14.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17790 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:34:14.063: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6640 0e2b10e7-1f05-4c7c-81d4-73aa05f1425a 17790 0 2023-10-13 08:34:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-10-13 08:34:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:34:24.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-6640" for this suite. 10/13/23 08:34:24.072 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:34:24.084 +Oct 13 08:34:24.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:34:24.086 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:34:24.102 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:34:24.104 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Oct 13 08:34:24.118: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 08:35:24.148: INFO: Waiting for terminating namespaces to be deleted... 
+[BeforeEach] PriorityClass endpoints + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:24.151 +Oct 13 08:35:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption-path 10/13/23 08:35:24.152 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:24.173 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:24.175 +[BeforeEach] PriorityClass endpoints + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:771 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 +Oct 13 08:35:24.192: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. +Oct 13 08:35:24.195: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:24.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:787 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:24.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] PriorityClass endpoints + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] PriorityClass endpoints + dump namespaces | framework.go:196 +[DeferCleanup (Each)] PriorityClass endpoints + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-path-4392" for this suite. 10/13/23 08:35:24.267 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-3620" for this suite. 
10/13/23 08:35:24.274 +------------------------------ +• [SLOW TEST] [60.196 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + test/e2e/scheduling/preemption.go:764 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:34:24.084 + Oct 13 08:34:24.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:34:24.086 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:34:24.102 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:34:24.104 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Oct 13 08:34:24.118: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 08:35:24.148: INFO: Waiting for terminating namespaces to be deleted... + [BeforeEach] PriorityClass endpoints + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:24.151 + Oct 13 08:35:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption-path 10/13/23 08:35:24.152 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:24.173 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:24.175 + [BeforeEach] PriorityClass endpoints + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:771 + [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:814 + Oct 13 08:35:24.192: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. + Oct 13 08:35:24.195: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. + [AfterEach] PriorityClass endpoints + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:24.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:787 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:24.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] PriorityClass endpoints + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] PriorityClass endpoints + dump namespaces | framework.go:196 + [DeferCleanup (Each)] PriorityClass endpoints + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-path-4392" for this suite. 
10/13/23 08:35:24.267 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-3620" for this suite. 10/13/23 08:35:24.274 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Job + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:24.281 +Oct 13 08:35:24.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename job 10/13/23 08:35:24.282 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:24.297 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:24.299 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +STEP: Creating a job 10/13/23 08:35:24.301 +STEP: Ensure pods equal to parallelism count is attached to the job 10/13/23 08:35:24.307 +STEP: patching /status 10/13/23 08:35:26.313 +STEP: updating /status 10/13/23 08:35:26.321 +STEP: get /status 10/13/23 08:35:26.331 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-8855" for this suite. 
10/13/23 08:35:26.337 +------------------------------ +• [2.063 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:24.281 + Oct 13 08:35:24.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename job 10/13/23 08:35:24.282 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:24.297 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:24.299 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 + STEP: Creating a job 10/13/23 08:35:24.301 + STEP: Ensure pods equal to parallelism count is attached to the job 10/13/23 08:35:24.307 + STEP: patching /status 10/13/23 08:35:26.313 + STEP: updating /status 10/13/23 08:35:26.321 + STEP: get /status 10/13/23 08:35:26.331 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-8855" for this suite. 10/13/23 08:35:26.337 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:26.344 +Oct 13 08:35:26.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:35:26.345 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.359 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.361 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:26.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-9376" for this suite. 
10/13/23 08:35:26.4 +------------------------------ +• [0.063 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:26.344 + Oct 13 08:35:26.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:35:26.345 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.359 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.361 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:26.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-9376" for this suite. 10/13/23 08:35:26.4 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] ResourceQuota + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:26.407 +Oct 13 08:35:26.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 08:35:26.409 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.425 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.428 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +STEP: Creating a ResourceQuota 10/13/23 08:35:26.431 +STEP: Getting a ResourceQuota 10/13/23 08:35:26.435 +STEP: Listing all ResourceQuotas with LabelSelector 10/13/23 08:35:26.438 +STEP: Patching the ResourceQuota 10/13/23 08:35:26.441 +STEP: Deleting a Collection of ResourceQuotas 10/13/23 08:35:26.447 +STEP: Verifying the deleted ResourceQuota 10/13/23 08:35:26.455 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:26.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-9466" for this suite. 
10/13/23 08:35:26.462 +------------------------------ +• [0.060 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:26.407 + Oct 13 08:35:26.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 08:35:26.409 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.425 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.428 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 + STEP: Creating a ResourceQuota 10/13/23 08:35:26.431 + STEP: Getting a ResourceQuota 10/13/23 08:35:26.435 + STEP: Listing all ResourceQuotas with LabelSelector 10/13/23 08:35:26.438 + STEP: Patching the ResourceQuota 10/13/23 08:35:26.441 + STEP: Deleting a Collection of ResourceQuotas 10/13/23 08:35:26.447 + STEP: Verifying the deleted ResourceQuota 10/13/23 08:35:26.455 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:26.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-9466" for this suite. 10/13/23 08:35:26.462 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:26.467 +Oct 13 08:35:26.467: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename containers 10/13/23 08:35:26.468 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.483 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.485 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +STEP: Creating a pod to test override all 10/13/23 08:35:26.487 +Oct 13 08:35:26.494: INFO: Waiting up to 5m0s for pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e" in namespace "containers-9848" to be "Succeeded or Failed" +Oct 13 08:35:26.497: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836233ms +Oct 13 08:35:28.501: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007024543s +Oct 13 08:35:30.502: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007759759s +STEP: Saw pod success 10/13/23 08:35:30.502 +Oct 13 08:35:30.502: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e" satisfied condition "Succeeded or Failed" +Oct 13 08:35:30.506: INFO: Trying to get logs from node node1 pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e container agnhost-container: +STEP: delete the pod 10/13/23 08:35:30.522 +Oct 13 08:35:30.533: INFO: Waiting for pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e to disappear +Oct 13 08:35:30.540: INFO: Pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:30.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-9848" for this suite. 10/13/23 08:35:30.544 +------------------------------ +• [4.082 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:26.467 + Oct 13 08:35:26.467: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename containers 10/13/23 08:35:26.468 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:26.483 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:26.485 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 + STEP: Creating a pod to test override all 10/13/23 08:35:26.487 + Oct 13 08:35:26.494: INFO: Waiting up to 5m0s for pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e" in namespace "containers-9848" to be "Succeeded or Failed" + Oct 13 08:35:26.497: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836233ms + Oct 13 08:35:28.501: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007024543s + Oct 13 08:35:30.502: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007759759s + STEP: Saw pod success 10/13/23 08:35:30.502 + Oct 13 08:35:30.502: INFO: Pod "client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e" satisfied condition "Succeeded or Failed" + Oct 13 08:35:30.506: INFO: Trying to get logs from node node1 pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e container agnhost-container: + STEP: delete the pod 10/13/23 08:35:30.522 + Oct 13 08:35:30.533: INFO: Waiting for pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e to disappear + Oct 13 08:35:30.540: INFO: Pod client-containers-cd51a116-28d0-4ae3-9350-e24c1448203e no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:30.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-9848" for this suite. 10/13/23 08:35:30.544 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:30.55 +Oct 13 08:35:30.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-pred 10/13/23 08:35:30.551 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:30.566 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:30.568 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Oct 13 08:35:30.570: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 13 08:35:30.576: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 13 08:35:30.579: INFO: +Logging pods the apiserver thinks is on node node1 before test +Oct 13 08:35:30.584: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:35:30.584: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container coredns ready: true, restart count 0 +Oct 13 08:35:30.584: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container etcd ready: true, restart count 8 +Oct 13 08:35:30.584: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container haproxy ready: true, restart count 3 +Oct 13 08:35:30.584: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container keepalived ready: true, restart count 9 +Oct 13 08:35:30.584: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-apiserver ready: true, restart count 8 +Oct 13 08:35:30.584: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-controller-manager ready: true, restart count 8 +Oct 13 08:35:30.584: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:35:30.584: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-scheduler ready: true, restart count 11 +Oct 13 08:35:30.584: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container kube-sonobuoy ready: true, restart count 0 +Oct 13 08:35:30.584: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:35:30.584: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:35:30.584: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 08:35:30.584: INFO: +Logging pods the apiserver thinks is on node node2 before test +Oct 13 08:35:30.589: INFO: suspend-false-to-true-cjsnw from job-8855 started at 2023-10-13 08:35:25 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container c ready: true, restart count 0 +Oct 13 08:35:30.589: INFO: suspend-false-to-true-g9djk from job-8855 started at 2023-10-13 08:35:25 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container c ready: true, restart count 0 +Oct 13 08:35:30.589: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:35:30.589: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container etcd ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: 
haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container haproxy ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container keepalived ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container kube-apiserver ready: true, restart count 2 +Oct 13 08:35:30.589: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 08:35:30.589: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:35:30.589: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:35:30.589: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 08:35:30.589: INFO: +Logging pods the apiserver thinks is on node node3 before test +Oct 13 08:35:30.595: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:35:30.595: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container coredns ready: true, restart count 0 +Oct 13 08:35:30.595: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container etcd ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container haproxy ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container keepalived ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container kube-apiserver ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 
container statuses recorded) +Oct 13 08:35:30.595: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 08:35:30.595: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container e2e ready: true, restart count 0 +Oct 13 08:35:30.595: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:35:30.595: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:35:30.595: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:35:30.595: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +STEP: Trying to schedule Pod with nonempty NodeSelector. 10/13/23 08:35:30.595 +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.178d9dcb45c7f62f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 10/13/23 08:35:30.622 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:31.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-8383" for this suite. 10/13/23 08:35:31.623 +------------------------------ +• [1.079 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:30.55 + Oct 13 08:35:30.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-pred 10/13/23 08:35:30.551 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:30.566 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:30.568 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Oct 13 08:35:30.570: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Oct 13 08:35:30.576: INFO: Waiting for terminating namespaces to be deleted... 
+ Oct 13 08:35:30.579: INFO: + Logging pods the apiserver thinks is on node node1 before test + Oct 13 08:35:30.584: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:35:30.584: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container coredns ready: true, restart count 0 + Oct 13 08:35:30.584: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container etcd ready: true, restart count 8 + Oct 13 08:35:30.584: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container haproxy ready: true, restart count 3 + Oct 13 08:35:30.584: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container keepalived ready: true, restart count 9 + Oct 13 08:35:30.584: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-apiserver ready: true, restart count 8 + Oct 13 08:35:30.584: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-controller-manager ready: true, restart count 8 + Oct 13 08:35:30.584: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:35:30.584: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-scheduler ready: true, restart count 11 + Oct 13 08:35:30.584: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container kube-sonobuoy ready: true, restart count 0 + Oct 13 08:35:30.584: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:35:30.584: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:35:30.584: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 08:35:30.584: INFO: + Logging pods the apiserver thinks is on node node2 before test + Oct 13 08:35:30.589: INFO: suspend-false-to-true-cjsnw from job-8855 started at 2023-10-13 08:35:25 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container c ready: true, restart count 0 + Oct 13 08:35:30.589: INFO: suspend-false-to-true-g9djk from job-8855 started at 2023-10-13 08:35:25 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container c ready: true, restart count 0 + Oct 13 08:35:30.589: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:35:30.589: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container etcd ready: true, restart 
count 1 + Oct 13 08:35:30.589: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container haproxy ready: true, restart count 1 + Oct 13 08:35:30.589: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container keepalived ready: true, restart count 1 + Oct 13 08:35:30.589: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container kube-apiserver ready: true, restart count 2 + Oct 13 08:35:30.589: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 08:35:30.589: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:35:30.589: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 08:35:30.589: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:35:30.589: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:35:30.589: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 08:35:30.589: INFO: + Logging pods the apiserver thinks is on node node3 before test + Oct 13 08:35:30.595: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:35:30.595: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container coredns ready: true, restart count 0 + Oct 13 08:35:30.595: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container etcd ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container haproxy ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container keepalived ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container kube-apiserver ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: 
kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 08:35:30.595: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container e2e ready: true, restart count 0 + Oct 13 08:35:30.595: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:35:30.595: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:35:30.595: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:35:30.595: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 + STEP: Trying to schedule Pod with nonempty NodeSelector. 10/13/23 08:35:30.595 + STEP: Considering event: + Type = [Warning], Name = [restricted-pod.178d9dcb45c7f62f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 10/13/23 08:35:30.622 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:31.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-8383" for this suite. 
10/13/23 08:35:31.623 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-architecture] Conformance Tests + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +[BeforeEach] [sig-architecture] Conformance Tests + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:31.63 +Oct 13 08:35:31.630: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename conformance-tests 10/13/23 08:35:31.63 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:31.649 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:31.652 +[BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:31 +[It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +STEP: Getting node addresses 10/13/23 08:35:31.654 +Oct 13 08:35:31.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +[AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:31.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + tear down framework | framework.go:193 +STEP: Destroying namespace "conformance-tests-4054" for this suite. 10/13/23 08:35:31.662 +------------------------------ +• [0.037 seconds] +[sig-architecture] Conformance Tests +test/e2e/architecture/framework.go:23 + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-architecture] Conformance Tests + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:31.63 + Oct 13 08:35:31.630: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename conformance-tests 10/13/23 08:35:31.63 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:31.649 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:31.652 + [BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:31 + [It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + STEP: Getting node addresses 10/13/23 08:35:31.654 + Oct 13 08:35:31.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + [AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:31.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + tear down framework | framework.go:193 + STEP: Destroying namespace "conformance-tests-4054" for this suite. 
10/13/23 08:35:31.662 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:31.669 +Oct 13 08:35:31.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:35:31.67 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:31.685 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:31.687 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +STEP: creating service in namespace services-8352 10/13/23 08:35:31.69 +STEP: creating service affinity-nodeport in namespace services-8352 10/13/23 08:35:31.69 +STEP: creating replication controller affinity-nodeport in namespace services-8352 10/13/23 08:35:31.707 +I1013 08:35:31.715834 23 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-8352, replica count: 3 +I1013 08:35:34.766856 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:35:34.777: INFO: Creating new exec pod +Oct 13 08:35:34.786: INFO: Waiting up to 5m0s for pod "execpod-affinityq99dg" in namespace "services-8352" to be "running" +Oct 13 08:35:34.789: INFO: Pod "execpod-affinityq99dg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712054ms +Oct 13 08:35:36.792: INFO: Pod "execpod-affinityq99dg": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006284593s +Oct 13 08:35:36.792: INFO: Pod "execpod-affinityq99dg" satisfied condition "running" +Oct 13 08:35:37.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' +Oct 13 08:35:38.019: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Oct 13 08:35:38.019: INFO: stdout: "" +Oct 13 08:35:38.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.96.210.27 80' +Oct 13 08:35:38.154: INFO: stderr: "+ nc -v -z -w 2 10.96.210.27 80\nConnection to 10.96.210.27 80 port [tcp/http] succeeded!\n" +Oct 13 08:35:38.154: INFO: stdout: "" +Oct 13 08:35:38.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 31273' +Oct 13 08:35:38.266: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 31273\nConnection to 10.253.8.111 31273 port [tcp/*] succeeded!\n" +Oct 13 08:35:38.266: INFO: stdout: "" +Oct 13 08:35:38.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31273' +Oct 13 08:35:38.397: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31273\nConnection to 10.253.8.110 31273 port [tcp/*] succeeded!\n" +Oct 13 08:35:38.397: INFO: stdout: "" +Oct 13 08:35:38.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31273/ ; done' +Oct 13 08:35:38.606: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n" +Oct 13 08:35:38.606: INFO: stdout: "\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk" +Oct 13 08:35:38.606: INFO: Received response from host: 
affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk +Oct 13 08:35:38.606: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-8352, will wait for the garbage collector to delete the pods 10/13/23 08:35:38.615 +Oct 13 08:35:38.676: INFO: Deleting ReplicationController affinity-nodeport took: 5.94669ms +Oct 13 08:35:38.777: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.879324ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:40.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-8352" for this suite. 
10/13/23 08:35:40.7 +------------------------------ +• [SLOW TEST] [9.037 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:31.669 + Oct 13 08:35:31.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:35:31.67 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:31.685 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:31.687 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 + STEP: creating service in namespace services-8352 10/13/23 08:35:31.69 + STEP: creating service affinity-nodeport in namespace services-8352 10/13/23 08:35:31.69 + STEP: creating replication controller affinity-nodeport in namespace services-8352 10/13/23 08:35:31.707 + I1013 08:35:31.715834 23 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-8352, replica count: 3 + I1013 08:35:34.766856 23 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:35:34.777: INFO: Creating new exec pod + Oct 13 08:35:34.786: INFO: Waiting up to 5m0s for pod "execpod-affinityq99dg" in namespace "services-8352" to be "running" + Oct 13 08:35:34.789: INFO: Pod "execpod-affinityq99dg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712054ms + Oct 13 08:35:36.792: INFO: Pod "execpod-affinityq99dg": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006284593s + Oct 13 08:35:36.792: INFO: Pod "execpod-affinityq99dg" satisfied condition "running" + Oct 13 08:35:37.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' + Oct 13 08:35:38.019: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" + Oct 13 08:35:38.019: INFO: stdout: "" + Oct 13 08:35:38.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.96.210.27 80' + Oct 13 08:35:38.154: INFO: stderr: "+ nc -v -z -w 2 10.96.210.27 80\nConnection to 10.96.210.27 80 port [tcp/http] succeeded!\n" + Oct 13 08:35:38.154: INFO: stdout: "" + Oct 13 08:35:38.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 31273' + Oct 13 08:35:38.266: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 31273\nConnection to 10.253.8.111 31273 port [tcp/*] succeeded!\n" + Oct 13 08:35:38.266: INFO: stdout: "" + Oct 13 08:35:38.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c nc -v -z -w 2 10.253.8.110 31273' + Oct 13 08:35:38.397: INFO: stderr: "+ nc -v -z -w 2 10.253.8.110 31273\nConnection to 10.253.8.110 31273 port [tcp/*] succeeded!\n" + Oct 13 08:35:38.397: INFO: stdout: "" + Oct 13 08:35:38.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-8352 exec execpod-affinityq99dg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.253.8.110:31273/ ; done' + Oct 13 08:35:38.606: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.253.8.110:31273/\n" + Oct 13 08:35:38.606: INFO: stdout: "\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk\naffinity-nodeport-vgjzk" + Oct 13 08:35:38.606: INFO: Received response from host: 
affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Received response from host: affinity-nodeport-vgjzk + Oct 13 08:35:38.606: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport in namespace services-8352, will wait for the garbage collector to delete the pods 10/13/23 08:35:38.615 + Oct 13 08:35:38.676: INFO: Deleting ReplicationController affinity-nodeport took: 5.94669ms + Oct 13 08:35:38.777: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.879324ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:40.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-8352" for this suite. 
10/13/23 08:35:40.7 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:40.706 +Oct 13 08:35:40.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:35:40.707 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:40.727 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:40.729 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +STEP: Creating secret with name secret-test-03269b6c-2408-4bda-a3f0-2f1d71a1ad40 10/13/23 08:35:40.731 +STEP: Creating a pod to test consume secrets 10/13/23 08:35:40.736 +Oct 13 08:35:40.742: INFO: Waiting up to 5m0s for pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979" in namespace "secrets-1339" to be "Succeeded or Failed" +Oct 13 08:35:40.745: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566186ms +Oct 13 08:35:42.749: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Running", Reason="", readiness=false. Elapsed: 2.00638211s +Oct 13 08:35:44.750: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008042955s +STEP: Saw pod success 10/13/23 08:35:44.75 +Oct 13 08:35:44.750: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979" satisfied condition "Succeeded or Failed" +Oct 13 08:35:44.753: INFO: Trying to get logs from node node1 pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 container secret-volume-test: +STEP: delete the pod 10/13/23 08:35:44.759 +Oct 13 08:35:44.773: INFO: Waiting for pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 to disappear +Oct 13 08:35:44.776: INFO: Pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-1339" for this suite. 
10/13/23 08:35:44.779 +------------------------------ +• [4.085 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:40.706 + Oct 13 08:35:40.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:35:40.707 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:40.727 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:40.729 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 + STEP: Creating secret with name secret-test-03269b6c-2408-4bda-a3f0-2f1d71a1ad40 10/13/23 08:35:40.731 + STEP: Creating a pod to test consume secrets 10/13/23 08:35:40.736 + Oct 13 08:35:40.742: INFO: Waiting up to 5m0s for pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979" in namespace "secrets-1339" to be "Succeeded or Failed" + Oct 13 08:35:40.745: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566186ms + Oct 13 08:35:42.749: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Running", Reason="", readiness=false. Elapsed: 2.00638211s + Oct 13 08:35:44.750: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008042955s + STEP: Saw pod success 10/13/23 08:35:44.75 + Oct 13 08:35:44.750: INFO: Pod "pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979" satisfied condition "Succeeded or Failed" + Oct 13 08:35:44.753: INFO: Trying to get logs from node node1 pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 container secret-volume-test: + STEP: delete the pod 10/13/23 08:35:44.759 + Oct 13 08:35:44.773: INFO: Waiting for pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 to disappear + Oct 13 08:35:44.776: INFO: Pod pod-secrets-ec8ce305-d179-4402-ad71-a50e5b096979 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-1339" for this suite. 
10/13/23 08:35:44.779 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:44.791 +Oct 13 08:35:44.792: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 08:35:44.793 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:44.806 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:44.808 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +STEP: Creating a pod to test substitution in volume subpath 10/13/23 08:35:44.81 +Oct 13 08:35:44.818: INFO: Waiting up to 5m0s for pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329" in namespace "var-expansion-5154" to be "Succeeded or Failed" +Oct 13 08:35:44.820: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812882ms +Oct 13 08:35:46.826: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008187682s +Oct 13 08:35:48.825: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007439151s +STEP: Saw pod success 10/13/23 08:35:48.825 +Oct 13 08:35:48.825: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329" satisfied condition "Succeeded or Failed" +Oct 13 08:35:48.829: INFO: Trying to get logs from node node1 pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 container dapi-container: +STEP: delete the pod 10/13/23 08:35:48.836 +Oct 13 08:35:48.846: INFO: Waiting for pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 to disappear +Oct 13 08:35:48.849: INFO: Pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:48.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-5154" for this suite. 
10/13/23 08:35:48.852 +------------------------------ +• [4.065 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:44.791 + Oct 13 08:35:44.792: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 08:35:44.793 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:44.806 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:44.808 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 + STEP: Creating a pod to test substitution in volume subpath 10/13/23 08:35:44.81 + Oct 13 08:35:44.818: INFO: Waiting up to 5m0s for pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329" in namespace "var-expansion-5154" to be "Succeeded or Failed" + Oct 13 08:35:44.820: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812882ms + Oct 13 08:35:46.826: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008187682s + Oct 13 08:35:48.825: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007439151s + STEP: Saw pod success 10/13/23 08:35:48.825 + Oct 13 08:35:48.825: INFO: Pod "var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329" satisfied condition "Succeeded or Failed" + Oct 13 08:35:48.829: INFO: Trying to get logs from node node1 pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 container dapi-container: + STEP: delete the pod 10/13/23 08:35:48.836 + Oct 13 08:35:48.846: INFO: Waiting for pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 to disappear + Oct 13 08:35:48.849: INFO: Pod var-expansion-1dcdbedb-9a1a-48fe-a534-f620e2155329 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:48.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-5154" for this suite. 
10/13/23 08:35:48.852 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:48.857 +Oct 13 08:35:48.857: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:35:48.858 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:48.871 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:48.873 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:35:48.875 +Oct 13 08:35:48.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05" in namespace "projected-5671" to be "Succeeded or Failed" +Oct 13 08:35:48.886: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205362ms +Oct 13 08:35:50.890: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007291301s +Oct 13 08:35:52.891: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007842473s +STEP: Saw pod success 10/13/23 08:35:52.891 +Oct 13 08:35:52.891: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05" satisfied condition "Succeeded or Failed" +Oct 13 08:35:52.894: INFO: Trying to get logs from node node1 pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 container client-container: +STEP: delete the pod 10/13/23 08:35:52.901 +Oct 13 08:35:52.912: INFO: Waiting for pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 to disappear +Oct 13 08:35:52.915: INFO: Pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:52.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-5671" for this suite. 
10/13/23 08:35:52.918 +------------------------------ +• [4.066 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:48.857 + Oct 13 08:35:48.857: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:35:48.858 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:48.871 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:48.873 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:35:48.875 + Oct 13 08:35:48.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05" in namespace "projected-5671" to be "Succeeded or Failed" + Oct 13 08:35:48.886: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205362ms + Oct 13 08:35:50.890: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007291301s + Oct 13 08:35:52.891: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007842473s + STEP: Saw pod success 10/13/23 08:35:52.891 + Oct 13 08:35:52.891: INFO: Pod "downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05" satisfied condition "Succeeded or Failed" + Oct 13 08:35:52.894: INFO: Trying to get logs from node node1 pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 container client-container: + STEP: delete the pod 10/13/23 08:35:52.901 + Oct 13 08:35:52.912: INFO: Waiting for pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 to disappear + Oct 13 08:35:52.915: INFO: Pod downwardapi-volume-ca11b6a5-79dc-4bf4-a28c-42633dfe9d05 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:52.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-5671" for this suite. 
10/13/23 08:35:52.918 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:52.924 +Oct 13 08:35:52.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:35:52.925 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:52.94 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:52.942 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +STEP: validating cluster-info 10/13/23 08:35:52.945 +Oct 13 08:35:52.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1507 cluster-info' +Oct 13 08:35:53.019: INFO: stderr: "" +Oct 13 08:35:53.019: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:53.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-1507" for this suite. 
10/13/23 08:35:53.023 +------------------------------ +• [0.105 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl cluster-info + test/e2e/kubectl/kubectl.go:1244 + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:52.924 + Oct 13 08:35:52.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:35:52.925 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:52.94 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:52.942 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 + STEP: validating cluster-info 10/13/23 08:35:52.945 + Oct 13 08:35:52.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1507 cluster-info' + Oct 13 08:35:53.019: INFO: stderr: "" + Oct 13 08:35:53.019: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:53.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-1507" for this suite. 
10/13/23 08:35:53.023 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:53.03 +Oct 13 08:35:53.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:35:53.031 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:53.044 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:53.047 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +STEP: Creating a pod to test downward API volume plugin 10/13/23 08:35:53.049 +Oct 13 08:35:53.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4" in namespace "projected-4400" to be "Succeeded or Failed" +Oct 13 08:35:53.060: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906499ms +Oct 13 08:35:55.066: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008726693s +Oct 13 08:35:57.065: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008030094s +STEP: Saw pod success 10/13/23 08:35:57.065 +Oct 13 08:35:57.065: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4" satisfied condition "Succeeded or Failed" +Oct 13 08:35:57.069: INFO: Trying to get logs from node node1 pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 container client-container: +STEP: delete the pod 10/13/23 08:35:57.075 +Oct 13 08:35:57.090: INFO: Waiting for pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 to disappear +Oct 13 08:35:57.092: INFO: Pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:57.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-4400" for this suite. 
10/13/23 08:35:57.096 +------------------------------ +• [4.072 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:53.03 + Oct 13 08:35:53.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:35:53.031 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:53.044 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:53.047 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 + STEP: Creating a pod to test downward API volume plugin 10/13/23 08:35:53.049 + Oct 13 08:35:53.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4" in namespace "projected-4400" to be "Succeeded or Failed" + Oct 13 08:35:53.060: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906499ms + Oct 13 08:35:55.066: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008726693s + Oct 13 08:35:57.065: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008030094s + STEP: Saw pod success 10/13/23 08:35:57.065 + Oct 13 08:35:57.065: INFO: Pod "downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4" satisfied condition "Succeeded or Failed" + Oct 13 08:35:57.069: INFO: Trying to get logs from node node1 pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 container client-container: + STEP: delete the pod 10/13/23 08:35:57.075 + Oct 13 08:35:57.090: INFO: Waiting for pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 to disappear + Oct 13 08:35:57.092: INFO: Pod downwardapi-volume-837a5407-d9c2-4161-aae8-bcb67d9455c4 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:57.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-4400" for this suite. 
10/13/23 08:35:57.096 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIInlineVolumes + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +[BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:57.103 +Oct 13 08:35:57.103: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename csiinlinevolumes 10/13/23 08:35:57.103 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:57.117 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:57.12 +[BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +STEP: creating 10/13/23 08:35:57.122 +STEP: getting 10/13/23 08:35:57.139 +STEP: listing in namespace 10/13/23 08:35:57.143 +STEP: patching 10/13/23 08:35:57.146 +STEP: deleting 10/13/23 08:35:57.158 +[AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:35:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 +STEP: Destroying namespace "csiinlinevolumes-8335" for this suite. 10/13/23 08:35:57.17 +------------------------------ +• [0.073 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:57.103 + Oct 13 08:35:57.103: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename csiinlinevolumes 10/13/23 08:35:57.103 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:57.117 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:57.12 + [BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 + STEP: creating 10/13/23 08:35:57.122 + STEP: getting 10/13/23 08:35:57.139 + STEP: listing in namespace 10/13/23 08:35:57.143 + STEP: patching 10/13/23 08:35:57.146 + STEP: deleting 10/13/23 08:35:57.158 + [AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:35:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 + STEP: Destroying namespace "csiinlinevolumes-8335" for this suite. 
10/13/23 08:35:57.17 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:35:57.176 +Oct 13 08:35:57.176: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:35:57.177 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:57.191 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:57.193 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-3732 10/13/23 08:35:57.196 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 10/13/23 08:35:57.21 +STEP: creating service externalsvc in namespace services-3732 10/13/23 08:35:57.21 +STEP: creating replication controller externalsvc in namespace services-3732 10/13/23 08:35:57.226 +I1013 08:35:57.235148 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-3732, replica count: 2 +I1013 08:36:00.287410 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName 10/13/23 08:36:00.29 +Oct 13 08:36:00.311: INFO: Creating new exec pod +Oct 13 08:36:00.321: INFO: Waiting up to 5m0s for pod "execpoddxpr9" in namespace "services-3732" to be "running" +Oct 13 08:36:00.324: INFO: Pod "execpoddxpr9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258751ms +Oct 13 08:36:02.327: INFO: Pod "execpoddxpr9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006594053s +Oct 13 08:36:02.328: INFO: Pod "execpoddxpr9" satisfied condition "running" +Oct 13 08:36:02.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3732 exec execpoddxpr9 -- /bin/sh -x -c nslookup nodeport-service.services-3732.svc.cluster.local' +Oct 13 08:36:02.496: INFO: stderr: "+ nslookup nodeport-service.services-3732.svc.cluster.local\n" +Oct 13 08:36:02.496: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3732.svc.cluster.local\tcanonical name = externalsvc.services-3732.svc.cluster.local.\nName:\texternalsvc.services-3732.svc.cluster.local\nAddress: 10.99.45.190\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-3732, will wait for the garbage collector to delete the pods 10/13/23 08:36:02.496 +Oct 13 08:36:02.555: INFO: Deleting ReplicationController externalsvc took: 5.39796ms +Oct 13 08:36:02.655: INFO: Terminating ReplicationController externalsvc pods took: 100.381114ms +Oct 13 08:36:04.775: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:04.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3732" for this suite. 10/13/23 08:36:04.788 +------------------------------ +• [SLOW TEST] [7.619 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:35:57.176 + Oct 13 08:35:57.176: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:35:57.177 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:35:57.191 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:35:57.193 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 + STEP: creating a service nodeport-service with the type=NodePort in namespace services-3732 10/13/23 08:35:57.196 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 10/13/23 08:35:57.21 + STEP: creating service externalsvc in namespace services-3732 10/13/23 08:35:57.21 + STEP: creating replication controller externalsvc in namespace services-3732 10/13/23 08:35:57.226 + I1013 08:35:57.235148 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-3732, replica count: 2 + I1013 08:36:00.287410 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the NodePort service to type=ExternalName 10/13/23 08:36:00.29 + Oct 13 08:36:00.311: INFO: Creating new exec pod + Oct 13 
08:36:00.321: INFO: Waiting up to 5m0s for pod "execpoddxpr9" in namespace "services-3732" to be "running" + Oct 13 08:36:00.324: INFO: Pod "execpoddxpr9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258751ms + Oct 13 08:36:02.327: INFO: Pod "execpoddxpr9": Phase="Running", Reason="", readiness=true. Elapsed: 2.006594053s + Oct 13 08:36:02.328: INFO: Pod "execpoddxpr9" satisfied condition "running" + Oct 13 08:36:02.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3732 exec execpoddxpr9 -- /bin/sh -x -c nslookup nodeport-service.services-3732.svc.cluster.local' + Oct 13 08:36:02.496: INFO: stderr: "+ nslookup nodeport-service.services-3732.svc.cluster.local\n" + Oct 13 08:36:02.496: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3732.svc.cluster.local\tcanonical name = externalsvc.services-3732.svc.cluster.local.\nName:\texternalsvc.services-3732.svc.cluster.local\nAddress: 10.99.45.190\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-3732, will wait for the garbage collector to delete the pods 10/13/23 08:36:02.496 + Oct 13 08:36:02.555: INFO: Deleting ReplicationController externalsvc took: 5.39796ms + Oct 13 08:36:02.655: INFO: Terminating ReplicationController externalsvc pods took: 100.381114ms + Oct 13 08:36:04.775: INFO: Cleaning up the NodePort to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:04.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3732" for this suite. 10/13/23 08:36:04.788 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:04.796 +Oct 13 08:36:04.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 08:36:04.797 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:04.811 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:04.814 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:36:04.83 +STEP: Check that daemon pods launch on every node of the cluster. 
10/13/23 08:36:04.835 +Oct 13 08:36:04.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:36:04.841: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:36:05.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Oct 13 08:36:05.847: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:36:06.849: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:36:06.849: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Getting /status 10/13/23 08:36:06.852 +Oct 13 08:36:06.856: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status 10/13/23 08:36:06.856 +Oct 13 08:36:06.867: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated 10/13/23 08:36:06.867 +Oct 13 08:36:06.868: INFO: Observed &DaemonSet event: ADDED +Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.869: INFO: Found daemon set daemon-set in namespace daemonsets-8920 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 13 08:36:06.869: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status 10/13/23 08:36:06.869 +STEP: watching for the daemon set status to be patched 10/13/23 08:36:06.875 +Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: ADDED +Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.877: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.877: INFO: Observed daemon set daemon-set in namespace daemonsets-8920 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 13 08:36:06.877: INFO: Observed &DaemonSet event: MODIFIED +Oct 13 08:36:06.877: INFO: Found daemon set daemon-set in namespace daemonsets-8920 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Oct 13 08:36:06.877: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:36:06.879 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8920, will wait for the garbage collector to delete the pods 10/13/23 08:36:06.88 +Oct 13 08:36:06.937: INFO: Deleting DaemonSet.extensions daemon-set took: 4.942559ms +Oct 13 08:36:07.038: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.026596ms +Oct 13 08:36:09.542: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:36:09.542: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Oct 13 08:36:09.545: 
INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"18547"},"items":null} + +Oct 13 08:36:09.547: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"18547"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:09.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-8920" for this suite. 10/13/23 08:36:09.559 +------------------------------ +• [4.768 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:04.796 + Oct 13 08:36:04.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 08:36:04.797 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:04.811 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:04.814 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 + STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:36:04.83 + STEP: Check that daemon pods launch on every node of the cluster. 
10/13/23 08:36:04.835 + Oct 13 08:36:04.841: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:36:04.841: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:36:05.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Oct 13 08:36:05.847: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:36:06.849: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:36:06.849: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Getting /status 10/13/23 08:36:06.852 + Oct 13 08:36:06.856: INFO: Daemon Set daemon-set has Conditions: [] + STEP: updating the DaemonSet Status 10/13/23 08:36:06.856 + Oct 13 08:36:06.867: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the daemon set status to be updated 10/13/23 08:36:06.867 + Oct 13 08:36:06.868: INFO: Observed &DaemonSet event: ADDED + Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.869: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.869: INFO: Found daemon set daemon-set in namespace daemonsets-8920 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Oct 13 08:36:06.869: INFO: Daemon set daemon-set has an updated status + STEP: patching the DaemonSet Status 10/13/23 08:36:06.869 + STEP: watching for the daemon set status to be patched 10/13/23 08:36:06.875 + Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: ADDED + Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.876: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.877: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.877: INFO: Observed daemon set daemon-set in namespace daemonsets-8920 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Oct 13 08:36:06.877: INFO: Observed &DaemonSet event: MODIFIED + Oct 13 08:36:06.877: INFO: Found daemon set daemon-set in namespace daemonsets-8920 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] + Oct 13 08:36:06.877: INFO: Daemon set daemon-set has a patched status + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:36:06.879 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8920, will wait for the garbage collector to delete the pods 10/13/23 08:36:06.88 + Oct 13 08:36:06.937: INFO: Deleting DaemonSet.extensions daemon-set took: 4.942559ms + Oct 13 08:36:07.038: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.026596ms + Oct 13 08:36:09.542: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:36:09.542: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset 
daemon-set + Oct 13 08:36:09.545: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"18547"},"items":null} + + Oct 13 08:36:09.547: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"18547"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:09.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-8920" for this suite. 10/13/23 08:36:09.559 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:09.564 +Oct 13 08:36:09.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replication-controller 10/13/23 08:36:09.565 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:09.578 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:09.58 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +STEP: creating a ReplicationController 10/13/23 08:36:09.584 +STEP: waiting for RC to be added 10/13/23 08:36:09.589 +STEP: waiting for available Replicas 10/13/23 08:36:09.589 +STEP: patching ReplicationController 10/13/23 08:36:10.282 +STEP: waiting for RC to be modified 10/13/23 08:36:10.288 +STEP: patching ReplicationController status 10/13/23 08:36:10.289 +STEP: waiting for RC to be modified 10/13/23 08:36:10.293 +STEP: waiting for available Replicas 10/13/23 08:36:10.294 +STEP: fetching ReplicationController status 10/13/23 08:36:10.299 +STEP: patching ReplicationController scale 10/13/23 08:36:10.302 +STEP: waiting for RC to be modified 10/13/23 08:36:10.307 +STEP: waiting for ReplicationController's scale to be the max amount 10/13/23 08:36:10.307 +STEP: fetching ReplicationController; ensuring that it's patched 10/13/23 08:36:11.75 +STEP: updating ReplicationController status 10/13/23 08:36:11.753 +STEP: waiting for RC to be modified 10/13/23 08:36:11.757 +STEP: listing all ReplicationControllers 10/13/23 08:36:11.758 +STEP: checking that ReplicationController has expected values 10/13/23 08:36:11.761 +STEP: deleting ReplicationControllers by collection 10/13/23 08:36:11.761 +STEP: waiting for ReplicationController to have a DELETED watchEvent 10/13/23 08:36:11.768 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:11.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down 
framework | framework.go:193 +STEP: Destroying namespace "replication-controller-3466" for this suite. 10/13/23 08:36:11.81 +------------------------------ +• [2.251 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:09.564 + Oct 13 08:36:09.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replication-controller 10/13/23 08:36:09.565 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:09.578 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:09.58 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 + STEP: creating a ReplicationController 10/13/23 08:36:09.584 + STEP: waiting for RC to be added 10/13/23 08:36:09.589 + STEP: waiting for available Replicas 10/13/23 08:36:09.589 + STEP: patching ReplicationController 10/13/23 08:36:10.282 + STEP: waiting for RC to be modified 10/13/23 08:36:10.288 + STEP: patching ReplicationController status 10/13/23 08:36:10.289 + STEP: waiting for RC to be modified 10/13/23 08:36:10.293 + STEP: waiting for available Replicas 10/13/23 08:36:10.294 + STEP: fetching ReplicationController status 10/13/23 08:36:10.299 + STEP: patching ReplicationController scale 10/13/23 08:36:10.302 + STEP: waiting for RC to be modified 10/13/23 08:36:10.307 + STEP: waiting for ReplicationController's scale to be the max amount 10/13/23 08:36:10.307 + STEP: fetching ReplicationController; ensuring that it's patched 10/13/23 08:36:11.75 + STEP: updating ReplicationController status 10/13/23 08:36:11.753 + STEP: waiting for RC to be modified 10/13/23 08:36:11.757 + STEP: listing all ReplicationControllers 10/13/23 08:36:11.758 + STEP: checking that ReplicationController has expected values 10/13/23 08:36:11.761 + STEP: deleting ReplicationControllers by collection 10/13/23 08:36:11.761 + STEP: waiting for ReplicationController to have a DELETED watchEvent 10/13/23 08:36:11.768 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:11.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-3466" for this suite. 
10/13/23 08:36:11.81 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:11.819 +Oct 13 08:36:11.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:36:11.819 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:11.832 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:11.834 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +STEP: set up a multi version CRD 10/13/23 08:36:11.836 +Oct 13 08:36:11.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: rename a version 10/13/23 08:36:15.67 +STEP: check the new version name is served 10/13/23 08:36:15.686 +STEP: check the old version name is removed 10/13/23 08:36:17.32 +STEP: check the other version is not changed 10/13/23 08:36:17.983 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:21.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-4231" for this suite. 
10/13/23 08:36:21.13 +------------------------------ +• [SLOW TEST] [9.321 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:11.819 + Oct 13 08:36:11.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:36:11.819 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:11.832 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:11.834 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 + STEP: set up a multi version CRD 10/13/23 08:36:11.836 + Oct 13 08:36:11.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: rename a version 10/13/23 08:36:15.67 + STEP: check the new version name is served 10/13/23 08:36:15.686 + STEP: check the old version name is removed 10/13/23 08:36:17.32 + STEP: check the other version is not changed 10/13/23 08:36:17.983 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:21.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-4231" for this suite. 
10/13/23 08:36:21.13 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:21.14 +Oct 13 08:36:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 08:36:21.141 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:21.16 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:21.162 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 +Oct 13 08:36:21.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: creating the pod 10/13/23 08:36:21.165 +STEP: submitting the pod to kubernetes 10/13/23 08:36:21.165 +Oct 13 08:36:21.172: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842" in namespace "pods-3096" to be "running and ready" +Oct 13 08:36:21.177: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181457ms +Oct 13 08:36:21.177: INFO: The phase of Pod pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:36:23.181: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842": Phase="Running", Reason="", readiness=true. Elapsed: 2.008712421s +Oct 13 08:36:23.181: INFO: The phase of Pod pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842 is Running (Ready = true) +Oct 13 08:36:23.181: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:23.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-3096" for this suite. 
10/13/23 08:36:23.27 +------------------------------ +• [2.136 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:21.14 + Oct 13 08:36:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 08:36:21.141 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:21.16 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:21.162 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:536 + Oct 13 08:36:21.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: creating the pod 10/13/23 08:36:21.165 + STEP: submitting the pod to kubernetes 10/13/23 08:36:21.165 + Oct 13 08:36:21.172: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842" in namespace "pods-3096" to be "running and ready" + Oct 13 08:36:21.177: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181457ms + Oct 13 08:36:21.177: INFO: The phase of Pod pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:36:23.181: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842": Phase="Running", Reason="", readiness=true. Elapsed: 2.008712421s + Oct 13 08:36:23.181: INFO: The phase of Pod pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842 is Running (Ready = true) + Oct 13 08:36:23.181: INFO: Pod "pod-exec-websocket-72edd696-27e7-4bbf-a281-09db677e0842" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:23.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-3096" for this suite. 
10/13/23 08:36:23.27 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:23.276 +Oct 13 08:36:23.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 08:36:23.278 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:23.29 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:23.293 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +STEP: Updating Namespace "namespaces-9280" 10/13/23 08:36:23.295 +Oct 13 08:36:23.302: INFO: Namespace "namespaces-9280" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"bac244cc-4119-4800-a1cc-8eb31f68e1cb", "kubernetes.io/metadata.name":"namespaces-9280", "namespaces-9280":"updated", "pod-security.kubernetes.io/enforce":"baseline"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-9280" for this suite. 
10/13/23 08:36:23.306 +------------------------------ +• [0.045 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:23.276 + Oct 13 08:36:23.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 08:36:23.278 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:23.29 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:23.293 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 + STEP: Updating Namespace "namespaces-9280" 10/13/23 08:36:23.295 + Oct 13 08:36:23.302: INFO: Namespace "namespaces-9280" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"bac244cc-4119-4800-a1cc-8eb31f68e1cb", "kubernetes.io/metadata.name":"namespaces-9280", "namespaces-9280":"updated", "pod-security.kubernetes.io/enforce":"baseline"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-9280" for this suite. 10/13/23 08:36:23.306 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:23.322 +Oct 13 08:36:23.322: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:36:23.322 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:23.333 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:23.337 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +STEP: Creating a pod to test emptydir 0777 on node default medium 10/13/23 08:36:23.339 +Oct 13 08:36:23.346: INFO: Waiting up to 5m0s for pod "pod-1256959a-047a-4058-a976-48b1b79e8da1" in namespace "emptydir-3076" to be "Succeeded or Failed" +Oct 13 08:36:23.349: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827202ms +Oct 13 08:36:25.354: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008189485s +Oct 13 08:36:27.353: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006922178s +STEP: Saw pod success 10/13/23 08:36:27.353 +Oct 13 08:36:27.353: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1" satisfied condition "Succeeded or Failed" +Oct 13 08:36:27.356: INFO: Trying to get logs from node node2 pod pod-1256959a-047a-4058-a976-48b1b79e8da1 container test-container: +STEP: delete the pod 10/13/23 08:36:27.361 +Oct 13 08:36:27.378: INFO: Waiting for pod pod-1256959a-047a-4058-a976-48b1b79e8da1 to disappear +Oct 13 08:36:27.381: INFO: Pod pod-1256959a-047a-4058-a976-48b1b79e8da1 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-3076" for this suite. 10/13/23 08:36:27.384 +------------------------------ +• [4.069 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:23.322 + Oct 13 08:36:23.322: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:36:23.322 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:23.333 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:23.337 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 + STEP: Creating a pod to test emptydir 0777 on node default medium 10/13/23 08:36:23.339 + Oct 13 08:36:23.346: INFO: Waiting up to 5m0s for pod "pod-1256959a-047a-4058-a976-48b1b79e8da1" in namespace "emptydir-3076" to be "Succeeded or Failed" + Oct 13 08:36:23.349: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827202ms + Oct 13 08:36:25.354: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008189485s + Oct 13 08:36:27.353: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006922178s + STEP: Saw pod success 10/13/23 08:36:27.353 + Oct 13 08:36:27.353: INFO: Pod "pod-1256959a-047a-4058-a976-48b1b79e8da1" satisfied condition "Succeeded or Failed" + Oct 13 08:36:27.356: INFO: Trying to get logs from node node2 pod pod-1256959a-047a-4058-a976-48b1b79e8da1 container test-container: + STEP: delete the pod 10/13/23 08:36:27.361 + Oct 13 08:36:27.378: INFO: Waiting for pod pod-1256959a-047a-4058-a976-48b1b79e8da1 to disappear + Oct 13 08:36:27.381: INFO: Pod pod-1256959a-047a-4058-a976-48b1b79e8da1 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-3076" for this suite. 10/13/23 08:36:27.384 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:27.391 +Oct 13 08:36:27.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:36:27.392 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:27.403 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:27.405 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:36:27.42 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:36:27.751 +STEP: Deploying the webhook pod 10/13/23 08:36:27.759 +STEP: Wait for the deployment to be ready 10/13/23 08:36:27.768 +Oct 13 08:36:27.777: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:36:29.785 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:36:29.806 +Oct 13 08:36:30.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +Oct 13 08:36:30.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6427-crds.webhook.example.com via the AdmissionRegistration API 10/13/23 08:36:31.325 +STEP: Creating a custom resource that should be mutated by the webhook 10/13/23 08:36:31.34 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:33.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-2133" for this suite. 10/13/23 08:36:33.967 +STEP: Destroying namespace "webhook-2133-markers" for this suite. 10/13/23 08:36:33.976 +------------------------------ +• [SLOW TEST] [6.593 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:27.391 + Oct 13 08:36:27.391: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:36:27.392 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:27.403 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:27.405 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:36:27.42 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:36:27.751 + STEP: Deploying the webhook pod 10/13/23 08:36:27.759 + STEP: Wait for the deployment to be ready 10/13/23 08:36:27.768 + Oct 13 08:36:27.777: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:36:29.785 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:36:29.806 + Oct 13 08:36:30.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 + Oct 13 08:36:30.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6427-crds.webhook.example.com via the AdmissionRegistration API 10/13/23 08:36:31.325 + STEP: Creating a custom resource that should be mutated by the webhook 10/13/23 08:36:31.34 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:33.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-2133" for this suite. 10/13/23 08:36:33.967 + STEP: Destroying namespace "webhook-2133-markers" for this suite. 
10/13/23 08:36:33.976 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:33.984 +Oct 13 08:36:33.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename watch 10/13/23 08:36:33.985 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:33.999 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:34.002 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +STEP: creating a watch on configmaps 10/13/23 08:36:34.005 +STEP: creating a new configmap 10/13/23 08:36:34.008 +STEP: modifying the configmap once 10/13/23 08:36:34.014 +STEP: closing the watch once it receives two notifications 10/13/23 08:36:34.024 +Oct 13 08:36:34.024: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18843 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:36:34.024: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18844 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed 10/13/23 08:36:34.024 +STEP: creating a new watch on configmaps from the last resource version observed by the first watch 10/13/23 08:36:34.033 +STEP: deleting the configmap 10/13/23 08:36:34.036 +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 10/13/23 08:36:34.043 +Oct 13 08:36:34.044: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18845 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 08:36:34.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18846 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:36:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-5668" for this suite. 10/13/23 08:36:34.048 +------------------------------ +• [0.070 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:33.984 + Oct 13 08:36:33.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename watch 10/13/23 08:36:33.985 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:33.999 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:34.002 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + STEP: creating a watch on configmaps 10/13/23 08:36:34.005 + STEP: creating a new configmap 10/13/23 08:36:34.008 + STEP: modifying the configmap once 10/13/23 08:36:34.014 + STEP: closing the watch once it receives two notifications 10/13/23 08:36:34.024 + Oct 13 08:36:34.024: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18843 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:36:34.024: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18844 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time, while the watch is closed 10/13/23 08:36:34.024 + STEP: creating a new watch on configmaps from the last resource version observed by the first watch 10/13/23 08:36:34.033 + STEP: deleting the configmap 10/13/23 08:36:34.036 + STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 10/13/23 08:36:34.043 + Oct 13 08:36:34.044: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18845 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 08:36:34.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5668 580089a0-7ef9-4d87-89b2-243c12eafd8a 18846 0 2023-10-13 08:36:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-10-13 08:36:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:36:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-5668" for this suite. 10/13/23 08:36:34.048 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:36:34.054 +Oct 13 08:36:34.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:36:34.055 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:34.066 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:34.069 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +STEP: Creating a test externalName service 10/13/23 08:36:34.07 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:34.074 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:34.075 +STEP: creating a pod to probe DNS 10/13/23 08:36:34.075 +STEP: submitting the pod to kubernetes 10/13/23 08:36:34.075 +Oct 13 08:36:34.083: INFO: Waiting up to 15m0s for pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0" in namespace "dns-4029" to be "running" +Oct 13 08:36:34.087: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16231ms +Oct 13 08:36:36.091: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007288812s +Oct 13 08:36:36.091: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:36:36.091 +STEP: looking for the results for each expected name from probers 10/13/23 08:36:36.094 +Oct 13 08:36:36.099: INFO: DNS probes using dns-test-301b7955-1030-41e9-a76c-fd41c12447c0 succeeded + +STEP: deleting the pod 10/13/23 08:36:36.099 +STEP: changing the externalName to bar.example.com 10/13/23 08:36:36.111 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:36.118 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:36.118 +STEP: creating a second pod to probe DNS 10/13/23 08:36:36.118 +STEP: submitting the pod to kubernetes 10/13/23 08:36:36.118 +Oct 13 08:36:36.123: INFO: Waiting up to 15m0s for pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef" in namespace "dns-4029" to be "running" +Oct 13 08:36:36.127: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984737ms +Oct 13 08:36:38.132: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef": Phase="Running", Reason="", readiness=true. Elapsed: 2.008698166s +Oct 13 08:36:38.132: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:36:38.132 +STEP: looking for the results for each expected name from probers 10/13/23 08:36:38.134 +Oct 13 08:36:38.138: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:38.142: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:38.142: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:36:43.145: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:43.148: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:43.148: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:36:48.147: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:48.151: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Oct 13 08:36:48.151: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:36:53.147: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:53.152: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:53.152: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:36:58.149: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:58.155: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:36:58.155: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:37:03.149: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:37:03.153: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. +' instead of 'bar.example.com.' +Oct 13 08:37:03.153: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + +Oct 13 08:37:08.155: INFO: DNS probes using dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef succeeded + +STEP: deleting the pod 10/13/23 08:37:08.155 +STEP: changing the service to type=ClusterIP 10/13/23 08:37:08.18 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:37:08.202 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:37:08.202 +STEP: creating a third pod to probe DNS 10/13/23 08:37:08.202 +STEP: submitting the pod to kubernetes 10/13/23 08:37:08.207 +Oct 13 08:37:08.219: INFO: Waiting up to 15m0s for pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638" in namespace "dns-4029" to be "running" +Oct 13 08:37:08.225: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301614ms +Oct 13 08:37:10.231: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.012160916s +Oct 13 08:37:10.231: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:37:10.231 +STEP: looking for the results for each expected name from probers 10/13/23 08:37:10.237 +Oct 13 08:37:10.247: INFO: DNS probes using dns-test-3219172d-e258-4ef5-b90a-df769ec93638 succeeded + +STEP: deleting the pod 10/13/23 08:37:10.247 +STEP: deleting the test externalName service 10/13/23 08:37:10.261 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:37:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-4029" for this suite. 10/13/23 08:37:10.283 +------------------------------ +• [SLOW TEST] [36.237 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:36:34.054 + Oct 13 08:36:34.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:36:34.055 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:36:34.066 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:36:34.069 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 + STEP: Creating a test externalName service 10/13/23 08:36:34.07 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:34.074 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:34.075 + STEP: creating a pod to probe DNS 10/13/23 08:36:34.075 + STEP: submitting the pod to kubernetes 10/13/23 08:36:34.075 + Oct 13 08:36:34.083: INFO: Waiting up to 15m0s for pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0" in namespace "dns-4029" to be "running" + Oct 13 08:36:34.087: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16231ms + Oct 13 08:36:36.091: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007288812s + Oct 13 08:36:36.091: INFO: Pod "dns-test-301b7955-1030-41e9-a76c-fd41c12447c0" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:36:36.091 + STEP: looking for the results for each expected name from probers 10/13/23 08:36:36.094 + Oct 13 08:36:36.099: INFO: DNS probes using dns-test-301b7955-1030-41e9-a76c-fd41c12447c0 succeeded + + STEP: deleting the pod 10/13/23 08:36:36.099 + STEP: changing the externalName to bar.example.com 10/13/23 08:36:36.111 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:36.118 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:36:36.118 + STEP: creating a second pod to probe DNS 10/13/23 08:36:36.118 + STEP: submitting the pod to kubernetes 10/13/23 08:36:36.118 + Oct 13 08:36:36.123: INFO: Waiting up to 15m0s for pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef" in namespace "dns-4029" to be "running" + Oct 13 08:36:36.127: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984737ms + Oct 13 08:36:38.132: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef": Phase="Running", Reason="", readiness=true. Elapsed: 2.008698166s + Oct 13 08:36:38.132: INFO: Pod "dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:36:38.132 + STEP: looking for the results for each expected name from probers 10/13/23 08:36:38.134 + Oct 13 08:36:38.138: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:38.142: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:38.142: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:36:43.145: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:43.148: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:43.148: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:36:48.147: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:48.151: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' 
+ Oct 13 08:36:48.151: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:36:53.147: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:53.152: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:53.152: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:36:58.149: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:58.155: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:36:58.155: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:37:03.149: INFO: File wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:37:03.153: INFO: File jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local from pod dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef contains 'foo.example.com. + ' instead of 'bar.example.com.' + Oct 13 08:37:03.153: INFO: Lookups using dns-4029/dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef failed for: [wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local] + + Oct 13 08:37:08.155: INFO: DNS probes using dns-test-6c4ce1c1-333d-4fb3-b869-cb24a10d45ef succeeded + + STEP: deleting the pod 10/13/23 08:37:08.155 + STEP: changing the service to type=ClusterIP 10/13/23 08:37:08.18 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:37:08.202 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4029.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4029.svc.cluster.local; sleep 1; done + 10/13/23 08:37:08.202 + STEP: creating a third pod to probe DNS 10/13/23 08:37:08.202 + STEP: submitting the pod to kubernetes 10/13/23 08:37:08.207 + Oct 13 08:37:08.219: INFO: Waiting up to 15m0s for pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638" in namespace "dns-4029" to be "running" + Oct 13 08:37:08.225: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301614ms + Oct 13 08:37:10.231: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.012160916s + Oct 13 08:37:10.231: INFO: Pod "dns-test-3219172d-e258-4ef5-b90a-df769ec93638" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:37:10.231 + STEP: looking for the results for each expected name from probers 10/13/23 08:37:10.237 + Oct 13 08:37:10.247: INFO: DNS probes using dns-test-3219172d-e258-4ef5-b90a-df769ec93638 succeeded + + STEP: deleting the pod 10/13/23 08:37:10.247 + STEP: deleting the test externalName service 10/13/23 08:37:10.261 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:37:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-4029" for this suite. 10/13/23 08:37:10.283 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:37:10.292 +Oct 13 08:37:10.292: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 08:37:10.293 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:10.311 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:10.314 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +Oct 13 08:37:10.317: INFO: Creating simple deployment test-new-deployment +Oct 13 08:37:10.329: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource 10/13/23 08:37:12.342 +STEP: updating a scale subresource 10/13/23 08:37:12.345 +STEP: verifying the deployment Spec.Replicas was modified 10/13/23 08:37:12.352 +STEP: Patch a scale subresource 10/13/23 08:37:12.358 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 08:37:12.379: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-6272 44683e29-97c6-4495-ada7-8499899674e2 19036 3 2023-10-13 08:37:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-10-13 08:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:37:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045ac678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:37:11 +0000 UTC,LastTransitionTime:2023-10-13 08:37:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-10-13 08:37:11 +0000 UTC,LastTransitionTime:2023-10-13 08:37:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 13 08:37:12.384: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-6272 ed8ecf69-264f-432c-b7ee-fe971dba7719 19041 3 2023-10-13 08:37:10 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 44683e29-97c6-4495-ada7-8499899674e2 0xc0039d89d7 0xc0039d89d8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44683e29-97c6-4495-ada7-8499899674e2\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039d8a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:37:12.393: INFO: Pod "test-new-deployment-7f5969cbc7-gxv7c" is available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-gxv7c test-new-deployment-7f5969cbc7- deployment-6272 8e86f16f-0788-4d4e-95e9-f6220c43646e 19029 0 2023-10-13 08:37:10 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ed8ecf69-264f-432c-b7ee-fe971dba7719 0xc0039d8e67 0xc0039d8e68}] [] [{kube-controller-manager Update v1 2023-10-13 08:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed8ecf69-264f-432c-b7ee-fe971dba7719\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:37:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gfcm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.123,StartTime:2023-10-13 08:37:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:37:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://18f86c981a3a2036831d81d2c0930c89157fb1aeaa447ed604393afc8ddfc703,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:37:12.394: INFO: Pod "test-new-deployment-7f5969cbc7-xgdh2" is not available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-xgdh2 test-new-deployment-7f5969cbc7- deployment-6272 3c7ff324-6881-4bb3-9bf6-4e71f7d0c389 19042 0 2023-10-13 08:37:12 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ed8ecf69-264f-432c-b7ee-fe971dba7719 0xc0039d9057 0xc0039d9058}] [] [{kube-controller-manager Update v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed8ecf69-264f-432c-b7ee-fe971dba7719\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xb2tz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb2tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:,StartTime:2023-10-13 08:37:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 08:37:12.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-6272" for this suite. 
10/13/23 08:37:12.405 +------------------------------ +• [2.121 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:37:10.292 + Oct 13 08:37:10.292: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 08:37:10.293 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:10.311 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:10.314 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + Oct 13 08:37:10.317: INFO: Creating simple deployment test-new-deployment + Oct 13 08:37:10.329: INFO: deployment "test-new-deployment" doesn't have the required revision set + STEP: getting scale subresource 10/13/23 08:37:12.342 + STEP: updating a scale subresource 10/13/23 08:37:12.345 + STEP: verifying the deployment Spec.Replicas was modified 10/13/23 08:37:12.352 + STEP: Patch a scale subresource 10/13/23 08:37:12.358 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 08:37:12.379: INFO: Deployment "test-new-deployment": + &Deployment{ObjectMeta:{test-new-deployment deployment-6272 44683e29-97c6-4495-ada7-8499899674e2 19036 3 2023-10-13 08:37:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-10-13 08:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:37:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0045ac678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:37:11 +0000 UTC,LastTransitionTime:2023-10-13 08:37:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-10-13 08:37:11 +0000 UTC,LastTransitionTime:2023-10-13 08:37:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Oct 13 08:37:12.384: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": + &ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-6272 ed8ecf69-264f-432c-b7ee-fe971dba7719 19041 3 2023-10-13 08:37:10 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 44683e29-97c6-4495-ada7-8499899674e2 0xc0039d89d7 0xc0039d89d8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44683e29-97c6-4495-ada7-8499899674e2\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039d8a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:37:12.393: INFO: Pod "test-new-deployment-7f5969cbc7-gxv7c" is available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-gxv7c test-new-deployment-7f5969cbc7- deployment-6272 8e86f16f-0788-4d4e-95e9-f6220c43646e 19029 0 2023-10-13 08:37:10 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ed8ecf69-264f-432c-b7ee-fe971dba7719 0xc0039d8e67 0xc0039d8e68}] [] [{kube-controller-manager Update v1 2023-10-13 08:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed8ecf69-264f-432c-b7ee-fe971dba7719\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:37:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gfcm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.123,StartTime:2023-10-13 08:37:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:37:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://18f86c981a3a2036831d81d2c0930c89157fb1aeaa447ed604393afc8ddfc703,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:37:12.394: INFO: Pod "test-new-deployment-7f5969cbc7-xgdh2" is not available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-xgdh2 test-new-deployment-7f5969cbc7- deployment-6272 3c7ff324-6881-4bb3-9bf6-4e71f7d0c389 19042 0 2023-10-13 08:37:12 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 ed8ecf69-264f-432c-b7ee-fe971dba7719 0xc0039d9057 0xc0039d9058}] [] [{kube-controller-manager Update v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed8ecf69-264f-432c-b7ee-fe971dba7719\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:37:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xb2tz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb2tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:37:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:,StartTime:2023-10-13 08:37:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 08:37:12.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-6272" for this suite. 10/13/23 08:37:12.405 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:37:12.415 +Oct 13 08:37:12.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 08:37:12.416 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:12.432 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:12.435 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +STEP: create the rc 10/13/23 08:37:12.438 +STEP: delete the rc 10/13/23 08:37:17.448 +STEP: wait for all pods to be garbage collected 10/13/23 08:37:17.455 +STEP: Gathering metrics 10/13/23 08:37:22.464 +Oct 13 08:37:22.479: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 08:37:22.483: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.412601ms +Oct 13 08:37:22.483: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 08:37:22.483: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 08:37:22.546: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 08:37:22.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-1827" for this suite. 10/13/23 08:37:22.552 +------------------------------ +• [SLOW TEST] [10.143 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:37:12.415 + Oct 13 08:37:12.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 08:37:12.416 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:12.432 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:12.435 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + STEP: create the rc 10/13/23 08:37:12.438 + STEP: delete the rc 10/13/23 08:37:17.448 + STEP: wait for all pods to be garbage collected 10/13/23 08:37:17.455 + STEP: Gathering metrics 10/13/23 08:37:22.464 + Oct 13 08:37:22.479: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 08:37:22.483: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.412601ms + Oct 13 08:37:22.483: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 08:37:22.483: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 08:37:22.546: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 08:37:22.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-1827" for this suite. 10/13/23 08:37:22.552 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:37:22.559 +Oct 13 08:37:22.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 08:37:22.56 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:22.573 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:22.576 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-372 10/13/23 08:37:22.578 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +STEP: Creating stateful set ss in namespace statefulset-372 10/13/23 08:37:22.583 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-372 10/13/23 08:37:22.591 +Oct 13 08:37:22.596: INFO: Found 0 stateful pods, waiting for 1 +Oct 13 08:37:32.602: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 10/13/23 08:37:32.602 +Oct 13 08:37:32.607: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 08:37:32.778: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 08:37:32.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 08:37:32.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 08:37:32.782: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Oct 13 08:37:42.788: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 08:37:42.788: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 08:37:42.808: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 13 08:37:42.808: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:22 +0000 UTC }] +Oct 13 08:37:42.808: INFO: +Oct 13 08:37:42.808: INFO: StatefulSet ss has not reached scale 3, at 1 +Oct 13 08:37:43.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993022062s +Oct 13 08:37:44.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987431978s +Oct 13 08:37:45.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982262836s +Oct 13 08:37:46.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975047351s +Oct 13 08:37:47.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967727273s +Oct 13 08:37:48.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962357809s +Oct 13 08:37:49.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936261367s +Oct 13 08:37:50.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.930096097s +Oct 13 08:37:51.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 922.468469ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-372 10/13/23 08:37:52.885 +Oct 13 08:37:52.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 08:37:53.056: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 08:37:53.056: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 08:37:53.056: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 08:37:53.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 08:37:53.211: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 13 08:37:53.211: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" +Oct 13 08:37:53.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 08:37:53.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 08:37:53.385: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Oct 13 08:37:53.385: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 08:37:53.385: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 08:37:53.390: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Oct 13 08:38:03.394: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 08:38:03.394: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 08:38:03.394: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod 10/13/23 08:38:03.394 +Oct 13 08:38:03.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 08:38:03.555: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 08:38:03.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 08:38:03.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 08:38:03.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 08:38:03.697: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 08:38:03.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 08:38:03.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 08:38:03.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 08:38:03.862: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 08:38:03.862: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 08:38:03.862: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 08:38:03.862: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 08:38:03.866: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Oct 13 08:38:13.881: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 08:38:13.881: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 08:38:13.881: INFO: Waiting for pod ss-2 to enter 
Running - Ready=false, currently Running - Ready=false +Oct 13 08:38:13.893: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 13 08:38:13.893: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:22 +0000 UTC }] +Oct 13 08:38:13.893: INFO: ss-1 node3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] +Oct 13 08:38:13.893: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] +Oct 13 08:38:13.893: INFO: +Oct 13 08:38:13.893: INFO: StatefulSet ss has not reached scale 0, at 3 +Oct 13 08:38:14.896: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 13 08:38:14.897: INFO: ss-1 node3 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] +Oct 13 08:38:14.897: INFO: +Oct 13 08:38:14.897: INFO: StatefulSet ss has not reached scale 0, at 1 +Oct 13 08:38:15.901: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.99234213s +Oct 13 08:38:16.906: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.988136377s +Oct 13 08:38:17.914: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.983333997s +Oct 13 08:38:18.920: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.975612367s +Oct 13 08:38:19.924: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.970048238s +Oct 13 08:38:20.930: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.965923383s +Oct 13 08:38:21.934: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.959753916s +Oct 13 08:38:22.939: INFO: Verifying statefulset ss doesn't scale past 0 for another 955.300726ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-372 10/13/23 08:38:23.939 +Oct 13 08:38:23.943: INFO: Scaling statefulset ss to 0 +Oct 13 08:38:23.952: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 08:38:23.955: INFO: 
Deleting all statefulset in ns statefulset-372 +Oct 13 08:38:23.958: INFO: Scaling statefulset ss to 0 +Oct 13 08:38:23.967: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 08:38:23.969: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:38:23.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-372" for this suite. 10/13/23 08:38:23.991 +------------------------------ +• [SLOW TEST] [61.439 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:37:22.559 + Oct 13 08:37:22.559: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 08:37:22.56 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:37:22.573 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:37:22.576 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-372 10/13/23 08:37:22.578 + [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 + STEP: Creating stateful set ss in namespace statefulset-372 10/13/23 08:37:22.583 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-372 10/13/23 08:37:22.591 + Oct 13 08:37:22.596: INFO: Found 0 stateful pods, waiting for 1 + Oct 13 08:37:32.602: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 10/13/23 08:37:32.602 + Oct 13 08:37:32.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 08:37:32.778: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 08:37:32.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 08:37:32.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 08:37:32.782: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Oct 13 08:37:42.788: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 08:37:42.788: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 08:37:42.808: INFO: POD NODE PHASE GRACE CONDITIONS + Oct 13 08:37:42.808: INFO: ss-0 node2 Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:22 +0000 UTC }] + Oct 13 08:37:42.808: INFO: + Oct 13 08:37:42.808: INFO: StatefulSet ss has not reached scale 3, at 1 + Oct 13 08:37:43.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993022062s + Oct 13 08:37:44.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987431978s + Oct 13 08:37:45.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982262836s + Oct 13 08:37:46.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975047351s + Oct 13 08:37:47.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967727273s + Oct 13 08:37:48.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962357809s + Oct 13 08:37:49.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936261367s + Oct 13 08:37:50.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.930096097s + Oct 13 08:37:51.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 922.468469ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-372 10/13/23 08:37:52.885 + Oct 13 08:37:52.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 08:37:53.056: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 08:37:53.056: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 08:37:53.056: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 08:37:53.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 08:37:53.211: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Oct 13 08:37:53.211: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 08:37:53.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 08:37:53.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 08:37:53.385: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Oct 13 08:37:53.385: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 08:37:53.385: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 08:37:53.390: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false 
+ Oct 13 08:38:03.394: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 08:38:03.394: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 08:38:03.394: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Scale down will not halt with unhealthy stateful pod 10/13/23 08:38:03.394 + Oct 13 08:38:03.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 08:38:03.555: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 08:38:03.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 08:38:03.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 08:38:03.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 08:38:03.697: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 08:38:03.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 08:38:03.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 08:38:03.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-372 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 08:38:03.862: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 08:38:03.862: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 08:38:03.862: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 08:38:03.862: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 08:38:03.866: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 + Oct 13 08:38:13.881: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 08:38:13.881: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 08:38:13.881: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 08:38:13.893: INFO: POD NODE PHASE GRACE CONDITIONS + Oct 13 08:38:13.893: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:22 +0000 UTC }] + Oct 13 08:38:13.893: INFO: ss-1 node3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 
08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] + Oct 13 08:38:13.893: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] + Oct 13 08:38:13.893: INFO: + Oct 13 08:38:13.893: INFO: StatefulSet ss has not reached scale 0, at 3 + Oct 13 08:38:14.896: INFO: POD NODE PHASE GRACE CONDITIONS + Oct 13 08:38:14.897: INFO: ss-1 node3 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:37:43 +0000 UTC }] + Oct 13 08:38:14.897: INFO: + Oct 13 08:38:14.897: INFO: StatefulSet ss has not reached scale 0, at 1 + Oct 13 08:38:15.901: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.99234213s + Oct 13 08:38:16.906: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.988136377s + Oct 13 08:38:17.914: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.983333997s + Oct 13 08:38:18.920: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.975612367s + Oct 13 08:38:19.924: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.970048238s + Oct 13 08:38:20.930: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.965923383s + Oct 13 08:38:21.934: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.959753916s + Oct 13 08:38:22.939: INFO: Verifying statefulset ss doesn't scale past 0 for another 955.300726ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-372 10/13/23 08:38:23.939 + Oct 13 08:38:23.943: INFO: Scaling statefulset ss to 0 + Oct 13 08:38:23.952: INFO: Waiting for statefulset status.replicas updated to 0 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 08:38:23.955: INFO: Deleting all statefulset in ns statefulset-372 + Oct 13 08:38:23.958: INFO: Scaling statefulset ss to 0 + Oct 13 08:38:23.967: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 08:38:23.969: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:38:23.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-372" for this suite. 
10/13/23 08:38:23.991 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +[BeforeEach] [sig-network] Ingress API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:38:23.998 +Oct 13 08:38:23.998: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename ingress 10/13/23 08:38:23.999 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:24.015 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:24.017 +[BeforeEach] [sig-network] Ingress API + test/e2e/framework/metrics/init/init.go:31 +[It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +STEP: getting /apis 10/13/23 08:38:24.019 +STEP: getting /apis/networking.k8s.io 10/13/23 08:38:24.021 +STEP: getting /apis/networking.k8s.iov1 10/13/23 08:38:24.021 +STEP: creating 10/13/23 08:38:24.022 +STEP: getting 10/13/23 08:38:24.053 +STEP: listing 10/13/23 08:38:24.057 +STEP: watching 10/13/23 08:38:24.061 +Oct 13 08:38:24.061: INFO: starting watch +STEP: cluster-wide listing 10/13/23 08:38:24.062 +STEP: cluster-wide watching 10/13/23 08:38:24.066 +Oct 13 08:38:24.066: INFO: starting watch +STEP: patching 10/13/23 08:38:24.067 +STEP: updating 10/13/23 08:38:24.075 +Oct 13 08:38:24.082: INFO: waiting for watch events with expected annotations +Oct 13 08:38:24.082: INFO: saw patched and updated annotations +STEP: patching /status 10/13/23 08:38:24.082 +STEP: updating /status 10/13/23 08:38:24.088 +STEP: get /status 10/13/23 08:38:24.095 +STEP: deleting 10/13/23 08:38:24.098 +STEP: deleting a collection 10/13/23 08:38:24.106 +[AfterEach] [sig-network] Ingress API + test/e2e/framework/node/init/init.go:32 +Oct 13 08:38:24.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Ingress API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Ingress API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Ingress API + tear down framework | framework.go:193 +STEP: Destroying namespace "ingress-9141" for this suite. 
10/13/23 08:38:24.121 +------------------------------ +• [0.130 seconds] +[sig-network] Ingress API +test/e2e/network/common/framework.go:23 + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Ingress API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:38:23.998 + Oct 13 08:38:23.998: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename ingress 10/13/23 08:38:23.999 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:24.015 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:24.017 + [BeforeEach] [sig-network] Ingress API + test/e2e/framework/metrics/init/init.go:31 + [It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 + STEP: getting /apis 10/13/23 08:38:24.019 + STEP: getting /apis/networking.k8s.io 10/13/23 08:38:24.021 + STEP: getting /apis/networking.k8s.iov1 10/13/23 08:38:24.021 + STEP: creating 10/13/23 08:38:24.022 + STEP: getting 10/13/23 08:38:24.053 + STEP: listing 10/13/23 08:38:24.057 + STEP: watching 10/13/23 08:38:24.061 + Oct 13 08:38:24.061: INFO: starting watch + STEP: cluster-wide listing 10/13/23 08:38:24.062 + STEP: cluster-wide watching 10/13/23 08:38:24.066 + Oct 13 08:38:24.066: INFO: starting watch + STEP: patching 10/13/23 08:38:24.067 + STEP: updating 10/13/23 08:38:24.075 + Oct 13 08:38:24.082: INFO: waiting for watch events with expected annotations + Oct 13 08:38:24.082: INFO: saw patched and updated annotations + STEP: patching /status 10/13/23 08:38:24.082 + STEP: updating /status 10/13/23 08:38:24.088 + STEP: get /status 10/13/23 08:38:24.095 + STEP: deleting 10/13/23 08:38:24.098 + STEP: deleting a collection 10/13/23 08:38:24.106 + [AfterEach] [sig-network] Ingress API + test/e2e/framework/node/init/init.go:32 + Oct 13 08:38:24.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Ingress API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Ingress API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Ingress API + tear down framework | framework.go:193 + STEP: Destroying namespace "ingress-9141" for this suite. 
10/13/23 08:38:24.121 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +[BeforeEach] version v1 + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:38:24.129 +Oct 13 08:38:24.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename proxy 10/13/23 08:38:24.13 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:24.142 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:24.144 +[BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 +[It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +STEP: starting an echo server on multiple ports 10/13/23 08:38:24.157 +STEP: creating replication controller proxy-service-8w582 in namespace proxy-1779 10/13/23 08:38:24.157 +I1013 08:38:24.165846 23 runners.go:193] Created replication controller with name: proxy-service-8w582, namespace: proxy-1779, replica count: 1 +I1013 08:38:25.217049 23 runners.go:193] proxy-service-8w582 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1013 08:38:26.217968 23 runners.go:193] proxy-service-8w582 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:38:26.222: INFO: setup took 2.075422491s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts 10/13/23 08:38:26.222 +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 10.072044ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 10.261933ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 10.16627ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 10.292309ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... 
(200; 10.248539ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 10.199854ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 10.208831ms) +Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 10.269739ms) +Oct 13 08:38:26.236: INFO: (0) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 13.810985ms) +Oct 13 08:38:26.236: INFO: (0) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 14.073824ms) +Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 15.137709ms) +Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 15.216512ms) +Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 15.38389ms) +Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 15.554557ms) +Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 15.551855ms) +Oct 13 08:38:26.240: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 6.876294ms) +Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 7.057661ms) +Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 7.296951ms) +Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.302875ms) +Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 7.335059ms) +Oct 13 08:38:26.248: INFO: (1) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.682735ms) +Oct 13 08:38:26.248: INFO: (1) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.127082ms) +Oct 13 08:38:26.249: INFO: (1) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 9.325412ms) +Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 11.335231ms) +Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 11.336977ms) +Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 11.37498ms) +Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 11.503205ms) +Oct 13 08:38:26.256: INFO: (2) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 4.312093ms) +Oct 13 08:38:26.258: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.517623ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.911026ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 6.731654ms) +Oct 13 08:38:26.259: INFO: 
(2) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.510084ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 7.083503ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.448686ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.72661ms) +Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 6.847003ms) +Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.60374ms) +Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.133894ms) +Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 7.57663ms) +Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 7.555096ms) +Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.793911ms) +Oct 13 08:38:26.262: INFO: (2) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 9.582112ms) +Oct 13 08:38:26.268: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.866988ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.279199ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 8.276047ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.317718ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.402704ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.31651ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.536353ms) +Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 5.870376ms) +Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.938415ms) +Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... 
(200; 5.875442ms) +Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.871515ms) +Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.798648ms) +Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.624774ms) +Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.762959ms) +Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.715473ms) +Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.328482ms) +Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.547972ms) +Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.327111ms) +Oct 13 08:38:26.287: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 4.898535ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.995643ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.070099ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 6.057413ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.077225ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.05107ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.056633ms) +Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 7.727989ms) +Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.776865ms) +Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 7.693863ms) +Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.81159ms) +Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... 
(200; 7.744748ms) +Oct 13 08:38:26.300: INFO: (6) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.877899ms) +Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.982868ms) +Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 9.937292ms) +Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 10.089639ms) +Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 10.10988ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.726872ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 8.777022ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.762363ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.742382ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.910202ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 8.732012ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.891199ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.738767ms) +Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.820995ms) +Oct 13 08:38:26.312: INFO: (7) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 10.488821ms) +Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 11.14635ms) +Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 11.134373ms) +Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 11.240588ms) +Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 11.053128ms) +Oct 13 08:38:26.317: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.956972ms) +Oct 13 08:38:26.318: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 4.685746ms) +Oct 13 08:38:26.320: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.299621ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.307021ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.347214ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 7.346712ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 7.323934ms) +Oct 13 08:38:26.320: INFO: (8) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.544993ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 7.415196ms) +Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 8.437345ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.234037ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.316802ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.256825ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.273467ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.498945ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 8.621171ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.695426ms) +Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 7.614114ms) +Oct 13 08:38:26.342: INFO: (10) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 7.68733ms) +Oct 13 08:38:26.342: INFO: (10) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.718594ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 7.88244ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.992232ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.009434ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 8.059245ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.463065ms) +Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 6.053253ms) +Oct 13 08:38:26.351: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.781212ms) +Oct 13 08:38:26.351: INFO: (11) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 6.580381ms) +Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.705251ms) +Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.38447ms) +Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 6.83028ms) +Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.208413ms) +Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 7.514941ms) +Oct 13 08:38:26.363: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 7.802837ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.40412ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.636602ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 8.56902ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.629306ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.644795ms) +Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.642923ms) +Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.965747ms) +Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 9.97275ms) +Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 9.954561ms) +Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 10.036704ms) +Oct 13 08:38:26.369: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.507572ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 5.111511ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.316342ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 5.226911ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.301266ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.241978ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.348765ms) +Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 6.231113ms) +Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.2988ms) +Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.280932ms) +Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 5.34282ms) +Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.228692ms) +Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... 
(200; 5.369281ms) +Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 5.70348ms) +Oct 13 08:38:26.382: INFO: (14) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.384869ms) +Oct 13 08:38:26.382: INFO: (14) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.46141ms) +Oct 13 08:38:26.383: INFO: (14) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.966801ms) +Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 9.107263ms) +Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.122384ms) +Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 9.049991ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.128731ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 6.152714ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.512229ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.23817ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.383637ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.316794ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.69932ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.53578ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 6.326343ms) +Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.512086ms) +Oct 13 08:38:26.391: INFO: (15) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.269121ms) +Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.407779ms) +Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 8.388472ms) +Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.553539ms) +Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.478518ms) +Oct 13 08:38:26.396: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.952028ms) +Oct 13 08:38:26.397: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 4.271937ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 6.454172ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.604951ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.605323ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.75035ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.805859ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.811038ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.850779ms) +Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 6.903495ms) +Oct 13 08:38:26.400: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 4.724807ms) +Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 4.916824ms) +Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 4.893004ms) +Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.108877ms) +Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.197758ms) +Oct 13 08:38:26.407: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.515567ms) +Oct 13 08:38:26.407: INFO: (17) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 6.265662ms) +Oct 13 08:38:26.408: INFO: (17) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.032692ms) +Oct 13 08:38:26.408: INFO: (17) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.190591ms) +Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.234897ms) +Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.815527ms) +Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.714759ms) +Oct 13 08:38:26.412: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 3.266701ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.261433ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.335856ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.476828ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... 
(200; 5.560387ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.416386ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.33652ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 5.827282ms) +Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.474914ms) +Oct 13 08:38:26.416: INFO: (18) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 6.725329ms) +Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.637201ms) +Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.811642ms) +Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.19287ms) +Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.034769ms) +Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.984337ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 4.837476ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 4.884605ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 5.095943ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.257899ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.183779ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.197292ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.253292ms) +Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.198696ms) +Oct 13 08:38:26.424: INFO: (19) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 6.573763ms) +Oct 13 08:38:26.425: INFO: (19) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.660055ms) +Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 7.713345ms) +Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.815858ms) +Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.372125ms) +Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.318247ms) +STEP: deleting ReplicationController proxy-service-8w582 in namespace proxy-1779, will wait for the garbage collector to delete the pods 10/13/23 08:38:26.426 +Oct 13 08:38:26.484: INFO: Deleting ReplicationController proxy-service-8w582 took: 5.325613ms +Oct 13 08:38:26.585: INFO: 
Terminating ReplicationController proxy-service-8w582 pods took: 100.841563ms +[AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 +Oct 13 08:38:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 +[DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 +STEP: Destroying namespace "proxy-1779" for this suite. 10/13/23 08:38:28.791 +------------------------------ +• [4.670 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:38:24.129 + Oct 13 08:38:24.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename proxy 10/13/23 08:38:24.13 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:24.142 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:24.144 + [BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 + [It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + STEP: starting an echo server on multiple ports 10/13/23 08:38:24.157 + STEP: creating replication controller proxy-service-8w582 in namespace proxy-1779 10/13/23 08:38:24.157 + I1013 08:38:24.165846 23 runners.go:193] Created replication controller with name: proxy-service-8w582, namespace: proxy-1779, replica count: 1 + I1013 08:38:25.217049 23 runners.go:193] proxy-service-8w582 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I1013 08:38:26.217968 23 runners.go:193] proxy-service-8w582 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:38:26.222: INFO: setup took 2.075422491s, starting test cases + STEP: running 16 cases, 20 attempts per case, 320 total attempts 10/13/23 08:38:26.222 + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 10.072044ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 10.261933ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 10.16627ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 10.292309ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... 
(200; 10.248539ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 10.199854ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 10.208831ms) + Oct 13 08:38:26.232: INFO: (0) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 10.269739ms) + Oct 13 08:38:26.236: INFO: (0) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 13.810985ms) + Oct 13 08:38:26.236: INFO: (0) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 14.073824ms) + Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 15.137709ms) + Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 15.216512ms) + Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 15.38389ms) + Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 15.554557ms) + Oct 13 08:38:26.237: INFO: (0) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 15.551855ms) + Oct 13 08:38:26.240: INFO: (0) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 6.876294ms) + Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 7.057661ms) + Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... 
(200; 7.296951ms) + Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.302875ms) + Oct 13 08:38:26.247: INFO: (1) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 7.335059ms) + Oct 13 08:38:26.248: INFO: (1) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.682735ms) + Oct 13 08:38:26.248: INFO: (1) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.127082ms) + Oct 13 08:38:26.249: INFO: (1) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 9.325412ms) + Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 11.335231ms) + Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 11.336977ms) + Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 11.37498ms) + Oct 13 08:38:26.251: INFO: (1) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 11.503205ms) + Oct 13 08:38:26.256: INFO: (2) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 4.312093ms) + Oct 13 08:38:26.258: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.517623ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.911026ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 6.731654ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.510084ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 7.083503ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.448686ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.72661ms) + Oct 13 08:38:26.259: INFO: (2) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 6.847003ms) + Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.60374ms) + Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.133894ms) + Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... 
(200; 7.57663ms) + Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 7.555096ms) + Oct 13 08:38:26.260: INFO: (2) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.793911ms) + Oct 13 08:38:26.262: INFO: (2) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 9.582112ms) + Oct 13 08:38:26.268: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.866988ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.279199ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 8.276047ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.317718ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.402704ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.31651ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.536353ms) + Oct 13 08:38:26.270: INFO: (3) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 5.870376ms) + Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.938415ms) + Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 5.875442ms) + Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.871515ms) + Oct 13 08:38:26.279: INFO: (4) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.798648ms) + Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.624774ms) + Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.762959ms) + Oct 13 08:38:26.281: INFO: (4) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.715473ms) + Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.328482ms) + Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.547972ms) + Oct 13 08:38:26.282: INFO: (4) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.327111ms) + Oct 13 08:38:26.287: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 4.898535ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.995643ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.070099ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 6.057413ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.077225ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.05107ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.056633ms) + Oct 13 08:38:26.288: INFO: (5) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 7.727989ms) + Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.776865ms) + Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 7.693863ms) + Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.81159ms) + Oct 13 08:38:26.299: INFO: (6) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 7.744748ms) + Oct 13 08:38:26.300: INFO: (6) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.877899ms) + Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.982868ms) + Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 9.937292ms) + Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 10.089639ms) + Oct 13 08:38:26.301: INFO: (6) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 10.10988ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.726872ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 8.777022ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.762363ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 8.742382ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.910202ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 8.732012ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.891199ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.738767ms) + Oct 13 08:38:26.310: INFO: (7) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.820995ms) + Oct 13 08:38:26.312: INFO: (7) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 10.488821ms) + Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 11.14635ms) + Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 11.134373ms) + Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 11.240588ms) + Oct 13 08:38:26.313: INFO: (7) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 11.053128ms) + Oct 13 08:38:26.317: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.956972ms) + Oct 13 08:38:26.318: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 4.685746ms) + Oct 13 08:38:26.320: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 7.299621ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.307021ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.347214ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 7.346712ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 7.323934ms) + Oct 13 08:38:26.320: INFO: (8) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.544993ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 7.415196ms) + Oct 13 08:38:26.321: INFO: (8) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 8.437345ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.234037ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.316802ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 8.256825ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.273467ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 8.498945ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 8.621171ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.695426ms) + Oct 13 08:38:26.332: INFO: (9) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 7.614114ms) + Oct 13 08:38:26.342: INFO: (10) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 7.68733ms) + Oct 13 08:38:26.342: INFO: (10) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.718594ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 7.88244ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 7.992232ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.009434ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 8.059245ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 8.463065ms) + Oct 13 08:38:26.343: INFO: (10) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 6.053253ms) + Oct 13 08:38:26.351: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.781212ms) + Oct 13 08:38:26.351: INFO: (11) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 6.580381ms) + Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.705251ms) + Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.38447ms) + Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 6.83028ms) + Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.208413ms) + Oct 13 08:38:26.352: INFO: (11) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 7.514941ms) + Oct 13 08:38:26.363: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 7.802837ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.40412ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 8.636602ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... 
(200; 8.56902ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 8.629306ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 8.644795ms) + Oct 13 08:38:26.364: INFO: (12) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.642923ms) + Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.965747ms) + Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 9.97275ms) + Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 9.954561ms) + Oct 13 08:38:26.365: INFO: (12) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 10.036704ms) + Oct 13 08:38:26.369: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.507572ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 5.111511ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.316342ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 5.226911ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.301266ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.241978ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.348765ms) + Oct 13 08:38:26.371: INFO: (13) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 6.231113ms) + Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.2988ms) + Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.280932ms) + Oct 13 08:38:26.372: INFO: (13) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test (200; 5.34282ms) + Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.228692ms) + Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 5.369281ms) + Oct 13 08:38:26.380: INFO: (14) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 5.70348ms) + Oct 13 08:38:26.382: INFO: (14) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.384869ms) + Oct 13 08:38:26.382: INFO: (14) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.46141ms) + Oct 13 08:38:26.383: INFO: (14) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.966801ms) + Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 9.107263ms) + Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 9.122384ms) + Oct 13 08:38:26.384: INFO: (14) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 9.049991ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.128731ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 6.152714ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.512229ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.23817ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.383637ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.316794ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 6.69932ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.53578ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 6.326343ms) + Oct 13 08:38:26.390: INFO: (15) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.512086ms) + Oct 13 08:38:26.391: INFO: (15) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.269121ms) + Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.407779ms) + Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 8.388472ms) + Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 8.553539ms) + Oct 13 08:38:26.392: INFO: (15) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.478518ms) + Oct 13 08:38:26.396: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 3.952028ms) + Oct 13 08:38:26.397: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 4.271937ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... 
(200; 6.454172ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 6.604951ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.605323ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 6.75035ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 6.805859ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 6.811038ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:1080/proxy/: test<... (200; 6.850779ms) + Oct 13 08:38:26.399: INFO: (16) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 6.903495ms) + Oct 13 08:38:26.400: INFO: (16) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: ... (200; 4.724807ms) + Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 4.916824ms) + Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 4.893004ms) + Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.108877ms) + Oct 13 08:38:26.406: INFO: (17) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.197758ms) + Oct 13 08:38:26.407: INFO: (17) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.515567ms) + Oct 13 08:38:26.407: INFO: (17) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 6.265662ms) + Oct 13 08:38:26.408: INFO: (17) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.032692ms) + Oct 13 08:38:26.408: INFO: (17) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.190591ms) + Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.234897ms) + Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 7.815527ms) + Oct 13 08:38:26.409: INFO: (17) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.714759ms) + Oct 13 08:38:26.412: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:162/proxy/: bar (200; 3.266701ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 5.261433ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.335856ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.476828ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... 
(200; 5.560387ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.416386ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.33652ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 5.827282ms) + Oct 13 08:38:26.415: INFO: (18) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.474914ms) + Oct 13 08:38:26.416: INFO: (18) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 6.725329ms) + Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 7.637201ms) + Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.811642ms) + Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 8.19287ms) + Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 8.034769ms) + Oct 13 08:38:26.417: INFO: (18) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 7.984337ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:1080/proxy/: ... (200; 4.837476ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr/proxy/: test (200; 4.884605ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:443/proxy/: test<... (200; 5.095943ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.257899ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:462/proxy/: tls qux (200; 5.183779ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/https:proxy-service-8w582-w5xcr:460/proxy/: tls baz (200; 5.197292ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/http:proxy-service-8w582-w5xcr:160/proxy/: foo (200; 5.253292ms) + Oct 13 08:38:26.423: INFO: (19) /api/v1/namespaces/proxy-1779/pods/proxy-service-8w582-w5xcr:162/proxy/: bar (200; 5.198696ms) + Oct 13 08:38:26.424: INFO: (19) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname2/proxy/: bar (200; 6.573763ms) + Oct 13 08:38:26.425: INFO: (19) /api/v1/namespaces/proxy-1779/services/proxy-service-8w582:portname1/proxy/: foo (200; 7.660055ms) + Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname1/proxy/: tls baz (200; 7.713345ms) + Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/https:proxy-service-8w582:tlsportname2/proxy/: tls qux (200; 7.815858ms) + Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname1/proxy/: foo (200; 8.372125ms) + Oct 13 08:38:26.426: INFO: (19) /api/v1/namespaces/proxy-1779/services/http:proxy-service-8w582:portname2/proxy/: bar (200; 8.318247ms) + STEP: deleting ReplicationController proxy-service-8w582 in namespace proxy-1779, will wait for the garbage collector to delete the pods 10/13/23 08:38:26.426 + Oct 13 08:38:26.484: INFO: Deleting ReplicationController proxy-service-8w582 took: 5.325613ms + 
Oct 13 08:38:26.585: INFO: Terminating ReplicationController proxy-service-8w582 pods took: 100.841563ms + [AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 + Oct 13 08:38:28.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 + [DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 + STEP: Destroying namespace "proxy-1779" for this suite. 10/13/23 08:38:28.791 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:38:28.8 +Oct 13 08:38:28.800: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:38:28.801 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:28.817 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:28.82 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Oct 13 08:38:28.836: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 08:39:28.865: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:39:28.868 +Oct 13 08:39:28.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption-path 10/13/23 08:39:28.869 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:39:28.883 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:39:28.885 +[BeforeEach] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:576 +STEP: Finding an available node 10/13/23 08:39:28.888 +STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 08:39:28.888 +Oct 13 08:39:28.894: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-5215" to be "running" +Oct 13 08:39:28.897: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756959ms +Oct 13 08:39:30.903: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008599849s +Oct 13 08:39:30.903: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 
10/13/23 08:39:30.906 +Oct 13 08:39:30.919: INFO: found a healthy node: node2 +[It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 +Oct 13 08:39:36.996: INFO: pods created so far: [1 1 1] +Oct 13 08:39:36.996: INFO: length of pods created so far: 3 +Oct 13 08:39:39.009: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + test/e2e/framework/node/init/init.go:32 +Oct 13 08:39:46.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:549 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:39:46.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] PreemptionExecutionPath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] PreemptionExecutionPath + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-path-5215" for this suite. 10/13/23 08:39:46.092 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-2232" for this suite. 10/13/23 08:39:46.098 +------------------------------ +• [SLOW TEST] [77.304 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + test/e2e/scheduling/preemption.go:537 + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:38:28.8 + Oct 13 08:38:28.800: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:38:28.801 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:38:28.817 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:38:28.82 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Oct 13 08:38:28.836: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 08:39:28.865: INFO: Waiting for terminating namespaces to be deleted... 
+ [BeforeEach] PreemptionExecutionPath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:39:28.868 + Oct 13 08:39:28.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption-path 10/13/23 08:39:28.869 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:39:28.883 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:39:28.885 + [BeforeEach] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:576 + STEP: Finding an available node 10/13/23 08:39:28.888 + STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 08:39:28.888 + Oct 13 08:39:28.894: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-5215" to be "running" + Oct 13 08:39:28.897: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756959ms + Oct 13 08:39:30.903: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008599849s + Oct 13 08:39:30.903: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 10/13/23 08:39:30.906 + Oct 13 08:39:30.919: INFO: found a healthy node: node2 + [It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:624 + Oct 13 08:39:36.996: INFO: pods created so far: [1 1 1] + Oct 13 08:39:36.996: INFO: length of pods created so far: 3 + Oct 13 08:39:39.009: INFO: pods created so far: [2 2 1] + [AfterEach] PreemptionExecutionPath + test/e2e/framework/node/init/init.go:32 + Oct 13 08:39:46.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:549 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:39:46.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] PreemptionExecutionPath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] PreemptionExecutionPath + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-path-5215" for this suite. 10/13/23 08:39:46.092 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-2232" for this suite. 
10/13/23 08:39:46.098 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:39:46.106 +Oct 13 08:39:46.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 08:39:46.108 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:39:46.123 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:39:46.126 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-3273 10/13/23 08:39:46.128 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 +Oct 13 08:39:46.142: INFO: Found 0 stateful pods, waiting for 1 +Oct 13 08:39:56.149: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet 10/13/23 08:39:56.158 +W1013 08:39:56.168390 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Oct 13 08:39:56.175: INFO: Found 1 stateful pods, waiting for 2 +Oct 13 08:40:06.185: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 08:40:06.185: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets 10/13/23 08:40:06.194 +STEP: Delete all of the StatefulSets 10/13/23 08:40:06.198 +STEP: Verify that StatefulSets have been deleted 10/13/23 08:40:06.207 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 08:40:06.212: INFO: Deleting all statefulset in ns statefulset-3273 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-3273" for this suite. 
10/13/23 08:40:06.226 +------------------------------ +• [SLOW TEST] [20.126 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:39:46.106 + Oct 13 08:39:46.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 08:39:46.108 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:39:46.123 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:39:46.126 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-3273 10/13/23 08:39:46.128 + [It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:908 + Oct 13 08:39:46.142: INFO: Found 0 stateful pods, waiting for 1 + Oct 13 08:39:56.149: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: patching the StatefulSet 10/13/23 08:39:56.158 + W1013 08:39:56.168390 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Oct 13 08:39:56.175: INFO: Found 1 stateful pods, waiting for 2 + Oct 13 08:40:06.185: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 08:40:06.185: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true + STEP: Listing all StatefulSets 10/13/23 08:40:06.194 + STEP: Delete all of the StatefulSets 10/13/23 08:40:06.198 + STEP: Verify that StatefulSets have been deleted 10/13/23 08:40:06.207 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 08:40:06.212: INFO: Deleting all statefulset in ns statefulset-3273 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-3273" for this suite. 
10/13/23 08:40:06.226 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:06.233 +Oct 13 08:40:06.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 08:40:06.234 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:06.253 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:06.256 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +Oct 13 08:40:06.258: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Oct 13 08:40:06.269: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 13 08:40:11.275: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 10/13/23 08:40:11.275 +Oct 13 08:40:11.275: INFO: Creating deployment "test-rolling-update-deployment" +Oct 13 08:40:11.280: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Oct 13 08:40:11.287: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Oct 13 08:40:13.294: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Oct 13 08:40:13.297: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 08:40:13.307: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5040 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 20095 1 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00438ef38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:40:11 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-10-13 08:40:12 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 13 08:40:13.309: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-5040 405a8df8-8437-477e-b131-52c9737e29df 20085 1 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 0xc004269a07 0xc004269a08}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36c20b72-9c1c-4c35-9beb-6ffbe54917fb\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004269ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:40:13.309: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Oct 13 08:40:13.309: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5040 70b4678c-2f9a-41bc-82d9-35172054f40e 20094 2 2023-10-13 08:40:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 0xc0042698d7 0xc0042698d8}] [] [{e2e.test Update apps/v1 2023-10-13 08:40:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36c20b72-9c1c-4c35-9beb-6ffbe54917fb\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004269998 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:40:13.313: INFO: Pod "test-rolling-update-deployment-7549d9f46d-7ds2f" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-7ds2f test-rolling-update-deployment-7549d9f46d- deployment-5040 c88cfac0-73d5-4533-8969-ca7dc3b01787 20084 0 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 405a8df8-8437-477e-b131-52c9737e29df 0xc006023797 0xc006023798}] [] [{kube-controller-manager Update v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"405a8df8-8437-477e-b131-52c9737e29df\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jgr5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgr5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.134,StartTime:2023-10-13 08:40:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:40:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71,ContainerID:containerd://12f86c55fa44e0aa35e940d087cca397c4baef41d6fc6243ee3f7c8387cf098e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-5040" for this suite. 
10/13/23 08:40:13.316 +------------------------------ +• [SLOW TEST] [7.089 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:06.233 + Oct 13 08:40:06.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 08:40:06.234 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:06.253 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:06.256 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + Oct 13 08:40:06.258: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) + Oct 13 08:40:06.269: INFO: Pod name sample-pod: Found 0 pods out of 1 + Oct 13 08:40:11.275: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 10/13/23 08:40:11.275 + Oct 13 08:40:11.275: INFO: Creating deployment "test-rolling-update-deployment" + Oct 13 08:40:11.280: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has + Oct 13 08:40:11.287: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created + Oct 13 08:40:13.294: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected + Oct 13 08:40:13.297: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 08:40:13.307: INFO: Deployment "test-rolling-update-deployment": + &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5040 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 20095 1 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00438ef38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:40:11 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-10-13 08:40:12 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Oct 13 08:40:13.309: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": + &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-5040 405a8df8-8437-477e-b131-52c9737e29df 20085 1 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 0xc004269a07 0xc004269a08}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36c20b72-9c1c-4c35-9beb-6ffbe54917fb\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004269ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:40:13.309: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": + Oct 13 08:40:13.309: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5040 70b4678c-2f9a-41bc-82d9-35172054f40e 20094 2 2023-10-13 08:40:06 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 36c20b72-9c1c-4c35-9beb-6ffbe54917fb 0xc0042698d7 0xc0042698d8}] [] [{e2e.test Update apps/v1 2023-10-13 08:40:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36c20b72-9c1c-4c35-9beb-6ffbe54917fb\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004269998 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:40:13.313: INFO: Pod "test-rolling-update-deployment-7549d9f46d-7ds2f" is available: + &Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-7ds2f test-rolling-update-deployment-7549d9f46d- deployment-5040 c88cfac0-73d5-4533-8969-ca7dc3b01787 20084 0 2023-10-13 08:40:11 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 405a8df8-8437-477e-b131-52c9737e29df 0xc006023797 0xc006023798}] [] [{kube-controller-manager Update v1 2023-10-13 08:40:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"405a8df8-8437-477e-b131-52c9737e29df\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:40:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jgr5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgr5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:40:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.134,StartTime:2023-10-13 08:40:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:40:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71,ContainerID:containerd://12f86c55fa44e0aa35e940d087cca397c4baef41d6fc6243ee3f7c8387cf098e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-5040" for this suite. 
10/13/23 08:40:13.316 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +[BeforeEach] [sig-network] EndpointSliceMirroring + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:13.323 +Oct 13 08:40:13.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename endpointslicemirroring 10/13/23 08:40:13.324 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:13.335 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:13.338 +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +STEP: mirroring a new custom Endpoint 10/13/23 08:40:13.356 +Oct 13 08:40:13.367: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint 10/13/23 08:40:15.373 +Oct 13 08:40:15.380: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint 10/13/23 08:40:17.387 +Oct 13 08:40:17.401: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslicemirroring-389" for this suite. 
10/13/23 08:40:19.412 +------------------------------ +• [SLOW TEST] [6.097 seconds] +[sig-network] EndpointSliceMirroring +test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSliceMirroring + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:13.323 + Oct 13 08:40:13.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename endpointslicemirroring 10/13/23 08:40:13.324 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:13.335 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:13.338 + [BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 + [It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 + STEP: mirroring a new custom Endpoint 10/13/23 08:40:13.356 + Oct 13 08:40:13.367: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 + STEP: mirroring an update to a custom Endpoint 10/13/23 08:40:15.373 + Oct 13 08:40:15.380: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 + STEP: mirroring deletion of a custom Endpoint 10/13/23 08:40:17.387 + Oct 13 08:40:17.401: INFO: Waiting for 0 EndpointSlices to exist, got 1 + [AfterEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslicemirroring-389" for this suite. 
10/13/23 08:40:19.412 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:19.42 +Oct 13 08:40:19.420: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:40:19.421 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:19.434 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:19.436 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +Oct 13 08:40:19.441: INFO: Got root ca configmap in namespace "svcaccounts-5345" +Oct 13 08:40:19.446: INFO: Deleted root ca configmap in namespace "svcaccounts-5345" +STEP: waiting for a new root ca configmap created 10/13/23 08:40:19.946 +Oct 13 08:40:19.950: INFO: Recreated root ca configmap in namespace "svcaccounts-5345" +Oct 13 08:40:19.955: INFO: Updated root ca configmap in namespace "svcaccounts-5345" +STEP: waiting for the root ca configmap reconciled 10/13/23 08:40:20.456 +Oct 13 08:40:20.460: INFO: Reconciled root ca configmap in namespace "svcaccounts-5345" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:20.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-5345" for this suite. 
10/13/23 08:40:20.464 +------------------------------ +• [1.052 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:19.42 + Oct 13 08:40:19.420: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 08:40:19.421 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:19.434 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:19.436 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 + Oct 13 08:40:19.441: INFO: Got root ca configmap in namespace "svcaccounts-5345" + Oct 13 08:40:19.446: INFO: Deleted root ca configmap in namespace "svcaccounts-5345" + STEP: waiting for a new root ca configmap created 10/13/23 08:40:19.946 + Oct 13 08:40:19.950: INFO: Recreated root ca configmap in namespace "svcaccounts-5345" + Oct 13 08:40:19.955: INFO: Updated root ca configmap in namespace "svcaccounts-5345" + STEP: waiting for the root ca configmap reconciled 10/13/23 08:40:20.456 + Oct 13 08:40:20.460: INFO: Reconciled root ca configmap in namespace "svcaccounts-5345" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:20.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-5345" for this suite. 10/13/23 08:40:20.464 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:20.472 +Oct 13 08:40:20.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 08:40:20.473 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:20.485 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:20.487 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +STEP: Creating pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a in namespace container-probe-6373 10/13/23 08:40:20.489 +Oct 13 08:40:20.497: INFO: Waiting up to 5m0s for pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a" in namespace "container-probe-6373" to be "not pending" +Oct 13 08:40:20.500: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.218726ms +Oct 13 08:40:22.505: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008009943s +Oct 13 08:40:22.505: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a" satisfied condition "not pending" +Oct 13 08:40:22.505: INFO: Started pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a in namespace container-probe-6373 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:40:22.505 +Oct 13 08:40:22.509: INFO: Initial restart count of pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a is 0 +Oct 13 08:40:42.570: INFO: Restart count of pod container-probe-6373/liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a is now 1 (20.06102905s elapsed) +STEP: deleting the pod 10/13/23 08:40:42.57 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-6373" for this suite. 10/13/23 08:40:42.586 +------------------------------ +• [SLOW TEST] [22.121 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:20.472 + Oct 13 08:40:20.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 08:40:20.473 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:20.485 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:20.487 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 + STEP: Creating pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a in namespace container-probe-6373 10/13/23 08:40:20.489 + Oct 13 08:40:20.497: INFO: Waiting up to 5m0s for pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a" in namespace "container-probe-6373" to be "not pending" + Oct 13 08:40:20.500: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.218726ms + Oct 13 08:40:22.505: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008009943s + Oct 13 08:40:22.505: INFO: Pod "liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a" satisfied condition "not pending" + Oct 13 08:40:22.505: INFO: Started pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a in namespace container-probe-6373 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:40:22.505 + Oct 13 08:40:22.509: INFO: Initial restart count of pod liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a is 0 + Oct 13 08:40:42.570: INFO: Restart count of pod container-probe-6373/liveness-6cd81f93-ab3d-44b1-85f8-0fe6adadd72a is now 1 (20.06102905s elapsed) + STEP: deleting the pod 10/13/23 08:40:42.57 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-6373" for this suite. 10/13/23 08:40:42.586 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:42.593 +Oct 13 08:40:42.594: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:40:42.595 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:42.606 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:42.609 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +STEP: Creating projection with secret that has name projected-secret-test-map-d678fc15-1b07-492a-b2fa-6e4ef42c3523 10/13/23 08:40:42.611 +STEP: Creating a pod to test consume secrets 10/13/23 08:40:42.616 +Oct 13 08:40:42.623: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c" in namespace "projected-9005" to be "Succeeded or Failed" +Oct 13 08:40:42.627: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17566ms +Oct 13 08:40:44.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491188s +Oct 13 08:40:46.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00878491s +STEP: Saw pod success 10/13/23 08:40:46.632 +Oct 13 08:40:46.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c" satisfied condition "Succeeded or Failed" +Oct 13 08:40:46.637: INFO: Trying to get logs from node node2 pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c container projected-secret-volume-test: +STEP: delete the pod 10/13/23 08:40:46.645 +Oct 13 08:40:46.658: INFO: Waiting for pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c to disappear +Oct 13 08:40:46.661: INFO: Pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:46.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9005" for this suite. 10/13/23 08:40:46.665 +------------------------------ +• [4.078 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:42.593 + Oct 13 08:40:42.594: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:40:42.595 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:42.606 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:42.609 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 + STEP: Creating projection with secret that has name projected-secret-test-map-d678fc15-1b07-492a-b2fa-6e4ef42c3523 10/13/23 08:40:42.611 + STEP: Creating a pod to test consume secrets 10/13/23 08:40:42.616 + Oct 13 08:40:42.623: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c" in namespace "projected-9005" to be "Succeeded or Failed" + Oct 13 08:40:42.627: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17566ms + Oct 13 08:40:44.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491188s + Oct 13 08:40:46.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00878491s + STEP: Saw pod success 10/13/23 08:40:46.632 + Oct 13 08:40:46.632: INFO: Pod "pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c" satisfied condition "Succeeded or Failed" + Oct 13 08:40:46.637: INFO: Trying to get logs from node node2 pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c container projected-secret-volume-test: + STEP: delete the pod 10/13/23 08:40:46.645 + Oct 13 08:40:46.658: INFO: Waiting for pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c to disappear + Oct 13 08:40:46.661: INFO: Pod pod-projected-secrets-9e4e69e3-1563-476c-a6ea-5dc7e38b817c no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:46.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9005" for this suite. 10/13/23 08:40:46.665 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:46.673 +Oct 13 08:40:46.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:40:46.675 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:46.688 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:46.691 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 +[It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 08:40:46.693 +Oct 13 08:40:46.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1408 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' +Oct 13 08:40:46.789: INFO: stderr: "" +Oct 13 08:40:46.789: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created 10/13/23 08:40:46.789 +[AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 +Oct 13 08:40:46.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1408 delete pods e2e-test-httpd-pod' +Oct 13 08:40:49.156: INFO: stderr: "" +Oct 13 08:40:49.156: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | 
framework.go:193 +STEP: Destroying namespace "kubectl-1408" for this suite. 10/13/23 08:40:49.161 +------------------------------ +• [2.493 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl run pod + test/e2e/kubectl/kubectl.go:1697 + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:46.673 + Oct 13 08:40:46.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:40:46.675 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:46.688 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:46.691 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 + [It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 08:40:46.693 + Oct 13 08:40:46.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1408 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' + Oct 13 08:40:46.789: INFO: stderr: "" + Oct 13 08:40:46.789: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod was created 10/13/23 08:40:46.789 + [AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 + Oct 13 08:40:46.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1408 delete pods e2e-test-httpd-pod' + Oct 13 08:40:49.156: INFO: stderr: "" + Oct 13 08:40:49.156: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-1408" for this suite. 
10/13/23 08:40:49.161 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:49.167 +Oct 13 08:40:49.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:40:49.168 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:49.181 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:49.184 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:49.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-177" for this suite. 10/13/23 08:40:49.214 +------------------------------ +• [0.053 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:49.167 + Oct 13 08:40:49.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubelet-test 10/13/23 08:40:49.168 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:49.181 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:49.184 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:49.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-177" for this suite. 
10/13/23 08:40:49.214 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:49.22 +Oct 13 08:40:49.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:40:49.222 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:49.236 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:49.239 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +STEP: Creating a pod to test emptydir 0666 on node default medium 10/13/23 08:40:49.242 +Oct 13 08:40:49.253: INFO: Waiting up to 5m0s for pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43" in namespace "emptydir-9778" to be "Succeeded or Failed" +Oct 13 08:40:49.257: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580185ms +Oct 13 08:40:51.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007723433s +Oct 13 08:40:53.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007494449s +STEP: Saw pod success 10/13/23 08:40:53.261 +Oct 13 08:40:53.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43" satisfied condition "Succeeded or Failed" +Oct 13 08:40:53.264: INFO: Trying to get logs from node node2 pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 container test-container: +STEP: delete the pod 10/13/23 08:40:53.27 +Oct 13 08:40:53.290: INFO: Waiting for pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 to disappear +Oct 13 08:40:53.293: INFO: Pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:40:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-9778" for this suite. 
10/13/23 08:40:53.309 +------------------------------ +• [4.111 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:49.22 + Oct 13 08:40:49.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:40:49.222 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:49.236 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:49.239 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 + STEP: Creating a pod to test emptydir 0666 on node default medium 10/13/23 08:40:49.242 + Oct 13 08:40:49.253: INFO: Waiting up to 5m0s for pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43" in namespace "emptydir-9778" to be "Succeeded or Failed" + Oct 13 08:40:49.257: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580185ms + Oct 13 08:40:51.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007723433s + Oct 13 08:40:53.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007494449s + STEP: Saw pod success 10/13/23 08:40:53.261 + Oct 13 08:40:53.261: INFO: Pod "pod-47ec2b84-41bd-4fb0-940d-3722536ade43" satisfied condition "Succeeded or Failed" + Oct 13 08:40:53.264: INFO: Trying to get logs from node node2 pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 container test-container: + STEP: delete the pod 10/13/23 08:40:53.27 + Oct 13 08:40:53.290: INFO: Waiting for pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 to disappear + Oct 13 08:40:53.293: INFO: Pod pod-47ec2b84-41bd-4fb0-940d-3722536ade43 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:40:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-9778" for this suite. 
10/13/23 08:40:53.309 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:40:53.332 +Oct 13 08:40:53.332: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:40:53.332 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:53.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:53.352 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +STEP: creating service multi-endpoint-test in namespace services-7362 10/13/23 08:40:53.354 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[] 10/13/23 08:40:53.366 +Oct 13 08:40:53.372: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Oct 13 08:40:54.379: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-7362 10/13/23 08:40:54.379 +Oct 13 08:40:54.385: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7362" to be "running and ready" +Oct 13 08:40:54.390: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403319ms +Oct 13 08:40:54.390: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:40:56.394: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008341583s +Oct 13 08:40:56.394: INFO: The phase of Pod pod1 is Running (Ready = true) +Oct 13 08:40:56.394: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod1:[100]] 10/13/23 08:40:56.396 +Oct 13 08:40:56.403: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-7362 10/13/23 08:40:56.403 +Oct 13 08:40:56.408: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7362" to be "running and ready" +Oct 13 08:40:56.410: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680856ms +Oct 13 08:40:56.410: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:40:58.415: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007558426s +Oct 13 08:40:58.415: INFO: The phase of Pod pod2 is Running (Ready = true) +Oct 13 08:40:58.415: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod1:[100] pod2:[101]] 10/13/23 08:40:58.419 +Oct 13 08:40:58.441: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods 10/13/23 08:40:58.441 +Oct 13 08:40:58.441: INFO: Creating new exec pod +Oct 13 08:40:58.446: INFO: Waiting up to 5m0s for pod "execpodzk9n8" in namespace "services-7362" to be "running" +Oct 13 08:40:58.449: INFO: Pod "execpodzk9n8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90727ms +Oct 13 08:41:00.454: INFO: Pod "execpodzk9n8": Phase="Running", Reason="", readiness=true. Elapsed: 2.008724203s +Oct 13 08:41:00.454: INFO: Pod "execpodzk9n8" satisfied condition "running" +Oct 13 08:41:01.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' +Oct 13 08:41:01.621: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Oct 13 08:41:01.621: INFO: stdout: "" +Oct 13 08:41:01.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 10.102.216.189 80' +Oct 13 08:41:01.785: INFO: stderr: "+ nc -v -z -w 2 10.102.216.189 80\nConnection to 10.102.216.189 80 port [tcp/http] succeeded!\n" +Oct 13 08:41:01.785: INFO: stdout: "" +Oct 13 08:41:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' +Oct 13 08:41:01.942: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Oct 13 08:41:01.942: INFO: stdout: "" +Oct 13 08:41:01.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 10.102.216.189 81' +Oct 13 08:41:02.094: INFO: stderr: "+ nc -v -z -w 2 10.102.216.189 81\nConnection to 10.102.216.189 81 port [tcp/*] succeeded!\n" +Oct 13 08:41:02.094: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-7362 10/13/23 08:41:02.094 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod2:[101]] 10/13/23 08:41:02.11 +Oct 13 08:41:02.128: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-7362 10/13/23 08:41:02.128 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[] 10/13/23 08:41:02.142 +Oct 13 08:41:03.170: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | 
framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-7362" for this suite. 10/13/23 08:41:03.201 +------------------------------ +• [SLOW TEST] [9.876 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:40:53.332 + Oct 13 08:40:53.332: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:40:53.332 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:40:53.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:40:53.352 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 + STEP: creating service multi-endpoint-test in namespace services-7362 10/13/23 08:40:53.354 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[] 10/13/23 08:40:53.366 + Oct 13 08:40:53.372: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found + Oct 13 08:40:54.379: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-7362 10/13/23 08:40:54.379 + Oct 13 08:40:54.385: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-7362" to be "running and ready" + Oct 13 08:40:54.390: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403319ms + Oct 13 08:40:54.390: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:40:56.394: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008341583s + Oct 13 08:40:56.394: INFO: The phase of Pod pod1 is Running (Ready = true) + Oct 13 08:40:56.394: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod1:[100]] 10/13/23 08:40:56.396 + Oct 13 08:40:56.403: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod1:[100]] + STEP: Creating pod pod2 in namespace services-7362 10/13/23 08:40:56.403 + Oct 13 08:40:56.408: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-7362" to be "running and ready" + Oct 13 08:40:56.410: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680856ms + Oct 13 08:40:56.410: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:40:58.415: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007558426s + Oct 13 08:40:58.415: INFO: The phase of Pod pod2 is Running (Ready = true) + Oct 13 08:40:58.415: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod1:[100] pod2:[101]] 10/13/23 08:40:58.419 + Oct 13 08:40:58.441: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod1:[100] pod2:[101]] + STEP: Checking if the Service forwards traffic to pods 10/13/23 08:40:58.441 + Oct 13 08:40:58.441: INFO: Creating new exec pod + Oct 13 08:40:58.446: INFO: Waiting up to 5m0s for pod "execpodzk9n8" in namespace "services-7362" to be "running" + Oct 13 08:40:58.449: INFO: Pod "execpodzk9n8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90727ms + Oct 13 08:41:00.454: INFO: Pod "execpodzk9n8": Phase="Running", Reason="", readiness=true. Elapsed: 2.008724203s + Oct 13 08:41:00.454: INFO: Pod "execpodzk9n8" satisfied condition "running" + Oct 13 08:41:01.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' + Oct 13 08:41:01.621: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" + Oct 13 08:41:01.621: INFO: stdout: "" + Oct 13 08:41:01.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 10.102.216.189 80' + Oct 13 08:41:01.785: INFO: stderr: "+ nc -v -z -w 2 10.102.216.189 80\nConnection to 10.102.216.189 80 port [tcp/http] succeeded!\n" + Oct 13 08:41:01.785: INFO: stdout: "" + Oct 13 08:41:01.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' + Oct 13 08:41:01.942: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" + Oct 13 08:41:01.942: INFO: stdout: "" + Oct 13 08:41:01.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-7362 exec execpodzk9n8 -- /bin/sh -x -c nc -v -z -w 2 10.102.216.189 81' + Oct 13 08:41:02.094: INFO: stderr: "+ nc -v -z -w 2 10.102.216.189 81\nConnection to 10.102.216.189 81 port [tcp/*] succeeded!\n" + Oct 13 08:41:02.094: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-7362 10/13/23 08:41:02.094 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[pod2:[101]] 10/13/23 08:41:02.11 + Oct 13 08:41:02.128: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[pod2:[101]] + STEP: Deleting pod pod2 in namespace services-7362 10/13/23 08:41:02.128 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7362 to expose endpoints map[] 10/13/23 08:41:02.142 + Oct 13 08:41:03.170: INFO: successfully validated that service multi-endpoint-test in namespace services-7362 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] 
Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-7362" for this suite. 10/13/23 08:41:03.201 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:03.208 +Oct 13 08:41:03.208: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 08:41:03.209 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:03.223 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:03.226 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 +STEP: creating the pod 10/13/23 08:41:03.228 +STEP: setting up watch 10/13/23 08:41:03.228 +STEP: submitting the pod to kubernetes 10/13/23 08:41:03.332 +STEP: verifying the pod is in kubernetes 10/13/23 08:41:03.34 +STEP: verifying pod creation was observed 10/13/23 08:41:03.344 +Oct 13 08:41:03.344: INFO: Waiting up to 5m0s for pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75" in namespace "pods-4078" to be "running" +Oct 13 08:41:03.349: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.766859ms +Oct 13 08:41:05.354: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75": Phase="Running", Reason="", readiness=true. Elapsed: 2.009753122s +Oct 13 08:41:05.354: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75" satisfied condition "running" +STEP: deleting the pod gracefully 10/13/23 08:41:05.357 +STEP: verifying pod deletion was observed 10/13/23 08:41:05.371 +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:07.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-4078" for this suite. 
10/13/23 08:41:07.222 +------------------------------ +• [4.019 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:03.208 + Oct 13 08:41:03.208: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 08:41:03.209 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:03.223 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:03.226 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:226 + STEP: creating the pod 10/13/23 08:41:03.228 + STEP: setting up watch 10/13/23 08:41:03.228 + STEP: submitting the pod to kubernetes 10/13/23 08:41:03.332 + STEP: verifying the pod is in kubernetes 10/13/23 08:41:03.34 + STEP: verifying pod creation was observed 10/13/23 08:41:03.344 + Oct 13 08:41:03.344: INFO: Waiting up to 5m0s for pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75" in namespace "pods-4078" to be "running" + Oct 13 08:41:03.349: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.766859ms + Oct 13 08:41:05.354: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75": Phase="Running", Reason="", readiness=true. Elapsed: 2.009753122s + Oct 13 08:41:05.354: INFO: Pod "pod-submit-remove-588e16f2-bef1-46a5-bade-df18d9a57f75" satisfied condition "running" + STEP: deleting the pod gracefully 10/13/23 08:41:05.357 + STEP: verifying pod deletion was observed 10/13/23 08:41:05.371 + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:07.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-4078" for this suite. 
10/13/23 08:41:07.222 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:07.229 +Oct 13 08:41:07.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:41:07.23 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:07.242 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:07.244 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 +Oct 13 08:41:07.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 08:41:09.106 +Oct 13 08:41:09.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 create -f -' +Oct 13 08:41:11.620: INFO: stderr: "" +Oct 13 08:41:11.620: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 13 08:41:11.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7476-crds test-cr' +Oct 13 08:41:11.729: INFO: stderr: "" +Oct 13 08:41:11.729: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Oct 13 08:41:11.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 apply -f -' +Oct 13 08:41:12.249: INFO: stderr: "" +Oct 13 08:41:12.249: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Oct 13 08:41:12.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7476-crds test-cr' +Oct 13 08:41:12.330: INFO: stderr: "" +Oct 13 08:41:12.330: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema 10/13/23 08:41:12.33 +Oct 13 08:41:12.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 explain e2e-test-crd-publish-openapi-7476-crds' +Oct 13 08:41:12.504: INFO: stderr: "" +Oct 13 08:41:12.504: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7476-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:14.338: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-9814" for this suite. 10/13/23 08:41:14.346 +------------------------------ +• [SLOW TEST] [7.123 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:07.229 + Oct 13 08:41:07.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:41:07.23 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:07.242 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:07.244 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 + Oct 13 08:41:07.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 08:41:09.106 + Oct 13 08:41:09.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 create -f -' + Oct 13 08:41:11.620: INFO: stderr: "" + Oct 13 08:41:11.620: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Oct 13 08:41:11.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7476-crds test-cr' + Oct 13 08:41:11.729: INFO: stderr: "" + Oct 13 08:41:11.729: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + Oct 13 08:41:11.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 apply -f -' + Oct 13 08:41:12.249: INFO: stderr: "" + Oct 13 08:41:12.249: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Oct 13 08:41:12.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7476-crds test-cr' + Oct 13 08:41:12.330: INFO: stderr: "" + Oct 13 08:41:12.330: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR without validation schema 10/13/23 08:41:12.33 + Oct 13 08:41:12.330: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-9814 explain e2e-test-crd-publish-openapi-7476-crds' + Oct 13 08:41:12.504: INFO: stderr: "" + Oct 13 08:41:12.504: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7476-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:14.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-9814" for this suite. 10/13/23 08:41:14.346 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:14.353 +Oct 13 08:41:14.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 08:41:14.354 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:14.368 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:14.371 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +STEP: creating a Pod with a static label 10/13/23 08:41:14.377 +STEP: watching for Pod to be ready 10/13/23 08:41:14.384 +Oct 13 08:41:14.386: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Oct 13 08:41:14.389: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] +Oct 13 08:41:14.402: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] +Oct 13 08:41:15.220: INFO: Found Pod pod-test in namespace pods-9064 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] 
+STEP: patching the Pod with a new Label and updated data 10/13/23 08:41:15.223 +STEP: getting the Pod and ensuring that it's patched 10/13/23 08:41:15.232 +STEP: replacing the Pod's status Ready condition to False 10/13/23 08:41:15.235 +STEP: check the Pod again to ensure its Ready conditions are False 10/13/23 08:41:15.244 +STEP: deleting the Pod via a Collection with a LabelSelector 10/13/23 08:41:15.244 +STEP: watching for the Pod to be deleted 10/13/23 08:41:15.25 +Oct 13 08:41:15.252: INFO: observed event type MODIFIED +Oct 13 08:41:17.228: INFO: observed event type MODIFIED +Oct 13 08:41:18.228: INFO: observed event type MODIFIED +Oct 13 08:41:18.235: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:18.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-9064" for this suite. 10/13/23 08:41:18.244 +------------------------------ +• [3.896 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:14.353 + Oct 13 08:41:14.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 08:41:14.354 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:14.368 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:14.371 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 + STEP: creating a Pod with a static label 10/13/23 08:41:14.377 + STEP: watching for Pod to be ready 10/13/23 08:41:14.384 + Oct 13 08:41:14.386: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [] + Oct 13 08:41:14.389: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] + Oct 13 08:41:14.402: INFO: observed Pod pod-test in namespace pods-9064 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] + Oct 13 08:41:15.220: INFO: Found Pod pod-test in namespace pods-9064 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:16 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:41:14 +0000 UTC }] + STEP: patching the Pod with a new Label and updated data 10/13/23 08:41:15.223 + STEP: getting the Pod and ensuring that it's patched 10/13/23 08:41:15.232 + STEP: replacing the Pod's status Ready condition to False 10/13/23 08:41:15.235 + STEP: check the Pod again to ensure its Ready conditions are False 10/13/23 08:41:15.244 + STEP: deleting the Pod via a Collection with a LabelSelector 10/13/23 08:41:15.244 + STEP: watching for the Pod to be deleted 10/13/23 08:41:15.25 + Oct 13 08:41:15.252: INFO: observed event type MODIFIED + Oct 13 08:41:17.228: INFO: observed event type MODIFIED + Oct 13 08:41:18.228: INFO: observed event type MODIFIED + Oct 13 08:41:18.235: INFO: observed event type MODIFIED + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:18.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-9064" for this suite. 10/13/23 08:41:18.244 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:18.25 +Oct 13 08:41:18.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:41:18.251 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:18.265 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:18.267 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +STEP: Creating configMap with name projected-configmap-test-volume-06222a5c-ea9a-4851-b991-fc2b937d59eb 10/13/23 08:41:18.269 +STEP: Creating a pod to test consume configMaps 10/13/23 08:41:18.273 +Oct 13 08:41:18.280: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d" in namespace "projected-9651" to be "Succeeded or Failed" +Oct 13 08:41:18.283: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929578ms +Oct 13 08:41:20.289: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008575156s +Oct 13 08:41:22.287: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006888695s +STEP: Saw pod success 10/13/23 08:41:22.287 +Oct 13 08:41:22.287: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d" satisfied condition "Succeeded or Failed" +Oct 13 08:41:22.290: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d container agnhost-container: +STEP: delete the pod 10/13/23 08:41:22.304 +Oct 13 08:41:22.316: INFO: Waiting for pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d to disappear +Oct 13 08:41:22.320: INFO: Pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:22.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9651" for this suite. 10/13/23 08:41:22.323 +------------------------------ +• [4.079 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:18.25 + Oct 13 08:41:18.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:41:18.251 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:18.265 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:18.267 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 + STEP: Creating configMap with name projected-configmap-test-volume-06222a5c-ea9a-4851-b991-fc2b937d59eb 10/13/23 08:41:18.269 + STEP: Creating a pod to test consume configMaps 10/13/23 08:41:18.273 + Oct 13 08:41:18.280: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d" in namespace "projected-9651" to be "Succeeded or Failed" + Oct 13 08:41:18.283: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929578ms + Oct 13 08:41:20.289: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008575156s + Oct 13 08:41:22.287: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006888695s + STEP: Saw pod success 10/13/23 08:41:22.287 + Oct 13 08:41:22.287: INFO: Pod "pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d" satisfied condition "Succeeded or Failed" + Oct 13 08:41:22.290: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d container agnhost-container: + STEP: delete the pod 10/13/23 08:41:22.304 + Oct 13 08:41:22.316: INFO: Waiting for pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d to disappear + Oct 13 08:41:22.320: INFO: Pod pod-projected-configmaps-8713594a-ea70-492b-b864-2fcb5619300d no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:22.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9651" for this suite. 10/13/23 08:41:22.323 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:22.329 +Oct 13 08:41:22.329: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 08:41:22.33 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:22.346 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:22.349 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +Oct 13 08:41:22.353: INFO: Creating deployment "webserver-deployment" +Oct 13 08:41:22.358: INFO: Waiting for observed generation 1 +Oct 13 08:41:24.365: INFO: Waiting for all required pods to come up +Oct 13 08:41:24.369: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running 10/13/23 08:41:24.369 +Oct 13 08:41:24.369: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-gfz4b" in namespace "deployment-5028" to be "running" +Oct 13 08:41:24.371: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713196ms +Oct 13 08:41:26.376: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007052751s +Oct 13 08:41:26.376: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b" satisfied condition "running" +Oct 13 08:41:26.376: INFO: Waiting for deployment "webserver-deployment" to complete +Oct 13 08:41:26.382: INFO: Updating deployment "webserver-deployment" with a non-existent image +Oct 13 08:41:26.389: INFO: Updating deployment webserver-deployment +Oct 13 08:41:26.389: INFO: Waiting for observed generation 2 +Oct 13 08:41:28.395: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Oct 13 08:41:28.399: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Oct 13 08:41:28.402: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 13 08:41:28.409: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Oct 13 08:41:28.409: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Oct 13 08:41:28.412: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Oct 13 08:41:28.416: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Oct 13 08:41:28.416: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Oct 13 08:41:28.424: INFO: Updating deployment webserver-deployment +Oct 13 08:41:28.424: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Oct 13 08:41:28.431: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Oct 13 08:41:28.434: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 08:41:28.441: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-5028 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 20794 3 2023-10-13 08:41:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004865498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:41:24 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-10-13 08:41:26 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Oct 13 08:41:28.447: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-5028 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 20797 3 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 0xc004445457 0xc004445458}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ac1fed2-6f66-46c2-aa3e-319211ae5d3e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd 
webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044454f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:41:28.447: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Oct 13 08:41:28.447: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-5028 497ff21d-3c97-4b91-b021-c1b8e7d0c758 20795 3 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 0xc004445367 0xc004445368}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ac1fed2-6f66-46c2-aa3e-319211ae5d3e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044453f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-498fv" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-498fv webserver-deployment-7f5969cbc7- deployment-5028 698a2034-172f-44ce-ad01-78f92ba5aee6 20671 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc0044459b7 0xc0044459b8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8bfns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bfns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.146,StartTime:2023-10-13 08:41:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://3f97eb7469a555ff10855315df8896f281659ead74e27a01b34bfff6b4a3941f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-4bbr9" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4bbr9 webserver-deployment-7f5969cbc7- deployment-5028 2fbabc39-b508-420e-8cb9-d28a47833433 20679 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc004445c07 0xc004445c08}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qh54g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qh54g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.107,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://7b47c72f8d849cb267e4ba569d7ffe3466d13838ed5278d5543f8cfd91430f15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-769q5" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-769q5 webserver-deployment-7f5969cbc7- deployment-5028 04dc59d1-781f-4fa8-b0b3-e8cc7ddf3847 20677 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc004445e17 0xc004445e18}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wh46v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh46v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.108,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://2862345758a8c6aee9a85f8835a1bab3c8f2edd59d286ca0c6c1f43920b374c3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-7mtt6" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-7mtt6 webserver-deployment-7f5969cbc7- deployment-5028 ab132f03-f271-478c-a445-31356aefa6fe 20689 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8007 0xc002fc8008}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.145\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4k428,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k428,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.145,StartTime:2023-10-13 08:41:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://9413739e4d1b41c4c50810470309418b6e53303d9293655eec4d8a781014a9c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-8rt7s" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8rt7s webserver-deployment-7f5969cbc7- deployment-5028 e61c911b-426e-4b7e-88d6-ff3c69d52d81 20657 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc81f7 0xc002fc81f8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9j2t8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9j2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.25,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://94851f31745d74e60cc2f63c359af01d1340105e187c3bca172d1640d98312b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-b5c76" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-b5c76 webserver-deployment-7f5969cbc7- deployment-5028 c34a10ca-35a4-4068-8ce4-e1c37f2a44b3 20803 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc83f0 0xc002fc83f1}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z945c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z945c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-bx9r9" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bx9r9 webserver-deployment-7f5969cbc7- deployment-5028 f3beae01-7972-4b8e-9b9f-992168849de5 20685 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8527 0xc002fc8528}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-58dtz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58dtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.106,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://0be63af028d4da20a44bf95c00d8d913c3f4eaa70e27ae08eec546e48d408ae1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-cxdm8" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-cxdm8 webserver-deployment-7f5969cbc7- deployment-5028 cac55514-abd1-403f-9a9f-8fa03534d3a2 20660 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8717 0xc002fc8718}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dqjmm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqjmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.24,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://df3ffdf3846ec42dc820e1ea091f4ad32a45a5bdd7a549e2feafe3e24a392aa6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-mpkj2" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-mpkj2 webserver-deployment-7f5969cbc7- deployment-5028 76338a01-4a70-42f9-9aab-b955e86867aa 20801 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8900 0xc002fc8901}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pw2ll,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pw2ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims
:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-7f5969cbc7-wnvgx" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-wnvgx webserver-deployment-7f5969cbc7- deployment-5028 a11434c7-7b12-4bb8-ad68-6e1bfc189c09 20799 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8a60 0xc002fc8a61}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-222zn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-222zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Proc
Mount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-7f5969cbc7-z7r8w" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-z7r8w webserver-deployment-7f5969cbc7- deployment-5028 fede7220-63a0-4d11-a0df-da8b953ee6cc 20683 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8b97 0xc002fc8b98}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sbh5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbh5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.23,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://119ec17272908af0f69bb417ae321822fcbfd3d974b25a06d69430ad70b5accb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-d9f79cb5-9ngnr" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-9ngnr webserver-deployment-d9f79cb5- deployment-5028 f0b46ef5-ae2b-4106-a421-4c269f6691c0 20783 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc8d6f 0xc002fc8d80}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.149\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cs55,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cs55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.149,StartTime:2023-10-13 08:41:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.111:53: read udp 10.253.8.111:44947->10.253.8.111:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-d9f79cb5-hxtdj" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-hxtdj webserver-deployment-d9f79cb5- deployment-5028 7abfd4e9-cee9-498a-9486-3a81f2169fde 20774 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc8fa7 0xc002fc8fa8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-phqgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phqgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.26,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.110:53: read udp 10.253.8.110:38030->10.253.8.110:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-l6hvq" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-l6hvq webserver-deployment-d9f79cb5- deployment-5028 16151e31-5b8b-4096-81f0-d03d8258a820 20790 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc93af 0xc002fc93c0}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-29nwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29nwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.109,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.112:53: read udp 10.253.8.112:49276->10.253.8.112:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-mj2pb" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-mj2pb webserver-deployment-d9f79cb5- deployment-5028 1c782a1e-39ab-47b5-be5f-c2c142ab9f7a 20769 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9977 0xc002fc9978}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7g5rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g5rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.150,StartTime:2023-10-13 08:41:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.111:53: read udp 10.253.8.111:53471->10.253.8.111:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-rrttx" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrttx webserver-deployment-d9f79cb5- deployment-5028 69d8ef59-7e8e-460b-81b0-e37845cbf0bd 20802 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9bd7 0xc002fc9bd8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-44pb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44pb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-txj8n" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-txj8n webserver-deployment-d9f79cb5- deployment-5028 6cc80dff-1064-4e61-9414-99e6b1639392 20777 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9d27 0xc002fc9d28}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rcvsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rcvsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.27,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.110:53: read udp 10.253.8.110:33299->10.253.8.110:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 08:41:28.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-5028" for this suite. 
10/13/23 08:41:28.462 +------------------------------ +• [SLOW TEST] [6.138 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:22.329 + Oct 13 08:41:22.329: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 08:41:22.33 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:22.346 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:22.349 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + Oct 13 08:41:22.353: INFO: Creating deployment "webserver-deployment" + Oct 13 08:41:22.358: INFO: Waiting for observed generation 1 + Oct 13 08:41:24.365: INFO: Waiting for all required pods to come up + Oct 13 08:41:24.369: INFO: Pod name httpd: Found 10 pods out of 10 + STEP: ensuring each pod is running 10/13/23 08:41:24.369 + Oct 13 08:41:24.369: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-gfz4b" in namespace "deployment-5028" to be "running" + Oct 13 08:41:24.371: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713196ms + Oct 13 08:41:26.376: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b": Phase="Running", Reason="", readiness=true. Elapsed: 2.007052751s + Oct 13 08:41:26.376: INFO: Pod "webserver-deployment-7f5969cbc7-gfz4b" satisfied condition "running" + Oct 13 08:41:26.376: INFO: Waiting for deployment "webserver-deployment" to complete + Oct 13 08:41:26.382: INFO: Updating deployment "webserver-deployment" with a non-existent image + Oct 13 08:41:26.389: INFO: Updating deployment webserver-deployment + Oct 13 08:41:26.389: INFO: Waiting for observed generation 2 + Oct 13 08:41:28.395: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 + Oct 13 08:41:28.399: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 + Oct 13 08:41:28.402: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Oct 13 08:41:28.409: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 + Oct 13 08:41:28.409: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 + Oct 13 08:41:28.412: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Oct 13 08:41:28.416: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas + Oct 13 08:41:28.416: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 + Oct 13 08:41:28.424: INFO: Updating deployment webserver-deployment + Oct 13 08:41:28.424: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas + Oct 13 08:41:28.431: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 + Oct 13 08:41:28.434: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 + [AfterEach] [sig-apps] Deployment + 
test/e2e/apps/deployment.go:84 + Oct 13 08:41:28.441: INFO: Deployment "webserver-deployment": + &Deployment{ObjectMeta:{webserver-deployment deployment-5028 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 20794 3 2023-10-13 08:41:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004865498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 08:41:24 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-10-13 08:41:26 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + + Oct 13 08:41:28.447: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": + 
&ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-5028 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 20797 3 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 0xc004445457 0xc004445458}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ac1fed2-6f66-46c2-aa3e-319211ae5d3e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044454f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:41:28.447: INFO: All old ReplicaSets of Deployment "webserver-deployment": + Oct 13 08:41:28.447: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-5028 497ff21d-3c97-4b91-b021-c1b8e7d0c758 20795 3 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3ac1fed2-6f66-46c2-aa3e-319211ae5d3e 0xc004445367 0xc004445368}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ac1fed2-6f66-46c2-aa3e-319211ae5d3e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044453f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-498fv" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-498fv webserver-deployment-7f5969cbc7- deployment-5028 698a2034-172f-44ce-ad01-78f92ba5aee6 20671 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc0044459b7 0xc0044459b8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8bfns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bfns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.146,StartTime:2023-10-13 08:41:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://3f97eb7469a555ff10855315df8896f281659ead74e27a01b34bfff6b4a3941f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-4bbr9" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4bbr9 webserver-deployment-7f5969cbc7- deployment-5028 2fbabc39-b508-420e-8cb9-d28a47833433 20679 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc004445c07 0xc004445c08}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qh54g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qh54g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.107,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://7b47c72f8d849cb267e4ba569d7ffe3466d13838ed5278d5543f8cfd91430f15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-769q5" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-769q5 webserver-deployment-7f5969cbc7- deployment-5028 04dc59d1-781f-4fa8-b0b3-e8cc7ddf3847 20677 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc004445e17 0xc004445e18}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wh46v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh46v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.108,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://2862345758a8c6aee9a85f8835a1bab3c8f2edd59d286ca0c6c1f43920b374c3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.453: INFO: Pod "webserver-deployment-7f5969cbc7-7mtt6" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-7mtt6 webserver-deployment-7f5969cbc7- deployment-5028 ab132f03-f271-478c-a445-31356aefa6fe 20689 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8007 0xc002fc8008}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.145\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4k428,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4k428,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.ku
bernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.145,StartTime:2023-10-13 08:41:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://9413739e4d1b41c4c50810470309418b6e53303d9293655eec4d8a781014a9c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-8rt7s" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8rt7s webserver-deployment-7f5969cbc7- deployment-5028 e61c911b-426e-4b7e-88d6-ff3c69d52d81 20657 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc81f7 0xc002fc81f8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9j2t8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9j2t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kub
ernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.25,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://94851f31745d74e60cc2f63c359af01d1340105e187c3bca172d1640d98312b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-b5c76" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-b5c76 webserver-deployment-7f5969cbc7- deployment-5028 c34a10ca-35a4-4068-8ce4-e1c37f2a44b3 20803 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc83f0 0xc002fc83f1}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z945c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z945c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-bx9r9" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bx9r9 webserver-deployment-7f5969cbc7- deployment-5028 f3beae01-7972-4b8e-9b9f-992168849de5 20685 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8527 0xc002fc8528}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-58dtz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58dtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.106,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://0be63af028d4da20a44bf95c00d8d913c3f4eaa70e27ae08eec546e48d408ae1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-cxdm8" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-cxdm8 webserver-deployment-7f5969cbc7- deployment-5028 cac55514-abd1-403f-9a9f-8fa03534d3a2 20660 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8717 0xc002fc8718}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dqjmm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqjmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.24,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://df3ffdf3846ec42dc820e1ea091f4ad32a45a5bdd7a549e2feafe3e24a392aa6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.454: INFO: Pod "webserver-deployment-7f5969cbc7-mpkj2" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-mpkj2 webserver-deployment-7f5969cbc7- deployment-5028 76338a01-4a70-42f9-9aab-b955e86867aa 20801 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8900 0xc002fc8901}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pw2ll,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pw2ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims
:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-7f5969cbc7-wnvgx" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-wnvgx webserver-deployment-7f5969cbc7- deployment-5028 a11434c7-7b12-4bb8-ad68-6e1bfc189c09 20799 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8a60 0xc002fc8a61}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-222zn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-222zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-7f5969cbc7-z7r8w" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-z7r8w webserver-deployment-7f5969cbc7- deployment-5028 fede7220-63a0-4d11-a0df-da8b953ee6cc 20683 0 2023-10-13 08:41:22 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 497ff21d-3c97-4b91-b021-c1b8e7d0c758 0xc002fc8b97 0xc002fc8b98}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"497ff21d-3c97-4b91-b021-c1b8e7d0c758\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sbh5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbh5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.23,StartTime:2023-10-13 08:41:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:41:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://119ec17272908af0f69bb417ae321822fcbfd3d974b25a06d69430ad70b5accb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-d9f79cb5-9ngnr" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-9ngnr webserver-deployment-d9f79cb5- deployment-5028 f0b46ef5-ae2b-4106-a421-4c269f6691c0 20783 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc8d6f 0xc002fc8d80}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.149\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cs55,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cs55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.149,StartTime:2023-10-13 08:41:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.111:53: read udp 10.253.8.111:44947->10.253.8.111:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.455: INFO: Pod "webserver-deployment-d9f79cb5-hxtdj" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-hxtdj webserver-deployment-d9f79cb5- deployment-5028 7abfd4e9-cee9-498a-9486-3a81f2169fde 20774 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc8fa7 0xc002fc8fa8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-phqgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phqgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.26,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.110:53: read udp 10.253.8.110:38030->10.253.8.110:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-l6hvq" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-l6hvq webserver-deployment-d9f79cb5- deployment-5028 16151e31-5b8b-4096-81f0-d03d8258a820 20790 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc93af 0xc002fc93c0}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-29nwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29nwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.109,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.112:53: read udp 10.253.8.112:49276->10.253.8.112:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-mj2pb" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-mj2pb webserver-deployment-d9f79cb5- deployment-5028 1c782a1e-39ab-47b5-be5f-c2c142ab9f7a 20769 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9977 0xc002fc9978}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7g5rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7g5rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.150,StartTime:2023-10-13 08:41:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.111:53: read udp 10.253.8.111:53471->10.253.8.111:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-rrttx" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-rrttx webserver-deployment-d9f79cb5- deployment-5028 69d8ef59-7e8e-460b-81b0-e37845cbf0bd 20802 0 2023-10-13 08:41:28 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9bd7 0xc002fc9bd8}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-44pb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44pb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:41:28.456: INFO: Pod "webserver-deployment-d9f79cb5-txj8n" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-txj8n webserver-deployment-d9f79cb5- deployment-5028 6cc80dff-1064-4e61-9414-99e6b1639392 20777 0 2023-10-13 08:41:26 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 ffa31d2c-2110-48a6-8b8b-7a3947b7ec84 0xc002fc9d27 0xc002fc9d28}] [] [{kube-controller-manager Update v1 2023-10-13 08:41:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffa31d2c-2110-48a6-8b8b-7a3947b7ec84\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:41:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rcvsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rcvsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:41:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.27,StartTime:2023-10-13 08:41:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": failed to do request: Head "https://registry-1.docker.io/v2/library/webserver/manifests/404": dial tcp: lookup registry-1.docker.io on 10.253.8.110:53: read udp 10.253.8.110:33299->10.253.8.110:53: read: connection refused,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 08:41:28.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-5028" for this suite. 10/13/23 08:41:28.462 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:41:28.468 +Oct 13 08:41:28.468: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-pred 10/13/23 08:41:28.469 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:28.525 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:28.527 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Oct 13 08:41:28.531: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 13 08:41:28.539: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 13 08:41:28.542: INFO: +Logging pods the apiserver thinks is on node node1 before test +Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-8rt7s from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-b5c76 from deployment-5028 started at 2023-10-13 08:41:28 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-cxdm8 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-z7r8w from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: webserver-deployment-d9f79cb5-hxtdj from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.550: INFO: webserver-deployment-d9f79cb5-txj8n from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.550: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container coredns ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container etcd ready: true, restart count 8 +Oct 13 08:41:28.550: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container haproxy ready: true, restart count 3 +Oct 13 08:41:28.550: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container keepalived ready: true, restart count 9 +Oct 13 08:41:28.550: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-apiserver ready: true, restart count 8 +Oct 13 08:41:28.550: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-controller-manager ready: true, restart count 8 +Oct 13 08:41:28.550: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:41:28.550: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-scheduler ready: true, restart count 11 +Oct 13 08:41:28.550: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container 
statuses recorded) +Oct 13 08:41:28.550: INFO: Container kube-sonobuoy ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:41:28.550: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 08:41:28.550: INFO: +Logging pods the apiserver thinks is on node node2 before test +Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-498fv from deployment-5028 started at 2023-10-13 08:41:23 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-7mtt6 from deployment-5028 started at 2023-10-13 08:41:23 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-mpkj2 from deployment-5028 started at 2023-10-13 08:41:29 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-wnvgx from deployment-5028 started at (0 container statuses recorded) +Oct 13 08:41:28.557: INFO: webserver-deployment-d9f79cb5-9ngnr from deployment-5028 started at 2023-10-13 08:41:27 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.557: INFO: webserver-deployment-d9f79cb5-mj2pb from deployment-5028 started at 2023-10-13 08:41:27 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.557: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:41:28.557: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container etcd ready: true, restart count 1 +Oct 13 08:41:28.557: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container haproxy ready: true, restart count 1 +Oct 13 08:41:28.557: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container keepalived ready: true, restart count 1 +Oct 13 08:41:28.557: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container kube-apiserver ready: true, restart count 2 +Oct 13 08:41:28.557: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 08:41:28.557: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:41:28.557: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container kube-scheduler ready: true, restart count 
1 +Oct 13 08:41:28.557: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:41:28.557: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:41:28.557: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 08:41:28.557: INFO: +Logging pods the apiserver thinks is on node node3 before test +Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-4bbr9 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-769q5 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-bx9r9 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: webserver-deployment-d9f79cb5-l6hvq from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.563: INFO: webserver-deployment-d9f79cb5-rrttx from deployment-5028 started at 2023-10-13 08:41:28 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container httpd ready: false, restart count 0 +Oct 13 08:41:28.563: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container coredns ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container etcd ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container haproxy ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container keepalived ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container kube-apiserver ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 08:41:28.563: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 
08:41:28.563: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container e2e ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 08:41:28.563: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 08:41:28.563: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 08:41:28.563 +Oct 13 08:41:28.571: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8803" to be "running" +Oct 13 08:41:28.574: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234598ms +Oct 13 08:41:30.578: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006827586s +Oct 13 08:41:32.579: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.00752001s +Oct 13 08:41:32.579: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 10/13/23 08:41:32.582 +STEP: Trying to apply a random label on the found node. 10/13/23 08:41:32.596 +STEP: verifying the node has the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c 95 10/13/23 08:41:32.605 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 10/13/23 08:41:32.608 +Oct 13 08:41:32.615: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-8803" to be "not pending" +Oct 13 08:41:32.619: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461131ms +Oct 13 08:41:34.623: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. Elapsed: 2.007758019s +Oct 13 08:41:34.623: INFO: Pod "pod4" satisfied condition "not pending" +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.253.8.111 on the node which pod4 resides and expect not scheduled 10/13/23 08:41:34.623 +Oct 13 08:41:34.628: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-8803" to be "not pending" +Oct 13 08:41:34.631: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.651341ms +Oct 13 08:41:36.634: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00576101s +Oct 13 08:41:38.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006534261s +Oct 13 08:41:40.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009243053s +Oct 13 08:41:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007583739s +Oct 13 08:41:44.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009040312s +Oct 13 08:41:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.008800705s +Oct 13 08:41:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.008567007s +Oct 13 08:41:50.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.007528494s +Oct 13 08:41:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.008929389s +Oct 13 08:41:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008042507s +Oct 13 08:41:56.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.009184392s +Oct 13 08:41:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.00894867s +Oct 13 08:42:00.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.008981894s +Oct 13 08:42:02.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.007679786s +Oct 13 08:42:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.009213729s +Oct 13 08:42:06.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.009055291s +Oct 13 08:42:08.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.008635809s +Oct 13 08:42:10.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.009081197s +Oct 13 08:42:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.007672404s +Oct 13 08:42:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.008557084s +Oct 13 08:42:16.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007649075s +Oct 13 08:42:18.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010487838s +Oct 13 08:42:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009734548s +Oct 13 08:42:22.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.009452s +Oct 13 08:42:24.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.006826559s +Oct 13 08:42:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.008189805s +Oct 13 08:42:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.007793148s +Oct 13 08:42:30.641: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.012114948s +Oct 13 08:42:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.007533723s +Oct 13 08:42:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.008240374s +Oct 13 08:42:36.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008108337s +Oct 13 08:42:38.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.009015566s +Oct 13 08:42:40.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.00953485s +Oct 13 08:42:42.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.009181404s +Oct 13 08:42:44.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.009564695s +Oct 13 08:42:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008387294s +Oct 13 08:42:48.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.009349522s +Oct 13 08:42:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.009981755s +Oct 13 08:42:52.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.00678417s +Oct 13 08:42:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m20.008909796s +Oct 13 08:42:56.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.00791778s +Oct 13 08:42:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.008895142s +Oct 13 08:43:00.640: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.011261528s +Oct 13 08:43:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008867498s +Oct 13 08:43:04.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008503714s +Oct 13 08:43:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008003067s +Oct 13 08:43:08.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.010128006s +Oct 13 08:43:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008281197s +Oct 13 08:43:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.007183269s +Oct 13 08:43:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008383934s +Oct 13 08:43:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.008249026s +Oct 13 08:43:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008553419s +Oct 13 08:43:20.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009001588s +Oct 13 08:43:22.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.009260167s +Oct 13 08:43:24.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.008394251s +Oct 13 08:43:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.008050088s +Oct 13 08:43:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.00746768s +Oct 13 08:43:30.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.010048325s +Oct 13 08:43:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007054821s +Oct 13 08:43:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.008627885s +Oct 13 08:43:36.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.006551005s +Oct 13 08:43:38.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.007704551s +Oct 13 08:43:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008720025s +Oct 13 08:43:42.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.00917023s +Oct 13 08:43:44.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.007961412s +Oct 13 08:43:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.008339084s +Oct 13 08:43:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.008889611s +Oct 13 08:43:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.009768984s +Oct 13 08:43:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.008503088s +Oct 13 08:43:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008895382s +Oct 13 08:43:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.008758079s +Oct 13 08:43:58.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m24.007649453s +Oct 13 08:44:00.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.008620584s +Oct 13 08:44:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.008573975s +Oct 13 08:44:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.00949821s +Oct 13 08:44:06.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.008986415s +Oct 13 08:44:08.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.00791435s +Oct 13 08:44:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008224311s +Oct 13 08:44:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.007542884s +Oct 13 08:44:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.008161061s +Oct 13 08:44:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.00888871s +Oct 13 08:44:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.00860592s +Oct 13 08:44:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.009259504s +Oct 13 08:44:22.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.008158289s +Oct 13 08:44:24.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.007929306s +Oct 13 08:44:26.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007287207s +Oct 13 08:44:28.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.008565153s +Oct 13 08:44:30.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.009536184s +Oct 13 08:44:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007452541s +Oct 13 08:44:34.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.009687086s +Oct 13 08:44:36.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008859804s +Oct 13 08:44:38.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.008572072s +Oct 13 08:44:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.00822545s +Oct 13 08:44:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.00797043s +Oct 13 08:44:44.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.007767027s +Oct 13 08:44:46.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.009495904s +Oct 13 08:44:48.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.009674894s +Oct 13 08:44:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.009934005s +Oct 13 08:44:52.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.009379724s +Oct 13 08:44:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.008462563s +Oct 13 08:44:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.008675073s +Oct 13 08:44:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.008185699s +Oct 13 08:45:00.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.00922896s +Oct 13 08:45:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m28.008743824s +Oct 13 08:45:04.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.010090857s +Oct 13 08:45:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.007523605s +Oct 13 08:45:08.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.009381102s +Oct 13 08:45:10.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.009153199s +Oct 13 08:45:12.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008050494s +Oct 13 08:45:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.008228834s +Oct 13 08:45:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.008373074s +Oct 13 08:45:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.009002315s +Oct 13 08:45:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.00960067s +Oct 13 08:45:22.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.00834734s +Oct 13 08:45:24.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.009498195s +Oct 13 08:45:26.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.009734096s +Oct 13 08:45:28.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.009597623s +Oct 13 08:45:30.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.007249242s +Oct 13 08:45:32.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.008490281s +Oct 13 08:45:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008979221s +Oct 13 08:45:36.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.007457394s +Oct 13 08:45:38.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.010772461s +Oct 13 08:45:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008504354s +Oct 13 08:45:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.00751873s +Oct 13 08:45:44.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.00829605s +Oct 13 08:45:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.008579756s +Oct 13 08:45:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008261083s +Oct 13 08:45:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.009552686s +Oct 13 08:45:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.008725823s +Oct 13 08:45:54.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.009962065s +Oct 13 08:45:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.008038681s +Oct 13 08:45:58.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.009544861s +Oct 13 08:46:00.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.007999511s +Oct 13 08:46:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.008622272s +Oct 13 08:46:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.009048469s +Oct 13 08:46:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m32.007098037s +Oct 13 08:46:08.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.009413207s +Oct 13 08:46:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008844003s +Oct 13 08:46:12.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.00847264s +Oct 13 08:46:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.009007937s +Oct 13 08:46:16.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.007514654s +Oct 13 08:46:18.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009724906s +Oct 13 08:46:20.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.010124891s +Oct 13 08:46:22.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.006895423s +Oct 13 08:46:24.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.010562593s +Oct 13 08:46:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008951022s +Oct 13 08:46:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.008004466s +Oct 13 08:46:30.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.007613754s +Oct 13 08:46:32.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.008344799s +Oct 13 08:46:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.00821688s +Oct 13 08:46:34.641: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.012322475s +STEP: removing the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c off the node node2 10/13/23 08:46:34.641 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c 10/13/23 08:46:34.653 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:34.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-8803" for this suite. 
10/13/23 08:46:34.663 +------------------------------ +• [SLOW TEST] [306.202 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:41:28.468 + Oct 13 08:41:28.468: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-pred 10/13/23 08:41:28.469 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:41:28.525 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:41:28.527 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Oct 13 08:41:28.531: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Oct 13 08:41:28.539: INFO: Waiting for terminating namespaces to be deleted... + Oct 13 08:41:28.542: INFO: + Logging pods the apiserver thinks is on node node1 before test + Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-8rt7s from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-b5c76 from deployment-5028 started at 2023-10-13 08:41:28 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-cxdm8 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: webserver-deployment-7f5969cbc7-z7r8w from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: webserver-deployment-d9f79cb5-hxtdj from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.550: INFO: webserver-deployment-d9f79cb5-txj8n from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.550: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container coredns ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container etcd ready: true, restart count 8 + Oct 13 08:41:28.550: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 
08:41:28.550: INFO: Container haproxy ready: true, restart count 3 + Oct 13 08:41:28.550: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container keepalived ready: true, restart count 9 + Oct 13 08:41:28.550: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-apiserver ready: true, restart count 8 + Oct 13 08:41:28.550: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-controller-manager ready: true, restart count 8 + Oct 13 08:41:28.550: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:41:28.550: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-scheduler ready: true, restart count 11 + Oct 13 08:41:28.550: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container kube-sonobuoy ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:41:28.550: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 08:41:28.550: INFO: + Logging pods the apiserver thinks is on node node2 before test + Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-498fv from deployment-5028 started at 2023-10-13 08:41:23 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-7mtt6 from deployment-5028 started at 2023-10-13 08:41:23 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-mpkj2 from deployment-5028 started at 2023-10-13 08:41:29 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.557: INFO: webserver-deployment-7f5969cbc7-wnvgx from deployment-5028 started at (0 container statuses recorded) + Oct 13 08:41:28.557: INFO: webserver-deployment-d9f79cb5-9ngnr from deployment-5028 started at 2023-10-13 08:41:27 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.557: INFO: webserver-deployment-d9f79cb5-mj2pb from deployment-5028 started at 2023-10-13 08:41:27 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.557: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:41:28.557: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container etcd ready: true, restart count 1 
+ Oct 13 08:41:28.557: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container haproxy ready: true, restart count 1 + Oct 13 08:41:28.557: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container keepalived ready: true, restart count 1 + Oct 13 08:41:28.557: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container kube-apiserver ready: true, restart count 2 + Oct 13 08:41:28.557: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 08:41:28.557: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:41:28.557: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 08:41:28.557: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:41:28.557: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:41:28.557: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 08:41:28.557: INFO: + Logging pods the apiserver thinks is on node node3 before test + Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-4bbr9 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-769q5 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: webserver-deployment-7f5969cbc7-bx9r9 from deployment-5028 started at 2023-10-13 08:41:22 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container httpd ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: webserver-deployment-d9f79cb5-l6hvq from deployment-5028 started at 2023-10-13 08:41:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.563: INFO: webserver-deployment-d9f79cb5-rrttx from deployment-5028 started at 2023-10-13 08:41:28 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container httpd ready: false, restart count 0 + Oct 13 08:41:28.563: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container coredns ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container etcd ready: 
true, restart count 1 + Oct 13 08:41:28.563: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container haproxy ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container keepalived ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container kube-apiserver ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 08:41:28.563: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container e2e ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 08:41:28.563: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 08:41:28.563: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 + STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 08:41:28.563 + Oct 13 08:41:28.571: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8803" to be "running" + Oct 13 08:41:28.574: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234598ms + Oct 13 08:41:30.578: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006827586s + Oct 13 08:41:32.579: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.00752001s + Oct 13 08:41:32.579: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 10/13/23 08:41:32.582 + STEP: Trying to apply a random label on the found node. 10/13/23 08:41:32.596 + STEP: verifying the node has the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c 95 10/13/23 08:41:32.605 + STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 10/13/23 08:41:32.608 + Oct 13 08:41:32.615: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-8803" to be "not pending" + Oct 13 08:41:32.619: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461131ms + Oct 13 08:41:34.623: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.007758019s + Oct 13 08:41:34.623: INFO: Pod "pod4" satisfied condition "not pending" + STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.253.8.111 on the node which pod4 resides and expect not scheduled 10/13/23 08:41:34.623 + Oct 13 08:41:34.628: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-8803" to be "not pending" + Oct 13 08:41:34.631: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.651341ms + Oct 13 08:41:36.634: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00576101s + Oct 13 08:41:38.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006534261s + Oct 13 08:41:40.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009243053s + Oct 13 08:41:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007583739s + Oct 13 08:41:44.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009040312s + Oct 13 08:41:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.008800705s + Oct 13 08:41:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.008567007s + Oct 13 08:41:50.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.007528494s + Oct 13 08:41:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.008929389s + Oct 13 08:41:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008042507s + Oct 13 08:41:56.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.009184392s + Oct 13 08:41:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.00894867s + Oct 13 08:42:00.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.008981894s + Oct 13 08:42:02.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.007679786s + Oct 13 08:42:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.009213729s + Oct 13 08:42:06.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.009055291s + Oct 13 08:42:08.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.008635809s + Oct 13 08:42:10.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.009081197s + Oct 13 08:42:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.007672404s + Oct 13 08:42:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.008557084s + Oct 13 08:42:16.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007649075s + Oct 13 08:42:18.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010487838s + Oct 13 08:42:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009734548s + Oct 13 08:42:22.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.009452s + Oct 13 08:42:24.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.006826559s + Oct 13 08:42:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.008189805s + Oct 13 08:42:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.007793148s + Oct 13 08:42:30.641: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.012114948s + Oct 13 08:42:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.007533723s + Oct 13 08:42:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.008240374s + Oct 13 08:42:36.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008108337s + Oct 13 08:42:38.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.009015566s + Oct 13 08:42:40.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.00953485s + Oct 13 08:42:42.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.009181404s + Oct 13 08:42:44.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.009564695s + Oct 13 08:42:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008387294s + Oct 13 08:42:48.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.009349522s + Oct 13 08:42:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.009981755s + Oct 13 08:42:52.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.00678417s + Oct 13 08:42:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.008909796s + Oct 13 08:42:56.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.00791778s + Oct 13 08:42:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.008895142s + Oct 13 08:43:00.640: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.011261528s + Oct 13 08:43:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008867498s + Oct 13 08:43:04.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008503714s + Oct 13 08:43:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008003067s + Oct 13 08:43:08.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.010128006s + Oct 13 08:43:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008281197s + Oct 13 08:43:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.007183269s + Oct 13 08:43:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008383934s + Oct 13 08:43:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.008249026s + Oct 13 08:43:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008553419s + Oct 13 08:43:20.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009001588s + Oct 13 08:43:22.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.009260167s + Oct 13 08:43:24.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.008394251s + Oct 13 08:43:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.008050088s + Oct 13 08:43:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.00746768s + Oct 13 08:43:30.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.010048325s + Oct 13 08:43:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007054821s + Oct 13 08:43:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m0.008627885s + Oct 13 08:43:36.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.006551005s + Oct 13 08:43:38.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.007704551s + Oct 13 08:43:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008720025s + Oct 13 08:43:42.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.00917023s + Oct 13 08:43:44.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.007961412s + Oct 13 08:43:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.008339084s + Oct 13 08:43:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.008889611s + Oct 13 08:43:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.009768984s + Oct 13 08:43:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.008503088s + Oct 13 08:43:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008895382s + Oct 13 08:43:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.008758079s + Oct 13 08:43:58.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.007649453s + Oct 13 08:44:00.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.008620584s + Oct 13 08:44:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.008573975s + Oct 13 08:44:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.00949821s + Oct 13 08:44:06.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.008986415s + Oct 13 08:44:08.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.00791435s + Oct 13 08:44:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008224311s + Oct 13 08:44:12.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.007542884s + Oct 13 08:44:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.008161061s + Oct 13 08:44:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.00888871s + Oct 13 08:44:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.00860592s + Oct 13 08:44:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.009259504s + Oct 13 08:44:22.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.008158289s + Oct 13 08:44:24.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.007929306s + Oct 13 08:44:26.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007287207s + Oct 13 08:44:28.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.008565153s + Oct 13 08:44:30.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.009536184s + Oct 13 08:44:32.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007452541s + Oct 13 08:44:34.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.009687086s + Oct 13 08:44:36.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008859804s + Oct 13 08:44:38.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m4.008572072s + Oct 13 08:44:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.00822545s + Oct 13 08:44:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.00797043s + Oct 13 08:44:44.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.007767027s + Oct 13 08:44:46.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.009495904s + Oct 13 08:44:48.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.009674894s + Oct 13 08:44:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.009934005s + Oct 13 08:44:52.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.009379724s + Oct 13 08:44:54.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.008462563s + Oct 13 08:44:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.008675073s + Oct 13 08:44:58.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.008185699s + Oct 13 08:45:00.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.00922896s + Oct 13 08:45:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.008743824s + Oct 13 08:45:04.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.010090857s + Oct 13 08:45:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.007523605s + Oct 13 08:45:08.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.009381102s + Oct 13 08:45:10.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.009153199s + Oct 13 08:45:12.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008050494s + Oct 13 08:45:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.008228834s + Oct 13 08:45:16.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.008373074s + Oct 13 08:45:18.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.009002315s + Oct 13 08:45:20.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.00960067s + Oct 13 08:45:22.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.00834734s + Oct 13 08:45:24.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.009498195s + Oct 13 08:45:26.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.009734096s + Oct 13 08:45:28.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.009597623s + Oct 13 08:45:30.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.007249242s + Oct 13 08:45:32.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.008490281s + Oct 13 08:45:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008979221s + Oct 13 08:45:36.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.007457394s + Oct 13 08:45:38.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.010772461s + Oct 13 08:45:40.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008504354s + Oct 13 08:45:42.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m8.00751873s + Oct 13 08:45:44.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.00829605s + Oct 13 08:45:46.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.008579756s + Oct 13 08:45:48.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008261083s + Oct 13 08:45:50.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.009552686s + Oct 13 08:45:52.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.008725823s + Oct 13 08:45:54.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.009962065s + Oct 13 08:45:56.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.008038681s + Oct 13 08:45:58.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.009544861s + Oct 13 08:46:00.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.007999511s + Oct 13 08:46:02.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.008622272s + Oct 13 08:46:04.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.009048469s + Oct 13 08:46:06.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.007098037s + Oct 13 08:46:08.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.009413207s + Oct 13 08:46:10.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008844003s + Oct 13 08:46:12.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.00847264s + Oct 13 08:46:14.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.009007937s + Oct 13 08:46:16.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.007514654s + Oct 13 08:46:18.638: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009724906s + Oct 13 08:46:20.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.010124891s + Oct 13 08:46:22.635: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.006895423s + Oct 13 08:46:24.639: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.010562593s + Oct 13 08:46:26.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008951022s + Oct 13 08:46:28.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.008004466s + Oct 13 08:46:30.636: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.007613754s + Oct 13 08:46:32.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.008344799s + Oct 13 08:46:34.637: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.00821688s + Oct 13 08:46:34.641: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5m0.012322475s + STEP: removing the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c off the node node2 10/13/23 08:46:34.641 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-535589c3-3f23-4eb5-949c-984103123e0c 10/13/23 08:46:34.653 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:34.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-8803" for this suite. 10/13/23 08:46:34.663 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:34.671 +Oct 13 08:46:34.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename init-container 10/13/23 08:46:34.672 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:34.704 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:34.707 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 +STEP: creating the pod 10/13/23 08:46:34.709 +Oct 13 08:46:34.709: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:38.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-3171" for this suite. 
10/13/23 08:46:38.173 +------------------------------ +• [3.509 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:34.671 + Oct 13 08:46:34.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename init-container 10/13/23 08:46:34.672 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:34.704 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:34.707 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:255 + STEP: creating the pod 10/13/23 08:46:34.709 + Oct 13 08:46:34.709: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:38.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-3171" for this suite. 
10/13/23 08:46:38.173 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:38.18 +Oct 13 08:46:38.181: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:46:38.182 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:38.197 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:38.199 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +STEP: Starting the proxy 10/13/23 08:46:38.201 +Oct 13 08:46:38.202: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9238 proxy --unix-socket=/tmp/kubectl-proxy-unix2824907395/test' +STEP: retrieving proxy /api/ output 10/13/23 08:46:38.27 +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:38.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-9238" for this suite. 10/13/23 08:46:38.276 +------------------------------ +• [0.107 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Proxy server + test/e2e/kubectl/kubectl.go:1780 + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:38.18 + Oct 13 08:46:38.181: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:46:38.182 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:38.197 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:38.199 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 + STEP: Starting the proxy 10/13/23 08:46:38.201 + Oct 13 08:46:38.202: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9238 proxy --unix-socket=/tmp/kubectl-proxy-unix2824907395/test' + STEP: retrieving proxy /api/ output 10/13/23 08:46:38.27 + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:38.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | 
framework.go:193 + STEP: Destroying namespace "kubectl-9238" for this suite. 10/13/23 08:46:38.276 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:38.288 +Oct 13 08:46:38.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 08:46:38.289 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:38.303 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:38.306 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +STEP: Create set of pods 10/13/23 08:46:38.308 +Oct 13 08:46:38.315: INFO: created test-pod-1 +Oct 13 08:46:38.323: INFO: created test-pod-2 +Oct 13 08:46:38.329: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be running 10/13/23 08:46:38.329 +Oct 13 08:46:38.330: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-7397' to be running and ready +Oct 13 08:46:38.343: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Oct 13 08:46:38.343: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Oct 13 08:46:38.343: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Oct 13 08:46:38.343: INFO: 0 / 3 pods in namespace 'pods-7397' are running and ready (0 seconds elapsed) +Oct 13 08:46:38.343: INFO: expected 0 pod replicas in namespace 'pods-7397', 0 are Running and Ready. +Oct 13 08:46:38.343: INFO: POD NODE PHASE GRACE CONDITIONS +Oct 13 08:46:38.343: INFO: test-pod-1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] +Oct 13 08:46:38.343: INFO: test-pod-2 node2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] +Oct 13 08:46:38.343: INFO: test-pod-3 node3 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] +Oct 13 08:46:38.343: INFO: +Oct 13 08:46:40.355: INFO: 3 / 3 pods in namespace 'pods-7397' are running and ready (2 seconds elapsed) +Oct 13 08:46:40.355: INFO: expected 0 pod replicas in namespace 'pods-7397', 0 are Running and Ready. 
+STEP: waiting for all pods to be deleted 10/13/23 08:46:40.373 +Oct 13 08:46:40.377: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 13 08:46:41.382: INFO: Pod quantity 3 is different from expected quantity 0 +Oct 13 08:46:42.382: INFO: Pod quantity 1 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:43.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-7397" for this suite. 10/13/23 08:46:43.385 +------------------------------ +• [SLOW TEST] [5.103 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:38.288 + Oct 13 08:46:38.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 08:46:38.289 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:38.303 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:38.306 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 + STEP: Create set of pods 10/13/23 08:46:38.308 + Oct 13 08:46:38.315: INFO: created test-pod-1 + Oct 13 08:46:38.323: INFO: created test-pod-2 + Oct 13 08:46:38.329: INFO: created test-pod-3 + STEP: waiting for all 3 pods to be running 10/13/23 08:46:38.329 + Oct 13 08:46:38.330: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-7397' to be running and ready + Oct 13 08:46:38.343: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Oct 13 08:46:38.343: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Oct 13 08:46:38.343: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Oct 13 08:46:38.343: INFO: 0 / 3 pods in namespace 'pods-7397' are running and ready (0 seconds elapsed) + Oct 13 08:46:38.343: INFO: expected 0 pod replicas in namespace 'pods-7397', 0 are Running and Ready. 
+ Oct 13 08:46:38.343: INFO: POD NODE PHASE GRACE CONDITIONS + Oct 13 08:46:38.343: INFO: test-pod-1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] + Oct 13 08:46:38.343: INFO: test-pod-2 node2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] + Oct 13 08:46:38.343: INFO: test-pod-3 node3 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-13 08:46:38 +0000 UTC }] + Oct 13 08:46:38.343: INFO: + Oct 13 08:46:40.355: INFO: 3 / 3 pods in namespace 'pods-7397' are running and ready (2 seconds elapsed) + Oct 13 08:46:40.355: INFO: expected 0 pod replicas in namespace 'pods-7397', 0 are Running and Ready. + STEP: waiting for all pods to be deleted 10/13/23 08:46:40.373 + Oct 13 08:46:40.377: INFO: Pod quantity 3 is different from expected quantity 0 + Oct 13 08:46:41.382: INFO: Pod quantity 3 is different from expected quantity 0 + Oct 13 08:46:42.382: INFO: Pod quantity 1 is different from expected quantity 0 + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:43.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-7397" for this suite. 10/13/23 08:46:43.385 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:43.392 +Oct 13 08:46:43.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 08:46:43.393 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:43.408 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:43.41 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +STEP: creating secret secrets-6708/secret-test-98cbdbbb-2911-43a3-b5ca-00f7d1394184 10/13/23 08:46:43.413 +STEP: Creating a pod to test consume secrets 10/13/23 08:46:43.418 +Oct 13 08:46:43.426: INFO: Waiting up to 5m0s for pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8" in namespace "secrets-6708" to be "Succeeded or Failed" +Oct 13 08:46:43.430: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472828ms +Oct 13 08:46:45.433: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007303334s +Oct 13 08:46:47.435: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0086992s +STEP: Saw pod success 10/13/23 08:46:47.435 +Oct 13 08:46:47.435: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8" satisfied condition "Succeeded or Failed" +Oct 13 08:46:47.438: INFO: Trying to get logs from node node2 pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 container env-test: +STEP: delete the pod 10/13/23 08:46:47.454 +Oct 13 08:46:47.465: INFO: Waiting for pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 to disappear +Oct 13 08:46:47.468: INFO: Pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:47.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-6708" for this suite. 10/13/23 08:46:47.471 +------------------------------ +• [4.085 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:43.392 + Oct 13 08:46:43.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 08:46:43.393 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:43.408 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:43.41 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 + STEP: creating secret secrets-6708/secret-test-98cbdbbb-2911-43a3-b5ca-00f7d1394184 10/13/23 08:46:43.413 + STEP: Creating a pod to test consume secrets 10/13/23 08:46:43.418 + Oct 13 08:46:43.426: INFO: Waiting up to 5m0s for pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8" in namespace "secrets-6708" to be "Succeeded or Failed" + Oct 13 08:46:43.430: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472828ms + Oct 13 08:46:45.433: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007303334s + Oct 13 08:46:47.435: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0086992s + STEP: Saw pod success 10/13/23 08:46:47.435 + Oct 13 08:46:47.435: INFO: Pod "pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8" satisfied condition "Succeeded or Failed" + Oct 13 08:46:47.438: INFO: Trying to get logs from node node2 pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 container env-test: + STEP: delete the pod 10/13/23 08:46:47.454 + Oct 13 08:46:47.465: INFO: Waiting for pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 to disappear + Oct 13 08:46:47.468: INFO: Pod pod-configmaps-13723625-591e-47ba-bedd-59473df0f0b8 no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:47.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-6708" for this suite. 10/13/23 08:46:47.471 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:47.477 +Oct 13 08:46:47.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:46:47.478 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.494 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.497 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +STEP: creating an Endpoint 10/13/23 08:46:47.502 +STEP: waiting for available Endpoint 10/13/23 08:46:47.508 +STEP: listing all Endpoints 10/13/23 08:46:47.509 +STEP: updating the Endpoint 10/13/23 08:46:47.512 +STEP: fetching the Endpoint 10/13/23 08:46:47.517 +STEP: patching the Endpoint 10/13/23 08:46:47.52 +STEP: fetching the Endpoint 10/13/23 08:46:47.527 +STEP: deleting the Endpoint by Collection 10/13/23 08:46:47.529 +STEP: waiting for Endpoint deletion 10/13/23 08:46:47.536 +STEP: fetching the Endpoint 10/13/23 08:46:47.538 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:47.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-1991" for this suite. 
10/13/23 08:46:47.544 +------------------------------ +• [0.072 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:47.477 + Oct 13 08:46:47.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:46:47.478 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.494 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.497 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 + STEP: creating an Endpoint 10/13/23 08:46:47.502 + STEP: waiting for available Endpoint 10/13/23 08:46:47.508 + STEP: listing all Endpoints 10/13/23 08:46:47.509 + STEP: updating the Endpoint 10/13/23 08:46:47.512 + STEP: fetching the Endpoint 10/13/23 08:46:47.517 + STEP: patching the Endpoint 10/13/23 08:46:47.52 + STEP: fetching the Endpoint 10/13/23 08:46:47.527 + STEP: deleting the Endpoint by Collection 10/13/23 08:46:47.529 + STEP: waiting for Endpoint deletion 10/13/23 08:46:47.536 + STEP: fetching the Endpoint 10/13/23 08:46:47.538 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:47.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-1991" for this suite. 10/13/23 08:46:47.544 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:47.55 +Oct 13 08:46:47.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 08:46:47.551 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.565 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.568 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:884 +STEP: Creating a ResourceQuota 10/13/23 08:46:47.57 +STEP: Getting a ResourceQuota 10/13/23 08:46:47.575 +STEP: Updating a ResourceQuota 10/13/23 08:46:47.578 +STEP: Verifying a ResourceQuota was modified 10/13/23 08:46:47.585 +STEP: Deleting a ResourceQuota 10/13/23 08:46:47.588 +STEP: Verifying the deleted ResourceQuota 10/13/23 08:46:47.594 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:47.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-4351" for this suite. 10/13/23 08:46:47.601 +------------------------------ +• [0.057 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:47.55 + Oct 13 08:46:47.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 08:46:47.551 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.565 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.568 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 + STEP: Creating a ResourceQuota 10/13/23 08:46:47.57 + STEP: Getting a ResourceQuota 10/13/23 08:46:47.575 + STEP: Updating a ResourceQuota 10/13/23 08:46:47.578 + STEP: Verifying a ResourceQuota was modified 10/13/23 08:46:47.585 + STEP: Deleting a ResourceQuota 10/13/23 08:46:47.588 + STEP: Verifying the deleted ResourceQuota 10/13/23 08:46:47.594 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:47.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-4351" for this suite. 
10/13/23 08:46:47.601 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:47.608 +Oct 13 08:46:47.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:46:47.609 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.624 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.626 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +STEP: Creating secret with name s-test-opt-del-f309a829-d7b1-4585-8209-34c706e5e3a0 10/13/23 08:46:47.632 +STEP: Creating secret with name s-test-opt-upd-86fe9f77-5772-4c01-a788-16b64d6eb9dc 10/13/23 08:46:47.637 +STEP: Creating the pod 10/13/23 08:46:47.641 +Oct 13 08:46:47.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948" in namespace "projected-8003" to be "running and ready" +Oct 13 08:46:47.653: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07026ms +Oct 13 08:46:47.653: INFO: The phase of Pod pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:46:49.657: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948": Phase="Running", Reason="", readiness=true. Elapsed: 2.00765789s +Oct 13 08:46:49.657: INFO: The phase of Pod pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948 is Running (Ready = true) +Oct 13 08:46:49.657: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-f309a829-d7b1-4585-8209-34c706e5e3a0 10/13/23 08:46:49.674 +STEP: Updating secret s-test-opt-upd-86fe9f77-5772-4c01-a788-16b64d6eb9dc 10/13/23 08:46:49.68 +STEP: Creating secret with name s-test-opt-create-92d0c06b-4be0-441f-8b68-32c35d638beb 10/13/23 08:46:49.683 +STEP: waiting to observe update in volume 10/13/23 08:46:49.687 +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 08:46:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8003" for this suite. 
10/13/23 08:46:53.718 +------------------------------ +• [SLOW TEST] [6.115 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:47.608 + Oct 13 08:46:47.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:46:47.609 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:47.624 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:47.626 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 + STEP: Creating secret with name s-test-opt-del-f309a829-d7b1-4585-8209-34c706e5e3a0 10/13/23 08:46:47.632 + STEP: Creating secret with name s-test-opt-upd-86fe9f77-5772-4c01-a788-16b64d6eb9dc 10/13/23 08:46:47.637 + STEP: Creating the pod 10/13/23 08:46:47.641 + Oct 13 08:46:47.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948" in namespace "projected-8003" to be "running and ready" + Oct 13 08:46:47.653: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07026ms + Oct 13 08:46:47.653: INFO: The phase of Pod pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:46:49.657: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948": Phase="Running", Reason="", readiness=true. Elapsed: 2.00765789s + Oct 13 08:46:49.657: INFO: The phase of Pod pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948 is Running (Ready = true) + Oct 13 08:46:49.657: INFO: Pod "pod-projected-secrets-f16b7ee5-863c-4882-aaf9-219f438c2948" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-f309a829-d7b1-4585-8209-34c706e5e3a0 10/13/23 08:46:49.674 + STEP: Updating secret s-test-opt-upd-86fe9f77-5772-4c01-a788-16b64d6eb9dc 10/13/23 08:46:49.68 + STEP: Creating secret with name s-test-opt-create-92d0c06b-4be0-441f-8b68-32c35d638beb 10/13/23 08:46:49.683 + STEP: waiting to observe update in volume 10/13/23 08:46:49.687 + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 08:46:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8003" for this suite. 
10/13/23 08:46:53.718 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:46:53.724 +Oct 13 08:46:53.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:46:53.725 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:53.741 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:53.743 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-3948 10/13/23 08:46:53.746 +STEP: changing the ExternalName service to type=ClusterIP 10/13/23 08:46:53.752 +STEP: creating replication controller externalname-service in namespace services-3948 10/13/23 08:46:53.767 +I1013 08:46:53.773220 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3948, replica count: 2 +I1013 08:46:56.824292 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 08:46:56.824: INFO: Creating new exec pod +Oct 13 08:46:56.836: INFO: Waiting up to 5m0s for pod "execpodz75bw" in namespace "services-3948" to be "running" +Oct 13 08:46:56.839: INFO: Pod "execpodz75bw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91925ms +Oct 13 08:46:58.843: INFO: Pod "execpodz75bw": Phase="Running", Reason="", readiness=true. Elapsed: 2.007024488s +Oct 13 08:46:58.843: INFO: Pod "execpodz75bw" satisfied condition "running" +Oct 13 08:46:59.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3948 exec execpodz75bw -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Oct 13 08:46:59.964: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 13 08:46:59.964: INFO: stdout: "" +Oct 13 08:46:59.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3948 exec execpodz75bw -- /bin/sh -x -c nc -v -z -w 2 10.97.194.10 80' +Oct 13 08:47:00.084: INFO: stderr: "+ nc -v -z -w 2 10.97.194.10 80\nConnection to 10.97.194.10 80 port [tcp/http] succeeded!\n" +Oct 13 08:47:00.084: INFO: stdout: "" +Oct 13 08:47:00.084: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:00.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3948" for this suite. 
10/13/23 08:47:00.107 +------------------------------ +• [SLOW TEST] [6.391 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:46:53.724 + Oct 13 08:46:53.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:46:53.725 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:46:53.741 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:46:53.743 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-3948 10/13/23 08:46:53.746 + STEP: changing the ExternalName service to type=ClusterIP 10/13/23 08:46:53.752 + STEP: creating replication controller externalname-service in namespace services-3948 10/13/23 08:46:53.767 + I1013 08:46:53.773220 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3948, replica count: 2 + I1013 08:46:56.824292 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 08:46:56.824: INFO: Creating new exec pod + Oct 13 08:46:56.836: INFO: Waiting up to 5m0s for pod "execpodz75bw" in namespace "services-3948" to be "running" + Oct 13 08:46:56.839: INFO: Pod "execpodz75bw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91925ms + Oct 13 08:46:58.843: INFO: Pod "execpodz75bw": Phase="Running", Reason="", readiness=true. Elapsed: 2.007024488s + Oct 13 08:46:58.843: INFO: Pod "execpodz75bw" satisfied condition "running" + Oct 13 08:46:59.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3948 exec execpodz75bw -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Oct 13 08:46:59.964: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Oct 13 08:46:59.964: INFO: stdout: "" + Oct 13 08:46:59.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-3948 exec execpodz75bw -- /bin/sh -x -c nc -v -z -w 2 10.97.194.10 80' + Oct 13 08:47:00.084: INFO: stderr: "+ nc -v -z -w 2 10.97.194.10 80\nConnection to 10.97.194.10 80 port [tcp/http] succeeded!\n" + Oct 13 08:47:00.084: INFO: stdout: "" + Oct 13 08:47:00.084: INFO: Cleaning up the ExternalName to ClusterIP test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:00.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3948" for this suite. 
10/13/23 08:47:00.107 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:00.116 +Oct 13 08:47:00.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:47:00.117 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:00.133 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:00.137 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +STEP: creating a ConfigMap 10/13/23 08:47:00.139 +STEP: fetching the ConfigMap 10/13/23 08:47:00.145 +STEP: patching the ConfigMap 10/13/23 08:47:00.149 +STEP: listing all ConfigMaps in all namespaces with a label selector 10/13/23 08:47:00.155 +STEP: deleting the ConfigMap by collection with a label selector 10/13/23 08:47:00.159 +STEP: listing all ConfigMaps in test namespace 10/13/23 08:47:00.171 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-2225" for this suite. 
10/13/23 08:47:00.18 +------------------------------ +• [0.070 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:00.116 + Oct 13 08:47:00.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:47:00.117 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:00.133 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:00.137 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 + STEP: creating a ConfigMap 10/13/23 08:47:00.139 + STEP: fetching the ConfigMap 10/13/23 08:47:00.145 + STEP: patching the ConfigMap 10/13/23 08:47:00.149 + STEP: listing all ConfigMaps in all namespaces with a label selector 10/13/23 08:47:00.155 + STEP: deleting the ConfigMap by collection with a label selector 10/13/23 08:47:00.159 + STEP: listing all ConfigMaps in test namespace 10/13/23 08:47:00.171 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-2225" for this suite. 
10/13/23 08:47:00.18 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:00.187 +Oct 13 08:47:00.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 08:47:00.187 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:00.205 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:00.208 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +STEP: creating a Deployment 10/13/23 08:47:00.213 +Oct 13 08:47:00.213: INFO: Creating simple deployment test-deployment-rzhtb +Oct 13 08:47:00.223: INFO: new replicaset for deployment "test-deployment-rzhtb" is yet to be created +STEP: Getting /status 10/13/23 08:47:02.239 +Oct 13 08:47:02.242: INFO: Deployment test-deployment-rzhtb has Conditions: [{Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.}] +STEP: updating Deployment Status 10/13/23 08:47:02.242 +Oct 13 08:47:02.253: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 8, 47, 0, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-rzhtb-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated 10/13/23 08:47:02.253 +Oct 13 08:47:02.254: INFO: Observed &Deployment event: ADDED +Oct 13 08:47:02.254: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} +Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} +Oct 13 08:47:02.255: 
INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rzhtb-54bc444df" is progressing.} +Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} +Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} +Oct 13 08:47:02.255: INFO: Found Deployment test-deployment-rzhtb in namespace deployment-4239 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 13 08:47:02.255: INFO: Deployment test-deployment-rzhtb has an updated status +STEP: patching the Statefulset Status 10/13/23 08:47:02.255 +Oct 13 08:47:02.255: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 13 08:47:02.265: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched 10/13/23 08:47:02.265 +Oct 13 08:47:02.267: INFO: Observed &Deployment event: ADDED +Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in 
namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} +Oct 13 08:47:02.267: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} +Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 13 08:47:02.267: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rzhtb-54bc444df" is progressing.} +Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} +Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} +Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED +Oct 13 08:47:02.268: INFO: Found deployment test-deployment-rzhtb in 
namespace deployment-4239 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Oct 13 08:47:02.268: INFO: Deployment test-deployment-rzhtb has a patched status +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 08:47:02.271: INFO: Deployment "test-deployment-rzhtb": +&Deployment{ObjectMeta:{test-deployment-rzhtb deployment-4239 5d529421-2bd4-451f-957d-6309b74ecd20 21961 1 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-10-13 08:47:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e89798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 13 08:47:02.275: INFO: New ReplicaSet "test-deployment-rzhtb-54bc444df" of Deployment 
"test-deployment-rzhtb": +&ReplicaSet{ObjectMeta:{test-deployment-rzhtb-54bc444df deployment-4239 99d8f382-d255-43e4-ae26-3403a6da39aa 21958 1 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-rzhtb 5d529421-2bd4-451f-957d-6309b74ecd20 0xc005e89b30 0xc005e89b31}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5d529421-2bd4-451f-957d-6309b74ecd20\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e89bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 13 08:47:02.278: INFO: Pod "test-deployment-rzhtb-54bc444df-b4h9g" is available: +&Pod{ObjectMeta:{test-deployment-rzhtb-54bc444df-b4h9g test-deployment-rzhtb-54bc444df- deployment-4239 2e654ba9-1174-4855-a23d-9d42f3cfb2c2 21957 0 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [{apps/v1 ReplicaSet test-deployment-rzhtb-54bc444df 99d8f382-d255-43e4-ae26-3403a6da39aa 0xc0006b5330 0xc0006b5331}] [] [{kube-controller-manager Update v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99d8f382-d255-43e4-ae26-3403a6da39aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gvnwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvnwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:defa
ult,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.157,StartTime:2023-10-13 08:47:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:47:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://a3ce3a5ec70147ee54359029dc3cbd1b9735bca2d36555188f0cac6087028122,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:02.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-4239" for this suite. 
10/13/23 08:47:02.282 +------------------------------ +• [2.101 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:00.187 + Oct 13 08:47:00.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 08:47:00.187 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:00.205 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:00.208 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + STEP: creating a Deployment 10/13/23 08:47:00.213 + Oct 13 08:47:00.213: INFO: Creating simple deployment test-deployment-rzhtb + Oct 13 08:47:00.223: INFO: new replicaset for deployment "test-deployment-rzhtb" is yet to be created + STEP: Getting /status 10/13/23 08:47:02.239 + Oct 13 08:47:02.242: INFO: Deployment test-deployment-rzhtb has Conditions: [{Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.}] + STEP: updating Deployment Status 10/13/23 08:47:02.242 + Oct 13 08:47:02.253: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 8, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 8, 47, 0, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-rzhtb-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Deployment status to be updated 10/13/23 08:47:02.253 + Oct 13 08:47:02.254: INFO: Observed &Deployment event: ADDED + Oct 13 08:47:02.254: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} + Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} + Oct 13 
08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rzhtb-54bc444df" is progressing.} + Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} + Oct 13 08:47:02.255: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Oct 13 08:47:02.255: INFO: Observed Deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} + Oct 13 08:47:02.255: INFO: Found Deployment test-deployment-rzhtb in namespace deployment-4239 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Oct 13 08:47:02.255: INFO: Deployment test-deployment-rzhtb has an updated status + STEP: patching the Statefulset Status 10/13/23 08:47:02.255 + Oct 13 08:47:02.255: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Oct 13 08:47:02.265: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Deployment status to be patched 10/13/23 08:47:02.265 + Oct 13 08:47:02.267: INFO: Observed &Deployment event: ADDED + Oct 13 08:47:02.267: INFO: Observed deployment 
test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} + Oct 13 08:47:02.267: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rzhtb-54bc444df"} + Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Oct 13 08:47:02.267: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Oct 13 08:47:02.267: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:00 +0000 UTC 2023-10-13 08:47:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rzhtb-54bc444df" is progressing.} + Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} + Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-10-13 08:47:01 +0000 UTC 2023-10-13 08:47:00 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rzhtb-54bc444df" has successfully progressed.} + Oct 13 08:47:02.268: INFO: Observed deployment test-deployment-rzhtb in namespace deployment-4239 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Oct 13 08:47:02.268: INFO: Observed &Deployment event: MODIFIED + Oct 13 08:47:02.268: INFO: 
Found deployment test-deployment-rzhtb in namespace deployment-4239 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } + Oct 13 08:47:02.268: INFO: Deployment test-deployment-rzhtb has a patched status + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 08:47:02.271: INFO: Deployment "test-deployment-rzhtb": + &Deployment{ObjectMeta:{test-deployment-rzhtb deployment-4239 5d529421-2bd4-451f-957d-6309b74ecd20 21961 1 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-10-13 08:47:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e89798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Oct 13 08:47:02.275: INFO: New ReplicaSet 
"test-deployment-rzhtb-54bc444df" of Deployment "test-deployment-rzhtb": + &ReplicaSet{ObjectMeta:{test-deployment-rzhtb-54bc444df deployment-4239 99d8f382-d255-43e4-ae26-3403a6da39aa 21958 1 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-rzhtb 5d529421-2bd4-451f-957d-6309b74ecd20 0xc005e89b30 0xc005e89b31}] [] [{kube-controller-manager Update apps/v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5d529421-2bd4-451f-957d-6309b74ecd20\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e89bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Oct 13 08:47:02.278: INFO: Pod "test-deployment-rzhtb-54bc444df-b4h9g" is available: + &Pod{ObjectMeta:{test-deployment-rzhtb-54bc444df-b4h9g test-deployment-rzhtb-54bc444df- deployment-4239 2e654ba9-1174-4855-a23d-9d42f3cfb2c2 21957 0 2023-10-13 08:47:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [{apps/v1 ReplicaSet test-deployment-rzhtb-54bc444df 99d8f382-d255-43e4-ae26-3403a6da39aa 0xc0006b5330 0xc0006b5331}] [] [{kube-controller-manager Update v1 2023-10-13 08:47:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"99d8f382-d255-43e4-ae26-3403a6da39aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 08:47:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gvnwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvnwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:defa
ult,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 08:47:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.157,StartTime:2023-10-13 08:47:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 08:47:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://a3ce3a5ec70147ee54359029dc3cbd1b9735bca2d36555188f0cac6087028122,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:02.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-4239" for this suite. 
10/13/23 08:47:02.282 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:02.289 +Oct 13 08:47:02.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:47:02.291 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:02.306 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:02.309 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +Oct 13 08:47:02.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-311" for this suite. 
10/13/23 08:47:08.492 +------------------------------ +• [SLOW TEST] [6.208 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:02.289 + Oct 13 08:47:02.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 08:47:02.291 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:02.306 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:02.309 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + Oct 13 08:47:02.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-311" for this suite. 10/13/23 08:47:08.492 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:08.497 +Oct 13 08:47:08.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 08:47:08.498 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:08.511 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:08.513 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:803 +STEP: Creating a ResourceQuota with best effort scope 10/13/23 08:47:08.515 +STEP: Ensuring ResourceQuota status is calculated 10/13/23 08:47:08.519 +STEP: Creating a ResourceQuota with not best effort scope 10/13/23 08:47:10.523 +STEP: Ensuring ResourceQuota status is calculated 10/13/23 08:47:10.528 +STEP: Creating a best-effort pod 10/13/23 08:47:12.532 +STEP: Ensuring resource quota with best effort scope captures the pod usage 10/13/23 08:47:12.543 +STEP: Ensuring resource quota with not best effort ignored the pod usage 10/13/23 08:47:14.547 +STEP: Deleting the pod 10/13/23 08:47:16.553 +STEP: Ensuring resource quota status released the pod usage 10/13/23 08:47:16.565 +STEP: Creating a not best-effort pod 10/13/23 08:47:18.57 +STEP: Ensuring resource quota with not best effort scope captures the pod usage 10/13/23 08:47:18.582 +STEP: Ensuring resource quota with best effort scope ignored the pod usage 10/13/23 08:47:20.588 +STEP: Deleting the pod 10/13/23 08:47:22.593 +STEP: Ensuring resource quota status released the pod usage 10/13/23 08:47:22.608 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-3106" for this suite. 10/13/23 08:47:24.617 +------------------------------ +• [SLOW TEST] [16.127 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:08.497 + Oct 13 08:47:08.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 08:47:08.498 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:08.511 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:08.513 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:803 + STEP: Creating a ResourceQuota with best effort scope 10/13/23 08:47:08.515 + STEP: Ensuring ResourceQuota status is calculated 10/13/23 08:47:08.519 + STEP: Creating a ResourceQuota with not best effort scope 10/13/23 08:47:10.523 + STEP: Ensuring ResourceQuota status is calculated 10/13/23 08:47:10.528 + STEP: Creating a best-effort pod 10/13/23 08:47:12.532 + STEP: Ensuring resource quota with best effort scope captures the pod usage 10/13/23 08:47:12.543 + STEP: Ensuring resource quota with not best effort ignored the pod usage 10/13/23 08:47:14.547 + STEP: Deleting the pod 10/13/23 08:47:16.553 + STEP: Ensuring resource quota status released the pod usage 10/13/23 08:47:16.565 + STEP: Creating a not best-effort pod 10/13/23 08:47:18.57 + STEP: Ensuring resource quota with not best effort scope captures the pod usage 10/13/23 08:47:18.582 + STEP: Ensuring resource quota with best effort scope ignored the pod usage 10/13/23 08:47:20.588 + STEP: Deleting the pod 10/13/23 08:47:22.593 + STEP: Ensuring resource quota status released the pod usage 10/13/23 08:47:22.608 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-3106" for this suite. 10/13/23 08:47:24.617 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:24.625 +Oct 13 08:47:24.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:47:24.626 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:24.641 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:24.643 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +STEP: creating Agnhost RC 10/13/23 08:47:24.646 +Oct 13 08:47:24.646: INFO: namespace kubectl-9025 +Oct 13 08:47:24.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 create -f -' +Oct 13 08:47:25.469: INFO: stderr: "" +Oct 13 08:47:25.469: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 10/13/23 08:47:25.469 +Oct 13 08:47:26.475: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:47:26.475: INFO: Found 1 / 1 +Oct 13 08:47:26.475: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 13 08:47:26.480: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:47:26.480: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 13 08:47:26.480: INFO: wait on agnhost-primary startup in kubectl-9025 +Oct 13 08:47:26.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 logs agnhost-primary-5h5wh agnhost-primary' +Oct 13 08:47:26.576: INFO: stderr: "" +Oct 13 08:47:26.576: INFO: stdout: "Paused\n" +STEP: exposing RC 10/13/23 08:47:26.576 +Oct 13 08:47:26.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Oct 13 08:47:26.683: INFO: stderr: "" +Oct 13 08:47:26.683: INFO: stdout: "service/rm2 exposed\n" +Oct 13 08:47:26.688: INFO: Service rm2 in namespace kubectl-9025 found. +STEP: exposing service 10/13/23 08:47:28.695 +Oct 13 08:47:28.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Oct 13 08:47:28.807: INFO: stderr: "" +Oct 13 08:47:28.807: INFO: stdout: "service/rm3 exposed\n" +Oct 13 08:47:28.811: INFO: Service rm3 in namespace kubectl-9025 found. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:30.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-9025" for this suite. 10/13/23 08:47:30.824 +------------------------------ +• [SLOW TEST] [6.206 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl expose + test/e2e/kubectl/kubectl.go:1409 + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:24.625 + Oct 13 08:47:24.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:47:24.626 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:24.641 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:24.643 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 + STEP: creating Agnhost RC 10/13/23 08:47:24.646 + Oct 13 08:47:24.646: INFO: namespace kubectl-9025 + Oct 13 08:47:24.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 create -f -' + Oct 13 08:47:25.469: INFO: stderr: "" + Oct 13 08:47:25.469: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 10/13/23 08:47:25.469 + Oct 13 08:47:26.475: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:47:26.475: INFO: Found 1 / 1 + Oct 13 08:47:26.475: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Oct 13 08:47:26.480: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:47:26.480: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+ Oct 13 08:47:26.480: INFO: wait on agnhost-primary startup in kubectl-9025 + Oct 13 08:47:26.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 logs agnhost-primary-5h5wh agnhost-primary' + Oct 13 08:47:26.576: INFO: stderr: "" + Oct 13 08:47:26.576: INFO: stdout: "Paused\n" + STEP: exposing RC 10/13/23 08:47:26.576 + Oct 13 08:47:26.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' + Oct 13 08:47:26.683: INFO: stderr: "" + Oct 13 08:47:26.683: INFO: stdout: "service/rm2 exposed\n" + Oct 13 08:47:26.688: INFO: Service rm2 in namespace kubectl-9025 found. + STEP: exposing service 10/13/23 08:47:28.695 + Oct 13 08:47:28.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-9025 expose service rm2 --name=rm3 --port=2345 --target-port=6379' + Oct 13 08:47:28.807: INFO: stderr: "" + Oct 13 08:47:28.807: INFO: stdout: "service/rm3 exposed\n" + Oct 13 08:47:28.811: INFO: Service rm3 in namespace kubectl-9025 found. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:30.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-9025" for this suite. 10/13/23 08:47:30.824 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:30.831 +Oct 13 08:47:30.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:47:30.833 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:30.848 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:30.851 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +STEP: Creating configMap with name configmap-test-volume-4f24466c-7328-46d0-b594-46006b1d0592 10/13/23 08:47:30.853 +STEP: Creating a pod to test consume configMaps 10/13/23 08:47:30.857 +Oct 13 08:47:30.865: INFO: Waiting up to 5m0s for pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c" in namespace "configmap-2998" to be "Succeeded or Failed" +Oct 13 08:47:30.868: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341908ms +Oct 13 08:47:32.874: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009138863s +Oct 13 08:47:34.873: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008221966s +STEP: Saw pod success 10/13/23 08:47:34.873 +Oct 13 08:47:34.873: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c" satisfied condition "Succeeded or Failed" +Oct 13 08:47:34.877: INFO: Trying to get logs from node node2 pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c container agnhost-container: +STEP: delete the pod 10/13/23 08:47:34.883 +Oct 13 08:47:34.892: INFO: Waiting for pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c to disappear +Oct 13 08:47:34.895: INFO: Pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:47:34.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-2998" for this suite. 10/13/23 08:47:34.899 +------------------------------ +• [4.073 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:30.831 + Oct 13 08:47:30.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:47:30.833 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:30.848 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:30.851 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 + STEP: Creating configMap with name configmap-test-volume-4f24466c-7328-46d0-b594-46006b1d0592 10/13/23 08:47:30.853 + STEP: Creating a pod to test consume configMaps 10/13/23 08:47:30.857 + Oct 13 08:47:30.865: INFO: Waiting up to 5m0s for pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c" in namespace "configmap-2998" to be "Succeeded or Failed" + Oct 13 08:47:30.868: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.341908ms + Oct 13 08:47:32.874: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009138863s + Oct 13 08:47:34.873: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008221966s + STEP: Saw pod success 10/13/23 08:47:34.873 + Oct 13 08:47:34.873: INFO: Pod "pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c" satisfied condition "Succeeded or Failed" + Oct 13 08:47:34.877: INFO: Trying to get logs from node node2 pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c container agnhost-container: + STEP: delete the pod 10/13/23 08:47:34.883 + Oct 13 08:47:34.892: INFO: Waiting for pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c to disappear + Oct 13 08:47:34.895: INFO: Pod pod-configmaps-4322feb3-1db4-4c91-8c59-7636f462512c no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:47:34.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-2998" for this suite. 10/13/23 08:47:34.899 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:47:34.904 +Oct 13 08:47:34.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 08:47:34.906 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:34.92 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:34.922 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +STEP: Creating pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 in namespace container-probe-9979 10/13/23 08:47:34.924 +Oct 13 08:47:34.932: INFO: Waiting up to 5m0s for pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8" in namespace "container-probe-9979" to be "not pending" +Oct 13 08:47:34.936: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366094ms +Oct 13 08:47:36.941: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009061804s +Oct 13 08:47:36.941: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8" satisfied condition "not pending" +Oct 13 08:47:36.941: INFO: Started pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 in namespace container-probe-9979 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:47:36.941 +Oct 13 08:47:36.945: INFO: Initial restart count of pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 is 0 +Oct 13 08:48:27.100: INFO: Restart count of pod container-probe-9979/busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 is now 1 (50.154644683s elapsed) +STEP: deleting the pod 10/13/23 08:48:27.1 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 08:48:27.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-9979" for this suite. 10/13/23 08:48:27.118 +------------------------------ +• [SLOW TEST] [52.220 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:47:34.904 + Oct 13 08:47:34.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 08:47:34.906 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:47:34.92 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:47:34.922 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 + STEP: Creating pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 in namespace container-probe-9979 10/13/23 08:47:34.924 + Oct 13 08:47:34.932: INFO: Waiting up to 5m0s for pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8" in namespace "container-probe-9979" to be "not pending" + Oct 13 08:47:34.936: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366094ms + Oct 13 08:47:36.941: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009061804s + Oct 13 08:47:36.941: INFO: Pod "busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8" satisfied condition "not pending" + Oct 13 08:47:36.941: INFO: Started pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 in namespace container-probe-9979 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:47:36.941 + Oct 13 08:47:36.945: INFO: Initial restart count of pod busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 is 0 + Oct 13 08:48:27.100: INFO: Restart count of pod container-probe-9979/busybox-9aa0b654-4992-40c2-a5d8-03bf5c24a4e8 is now 1 (50.154644683s elapsed) + STEP: deleting the pod 10/13/23 08:48:27.1 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 08:48:27.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-9979" for this suite. 10/13/23 08:48:27.118 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:48:27.125 +Oct 13 08:48:27.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:48:27.126 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:27.141 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:27.144 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:48:27.158 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:48:27.815 +STEP: Deploying the webhook pod 10/13/23 08:48:27.828 +STEP: Wait for the deployment to be ready 10/13/23 08:48:27.845 +Oct 13 08:48:27.852: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:48:29.867 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:48:29.878 +Oct 13 08:48:30.879: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 +STEP: Registering the webhook via the AdmissionRegistration API 10/13/23 08:48:30.883 +STEP: create a pod 10/13/23 08:48:30.9 +Oct 13 08:48:30.911: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-1366" to be "running" +Oct 13 08:48:30.914: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.314923ms +Oct 13 08:48:32.919: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008703123s +Oct 13 08:48:32.919: INFO: Pod "to-be-attached-pod" satisfied condition "running" +STEP: 'kubectl attach' the pod, should be denied by the webhook 10/13/23 08:48:32.919 +Oct 13 08:48:32.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=webhook-1366 attach --namespace=webhook-1366 to-be-attached-pod -i -c=container1' +Oct 13 08:48:33.008: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:48:33.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-1366" for this suite. 10/13/23 08:48:33.083 +STEP: Destroying namespace "webhook-1366-markers" for this suite. 10/13/23 08:48:33.09 +------------------------------ +• [SLOW TEST] [5.971 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:48:27.125 + Oct 13 08:48:27.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:48:27.126 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:27.141 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:27.144 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:48:27.158 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:48:27.815 + STEP: Deploying the webhook pod 10/13/23 08:48:27.828 + STEP: Wait for the deployment to be ready 10/13/23 08:48:27.845 + Oct 13 08:48:27.852: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:48:29.867 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:48:29.878 + Oct 13 08:48:30.879: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 + STEP: Registering the webhook via the AdmissionRegistration API 10/13/23 08:48:30.883 + STEP: create a pod 10/13/23 08:48:30.9 + Oct 13 08:48:30.911: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-1366" to be "running" + Oct 13 08:48:30.914: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.314923ms + Oct 13 08:48:32.919: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008703123s + Oct 13 08:48:32.919: INFO: Pod "to-be-attached-pod" satisfied condition "running" + STEP: 'kubectl attach' the pod, should be denied by the webhook 10/13/23 08:48:32.919 + Oct 13 08:48:32.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=webhook-1366 attach --namespace=webhook-1366 to-be-attached-pod -i -c=container1' + Oct 13 08:48:33.008: INFO: rc: 1 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:48:33.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-1366" for this suite. 10/13/23 08:48:33.083 + STEP: Destroying namespace "webhook-1366-markers" for this suite. 10/13/23 08:48:33.09 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:48:33.097 +Oct 13 08:48:33.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 08:48:33.098 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:33.118 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:33.122 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +STEP: Creating the pod 10/13/23 08:48:33.126 +Oct 13 08:48:33.137: INFO: Waiting up to 5m0s for pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" in namespace "downward-api-9651" to be "running and ready" +Oct 13 08:48:33.142: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659266ms +Oct 13 08:48:33.142: INFO: The phase of Pod annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:48:35.148: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.01004432s +Oct 13 08:48:35.148: INFO: The phase of Pod annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450 is Running (Ready = true) +Oct 13 08:48:35.148: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" satisfied condition "running and ready" +Oct 13 08:48:35.672: INFO: Successfully updated pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 08:48:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9651" for this suite. 10/13/23 08:48:39.706 +------------------------------ +• [SLOW TEST] [6.615 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:48:33.097 + Oct 13 08:48:33.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 08:48:33.098 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:33.118 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:33.122 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 + STEP: Creating the pod 10/13/23 08:48:33.126 + Oct 13 08:48:33.137: INFO: Waiting up to 5m0s for pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" in namespace "downward-api-9651" to be "running and ready" + Oct 13 08:48:33.142: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659266ms + Oct 13 08:48:33.142: INFO: The phase of Pod annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:48:35.148: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.01004432s + Oct 13 08:48:35.148: INFO: The phase of Pod annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450 is Running (Ready = true) + Oct 13 08:48:35.148: INFO: Pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" satisfied condition "running and ready" + Oct 13 08:48:35.672: INFO: Successfully updated pod "annotationupdate989df4dd-3842-45dd-80e5-31fa6585b450" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 08:48:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9651" for this suite. 10/13/23 08:48:39.706 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:48:39.712 +Oct 13 08:48:39.712: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 08:48:39.713 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:39.729 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:39.731 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:48:39.749 +STEP: Check that daemon pods launch on every node of the cluster. 
10/13/23 08:48:39.753 +Oct 13 08:48:39.759: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:48:39.759: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:48:40.767: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:48:40.767: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: listing all DaemonSets 10/13/23 08:48:40.769 +STEP: DeleteCollection of the DaemonSets 10/13/23 08:48:40.773 +STEP: Verify that DaemonSets have been deleted 10/13/23 08:48:40.779 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +Oct 13 08:48:40.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22534"},"items":null} + +Oct 13 08:48:40.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22534"},"items":[{"metadata":{"name":"daemon-set-672zg","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"82673d6a-c3b6-4155-b9b7-c6676e7bad65","resourceVersion":"22531","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-55tps","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376
,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-55tps","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node3","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node3"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.112","podIP":"10.244.2.112","podIPs":[{"ip":"10.244.2.112"}],"startTime":"2023-10-13T08:48:39Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:40Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://7ca098f968d6f5b3c3b85778282ee0ca0c291f7adc1e195b94925f35409517ec","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lltvg","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"e50b5d0a-7a39-4e17-a145-a1d39c91f09f","resourceVersion":"22526","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessageP
ath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-xdcmq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-xdcmq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:41Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:41Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.111","podIP":"10.244.1.164","podIPs":[{"ip":"10.244.1.164"}],"startTime":"2023-10-13T08:48:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:41Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://654dcfaf2a3f954b7d9
b10875e931f5ee765d1b5b55bcda01284294debce7296","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-q9xxm","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"67bd4f24-0160-450e-b87d-82ca3245e906","resourceVersion":"22528","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-67dgn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-67dgn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node1"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator
":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.110","podIP":"10.244.0.31","podIPs":[{"ip":"10.244.0.31"}],"startTime":"2023-10-13T08:48:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:40Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://cd912454992eb40a4135ce94483e4190b132e802bce3316ace6e5081190cdb56","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:48:40.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-6163" for this suite. 10/13/23 08:48:40.809 +------------------------------ +• [1.102 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:48:39.712 + Oct 13 08:48:39.712: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 08:48:39.713 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:39.729 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:39.731 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 + STEP: Creating simple DaemonSet "daemon-set" 10/13/23 08:48:39.749 + STEP: Check that daemon pods launch on every node of the cluster. 
10/13/23 08:48:39.753 + Oct 13 08:48:39.759: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:48:39.759: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:48:40.767: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:48:40.767: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: listing all DeamonSets 10/13/23 08:48:40.769 + STEP: DeleteCollection of the DaemonSets 10/13/23 08:48:40.773 + STEP: Verify that ReplicaSets have been deleted 10/13/23 08:48:40.779 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + Oct 13 08:48:40.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"22534"},"items":null} + + Oct 13 08:48:40.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"22534"},"items":[{"metadata":{"name":"daemon-set-672zg","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"82673d6a-c3b6-4155-b9b7-c6676e7bad65","resourceVersion":"22531","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-55tps","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"container
Port":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-55tps","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node3","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node3"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:39Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.112","podIP":"10.244.2.112","podIPs":[{"ip":"10.244.2.112"}],"startTime":"2023-10-13T08:48:39Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:40Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://7ca098f968d6f5b3c3b85778282ee0ca0c291f7adc1e195b94925f35409517ec","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lltvg","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"e50b5d0a-7a39-4e17-a145-a1d39c91f09f","resourceVersion":"22526","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminati
onMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-xdcmq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-xdcmq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:41Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:41Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.111","podIP":"10.244.1.164","podIPs":[{"ip":"10.244.1.164"}],"startTime":"2023-10-13T08:48:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:41Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://654dcfaf2
a3f954b7d9b10875e931f5ee765d1b5b55bcda01284294debce7296","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-q9xxm","generateName":"daemon-set-","namespace":"daemonsets-6163","uid":"67bd4f24-0160-450e-b87d-82ca3245e906","resourceVersion":"22528","creationTimestamp":"2023-10-13T08:48:40Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"96a1eab7-2225-4851-9458-839ec540ef02","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96a1eab7-2225-4851-9458-839ec540ef02\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-13T08:48:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-67dgn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-67dgn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node1"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable"
,"operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-10-13T08:48:40Z"}],"hostIP":"10.253.8.110","podIP":"10.244.0.31","podIPs":[{"ip":"10.244.0.31"}],"startTime":"2023-10-13T08:48:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-10-13T08:48:40Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089","containerID":"containerd://cd912454992eb40a4135ce94483e4190b132e802bce3316ace6e5081190cdb56","started":true}],"qosClass":"BestEffort"}}]} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:48:40.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-6163" for this suite. 10/13/23 08:48:40.809 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:48:40.815 +Oct 13 08:48:40.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:48:40.817 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:40.831 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:40.833 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 10/13/23 08:48:40.838 +Oct 13 08:48:40.846: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2829" to be "running and ready" +Oct 13 08:48:40.850: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.183112ms +Oct 13 08:48:40.850: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:48:42.856: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.009833719s +Oct 13 08:48:42.856: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Oct 13 08:48:42.856: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +STEP: create the pod with lifecycle hook 10/13/23 08:48:42.86 +Oct 13 08:48:42.867: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-2829" to be "running and ready" +Oct 13 08:48:42.870: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.81985ms +Oct 13 08:48:42.870: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:48:44.875: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.008243648s +Oct 13 08:48:44.875: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) +Oct 13 08:48:44.875: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 10/13/23 08:48:44.878 +Oct 13 08:48:44.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 13 08:48:44.887: INFO: Pod pod-with-prestop-http-hook still exists +Oct 13 08:48:46.888: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Oct 13 08:48:46.891: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook 10/13/23 08:48:46.891 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Oct 13 08:48:46.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-2829" for this suite. 10/13/23 08:48:46.9 +------------------------------ +• [SLOW TEST] [6.090 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:48:40.815 + Oct 13 08:48:40.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-lifecycle-hook 10/13/23 08:48:40.817 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:40.831 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:40.833 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 
10/13/23 08:48:40.838 + Oct 13 08:48:40.846: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2829" to be "running and ready" + Oct 13 08:48:40.850: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.183112ms + Oct 13 08:48:40.850: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:48:42.856: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.009833719s + Oct 13 08:48:42.856: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Oct 13 08:48:42.856: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 + STEP: create the pod with lifecycle hook 10/13/23 08:48:42.86 + Oct 13 08:48:42.867: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-2829" to be "running and ready" + Oct 13 08:48:42.870: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.81985ms + Oct 13 08:48:42.870: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:48:44.875: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.008243648s + Oct 13 08:48:44.875: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) + Oct 13 08:48:44.875: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 10/13/23 08:48:44.878 + Oct 13 08:48:44.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Oct 13 08:48:44.887: INFO: Pod pod-with-prestop-http-hook still exists + Oct 13 08:48:46.888: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Oct 13 08:48:46.891: INFO: Pod pod-with-prestop-http-hook no longer exists + STEP: check prestop hook 10/13/23 08:48:46.891 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Oct 13 08:48:46.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-2829" for this suite. 
10/13/23 08:48:46.9 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:48:46.906 +Oct 13 08:48:46.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:48:46.907 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:46.921 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:46.924 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Oct 13 08:48:46.937: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 08:49:46.969: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 +STEP: Create pods that use 4/5 of node resources. 10/13/23 08:49:46.973 +Oct 13 08:49:47.003: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 13 08:49:47.010: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 13 08:49:47.026: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 13 08:49:47.033: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Oct 13 08:49:47.054: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Oct 13 08:49:47.062: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 10/13/23 08:49:47.062 +Oct 13 08:49:47.062: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:47.068: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 5.717879ms +Oct 13 08:49:49.074: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.011715755s +Oct 13 08:49:49.074: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Oct 13 08:49:49.074: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:49.078: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.008621ms +Oct 13 08:49:49.078: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 08:49:49.078: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:49.082: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.011743ms +Oct 13 08:49:49.082: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 08:49:49.082: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:49.086: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.6076ms +Oct 13 08:49:49.086: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 08:49:49.086: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:49.089: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.2232ms +Oct 13 08:49:49.089: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 08:49:49.089: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" +Oct 13 08:49:49.092: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.030708ms +Oct 13 08:49:49.092: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a critical pod that use same resources as that of a lower priority pod 10/13/23 08:49:49.092 +Oct 13 08:49:49.104: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" +Oct 13 08:49:49.107: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83113ms +Oct 13 08:49:51.113: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008525072s +Oct 13 08:49:53.114: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009457771s +Oct 13 08:49:55.114: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.00994065s +Oct 13 08:49:55.114: INFO: Pod "critical-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:49:55.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-6932" for this suite. 10/13/23 08:49:55.211 +------------------------------ +• [SLOW TEST] [68.311 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:48:46.906 + Oct 13 08:48:46.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption 10/13/23 08:48:46.907 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:48:46.921 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:48:46.924 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Oct 13 08:48:46.937: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 08:49:46.969: INFO: Waiting for terminating namespaces to be deleted... 
+ [It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:224 + STEP: Create pods that use 4/5 of node resources. 10/13/23 08:49:46.973 + Oct 13 08:49:47.003: INFO: Created pod: pod0-0-sched-preemption-low-priority + Oct 13 08:49:47.010: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Oct 13 08:49:47.026: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Oct 13 08:49:47.033: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Oct 13 08:49:47.054: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Oct 13 08:49:47.062: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 10/13/23 08:49:47.062 + Oct 13 08:49:47.062: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:47.068: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 5.717879ms + Oct 13 08:49:49.074: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.011715755s + Oct 13 08:49:49.074: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Oct 13 08:49:49.074: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:49.078: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.008621ms + Oct 13 08:49:49.078: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 08:49:49.078: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:49.082: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.011743ms + Oct 13 08:49:49.082: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 08:49:49.082: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:49.086: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.6076ms + Oct 13 08:49:49.086: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 08:49:49.086: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:49.089: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.2232ms + Oct 13 08:49:49.089: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 08:49:49.089: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6932" to be "running" + Oct 13 08:49:49.092: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.030708ms + Oct 13 08:49:49.092: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a critical pod that use same resources as that of a lower priority pod 10/13/23 08:49:49.092 + Oct 13 08:49:49.104: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" + Oct 13 08:49:49.107: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83113ms + Oct 13 08:49:51.113: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008525072s + Oct 13 08:49:53.114: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009457771s + Oct 13 08:49:55.114: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.00994065s + Oct 13 08:49:55.114: INFO: Pod "critical-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:49:55.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-6932" for this suite. 10/13/23 08:49:55.211 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:49:55.218 +Oct 13 08:49:55.218: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:49:55.219 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:49:55.234 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:49:55.237 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +STEP: fetching services 10/13/23 08:49:55.239 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:49:55.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-4224" for this suite. 
10/13/23 08:49:55.245 +------------------------------ +• [0.033 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:49:55.218 + Oct 13 08:49:55.218: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:49:55.219 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:49:55.234 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:49:55.237 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 + STEP: fetching services 10/13/23 08:49:55.239 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:49:55.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-4224" for this suite. 10/13/23 08:49:55.245 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:49:55.251 +Oct 13 08:49:55.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:49:55.251 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:49:55.268 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:49:55.27 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 +[It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +STEP: creating a replication controller 10/13/23 08:49:55.272 +Oct 13 08:49:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 create -f -' +Oct 13 08:49:55.514: INFO: stderr: "" +Oct 13 08:49:55.514: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
10/13/23 08:49:55.514 +Oct 13 08:49:55.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:49:55.614: INFO: stderr: "" +Oct 13 08:49:55.614: INFO: stdout: "update-demo-nautilus-s4nhd update-demo-nautilus-vb8vq " +Oct 13 08:49:55.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:49:55.709: INFO: stderr: "" +Oct 13 08:49:55.709: INFO: stdout: "" +Oct 13 08:49:55.709: INFO: update-demo-nautilus-s4nhd is created but not running +Oct 13 08:50:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:50:00.790: INFO: stderr: "" +Oct 13 08:50:00.790: INFO: stdout: "update-demo-nautilus-s4nhd update-demo-nautilus-vb8vq " +Oct 13 08:50:00.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:00.863: INFO: stderr: "" +Oct 13 08:50:00.863: INFO: stdout: "true" +Oct 13 08:50:00.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:50:00.938: INFO: stderr: "" +Oct 13 08:50:00.938: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:50:00.938: INFO: validating pod update-demo-nautilus-s4nhd +Oct 13 08:50:00.943: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:50:00.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 13 08:50:00.943: INFO: update-demo-nautilus-s4nhd is verified up and running +Oct 13 08:50:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-vb8vq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:01.015: INFO: stderr: "" +Oct 13 08:50:01.015: INFO: stdout: "true" +Oct 13 08:50:01.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-vb8vq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:50:01.081: INFO: stderr: "" +Oct 13 08:50:01.081: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:50:01.081: INFO: validating pod update-demo-nautilus-vb8vq +Oct 13 08:50:01.088: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:50:01.089: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Oct 13 08:50:01.089: INFO: update-demo-nautilus-vb8vq is verified up and running +STEP: scaling down the replication controller 10/13/23 08:50:01.089 +Oct 13 08:50:01.090: INFO: scanned /root for discovery docs: +Oct 13 08:50:01.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Oct 13 08:50:02.171: INFO: stderr: "" +Oct 13 08:50:02.171: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 10/13/23 08:50:02.171 +Oct 13 08:50:02.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:50:02.228: INFO: stderr: "" +Oct 13 08:50:02.228: INFO: stdout: "update-demo-nautilus-s4nhd " +Oct 13 08:50:02.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:02.290: INFO: stderr: "" +Oct 13 08:50:02.290: INFO: stdout: "true" +Oct 13 08:50:02.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:50:02.350: INFO: stderr: "" +Oct 13 08:50:02.350: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:50:02.350: INFO: validating pod update-demo-nautilus-s4nhd +Oct 13 08:50:02.353: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:50:02.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 13 08:50:02.354: INFO: update-demo-nautilus-s4nhd is verified up and running +STEP: scaling up the replication controller 10/13/23 08:50:02.354 +Oct 13 08:50:02.354: INFO: scanned /root for discovery docs: +Oct 13 08:50:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Oct 13 08:50:03.422: INFO: stderr: "" +Oct 13 08:50:03.422: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 10/13/23 08:50:03.422 +Oct 13 08:50:03.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:50:03.485: INFO: stderr: "" +Oct 13 08:50:03.485: INFO: stdout: "update-demo-nautilus-4tct9 update-demo-nautilus-s4nhd " +Oct 13 08:50:03.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:03.544: INFO: stderr: "" +Oct 13 08:50:03.544: INFO: stdout: "" +Oct 13 08:50:03.544: INFO: update-demo-nautilus-4tct9 is created but not running +Oct 13 08:50:08.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Oct 13 08:50:08.647: INFO: stderr: "" +Oct 13 08:50:08.647: INFO: stdout: "update-demo-nautilus-4tct9 update-demo-nautilus-s4nhd " +Oct 13 08:50:08.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:08.735: INFO: stderr: "" +Oct 13 08:50:08.735: INFO: stdout: "true" +Oct 13 08:50:08.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:50:08.812: INFO: stderr: "" +Oct 13 08:50:08.812: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:50:08.812: INFO: validating pod update-demo-nautilus-4tct9 +Oct 13 08:50:08.818: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:50:08.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 13 08:50:08.818: INFO: update-demo-nautilus-4tct9 is verified up and running +Oct 13 08:50:08.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Oct 13 08:50:08.903: INFO: stderr: "" +Oct 13 08:50:08.903: INFO: stdout: "true" +Oct 13 08:50:08.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Oct 13 08:50:08.994: INFO: stderr: "" +Oct 13 08:50:08.994: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Oct 13 08:50:08.994: INFO: validating pod update-demo-nautilus-s4nhd +Oct 13 08:50:09.000: INFO: got data: { + "image": "nautilus.jpg" +} + +Oct 13 08:50:09.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Oct 13 08:50:09.000: INFO: update-demo-nautilus-s4nhd is verified up and running +STEP: using delete to clean up resources 10/13/23 08:50:09 +Oct 13 08:50:09.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 delete --grace-period=0 --force -f -' +Oct 13 08:50:09.085: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Oct 13 08:50:09.085: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Oct 13 08:50:09.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get rc,svc -l name=update-demo --no-headers' +Oct 13 08:50:09.170: INFO: stderr: "No resources found in kubectl-5477 namespace.\n" +Oct 13 08:50:09.170: INFO: stdout: "" +Oct 13 08:50:09.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Oct 13 08:50:09.255: INFO: stderr: "" +Oct 13 08:50:09.255: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:50:09.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-5477" for this suite. 10/13/23 08:50:09.263 +------------------------------ +• [SLOW TEST] [14.020 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:324 + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:49:55.251 + Oct 13 08:49:55.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:49:55.251 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:49:55.268 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:49:55.27 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 + [It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 + STEP: creating a replication controller 10/13/23 08:49:55.272 + Oct 13 08:49:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 create -f -' + Oct 13 08:49:55.514: INFO: stderr: "" + Oct 13 08:49:55.514: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 10/13/23 08:49:55.514 + Oct 13 08:49:55.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:49:55.614: INFO: stderr: "" + Oct 13 08:49:55.614: INFO: stdout: "update-demo-nautilus-s4nhd update-demo-nautilus-vb8vq " + Oct 13 08:49:55.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:49:55.709: INFO: stderr: "" + Oct 13 08:49:55.709: INFO: stdout: "" + Oct 13 08:49:55.709: INFO: update-demo-nautilus-s4nhd is created but not running + Oct 13 08:50:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:50:00.790: INFO: stderr: "" + Oct 13 08:50:00.790: INFO: stdout: "update-demo-nautilus-s4nhd update-demo-nautilus-vb8vq " + Oct 13 08:50:00.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:00.863: INFO: stderr: "" + Oct 13 08:50:00.863: INFO: stdout: "true" + Oct 13 08:50:00.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:50:00.938: INFO: stderr: "" + Oct 13 08:50:00.938: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:50:00.938: INFO: validating pod update-demo-nautilus-s4nhd + Oct 13 08:50:00.943: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:50:00.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:50:00.943: INFO: update-demo-nautilus-s4nhd is verified up and running + Oct 13 08:50:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-vb8vq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:01.015: INFO: stderr: "" + Oct 13 08:50:01.015: INFO: stdout: "true" + Oct 13 08:50:01.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-vb8vq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:50:01.081: INFO: stderr: "" + Oct 13 08:50:01.081: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:50:01.081: INFO: validating pod update-demo-nautilus-vb8vq + Oct 13 08:50:01.088: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:50:01.089: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:50:01.089: INFO: update-demo-nautilus-vb8vq is verified up and running + STEP: scaling down the replication controller 10/13/23 08:50:01.089 + Oct 13 08:50:01.090: INFO: scanned /root for discovery docs: + Oct 13 08:50:01.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 scale rc update-demo-nautilus --replicas=1 --timeout=5m' + Oct 13 08:50:02.171: INFO: stderr: "" + Oct 13 08:50:02.171: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 
10/13/23 08:50:02.171 + Oct 13 08:50:02.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:50:02.228: INFO: stderr: "" + Oct 13 08:50:02.228: INFO: stdout: "update-demo-nautilus-s4nhd " + Oct 13 08:50:02.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:02.290: INFO: stderr: "" + Oct 13 08:50:02.290: INFO: stdout: "true" + Oct 13 08:50:02.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:50:02.350: INFO: stderr: "" + Oct 13 08:50:02.350: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:50:02.350: INFO: validating pod update-demo-nautilus-s4nhd + Oct 13 08:50:02.353: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:50:02.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:50:02.354: INFO: update-demo-nautilus-s4nhd is verified up and running + STEP: scaling up the replication controller 10/13/23 08:50:02.354 + Oct 13 08:50:02.354: INFO: scanned /root for discovery docs: + Oct 13 08:50:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 scale rc update-demo-nautilus --replicas=2 --timeout=5m' + Oct 13 08:50:03.422: INFO: stderr: "" + Oct 13 08:50:03.422: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 10/13/23 08:50:03.422 + Oct 13 08:50:03.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:50:03.485: INFO: stderr: "" + Oct 13 08:50:03.485: INFO: stdout: "update-demo-nautilus-4tct9 update-demo-nautilus-s4nhd " + Oct 13 08:50:03.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:03.544: INFO: stderr: "" + Oct 13 08:50:03.544: INFO: stdout: "" + Oct 13 08:50:03.544: INFO: update-demo-nautilus-4tct9 is created but not running + Oct 13 08:50:08.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Oct 13 08:50:08.647: INFO: stderr: "" + Oct 13 08:50:08.647: INFO: stdout: "update-demo-nautilus-4tct9 update-demo-nautilus-s4nhd " + Oct 13 08:50:08.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:08.735: INFO: stderr: "" + Oct 13 08:50:08.735: INFO: stdout: "true" + Oct 13 08:50:08.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-4tct9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:50:08.812: INFO: stderr: "" + Oct 13 08:50:08.812: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:50:08.812: INFO: validating pod update-demo-nautilus-4tct9 + Oct 13 08:50:08.818: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:50:08.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:50:08.818: INFO: update-demo-nautilus-4tct9 is verified up and running + Oct 13 08:50:08.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Oct 13 08:50:08.903: INFO: stderr: "" + Oct 13 08:50:08.903: INFO: stdout: "true" + Oct 13 08:50:08.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods update-demo-nautilus-s4nhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Oct 13 08:50:08.994: INFO: stderr: "" + Oct 13 08:50:08.994: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Oct 13 08:50:08.994: INFO: validating pod update-demo-nautilus-s4nhd + Oct 13 08:50:09.000: INFO: got data: { + "image": "nautilus.jpg" + } + + Oct 13 08:50:09.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Oct 13 08:50:09.000: INFO: update-demo-nautilus-s4nhd is verified up and running + STEP: using delete to clean up resources 10/13/23 08:50:09 + Oct 13 08:50:09.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 delete --grace-period=0 --force -f -' + Oct 13 08:50:09.085: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Oct 13 08:50:09.085: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Oct 13 08:50:09.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get rc,svc -l name=update-demo --no-headers' + Oct 13 08:50:09.170: INFO: stderr: "No resources found in kubectl-5477 namespace.\n" + Oct 13 08:50:09.170: INFO: stdout: "" + Oct 13 08:50:09.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5477 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Oct 13 08:50:09.255: INFO: stderr: "" + Oct 13 08:50:09.255: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:50:09.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-5477" for this suite. 10/13/23 08:50:09.263 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:50:09.271 +Oct 13 08:50:09.271: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:50:09.272 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:09.288 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:09.291 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 +STEP: starting the proxy server 10/13/23 08:50:09.293 +Oct 13 08:50:09.293: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1896 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output 10/13/23 08:50:09.348 +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:50:09.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-1896" for this suite. 
10/13/23 08:50:09.361 +------------------------------ +• [0.095 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Proxy server + test/e2e/kubectl/kubectl.go:1780 + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:50:09.271 + Oct 13 08:50:09.271: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:50:09.272 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:09.288 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:09.291 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1787 + STEP: starting the proxy server 10/13/23 08:50:09.293 + Oct 13 08:50:09.293: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1896 proxy -p 0 --disable-filter' + STEP: curling proxy /api/ output 10/13/23 08:50:09.348 + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:50:09.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-1896" for this suite. 10/13/23 08:50:09.361 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:50:09.368 +Oct 13 08:50:09.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename containers 10/13/23 08:50:09.369 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:09.385 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:09.387 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +STEP: Creating a pod to test override command 10/13/23 08:50:09.389 +Oct 13 08:50:09.397: INFO: Waiting up to 5m0s for pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa" in namespace "containers-1208" to be "Succeeded or Failed" +Oct 13 08:50:09.400: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.052125ms +Oct 13 08:50:11.406: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008627291s +Oct 13 08:50:13.407: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009902353s +STEP: Saw pod success 10/13/23 08:50:13.407 +Oct 13 08:50:13.407: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa" satisfied condition "Succeeded or Failed" +Oct 13 08:50:13.412: INFO: Trying to get logs from node node2 pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa container agnhost-container: +STEP: delete the pod 10/13/23 08:50:13.418 +Oct 13 08:50:13.431: INFO: Waiting for pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa to disappear +Oct 13 08:50:13.434: INFO: Pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Oct 13 08:50:13.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-1208" for this suite. 10/13/23 08:50:13.438 +------------------------------ +• [4.076 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:50:09.368 + Oct 13 08:50:09.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename containers 10/13/23 08:50:09.369 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:09.385 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:09.387 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 + STEP: Creating a pod to test override command 10/13/23 08:50:09.389 + Oct 13 08:50:09.397: INFO: Waiting up to 5m0s for pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa" in namespace "containers-1208" to be "Succeeded or Failed" + Oct 13 08:50:09.400: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.052125ms + Oct 13 08:50:11.406: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008627291s + Oct 13 08:50:13.407: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009902353s + STEP: Saw pod success 10/13/23 08:50:13.407 + Oct 13 08:50:13.407: INFO: Pod "client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa" satisfied condition "Succeeded or Failed" + Oct 13 08:50:13.412: INFO: Trying to get logs from node node2 pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa container agnhost-container: + STEP: delete the pod 10/13/23 08:50:13.418 + Oct 13 08:50:13.431: INFO: Waiting for pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa to disappear + Oct 13 08:50:13.434: INFO: Pod client-containers-d505e2d8-586c-4db9-a20d-9ad418be30fa no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Oct 13 08:50:13.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-1208" for this suite. 10/13/23 08:50:13.438 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:50:13.447 +Oct 13 08:50:13.447: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 08:50:13.448 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:13.463 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:13.465 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +STEP: Creating configMap with name projected-configmap-test-volume-3dd9694b-d35a-4bec-8794-83b76331546d 10/13/23 08:50:13.467 +STEP: Creating a pod to test consume configMaps 10/13/23 08:50:13.471 +Oct 13 08:50:13.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905" in namespace "projected-8670" to be "Succeeded or Failed" +Oct 13 08:50:13.482: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090903ms +Oct 13 08:50:15.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511654s +Oct 13 08:50:17.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007583874s +STEP: Saw pod success 10/13/23 08:50:17.487 +Oct 13 08:50:17.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905" satisfied condition "Succeeded or Failed" +Oct 13 08:50:17.490: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 container projected-configmap-volume-test: +STEP: delete the pod 10/13/23 08:50:17.497 +Oct 13 08:50:17.513: INFO: Waiting for pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 to disappear +Oct 13 08:50:17.515: INFO: Pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:50:17.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8670" for this suite. 10/13/23 08:50:17.519 +------------------------------ +• [4.076 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:50:13.447 + Oct 13 08:50:13.447: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 08:50:13.448 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:13.463 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:13.465 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 + STEP: Creating configMap with name projected-configmap-test-volume-3dd9694b-d35a-4bec-8794-83b76331546d 10/13/23 08:50:13.467 + STEP: Creating a pod to test consume configMaps 10/13/23 08:50:13.471 + Oct 13 08:50:13.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905" in namespace "projected-8670" to be "Succeeded or Failed" + Oct 13 08:50:13.482: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090903ms + Oct 13 08:50:15.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511654s + Oct 13 08:50:17.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007583874s + STEP: Saw pod success 10/13/23 08:50:17.487 + Oct 13 08:50:17.487: INFO: Pod "pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905" satisfied condition "Succeeded or Failed" + Oct 13 08:50:17.490: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 container projected-configmap-volume-test: + STEP: delete the pod 10/13/23 08:50:17.497 + Oct 13 08:50:17.513: INFO: Waiting for pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 to disappear + Oct 13 08:50:17.515: INFO: Pod pod-projected-configmaps-54bfb747-24da-432a-8691-36b3b62f0905 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:50:17.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8670" for this suite. 10/13/23 08:50:17.519 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +[BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:50:17.524 +Oct 13 08:50:17.524: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename events 10/13/23 08:50:17.525 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:17.541 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:17.543 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +STEP: creating a test event 10/13/23 08:50:17.546 +STEP: listing all events in all namespaces 10/13/23 08:50:17.551 +STEP: patching the test event 10/13/23 08:50:17.56 +STEP: fetching the test event 10/13/23 08:50:17.569 +STEP: updating the test event 10/13/23 08:50:17.572 +STEP: getting the test event 10/13/23 08:50:17.58 +STEP: deleting the test event 10/13/23 08:50:17.582 +STEP: listing all events in all namespaces 10/13/23 08:50:17.587 +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 +Oct 13 08:50:17.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 +STEP: Destroying namespace "events-5035" for this suite. 
10/13/23 08:50:17.598 +------------------------------ +• [0.079 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:50:17.524 + Oct 13 08:50:17.524: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename events 10/13/23 08:50:17.525 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:17.541 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:17.543 + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + STEP: creating a test event 10/13/23 08:50:17.546 + STEP: listing all events in all namespaces 10/13/23 08:50:17.551 + STEP: patching the test event 10/13/23 08:50:17.56 + STEP: fetching the test event 10/13/23 08:50:17.569 + STEP: updating the test event 10/13/23 08:50:17.572 + STEP: getting the test event 10/13/23 08:50:17.58 + STEP: deleting the test event 10/13/23 08:50:17.582 + STEP: listing all events in all namespaces 10/13/23 08:50:17.587 + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 + Oct 13 08:50:17.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 + STEP: Destroying namespace "events-5035" for this suite. 10/13/23 08:50:17.598 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:50:17.604 +Oct 13 08:50:17.604: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename cronjob 10/13/23 08:50:17.605 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:17.62 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:17.623 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +STEP: Creating a suspended cronjob 10/13/23 08:50:17.625 +STEP: Ensuring no jobs are scheduled 10/13/23 08:50:17.63 +STEP: Ensuring no job exists by listing jobs explicitly 10/13/23 08:55:17.64 +STEP: Removing cronjob 10/13/23 08:55:17.644 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-5953" for this suite. 
10/13/23 08:55:17.657 +------------------------------ +• [SLOW TEST] [300.060 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:50:17.604 + Oct 13 08:50:17.604: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename cronjob 10/13/23 08:50:17.605 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:50:17.62 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:50:17.623 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + STEP: Creating a suspended cronjob 10/13/23 08:50:17.625 + STEP: Ensuring no jobs are scheduled 10/13/23 08:50:17.63 + STEP: Ensuring no job exists by listing jobs explicitly 10/13/23 08:55:17.64 + STEP: Removing cronjob 10/13/23 08:55:17.644 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-5953" for this suite. 10/13/23 08:55:17.657 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:17.665 +Oct 13 08:55:17.665: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 08:55:17.666 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:17.687 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:17.69 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +STEP: Creating a pod to test emptydir 0644 on tmpfs 10/13/23 08:55:17.692 +Oct 13 08:55:17.700: INFO: Waiting up to 5m0s for pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8" in namespace "emptydir-5538" to be "Succeeded or Failed" +Oct 13 08:55:17.703: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099284ms +Oct 13 08:55:19.708: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505718s +Oct 13 08:55:21.710: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010002718s +STEP: Saw pod success 10/13/23 08:55:21.71 +Oct 13 08:55:21.710: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8" satisfied condition "Succeeded or Failed" +Oct 13 08:55:21.714: INFO: Trying to get logs from node node2 pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 container test-container: +STEP: delete the pod 10/13/23 08:55:21.728 +Oct 13 08:55:21.738: INFO: Waiting for pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 to disappear +Oct 13 08:55:21.740: INFO: Pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:21.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-5538" for this suite. 10/13/23 08:55:21.744 +------------------------------ +• [4.084 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:17.665 + Oct 13 08:55:17.665: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 08:55:17.666 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:17.687 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:17.69 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 + STEP: Creating a pod to test emptydir 0644 on tmpfs 10/13/23 08:55:17.692 + Oct 13 08:55:17.700: INFO: Waiting up to 5m0s for pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8" in namespace "emptydir-5538" to be "Succeeded or Failed" + Oct 13 08:55:17.703: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099284ms + Oct 13 08:55:19.708: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505718s + Oct 13 08:55:21.710: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010002718s + STEP: Saw pod success 10/13/23 08:55:21.71 + Oct 13 08:55:21.710: INFO: Pod "pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8" satisfied condition "Succeeded or Failed" + Oct 13 08:55:21.714: INFO: Trying to get logs from node node2 pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 container test-container: + STEP: delete the pod 10/13/23 08:55:21.728 + Oct 13 08:55:21.738: INFO: Waiting for pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 to disappear + Oct 13 08:55:21.740: INFO: Pod pod-e6c9cfef-ba32-4cdb-91be-c96457c78de8 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:21.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-5538" for this suite. 10/13/23 08:55:21.744 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:21.75 +Oct 13 08:55:21.750: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:55:21.751 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:21.766 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:21.769 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
10/13/23 08:55:21.771 +Oct 13 08:55:21.778: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7534 b7d45c06-8e59-47ca-ad17-c22b7a89b8ec 23738 0 2023-10-13 08:55:21 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-10-13 08:55:21 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4hf8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hf8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountS
erviceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Oct 13 08:55:21.778: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-7534" to be "running and ready" +Oct 13 08:55:21.782: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306109ms +Oct 13 08:55:21.782: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:55:23.788: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.009908206s +Oct 13 08:55:23.788: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) +Oct 13 08:55:23.788: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" +STEP: Verifying customized DNS suffix list is configured on pod... 10/13/23 08:55:23.788 +Oct 13 08:55:23.788: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7534 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:55:23.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:55:23.789: INFO: ExecWithOptions: Clientset creation +Oct 13 08:55:23.789: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-7534/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +STEP: Verifying customized DNS server is configured on pod... 10/13/23 08:55:23.882 +Oct 13 08:55:23.882: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7534 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:55:23.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:55:23.883: INFO: ExecWithOptions: Clientset creation +Oct 13 08:55:23.883: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-7534/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 08:55:23.965: INFO: Deleting pod test-dns-nameservers... 
+[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:23.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-7534" for this suite. 10/13/23 08:55:23.98 +------------------------------ +• [2.236 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:21.75 + Oct 13 08:55:21.750: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:55:21.751 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:21.766 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:21.769 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 10/13/23 08:55:21.771 + Oct 13 08:55:21.778: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7534 b7d45c06-8e59-47ca-ad17-c22b7a89b8ec 23738 0 2023-10-13 08:55:21 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-10-13 08:55:21 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4hf8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hf8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{}
,SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Oct 13 08:55:21.778: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-7534" to be "running and ready" + Oct 13 08:55:21.782: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306109ms + Oct 13 08:55:21.782: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:55:23.788: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.009908206s + Oct 13 08:55:23.788: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) + Oct 13 08:55:23.788: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" + STEP: Verifying customized DNS suffix list is configured on pod... 10/13/23 08:55:23.788 + Oct 13 08:55:23.788: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7534 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:55:23.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:55:23.789: INFO: ExecWithOptions: Clientset creation + Oct 13 08:55:23.789: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-7534/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + STEP: Verifying customized DNS server is configured on pod... 10/13/23 08:55:23.882 + Oct 13 08:55:23.882: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7534 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:55:23.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:55:23.883: INFO: ExecWithOptions: Clientset creation + Oct 13 08:55:23.883: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-7534/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 08:55:23.965: INFO: Deleting pod test-dns-nameservers... + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:23.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-7534" for this suite. 
10/13/23 08:55:23.98 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:23.986 +Oct 13 08:55:23.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:55:23.988 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:24.002 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:24.004 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7039.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7039.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 10/13/23 08:55:24.006 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7039.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7039.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 10/13/23 08:55:24.006 +STEP: creating a pod to probe /etc/hosts 10/13/23 08:55:24.006 +STEP: submitting the pod to kubernetes 10/13/23 08:55:24.007 +Oct 13 08:55:24.015: INFO: Waiting up to 15m0s for pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6" in namespace "dns-7039" to be "running" +Oct 13 08:55:24.017: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929616ms +Oct 13 08:55:26.024: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6": Phase="Running", Reason="", readiness=true. Elapsed: 2.009558759s +Oct 13 08:55:26.024: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:55:26.024 +STEP: looking for the results for each expected name from probers 10/13/23 08:55:26.029 +Oct 13 08:55:26.047: INFO: DNS probes using dns-7039/dns-test-5f5f9ac8-4083-417e-8143-84510873fec6 succeeded + +STEP: deleting the pod 10/13/23 08:55:26.047 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:26.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-7039" for this suite. 
10/13/23 08:55:26.063 +------------------------------ +• [2.082 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:23.986 + Oct 13 08:55:23.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:55:23.988 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:24.002 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:24.004 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7039.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7039.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 10/13/23 08:55:24.006 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7039.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7039.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 10/13/23 08:55:24.006 + STEP: creating a pod to probe /etc/hosts 10/13/23 08:55:24.006 + STEP: submitting the pod to kubernetes 10/13/23 08:55:24.007 + Oct 13 08:55:24.015: INFO: Waiting up to 15m0s for pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6" in namespace "dns-7039" to be "running" + Oct 13 08:55:24.017: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929616ms + Oct 13 08:55:26.024: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6": Phase="Running", Reason="", readiness=true. Elapsed: 2.009558759s + Oct 13 08:55:26.024: INFO: Pod "dns-test-5f5f9ac8-4083-417e-8143-84510873fec6" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:55:26.024 + STEP: looking for the results for each expected name from probers 10/13/23 08:55:26.029 + Oct 13 08:55:26.047: INFO: DNS probes using dns-7039/dns-test-5f5f9ac8-4083-417e-8143-84510873fec6 succeeded + + STEP: deleting the pod 10/13/23 08:55:26.047 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:26.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-7039" for this suite. 
10/13/23 08:55:26.063 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:26.069 +Oct 13 08:55:26.069: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replicaset 10/13/23 08:55:26.07 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:26.085 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:26.087 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +STEP: Given a Pod with a 'name' label pod-adoption-release is created 10/13/23 08:55:26.089 +Oct 13 08:55:26.096: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-9416" to be "running and ready" +Oct 13 08:55:26.099: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 3.242727ms +Oct 13 08:55:26.099: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:55:28.104: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. Elapsed: 2.008316903s +Oct 13 08:55:28.104: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) +Oct 13 08:55:28.104: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" +STEP: When a replicaset with a matching selector is created 10/13/23 08:55:28.108 +STEP: Then the orphan pod is adopted 10/13/23 08:55:28.114 +STEP: When the matched label of one of its pods change 10/13/23 08:55:29.122 +Oct 13 08:55:29.126: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released 10/13/23 08:55:29.136 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:30.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-9416" for this suite. 
10/13/23 08:55:30.147 +------------------------------ +• [4.084 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:26.069 + Oct 13 08:55:26.069: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replicaset 10/13/23 08:55:26.07 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:26.085 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:26.087 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 + STEP: Given a Pod with a 'name' label pod-adoption-release is created 10/13/23 08:55:26.089 + Oct 13 08:55:26.096: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-9416" to be "running and ready" + Oct 13 08:55:26.099: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 3.242727ms + Oct 13 08:55:26.099: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:55:28.104: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. Elapsed: 2.008316903s + Oct 13 08:55:28.104: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) + Oct 13 08:55:28.104: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" + STEP: When a replicaset with a matching selector is created 10/13/23 08:55:28.108 + STEP: Then the orphan pod is adopted 10/13/23 08:55:28.114 + STEP: When the matched label of one of its pods change 10/13/23 08:55:29.122 + Oct 13 08:55:29.126: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 + STEP: Then the pod is released 10/13/23 08:55:29.136 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:30.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-9416" for this suite. 
10/13/23 08:55:30.147 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:30.154 +Oct 13 08:55:30.154: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename cronjob 10/13/23 08:55:30.155 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.173 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.175 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 +STEP: Creating a cronjob 10/13/23 08:55:30.177 +STEP: creating 10/13/23 08:55:30.178 +STEP: getting 10/13/23 08:55:30.182 +STEP: listing 10/13/23 08:55:30.185 +STEP: watching 10/13/23 08:55:30.187 +Oct 13 08:55:30.187: INFO: starting watch +STEP: cluster-wide listing 10/13/23 08:55:30.188 +STEP: cluster-wide watching 10/13/23 08:55:30.19 +Oct 13 08:55:30.191: INFO: starting watch +STEP: patching 10/13/23 08:55:30.191 +STEP: updating 10/13/23 08:55:30.196 +Oct 13 08:55:30.203: INFO: waiting for watch events with expected annotations +Oct 13 08:55:30.203: INFO: saw patched and updated annotations +STEP: patching /status 10/13/23 08:55:30.203 +STEP: updating /status 10/13/23 08:55:30.208 +STEP: get /status 10/13/23 08:55:30.215 +STEP: deleting 10/13/23 08:55:30.217 +STEP: deleting a collection 10/13/23 08:55:30.228 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:30.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-9466" for this suite. 
10/13/23 08:55:30.239 +------------------------------ +• [0.089 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:30.154 + Oct 13 08:55:30.154: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename cronjob 10/13/23 08:55:30.155 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.173 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.175 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 + STEP: Creating a cronjob 10/13/23 08:55:30.177 + STEP: creating 10/13/23 08:55:30.178 + STEP: getting 10/13/23 08:55:30.182 + STEP: listing 10/13/23 08:55:30.185 + STEP: watching 10/13/23 08:55:30.187 + Oct 13 08:55:30.187: INFO: starting watch + STEP: cluster-wide listing 10/13/23 08:55:30.188 + STEP: cluster-wide watching 10/13/23 08:55:30.19 + Oct 13 08:55:30.191: INFO: starting watch + STEP: patching 10/13/23 08:55:30.191 + STEP: updating 10/13/23 08:55:30.196 + Oct 13 08:55:30.203: INFO: waiting for watch events with expected annotations + Oct 13 08:55:30.203: INFO: saw patched and updated annotations + STEP: patching /status 10/13/23 08:55:30.203 + STEP: updating /status 10/13/23 08:55:30.208 + STEP: get /status 10/13/23 08:55:30.215 + STEP: deleting 10/13/23 08:55:30.217 + STEP: deleting a collection 10/13/23 08:55:30.228 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:30.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-9466" for this suite. 
10/13/23 08:55:30.239 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + test/e2e/network/service.go:777 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:30.244 +Oct 13 08:55:30.244: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 08:55:30.244 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.259 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.261 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should provide secure master service [Conformance] + test/e2e/network/service.go:777 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:30.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-6033" for this suite. 10/13/23 08:55:30.27 +------------------------------ +• [0.032 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should provide secure master service [Conformance] + test/e2e/network/service.go:777 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:30.244 + Oct 13 08:55:30.244: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 08:55:30.244 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.259 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.261 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should provide secure master service [Conformance] + test/e2e/network/service.go:777 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:30.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-6033" for this suite. 
10/13/23 08:55:30.27 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:30.276 +Oct 13 08:55:30.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:55:30.277 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.291 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.293 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:55:30.304 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:55:30.512 +STEP: Deploying the webhook pod 10/13/23 08:55:30.518 +STEP: Wait for the deployment to be ready 10/13/23 08:55:30.528 +Oct 13 08:55:30.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 08:55:32.542 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:55:32.554 +Oct 13 08:55:33.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +STEP: Listing all of the created validation webhooks 10/13/23 08:55:33.611 +STEP: Creating a configMap that should be mutated 10/13/23 08:55:33.62 +STEP: Deleting the collection of validation webhooks 10/13/23 08:55:33.642 +STEP: Creating a configMap that should not be mutated 10/13/23 08:55:33.679 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9724" for this suite. 10/13/23 08:55:33.73 +STEP: Destroying namespace "webhook-9724-markers" for this suite. 
10/13/23 08:55:33.736 +------------------------------ +• [3.469 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:30.276 + Oct 13 08:55:30.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:55:30.277 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:30.291 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:30.293 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:55:30.304 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:55:30.512 + STEP: Deploying the webhook pod 10/13/23 08:55:30.518 + STEP: Wait for the deployment to be ready 10/13/23 08:55:30.528 + Oct 13 08:55:30.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 08:55:32.542 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:55:32.554 + Oct 13 08:55:33.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 + STEP: Listing all of the created validation webhooks 10/13/23 08:55:33.611 + STEP: Creating a configMap that should be mutated 10/13/23 08:55:33.62 + STEP: Deleting the collection of validation webhooks 10/13/23 08:55:33.642 + STEP: Creating a configMap that should not be mutated 10/13/23 08:55:33.679 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9724" for this suite. 10/13/23 08:55:33.73 + STEP: Destroying namespace "webhook-9724-markers" for this suite. 
10/13/23 08:55:33.736 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:33.746 +Oct 13 08:55:33.746: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 08:55:33.747 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:33.762 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:33.765 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +STEP: Creating configMap configmap-8555/configmap-test-e68caa0c-11fe-4f91-af4c-3d941d689f61 10/13/23 08:55:33.767 +STEP: Creating a pod to test consume configMaps 10/13/23 08:55:33.771 +Oct 13 08:55:33.778: INFO: Waiting up to 5m0s for pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9" in namespace "configmap-8555" to be "Succeeded or Failed" +Oct 13 08:55:33.780: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736673ms +Oct 13 08:55:35.785: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00690998s +Oct 13 08:55:37.786: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008175127s +STEP: Saw pod success 10/13/23 08:55:37.786 +Oct 13 08:55:37.786: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9" satisfied condition "Succeeded or Failed" +Oct 13 08:55:37.789: INFO: Trying to get logs from node node1 pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 container env-test: +STEP: delete the pod 10/13/23 08:55:37.802 +Oct 13 08:55:37.813: INFO: Waiting for pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 to disappear +Oct 13 08:55:37.815: INFO: Pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:37.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-8555" for this suite. 
10/13/23 08:55:37.819 +------------------------------ +• [4.078 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:33.746 + Oct 13 08:55:33.746: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 08:55:33.747 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:33.762 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:33.765 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 + STEP: Creating configMap configmap-8555/configmap-test-e68caa0c-11fe-4f91-af4c-3d941d689f61 10/13/23 08:55:33.767 + STEP: Creating a pod to test consume configMaps 10/13/23 08:55:33.771 + Oct 13 08:55:33.778: INFO: Waiting up to 5m0s for pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9" in namespace "configmap-8555" to be "Succeeded or Failed" + Oct 13 08:55:33.780: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736673ms + Oct 13 08:55:35.785: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00690998s + Oct 13 08:55:37.786: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008175127s + STEP: Saw pod success 10/13/23 08:55:37.786 + Oct 13 08:55:37.786: INFO: Pod "pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9" satisfied condition "Succeeded or Failed" + Oct 13 08:55:37.789: INFO: Trying to get logs from node node1 pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 container env-test: + STEP: delete the pod 10/13/23 08:55:37.802 + Oct 13 08:55:37.813: INFO: Waiting for pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 to disappear + Oct 13 08:55:37.815: INFO: Pod pod-configmaps-267ecc9a-1f88-4788-967a-0f051d3337d9 no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:37.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-8555" for this suite. 
10/13/23 08:55:37.819 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:37.825 +Oct 13 08:55:37.825: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 08:55:37.826 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:37.841 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:37.843 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +STEP: creating Agnhost RC 10/13/23 08:55:37.845 +Oct 13 08:55:37.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1801 create -f -' +Oct 13 08:55:38.567: INFO: stderr: "" +Oct 13 08:55:38.567: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 10/13/23 08:55:38.567 +Oct 13 08:55:39.572: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:55:39.572: INFO: Found 0 / 1 +Oct 13 08:55:40.572: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:55:40.572: INFO: Found 1 / 1 +Oct 13 08:55:40.572: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods 10/13/23 08:55:40.572 +Oct 13 08:55:40.577: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:55:40.577: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Oct 13 08:55:40.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1801 patch pod agnhost-primary-q8jxl -p {"metadata":{"annotations":{"x":"y"}}}' +Oct 13 08:55:40.640: INFO: stderr: "" +Oct 13 08:55:40.640: INFO: stdout: "pod/agnhost-primary-q8jxl patched\n" +STEP: checking annotations 10/13/23 08:55:40.64 +Oct 13 08:55:40.644: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 08:55:40.644: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-1801" for this suite. 
10/13/23 08:55:40.647 +------------------------------ +• [2.828 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl patch + test/e2e/kubectl/kubectl.go:1646 + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:37.825 + Oct 13 08:55:37.825: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 08:55:37.826 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:37.841 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:37.843 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 + STEP: creating Agnhost RC 10/13/23 08:55:37.845 + Oct 13 08:55:37.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1801 create -f -' + Oct 13 08:55:38.567: INFO: stderr: "" + Oct 13 08:55:38.567: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 10/13/23 08:55:38.567 + Oct 13 08:55:39.572: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:55:39.572: INFO: Found 0 / 1 + Oct 13 08:55:40.572: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:55:40.572: INFO: Found 1 / 1 + Oct 13 08:55:40.572: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + STEP: patching all pods 10/13/23 08:55:40.572 + Oct 13 08:55:40.577: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:55:40.577: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Oct 13 08:55:40.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-1801 patch pod agnhost-primary-q8jxl -p {"metadata":{"annotations":{"x":"y"}}}' + Oct 13 08:55:40.640: INFO: stderr: "" + Oct 13 08:55:40.640: INFO: stdout: "pod/agnhost-primary-q8jxl patched\n" + STEP: checking annotations 10/13/23 08:55:40.64 + Oct 13 08:55:40.644: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 08:55:40.644: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-1801" for this suite. 
10/13/23 08:55:40.647 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:40.653 +Oct 13 08:55:40.653: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 08:55:40.655 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:40.669 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:40.671 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 +STEP: Creating a simple DaemonSet "daemon-set" 10/13/23 08:55:40.688 +STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 08:55:40.694 +Oct 13 08:55:40.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:55:40.703: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:55:41.709: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:55:41.709: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 08:55:42.710: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:55:42.710: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 10/13/23 08:55:42.712 +Oct 13 08:55:42.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:55:42.734: INFO: Node node3 is running 0 daemon pod, expected 1 +Oct 13 08:55:43.741: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Oct 13 08:55:43.741: INFO: Node node3 is running 0 daemon pod, expected 1 +Oct 13 08:55:44.741: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 08:55:44.741: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. 
10/13/23 08:55:44.741 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:55:44.746 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4861, will wait for the garbage collector to delete the pods 10/13/23 08:55:44.746 +Oct 13 08:55:44.804: INFO: Deleting DaemonSet.extensions daemon-set took: 5.162606ms +Oct 13 08:55:44.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.471692ms +Oct 13 08:55:47.008: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 08:55:47.008: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Oct 13 08:55:47.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"24133"},"items":null} + +Oct 13 08:55:47.012: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24133"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:47.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-4861" for this suite. 10/13/23 08:55:47.026 +------------------------------ +• [SLOW TEST] [6.377 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:40.653 + Oct 13 08:55:40.653: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 08:55:40.655 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:40.669 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:40.671 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 + STEP: Creating a simple DaemonSet "daemon-set" 10/13/23 08:55:40.688 + STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 08:55:40.694 + Oct 13 08:55:40.703: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:55:40.703: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:55:41.709: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:55:41.709: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 08:55:42.710: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:55:42.710: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
10/13/23 08:55:42.712 + Oct 13 08:55:42.734: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:55:42.734: INFO: Node node3 is running 0 daemon pod, expected 1 + Oct 13 08:55:43.741: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Oct 13 08:55:43.741: INFO: Node node3 is running 0 daemon pod, expected 1 + Oct 13 08:55:44.741: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 08:55:44.741: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Wait for the failed daemon pod to be completely deleted. 10/13/23 08:55:44.741 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 10/13/23 08:55:44.746 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4861, will wait for the garbage collector to delete the pods 10/13/23 08:55:44.746 + Oct 13 08:55:44.804: INFO: Deleting DaemonSet.extensions daemon-set took: 5.162606ms + Oct 13 08:55:44.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.471692ms + Oct 13 08:55:47.008: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 08:55:47.008: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Oct 13 08:55:47.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"24133"},"items":null} + + Oct 13 08:55:47.012: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"24133"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:47.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-4861" for this suite. 
10/13/23 08:55:47.026 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIStorageCapacity + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +[BeforeEach] [sig-storage] CSIStorageCapacity + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:47.032 +Oct 13 08:55:47.032: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename csistoragecapacity 10/13/23 08:55:47.033 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:47.049 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:47.051 +[BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +STEP: getting /apis 10/13/23 08:55:47.053 +STEP: getting /apis/storage.k8s.io 10/13/23 08:55:47.055 +STEP: getting /apis/storage.k8s.io/v1 10/13/23 08:55:47.056 +STEP: creating 10/13/23 08:55:47.057 +STEP: watching 10/13/23 08:55:47.072 +Oct 13 08:55:47.072: INFO: starting watch +STEP: getting 10/13/23 08:55:47.076 +STEP: listing in namespace 10/13/23 08:55:47.079 +STEP: listing across namespaces 10/13/23 08:55:47.081 +STEP: patching 10/13/23 08:55:47.084 +STEP: updating 10/13/23 08:55:47.088 +Oct 13 08:55:47.093: INFO: waiting for watch events with expected annotations in namespace +Oct 13 08:55:47.093: INFO: waiting for watch events with expected annotations across namespace +STEP: deleting 10/13/23 08:55:47.093 +STEP: deleting a collection 10/13/23 08:55:47.102 +[AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/node/init/init.go:32 +Oct 13 08:55:47.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + tear down framework | framework.go:193 +STEP: Destroying namespace "csistoragecapacity-9044" for this suite. 
10/13/23 08:55:47.116 +------------------------------ +• [0.089 seconds] +[sig-storage] CSIStorageCapacity +test/e2e/storage/utils/framework.go:23 + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIStorageCapacity + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:47.032 + Oct 13 08:55:47.032: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename csistoragecapacity 10/13/23 08:55:47.033 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:47.049 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:47.051 + [BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + STEP: getting /apis 10/13/23 08:55:47.053 + STEP: getting /apis/storage.k8s.io 10/13/23 08:55:47.055 + STEP: getting /apis/storage.k8s.io/v1 10/13/23 08:55:47.056 + STEP: creating 10/13/23 08:55:47.057 + STEP: watching 10/13/23 08:55:47.072 + Oct 13 08:55:47.072: INFO: starting watch + STEP: getting 10/13/23 08:55:47.076 + STEP: listing in namespace 10/13/23 08:55:47.079 + STEP: listing across namespaces 10/13/23 08:55:47.081 + STEP: patching 10/13/23 08:55:47.084 + STEP: updating 10/13/23 08:55:47.088 + Oct 13 08:55:47.093: INFO: waiting for watch events with expected annotations in namespace + Oct 13 08:55:47.093: INFO: waiting for watch events with expected annotations across namespace + STEP: deleting 10/13/23 08:55:47.093 + STEP: deleting a collection 10/13/23 08:55:47.102 + [AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/node/init/init.go:32 + Oct 13 08:55:47.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + tear down framework | framework.go:193 + STEP: Destroying namespace "csistoragecapacity-9044" for this suite. 10/13/23 08:55:47.116 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:55:47.122 +Oct 13 08:55:47.122: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 08:55:47.123 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:47.137 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:47.139 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 +STEP: Counting existing ResourceQuota 10/13/23 08:55:47.141 +STEP: Creating a ResourceQuota 10/13/23 08:55:52.145 +STEP: Ensuring resource quota status is calculated 10/13/23 08:55:52.15 +STEP: Creating a Pod that fits quota 10/13/23 08:55:54.154 +STEP: Ensuring ResourceQuota status captures the pod usage 10/13/23 08:55:54.169 +STEP: Not allowing a pod to be created that exceeds remaining quota 10/13/23 08:55:56.175 +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 10/13/23 08:55:56.178 +STEP: Ensuring a pod cannot update its resource requirements 10/13/23 08:55:56.18 +STEP: Ensuring attempts to update pod resource requirements did not change quota usage 10/13/23 08:55:56.186 +STEP: Deleting the pod 10/13/23 08:55:58.192 +STEP: Ensuring resource quota status released the pod usage 10/13/23 08:55:58.21 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:00.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-5385" for this suite. 10/13/23 08:56:00.22 +------------------------------ +• [SLOW TEST] [13.106 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:55:47.122 + Oct 13 08:55:47.122: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 08:55:47.123 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:55:47.137 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:55:47.139 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 + STEP: Counting existing ResourceQuota 10/13/23 08:55:47.141 + STEP: Creating a ResourceQuota 10/13/23 08:55:52.145 + STEP: Ensuring resource quota status is calculated 10/13/23 08:55:52.15 + STEP: Creating a Pod that fits quota 10/13/23 08:55:54.154 + STEP: Ensuring ResourceQuota status captures the pod usage 10/13/23 08:55:54.169 + STEP: Not allowing a pod to be created that exceeds remaining quota 10/13/23 08:55:56.175 + STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 10/13/23 08:55:56.178 + STEP: Ensuring a pod cannot update its resource requirements 10/13/23 08:55:56.18 + STEP: Ensuring attempts to update pod resource requirements did not change quota usage 10/13/23 08:55:56.186 + STEP: Deleting the pod 10/13/23 08:55:58.192 + STEP: Ensuring resource quota status released the pod usage 10/13/23 08:55:58.21 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:00.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-5385" for this suite. 10/13/23 08:56:00.22 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:00.229 +Oct 13 08:56:00.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:56:00.23 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:00.246 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:00.249 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +STEP: set up a multi version CRD 10/13/23 08:56:00.251 +Oct 13 08:56:00.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: mark a version not serverd 10/13/23 08:56:04.447 +STEP: check the unserved version gets removed 10/13/23 08:56:04.47 +STEP: check the other version is not changed 10/13/23 08:56:06.343 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:09.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-4955" for this suite. 10/13/23 08:56:09.721 +------------------------------ +• [SLOW TEST] [9.498 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:00.229 + Oct 13 08:56:00.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:56:00.23 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:00.246 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:00.249 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 + STEP: set up a multi version CRD 10/13/23 08:56:00.251 + Oct 13 08:56:00.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: mark a version not serverd 10/13/23 08:56:04.447 + STEP: check the unserved version gets removed 10/13/23 08:56:04.47 + STEP: check the other version is not changed 10/13/23 08:56:06.343 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:09.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-4955" for this suite. 
10/13/23 08:56:09.721 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:09.727 +Oct 13 08:56:09.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 08:56:09.728 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:09.742 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:09.746 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 08:56:09.758 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:56:10.244 +STEP: Deploying the webhook pod 10/13/23 08:56:10.256 +STEP: Wait for the deployment to be ready 10/13/23 08:56:10.268 +Oct 13 08:56:10.274: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 08:56:12.284 +STEP: Verifying the service has paired with the endpoint 10/13/23 08:56:12.299 +Oct 13 08:56:13.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +STEP: Setting timeout (1s) shorter than webhook latency (5s) 10/13/23 08:56:13.304 +STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:13.304 +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 10/13/23 08:56:13.319 +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 10/13/23 08:56:14.333 +STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:14.333 +STEP: Having no error when timeout is longer than webhook latency 10/13/23 08:56:15.36 +STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:15.36 +STEP: Having no error when timeout is empty (defaulted to 10s in v1) 10/13/23 08:56:20.387 +STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:20.388 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:25.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9295" for this suite. 10/13/23 08:56:25.464 +STEP: Destroying namespace "webhook-9295-markers" for this suite. 
10/13/23 08:56:25.47 +------------------------------ +• [SLOW TEST] [15.749 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:09.727 + Oct 13 08:56:09.727: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 08:56:09.728 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:09.742 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:09.746 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 08:56:09.758 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 08:56:10.244 + STEP: Deploying the webhook pod 10/13/23 08:56:10.256 + STEP: Wait for the deployment to be ready 10/13/23 08:56:10.268 + Oct 13 08:56:10.274: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 08:56:12.284 + STEP: Verifying the service has paired with the endpoint 10/13/23 08:56:12.299 + Oct 13 08:56:13.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 + STEP: Setting timeout (1s) shorter than webhook latency (5s) 10/13/23 08:56:13.304 + STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:13.304 + STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 10/13/23 08:56:13.319 + STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 10/13/23 08:56:14.333 + STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:14.333 + STEP: Having no error when timeout is longer than webhook latency 10/13/23 08:56:15.36 + STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:15.36 + STEP: Having no error when timeout is empty (defaulted to 10s in v1) 10/13/23 08:56:20.387 + STEP: Registering slow webhook via the AdmissionRegistration API 10/13/23 08:56:20.388 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:25.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9295" for this suite. 10/13/23 08:56:25.464 + STEP: Destroying namespace "webhook-9295-markers" for this suite. 
10/13/23 08:56:25.47 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:25.477 +Oct 13 08:56:25.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename subpath 10/13/23 08:56:25.478 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:25.498 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:25.5 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 10/13/23 08:56:25.504 +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +STEP: Creating pod pod-subpath-test-configmap-fmvs 10/13/23 08:56:25.513 +STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:56:25.513 +Oct 13 08:56:25.522: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fmvs" in namespace "subpath-4471" to be "Succeeded or Failed" +Oct 13 08:56:25.526: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722211ms +Oct 13 08:56:27.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 2.007821152s +Oct 13 08:56:29.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.00883253s +Oct 13 08:56:31.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.007988155s +Oct 13 08:56:33.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.009539079s +Oct 13 08:56:35.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.010076471s +Oct 13 08:56:37.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.008705243s +Oct 13 08:56:39.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.00988199s +Oct 13 08:56:41.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.009468791s +Oct 13 08:56:43.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.008671225s +Oct 13 08:56:45.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.007505224s +Oct 13 08:56:47.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=false. Elapsed: 22.009017854s +Oct 13 08:56:49.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.009552776s +STEP: Saw pod success 10/13/23 08:56:49.532 +Oct 13 08:56:49.532: INFO: Pod "pod-subpath-test-configmap-fmvs" satisfied condition "Succeeded or Failed" +Oct 13 08:56:49.536: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-fmvs container test-container-subpath-configmap-fmvs: +STEP: delete the pod 10/13/23 08:56:49.551 +Oct 13 08:56:49.564: INFO: Waiting for pod pod-subpath-test-configmap-fmvs to disappear +Oct 13 08:56:49.567: INFO: Pod pod-subpath-test-configmap-fmvs no longer exists +STEP: Deleting pod pod-subpath-test-configmap-fmvs 10/13/23 08:56:49.567 +Oct 13 08:56:49.567: INFO: Deleting pod "pod-subpath-test-configmap-fmvs" in namespace "subpath-4471" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:49.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-4471" for this suite. 10/13/23 08:56:49.575 +------------------------------ +• [SLOW TEST] [24.103 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:25.477 + Oct 13 08:56:25.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename subpath 10/13/23 08:56:25.478 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:25.498 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:25.5 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 10/13/23 08:56:25.504 + [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + STEP: Creating pod pod-subpath-test-configmap-fmvs 10/13/23 08:56:25.513 + STEP: Creating a pod to test atomic-volume-subpath 10/13/23 08:56:25.513 + Oct 13 08:56:25.522: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fmvs" in namespace "subpath-4471" to be "Succeeded or Failed" + Oct 13 08:56:25.526: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722211ms + Oct 13 08:56:27.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 2.007821152s + Oct 13 08:56:29.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.00883253s + Oct 13 08:56:31.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.007988155s + Oct 13 08:56:33.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.009539079s + Oct 13 08:56:35.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.010076471s + Oct 13 08:56:37.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.008705243s + Oct 13 08:56:39.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.00988199s + Oct 13 08:56:41.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.009468791s + Oct 13 08:56:43.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.008671225s + Oct 13 08:56:45.530: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.007505224s + Oct 13 08:56:47.531: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Running", Reason="", readiness=false. Elapsed: 22.009017854s + Oct 13 08:56:49.532: INFO: Pod "pod-subpath-test-configmap-fmvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009552776s + STEP: Saw pod success 10/13/23 08:56:49.532 + Oct 13 08:56:49.532: INFO: Pod "pod-subpath-test-configmap-fmvs" satisfied condition "Succeeded or Failed" + Oct 13 08:56:49.536: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-fmvs container test-container-subpath-configmap-fmvs: + STEP: delete the pod 10/13/23 08:56:49.551 + Oct 13 08:56:49.564: INFO: Waiting for pod pod-subpath-test-configmap-fmvs to disappear + Oct 13 08:56:49.567: INFO: Pod pod-subpath-test-configmap-fmvs no longer exists + STEP: Deleting pod pod-subpath-test-configmap-fmvs 10/13/23 08:56:49.567 + Oct 13 08:56:49.567: INFO: Deleting pod "pod-subpath-test-configmap-fmvs" in namespace "subpath-4471" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:49.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-4471" for this suite. 
10/13/23 08:56:49.575 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:49.582 +Oct 13 08:56:49.582: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename endpointslice 10/13/23 08:56:49.584 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:49.599 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:49.602 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +STEP: getting /apis 10/13/23 08:56:49.605 +STEP: getting /apis/discovery.k8s.io 10/13/23 08:56:49.608 +STEP: getting /apis/discovery.k8s.iov1 10/13/23 08:56:49.609 +STEP: creating 10/13/23 08:56:49.61 +STEP: getting 10/13/23 08:56:49.622 +STEP: listing 10/13/23 08:56:49.625 +STEP: watching 10/13/23 08:56:49.628 +Oct 13 08:56:49.628: INFO: starting watch +STEP: cluster-wide listing 10/13/23 08:56:49.629 +STEP: cluster-wide watching 10/13/23 08:56:49.631 +Oct 13 08:56:49.631: INFO: starting watch +STEP: patching 10/13/23 08:56:49.633 +STEP: updating 10/13/23 08:56:49.637 +Oct 13 08:56:49.644: INFO: waiting for watch events with expected annotations +Oct 13 08:56:49.644: INFO: saw patched and updated annotations +STEP: deleting 10/13/23 08:56:49.644 +STEP: deleting a collection 10/13/23 08:56:49.653 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-4481" for this suite. 
10/13/23 08:56:49.668 +------------------------------ +• [0.090 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:49.582 + Oct 13 08:56:49.582: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename endpointslice 10/13/23 08:56:49.584 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:49.599 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:49.602 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 + STEP: getting /apis 10/13/23 08:56:49.605 + STEP: getting /apis/discovery.k8s.io 10/13/23 08:56:49.608 + STEP: getting /apis/discovery.k8s.iov1 10/13/23 08:56:49.609 + STEP: creating 10/13/23 08:56:49.61 + STEP: getting 10/13/23 08:56:49.622 + STEP: listing 10/13/23 08:56:49.625 + STEP: watching 10/13/23 08:56:49.628 + Oct 13 08:56:49.628: INFO: starting watch + STEP: cluster-wide listing 10/13/23 08:56:49.629 + STEP: cluster-wide watching 10/13/23 08:56:49.631 + Oct 13 08:56:49.631: INFO: starting watch + STEP: patching 10/13/23 08:56:49.633 + STEP: updating 10/13/23 08:56:49.637 + Oct 13 08:56:49.644: INFO: waiting for watch events with expected annotations + Oct 13 08:56:49.644: INFO: saw patched and updated annotations + STEP: deleting 10/13/23 08:56:49.644 + STEP: deleting a collection 10/13/23 08:56:49.653 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-4481" for this suite. 
10/13/23 08:56:49.668 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:49.673 +Oct 13 08:56:49.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:56:49.674 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:49.688 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:49.691 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +Oct 13 08:56:49.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 10/13/23 08:56:51.608 +Oct 13 08:56:51.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' +Oct 13 08:56:52.262: INFO: stderr: "" +Oct 13 08:56:52.262: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 13 08:56:52.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 delete e2e-test-crd-publish-openapi-5009-crds test-foo' +Oct 13 08:56:52.390: INFO: stderr: "" +Oct 13 08:56:52.390: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Oct 13 08:56:52.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' +Oct 13 08:56:52.962: INFO: stderr: "" +Oct 13 08:56:52.962: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Oct 13 08:56:52.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 delete e2e-test-crd-publish-openapi-5009-crds test-foo' +Oct 13 08:56:53.047: INFO: stderr: "" +Oct 13 08:56:53.047: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 10/13/23 08:56:53.047 +Oct 13 08:56:53.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' +Oct 13 08:56:53.249: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 10/13/23 08:56:53.249 +Oct 13 08:56:53.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' +Oct 13 08:56:53.476: INFO: rc: 1 +Oct 13 
08:56:53.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' +Oct 13 08:56:53.692: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request without required properties 10/13/23 08:56:53.692 +Oct 13 08:56:53.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' +Oct 13 08:56:54.219: INFO: rc: 1 +Oct 13 08:56:54.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' +Oct 13 08:56:54.396: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties 10/13/23 08:56:54.396 +Oct 13 08:56:54.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds' +Oct 13 08:56:54.597: INFO: stderr: "" +Oct 13 08:56:54.597: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively 10/13/23 08:56:54.597 +Oct 13 08:56:54.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.metadata' +Oct 13 08:56:54.770: INFO: stderr: "" +Oct 13 08:56:54.770: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Oct 13 08:56:54.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec' +Oct 13 08:56:54.941: INFO: stderr: "" +Oct 13 08:56:54.941: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Oct 13 08:56:54.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec.bars' +Oct 13 08:56:55.151: INFO: stderr: "" +Oct 13 08:56:55.151: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist 10/13/23 08:56:55.151 +Oct 13 08:56:55.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec.bars2' +Oct 13 08:56:55.343: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-2348" for this suite. 
10/13/23 08:56:57.713 +------------------------------ +• [SLOW TEST] [8.045 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:49.673 + Oct 13 08:56:49.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 08:56:49.674 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:49.688 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:49.691 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 + Oct 13 08:56:49.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 10/13/23 08:56:51.608 + Oct 13 08:56:51.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' + Oct 13 08:56:52.262: INFO: stderr: "" + Oct 13 08:56:52.262: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Oct 13 08:56:52.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 delete e2e-test-crd-publish-openapi-5009-crds test-foo' + Oct 13 08:56:52.390: INFO: stderr: "" + Oct 13 08:56:52.390: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + Oct 13 08:56:52.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' + Oct 13 08:56:52.962: INFO: stderr: "" + Oct 13 08:56:52.962: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Oct 13 08:56:52.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 delete e2e-test-crd-publish-openapi-5009-crds test-foo' + Oct 13 08:56:53.047: INFO: stderr: "" + Oct 13 08:56:53.047: INFO: stdout: "e2e-test-crd-publish-openapi-5009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 10/13/23 08:56:53.047 + Oct 13 08:56:53.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' + Oct 13 08:56:53.249: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 10/13/23 08:56:53.249 + Oct 13 08:56:53.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 
create -f -' + Oct 13 08:56:53.476: INFO: rc: 1 + Oct 13 08:56:53.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' + Oct 13 08:56:53.692: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request without required properties 10/13/23 08:56:53.692 + Oct 13 08:56:53.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 create -f -' + Oct 13 08:56:54.219: INFO: rc: 1 + Oct 13 08:56:54.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 --namespace=crd-publish-openapi-2348 apply -f -' + Oct 13 08:56:54.396: INFO: rc: 1 + STEP: kubectl explain works to explain CR properties 10/13/23 08:56:54.396 + Oct 13 08:56:54.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds' + Oct 13 08:56:54.597: INFO: stderr: "" + Oct 13 08:56:54.597: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" + STEP: kubectl explain works to explain CR properties recursively 10/13/23 08:56:54.597 + Oct 13 08:56:54.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.metadata' + Oct 13 08:56:54.770: INFO: stderr: "" + Oct 13 08:56:54.770: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. 
Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" + Oct 13 08:56:54.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec' + Oct 13 08:56:54.941: INFO: stderr: "" + Oct 13 08:56:54.941: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" + Oct 13 08:56:54.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec.bars' + Oct 13 08:56:55.151: INFO: stderr: "" + Oct 13 08:56:55.151: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" + STEP: kubectl explain works to return error when explain is called on property that doesn't exist 10/13/23 08:56:55.151 + Oct 13 08:56:55.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-2348 explain e2e-test-crd-publish-openapi-5009-crds.spec.bars2' + Oct 13 08:56:55.343: INFO: rc: 1 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-2348" for this suite. 10/13/23 08:56:57.713 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +[BeforeEach] version v1 + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:57.718 +Oct 13 08:56:57.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename proxy 10/13/23 08:56:57.719 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:57.736 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:57.74 +[BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +Oct 13 08:56:57.743: INFO: Creating pod... +Oct 13 08:56:57.749: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-1674" to be "running" +Oct 13 08:56:57.752: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.650926ms +Oct 13 08:56:59.757: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008335339s +Oct 13 08:56:59.757: INFO: Pod "agnhost" satisfied condition "running" +Oct 13 08:56:59.757: INFO: Creating service... +Oct 13 08:56:59.770: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/DELETE +Oct 13 08:56:59.774: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 13 08:56:59.774: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/GET +Oct 13 08:56:59.782: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 13 08:56:59.782: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/HEAD +Oct 13 08:56:59.785: INFO: http.Client request:HEAD | StatusCode:200 +Oct 13 08:56:59.785: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/OPTIONS +Oct 13 08:56:59.789: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 13 08:56:59.789: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/PATCH +Oct 13 08:56:59.792: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 13 08:56:59.792: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/POST +Oct 13 08:56:59.796: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 13 08:56:59.796: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/PUT +Oct 13 08:56:59.799: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 13 08:56:59.799: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/DELETE +Oct 13 08:56:59.804: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 13 08:56:59.804: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/GET +Oct 13 08:56:59.808: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Oct 13 08:56:59.808: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/HEAD +Oct 13 08:56:59.813: INFO: http.Client request:HEAD | StatusCode:200 +Oct 13 08:56:59.813: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/OPTIONS +Oct 13 08:56:59.818: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 13 08:56:59.818: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/PATCH +Oct 13 08:56:59.823: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 13 08:56:59.823: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/POST +Oct 13 08:56:59.828: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 13 08:56:59.828: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/PUT +Oct 13 08:56:59.832: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | 
Method:PUT +[AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 +Oct 13 08:56:59.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 +[DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 +STEP: Destroying namespace "proxy-1674" for this suite. 10/13/23 08:56:59.836 +------------------------------ +• [2.125 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:57.718 + Oct 13 08:56:57.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename proxy 10/13/23 08:56:57.719 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:57.736 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:57.74 + [BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 + [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + Oct 13 08:56:57.743: INFO: Creating pod... + Oct 13 08:56:57.749: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-1674" to be "running" + Oct 13 08:56:57.752: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.650926ms + Oct 13 08:56:59.757: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.008335339s + Oct 13 08:56:59.757: INFO: Pod "agnhost" satisfied condition "running" + Oct 13 08:56:59.757: INFO: Creating service... 
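The request/response lines that follow exercise the apiserver's proxy subresource: the suite sends every HTTP verb to /api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/<VERB> (and to the matching services/test-service path), expecting status 200 and the body "foo" from the agnhost container. A minimal sketch for reproducing the GET case by hand, assuming the same kubeconfig and that the namespace and pod recorded in this log still exist:

```
# Issue a GET through the apiserver's pod proxy subresource; per the log,
# the agnhost container should answer 200 with body "foo".
kubectl --kubeconfig /tmp/kubeconfig-1565935798 get --raw \
  "/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/GET"
```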
+ Oct 13 08:56:59.770: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/DELETE + Oct 13 08:56:59.774: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Oct 13 08:56:59.774: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/GET + Oct 13 08:56:59.782: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Oct 13 08:56:59.782: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/HEAD + Oct 13 08:56:59.785: INFO: http.Client request:HEAD | StatusCode:200 + Oct 13 08:56:59.785: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/OPTIONS + Oct 13 08:56:59.789: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Oct 13 08:56:59.789: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/PATCH + Oct 13 08:56:59.792: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Oct 13 08:56:59.792: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/POST + Oct 13 08:56:59.796: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Oct 13 08:56:59.796: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/pods/agnhost/proxy/some/path/with/PUT + Oct 13 08:56:59.799: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Oct 13 08:56:59.799: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/DELETE + Oct 13 08:56:59.804: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Oct 13 08:56:59.804: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/GET + Oct 13 08:56:59.808: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Oct 13 08:56:59.808: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/HEAD + Oct 13 08:56:59.813: INFO: http.Client request:HEAD | StatusCode:200 + Oct 13 08:56:59.813: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/OPTIONS + Oct 13 08:56:59.818: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Oct 13 08:56:59.818: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/PATCH + Oct 13 08:56:59.823: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Oct 13 08:56:59.823: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/POST + Oct 13 08:56:59.828: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Oct 13 08:56:59.828: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-1674/services/test-service/proxy/some/path/with/PUT + Oct 13 08:56:59.832: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + [AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 + Oct 13 08:56:59.832: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 + [DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 + STEP: Destroying namespace "proxy-1674" for this suite. 10/13/23 08:56:59.836 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:56:59.844 +Oct 13 08:56:59.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pod-network-test 10/13/23 08:56:59.845 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:59.863 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:59.867 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +STEP: Performing setup for networking test in namespace pod-network-test-7702 10/13/23 08:56:59.87 +STEP: creating a selector 10/13/23 08:56:59.87 +STEP: Creating the service pods in kubernetes 10/13/23 08:56:59.87 +Oct 13 08:56:59.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 13 08:56:59.899: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7702" to be "running and ready" +Oct 13 08:56:59.902: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.496621ms +Oct 13 08:56:59.902: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 08:57:01.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.008140648s +Oct 13 08:57:01.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 08:57:03.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.009782555s +Oct 13 08:57:03.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 08:57:05.908: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008862463s +Oct 13 08:57:05.908: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 08:57:07.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009916575s +Oct 13 08:57:07.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 08:57:09.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.007925619s +Oct 13 08:57:09.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 08:57:11.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.008199736s +Oct 13 08:57:11.907: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Oct 13 08:57:11.907: INFO: Pod "netserver-0" satisfied condition "running and ready" +Oct 13 08:57:11.911: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7702" to be "running and ready" +Oct 13 08:57:11.914: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.839323ms +Oct 13 08:57:11.914: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Oct 13 08:57:11.914: INFO: Pod "netserver-1" satisfied condition "running and ready" +Oct 13 08:57:11.916: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7702" to be "running and ready" +Oct 13 08:57:11.919: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.34802ms +Oct 13 08:57:11.919: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Oct 13 08:57:11.919: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 10/13/23 08:57:11.921 +Oct 13 08:57:11.930: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7702" to be "running" +Oct 13 08:57:11.933: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378967ms +Oct 13 08:57:13.938: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00833264s +Oct 13 08:57:13.938: INFO: Pod "test-container-pod" satisfied condition "running" +Oct 13 08:57:13.942: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Oct 13 08:57:13.942: INFO: Breadth first check of 10.244.0.39 on host 10.253.8.110... +Oct 13 08:57:13.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.0.39&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:57:13.946: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:57:13.946: INFO: ExecWithOptions: Clientset creation +Oct 13 08:57:13.946: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.0.39%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 08:57:14.013: INFO: Waiting for responses: map[] +Oct 13 08:57:14.013: INFO: reached 10.244.0.39 after 0/1 tries +Oct 13 08:57:14.013: INFO: Breadth first check of 10.244.1.183 on host 10.253.8.111... +Oct 13 08:57:14.017: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.1.183&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:57:14.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:57:14.017: INFO: ExecWithOptions: Clientset creation +Oct 13 08:57:14.017: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.1.183%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 08:57:14.084: INFO: Waiting for responses: map[] +Oct 13 08:57:14.084: INFO: reached 10.244.1.183 after 0/1 tries +Oct 13 08:57:14.084: INFO: Breadth first check of 10.244.2.117 on host 10.253.8.112... 
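Each "Breadth first check" here (including the one just announced for 10.244.2.117) works the same way: the framework execs a curl inside test-container-pod, and the agnhost netexec server listening there on port 9080 dials the target netserver's UDP port 8081 with a "hostname" request and reports which endpoints answered. A minimal sketch for replaying one probe by hand, assuming the same kubeconfig and that the pods and pod IPs from this log are still live:

```
# Ask netexec in test-container-pod to dial a netserver pod IP over UDP;
# a non-empty response set means the pod-to-pod UDP path works.
kubectl --kubeconfig /tmp/kubeconfig-1565935798 -n pod-network-test-7702 \
  exec test-container-pod -c webserver -- \
  curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.2.117&port=8081&tries=1'
```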
+Oct 13 08:57:14.087: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.2.117&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 08:57:14.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 08:57:14.088: INFO: ExecWithOptions: Clientset creation +Oct 13 08:57:14.088: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.2.117%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 08:57:14.152: INFO: Waiting for responses: map[] +Oct 13 08:57:14.152: INFO: reached 10.244.2.117 after 0/1 tries +Oct 13 08:57:14.152: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Oct 13 08:57:14.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-7702" for this suite. 10/13/23 08:57:14.156 +------------------------------ +• [SLOW TEST] [14.319 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:56:59.844 + Oct 13 08:56:59.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pod-network-test 10/13/23 08:56:59.845 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:56:59.863 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:56:59.867 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + STEP: Performing setup for networking test in namespace pod-network-test-7702 10/13/23 08:56:59.87 + STEP: creating a selector 10/13/23 08:56:59.87 + STEP: Creating the service pods in kubernetes 10/13/23 08:56:59.87 + Oct 13 08:56:59.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Oct 13 08:56:59.899: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7702" to be "running and ready" + Oct 13 08:56:59.902: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.496621ms + Oct 13 08:56:59.902: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 08:57:01.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.008140648s + Oct 13 08:57:01.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 08:57:03.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.009782555s + Oct 13 08:57:03.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 08:57:05.908: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008862463s + Oct 13 08:57:05.908: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 08:57:07.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009916575s + Oct 13 08:57:07.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 08:57:09.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.007925619s + Oct 13 08:57:09.907: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 08:57:11.907: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.008199736s + Oct 13 08:57:11.907: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Oct 13 08:57:11.907: INFO: Pod "netserver-0" satisfied condition "running and ready" + Oct 13 08:57:11.911: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7702" to be "running and ready" + Oct 13 08:57:11.914: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.839323ms + Oct 13 08:57:11.914: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Oct 13 08:57:11.914: INFO: Pod "netserver-1" satisfied condition "running and ready" + Oct 13 08:57:11.916: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7702" to be "running and ready" + Oct 13 08:57:11.919: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.34802ms + Oct 13 08:57:11.919: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Oct 13 08:57:11.919: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 10/13/23 08:57:11.921 + Oct 13 08:57:11.930: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7702" to be "running" + Oct 13 08:57:11.933: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378967ms + Oct 13 08:57:13.938: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00833264s + Oct 13 08:57:13.938: INFO: Pod "test-container-pod" satisfied condition "running" + Oct 13 08:57:13.942: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Oct 13 08:57:13.942: INFO: Breadth first check of 10.244.0.39 on host 10.253.8.110... 
+ Oct 13 08:57:13.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.0.39&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:57:13.946: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:57:13.946: INFO: ExecWithOptions: Clientset creation + Oct 13 08:57:13.946: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.0.39%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 08:57:14.013: INFO: Waiting for responses: map[] + Oct 13 08:57:14.013: INFO: reached 10.244.0.39 after 0/1 tries + Oct 13 08:57:14.013: INFO: Breadth first check of 10.244.1.183 on host 10.253.8.111... + Oct 13 08:57:14.017: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.1.183&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:57:14.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:57:14.017: INFO: ExecWithOptions: Clientset creation + Oct 13 08:57:14.017: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.1.183%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 08:57:14.084: INFO: Waiting for responses: map[] + Oct 13 08:57:14.084: INFO: reached 10.244.1.183 after 0/1 tries + Oct 13 08:57:14.084: INFO: Breadth first check of 10.244.2.117 on host 10.253.8.112... + Oct 13 08:57:14.087: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.184:9080/dial?request=hostname&protocol=udp&host=10.244.2.117&port=8081&tries=1'] Namespace:pod-network-test-7702 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 08:57:14.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 08:57:14.088: INFO: ExecWithOptions: Clientset creation + Oct 13 08:57:14.088: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7702/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.184%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.244.2.117%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 08:57:14.152: INFO: Waiting for responses: map[] + Oct 13 08:57:14.152: INFO: reached 10.244.2.117 after 0/1 tries + Oct 13 08:57:14.152: INFO: Going to retry 0 out of 3 pods.... 
+ [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Oct 13 08:57:14.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-7702" for this suite. 10/13/23 08:57:14.156 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:57:14.165 +Oct 13 08:57:14.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:57:14.166 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:14.184 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:14.187 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +STEP: Creating a test headless service 10/13/23 08:57:14.19 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 10/13/23 08:57:14.194 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 10/13/23 08:57:14.194 +STEP: creating a pod to probe DNS 10/13/23 08:57:14.194 +STEP: submitting the pod to kubernetes 10/13/23 08:57:14.194 +Oct 13 08:57:14.202: INFO: Waiting up to 15m0s for pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3" in namespace "dns-9397" to be "running" +Oct 13 08:57:14.205: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.99591ms +Oct 13 08:57:16.210: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00808422s +Oct 13 08:57:16.210: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:57:16.21 +STEP: looking for the results for each expected name from probers 10/13/23 08:57:16.215 +Oct 13 08:57:16.226: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3: the server could not find the requested resource (get pods dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3) +Oct 13 08:57:16.227: INFO: Lookups using dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3 failed for: [jessie_hosts@dns-querier-2] + +Oct 13 08:57:21.238: INFO: DNS probes using dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3 succeeded + +STEP: deleting the pod 10/13/23 08:57:21.238 +STEP: deleting the test headless service 10/13/23 08:57:21.248 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:57:21.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-9397" for this suite. 10/13/23 08:57:21.266 +------------------------------ +• [SLOW TEST] [7.107 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:57:14.165 + Oct 13 08:57:14.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:57:14.166 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:14.184 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:14.187 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + STEP: Creating a test headless service 10/13/23 08:57:14.19 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 10/13/23 08:57:14.194 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9397.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 10/13/23 08:57:14.194 + STEP: creating a pod to probe DNS 10/13/23 08:57:14.194 + STEP: submitting the pod to kubernetes 10/13/23 08:57:14.194 + Oct 13 08:57:14.202: INFO: Waiting up to 15m0s for pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3" in namespace "dns-9397" to be "running" + Oct 13 08:57:14.205: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.99591ms + Oct 13 08:57:16.210: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3": Phase="Running", Reason="", readiness=true. Elapsed: 2.00808422s + Oct 13 08:57:16.210: INFO: Pod "dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:57:16.21 + STEP: looking for the results for each expected name from probers 10/13/23 08:57:16.215 + Oct 13 08:57:16.226: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3: the server could not find the requested resource (get pods dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3) + Oct 13 08:57:16.227: INFO: Lookups using dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3 failed for: [jessie_hosts@dns-querier-2] + + Oct 13 08:57:21.238: INFO: DNS probes using dns-9397/dns-test-6984e52c-d9cf-468a-95d7-1da2a51a73e3 succeeded + + STEP: deleting the pod 10/13/23 08:57:21.238 + STEP: deleting the test headless service 10/13/23 08:57:21.248 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:57:21.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-9397" for this suite. 10/13/23 08:57:21.266 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:57:21.272 +Oct 13 08:57:21.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 08:57:21.273 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:21.289 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:21.292 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +STEP: Creating a test headless service 10/13/23 08:57:21.294 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7470;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7470;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK 
> /results/wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +notcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_tcp@PTR;sleep 1; done + 10/13/23 08:57:21.309 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7470;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7470;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +notcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_tcp@PTR;sleep 1; done + 10/13/23 08:57:21.309 +STEP: creating a pod to probe DNS 10/13/23 08:57:21.309 +STEP: submitting the pod to kubernetes 10/13/23 08:57:21.309 +Oct 13 08:57:21.319: INFO: Waiting up to 15m0s for pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd" in namespace "dns-7470" to be "running" +Oct 13 08:57:21.326: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826722ms +Oct 13 08:57:23.330: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011518899s +Oct 13 08:57:23.330: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd" satisfied condition "running" +STEP: retrieving the pod 10/13/23 08:57:23.33 +STEP: looking for the results for each expected name from probers 10/13/23 08:57:23.333 +Oct 13 08:57:23.337: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.340: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.346: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.351: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.354: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.359: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.361: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.364: INFO: Unable to read 10.105.62.253_udp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.366: INFO: Unable to read 10.105.62.253_tcp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.368: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 
08:57:23.371: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.373: INFO: Unable to read jessie_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.378: INFO: Unable to read jessie_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.380: INFO: Unable to read jessie_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.382: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.385: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.387: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.389: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.392: INFO: Unable to read 10.105.62.253_udp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.394: INFO: Unable to read 10.105.62.253_tcp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:23.394: INFO: Lookups using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7470 wheezy_tcp@dns-test-service.dns-7470 wheezy_udp@dns-test-service.dns-7470.svc wheezy_tcp@dns-test-service.dns-7470.svc wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc wheezy_udp@_http._tcp.test-service-2.dns-7470.svc wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc 10.105.62.253_udp@PTR 10.105.62.253_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7470 jessie_tcp@dns-test-service.dns-7470 jessie_udp@dns-test-service.dns-7470.svc 
jessie_tcp@dns-test-service.dns-7470.svc jessie_udp@_http._tcp.dns-test-service.dns-7470.svc jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc jessie_udp@_http._tcp.test-service-2.dns-7470.svc jessie_tcp@_http._tcp.test-service-2.dns-7470.svc 10.105.62.253_udp@PTR 10.105.62.253_tcp@PTR] + +Oct 13 08:57:28.404: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.408: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.415: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.417: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.440: INFO: Unable to read jessie_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.445: INFO: Unable to read jessie_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) +Oct 13 08:57:28.462: INFO: Lookups using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd failed for: [wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7470 wheezy_udp@dns-test-service.dns-7470.svc wheezy_tcp@dns-test-service.dns-7470.svc jessie_udp@dns-test-service.dns-7470 jessie_udp@dns-test-service.dns-7470.svc] + +Oct 13 08:57:33.450: INFO: DNS probes using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd succeeded + +STEP: deleting the pod 10/13/23 08:57:33.45 +STEP: deleting the test service 10/13/23 08:57:33.459 +STEP: deleting the test headless service 10/13/23 08:57:33.484 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 08:57:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-7470" for this suite. 
10/13/23 08:57:33.5 +------------------------------ +• [SLOW TEST] [12.235 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:57:21.272 + Oct 13 08:57:21.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 08:57:21.273 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:21.289 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:21.292 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + STEP: Creating a test headless service 10/13/23 08:57:21.294 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7470;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7470;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +notcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.62.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_tcp@PTR;sleep 1; done + 10/13/23 08:57:21.309 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7470;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7470;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7470.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7470.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7470.svc;check="$$(dig +notcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.62.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.62.253_tcp@PTR;sleep 1; done + 10/13/23 08:57:21.309 + STEP: creating a pod to probe DNS 10/13/23 08:57:21.309 + STEP: submitting the pod to kubernetes 10/13/23 08:57:21.309 + Oct 13 08:57:21.319: INFO: Waiting up to 15m0s for pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd" in namespace "dns-7470" to be "running" + Oct 13 08:57:21.326: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826722ms + Oct 13 08:57:23.330: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011518899s + Oct 13 08:57:23.330: INFO: Pod "dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd" satisfied condition "running" + STEP: retrieving the pod 10/13/23 08:57:23.33 + STEP: looking for the results for each expected name from probers 10/13/23 08:57:23.333 + Oct 13 08:57:23.337: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.340: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.346: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.351: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.354: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.356: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.359: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.361: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.364: INFO: Unable to read 10.105.62.253_udp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.366: INFO: Unable to read 10.105.62.253_tcp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.368: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) 
+ Oct 13 08:57:23.371: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.373: INFO: Unable to read jessie_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.378: INFO: Unable to read jessie_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.380: INFO: Unable to read jessie_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.382: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.385: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.387: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.389: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.392: INFO: Unable to read 10.105.62.253_udp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.394: INFO: Unable to read 10.105.62.253_tcp@PTR from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:23.394: INFO: Lookups using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7470 wheezy_tcp@dns-test-service.dns-7470 wheezy_udp@dns-test-service.dns-7470.svc wheezy_tcp@dns-test-service.dns-7470.svc wheezy_udp@_http._tcp.dns-test-service.dns-7470.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7470.svc wheezy_udp@_http._tcp.test-service-2.dns-7470.svc wheezy_tcp@_http._tcp.test-service-2.dns-7470.svc 10.105.62.253_udp@PTR 10.105.62.253_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7470 jessie_tcp@dns-test-service.dns-7470 jessie_udp@dns-test-service.dns-7470.svc 
jessie_tcp@dns-test-service.dns-7470.svc jessie_udp@_http._tcp.dns-test-service.dns-7470.svc jessie_tcp@_http._tcp.dns-test-service.dns-7470.svc jessie_udp@_http._tcp.test-service-2.dns-7470.svc jessie_tcp@_http._tcp.test-service-2.dns-7470.svc 10.105.62.253_udp@PTR 10.105.62.253_tcp@PTR] + + Oct 13 08:57:28.404: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.408: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.415: INFO: Unable to read wheezy_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.417: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.440: INFO: Unable to read jessie_udp@dns-test-service.dns-7470 from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.445: INFO: Unable to read jessie_udp@dns-test-service.dns-7470.svc from pod dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd: the server could not find the requested resource (get pods dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd) + Oct 13 08:57:28.462: INFO: Lookups using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd failed for: [wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7470 wheezy_udp@dns-test-service.dns-7470.svc wheezy_tcp@dns-test-service.dns-7470.svc jessie_udp@dns-test-service.dns-7470 jessie_udp@dns-test-service.dns-7470.svc] + + Oct 13 08:57:33.450: INFO: DNS probes using dns-7470/dns-test-eb8b0793-9c2f-4056-a47d-3bf976395ffd succeeded + + STEP: deleting the pod 10/13/23 08:57:33.45 + STEP: deleting the test service 10/13/23 08:57:33.459 + STEP: deleting the test headless service 10/13/23 08:57:33.484 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 08:57:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-7470" for this suite. 
10/13/23 08:57:33.5 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:57:33.509 +Oct 13 08:57:33.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 08:57:33.51 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:33.532 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:33.535 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +STEP: create the deployment 10/13/23 08:57:33.538 +STEP: Wait for the Deployment to create new ReplicaSet 10/13/23 08:57:33.543 +STEP: delete the deployment 10/13/23 08:57:34.052 +STEP: wait for all rs to be garbage collected 10/13/23 08:57:34.058 +STEP: expected 0 pods, got 2 pods 10/13/23 08:57:34.062 +STEP: expected 0 rs, got 1 rs 10/13/23 08:57:34.068 +STEP: Gathering metrics 10/13/23 08:57:34.578 +Oct 13 08:57:34.602: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 08:57:34.607: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 4.928561ms +Oct 13 08:57:34.607: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 08:57:34.607: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 08:57:34.664: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 08:57:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-7002" for this suite. 
10/13/23 08:57:34.669 +------------------------------ +• [1.166 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:57:33.509 + Oct 13 08:57:33.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 08:57:33.51 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:33.532 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:33.535 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + STEP: create the deployment 10/13/23 08:57:33.538 + STEP: Wait for the Deployment to create new ReplicaSet 10/13/23 08:57:33.543 + STEP: delete the deployment 10/13/23 08:57:34.052 + STEP: wait for all rs to be garbage collected 10/13/23 08:57:34.058 + STEP: expected 0 pods, got 2 pods 10/13/23 08:57:34.062 + STEP: expected 0 rs, got 1 rs 10/13/23 08:57:34.068 + STEP: Gathering metrics 10/13/23 08:57:34.578 + Oct 13 08:57:34.602: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 08:57:34.607: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 4.928561ms + Oct 13 08:57:34.607: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 08:57:34.607: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 08:57:34.664: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 08:57:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-7002" for this suite. 
10/13/23 08:57:34.669 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 08:57:34.675 +Oct 13 08:57:34.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 08:57:34.676 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:34.691 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:34.694 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +STEP: Creating pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 in namespace container-probe-107 10/13/23 08:57:34.696 +Oct 13 08:57:34.703: INFO: Waiting up to 5m0s for pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9" in namespace "container-probe-107" to be "not pending" +Oct 13 08:57:34.705: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54148ms +Oct 13 08:57:36.709: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9": Phase="Running", Reason="", readiness=true. Elapsed: 2.005693585s +Oct 13 08:57:36.709: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9" satisfied condition "not pending" +Oct 13 08:57:36.709: INFO: Started pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 in namespace container-probe-107 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:57:36.709 +Oct 13 08:57:36.711: INFO: Initial restart count of pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 is 0 +STEP: deleting the pod 10/13/23 09:01:37.379 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 09:01:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-107" for this suite. 
10/13/23 09:01:37.398 +------------------------------ +• [SLOW TEST] [242.729 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 08:57:34.675 + Oct 13 08:57:34.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 08:57:34.676 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 08:57:34.691 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 08:57:34.694 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 + STEP: Creating pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 in namespace container-probe-107 10/13/23 08:57:34.696 + Oct 13 08:57:34.703: INFO: Waiting up to 5m0s for pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9" in namespace "container-probe-107" to be "not pending" + Oct 13 08:57:34.705: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54148ms + Oct 13 08:57:36.709: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9": Phase="Running", Reason="", readiness=true. Elapsed: 2.005693585s + Oct 13 08:57:36.709: INFO: Pod "busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9" satisfied condition "not pending" + Oct 13 08:57:36.709: INFO: Started pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 in namespace container-probe-107 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 08:57:36.709 + Oct 13 08:57:36.711: INFO: Initial restart count of pod busybox-eb5af4fe-a5a5-4f81-9014-e600767bcfe9 is 0 + STEP: deleting the pod 10/13/23 09:01:37.379 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 09:01:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-107" for this suite. 
10/13/23 09:01:37.398 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:01:37.405 +Oct 13 09:01:37.405: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename job 10/13/23 09:01:37.406 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:01:37.422 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:01:37.425 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +STEP: Creating Indexed job 10/13/23 09:01:37.428 +STEP: Ensuring job reaches completions 10/13/23 09:01:37.435 +STEP: Ensuring pods with index for job exist 10/13/23 09:01:45.44 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Oct 13 09:01:45.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-456" for this suite. 10/13/23 09:01:45.447 +------------------------------ +• [SLOW TEST] [8.047 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:01:37.405 + Oct 13 09:01:37.405: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename job 10/13/23 09:01:37.406 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:01:37.422 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:01:37.425 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 + STEP: Creating Indexed job 10/13/23 09:01:37.428 + STEP: Ensuring job reaches completions 10/13/23 09:01:37.435 + STEP: Ensuring pods with index for job exist 10/13/23 09:01:45.44 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Oct 13 09:01:45.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-456" for this suite. 
10/13/23 09:01:45.447 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:01:45.452 +Oct 13 09:01:45.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:01:45.453 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:01:45.467 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:01:45.469 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +STEP: Performing setup for networking test in namespace pod-network-test-4271 10/13/23 09:01:45.472 +STEP: creating a selector 10/13/23 09:01:45.472 +STEP: Creating the service pods in kubernetes 10/13/23 09:01:45.472 +Oct 13 09:01:45.472: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 13 09:01:45.494: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-4271" to be "running and ready" +Oct 13 09:01:45.498: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396872ms +Oct 13 09:01:45.498: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:01:47.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.00865432s +Oct 13 09:01:47.502: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:01:49.504: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010158183s +Oct 13 09:01:49.504: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:01:51.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008446874s +Oct 13 09:01:51.502: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:01:53.505: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.011554011s +Oct 13 09:01:53.505: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:01:55.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.008674372s +Oct 13 09:01:55.503: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:01:57.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.008440867s +Oct 13 09:01:57.502: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Oct 13 09:01:57.502: INFO: Pod "netserver-0" satisfied condition "running and ready" +Oct 13 09:01:57.506: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4271" to be "running and ready" +Oct 13 09:01:57.511: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.71762ms +Oct 13 09:01:57.511: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Oct 13 09:01:59.515: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.008809068s +Oct 13 09:01:59.515: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Oct 13 09:02:01.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.010318732s +Oct 13 09:02:01.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Oct 13 09:02:03.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.010370243s +Oct 13 09:02:03.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Oct 13 09:02:05.520: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.014108595s +Oct 13 09:02:05.520: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Oct 13 09:02:07.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 10.010314146s +Oct 13 09:02:07.516: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Oct 13 09:02:07.516: INFO: Pod "netserver-1" satisfied condition "running and ready" +Oct 13 09:02:07.520: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4271" to be "running and ready" +Oct 13 09:02:07.523: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.973244ms +Oct 13 09:02:07.523: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Oct 13 09:02:07.523: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 10/13/23 09:02:07.526 +Oct 13 09:02:07.537: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4271" to be "running" +Oct 13 09:02:07.540: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739969ms +Oct 13 09:02:09.545: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008128339s +Oct 13 09:02:09.545: INFO: Pod "test-container-pod" satisfied condition "running" +Oct 13 09:02:09.548: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-4271" to be "running" +Oct 13 09:02:09.551: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.097099ms +Oct 13 09:02:09.551: INFO: Pod "host-test-container-pod" satisfied condition "running" +Oct 13 09:02:09.554: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Oct 13 09:02:09.554: INFO: Going to poll 10.244.0.41 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:02:09.556: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.0.41:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:02:09.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:02:09.557: INFO: ExecWithOptions: Clientset creation +Oct 13 09:02:09.557: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.0.41%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:02:09.612: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 13 09:02:09.612: INFO: Going to poll 10.244.1.193 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:02:09.615: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.193:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:02:09.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:02:09.616: INFO: ExecWithOptions: Clientset creation +Oct 13 09:02:09.616: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.1.193%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:02:09.668: INFO: Found all 1 expected endpoints: [netserver-1] +Oct 13 09:02:09.668: INFO: Going to poll 10.244.2.118 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:02:09.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.118:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:02:09.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:02:09.671: INFO: ExecWithOptions: Clientset creation +Oct 13 09:02:09.671: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.2.118%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:02:09.724: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Oct 13 
09:02:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-4271" for this suite. 10/13/23 09:02:09.728 +------------------------------ +• [SLOW TEST] [24.282 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:01:45.452 + Oct 13 09:01:45.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:01:45.453 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:01:45.467 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:01:45.469 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + STEP: Performing setup for networking test in namespace pod-network-test-4271 10/13/23 09:01:45.472 + STEP: creating a selector 10/13/23 09:01:45.472 + STEP: Creating the service pods in kubernetes 10/13/23 09:01:45.472 + Oct 13 09:01:45.472: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Oct 13 09:01:45.494: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-4271" to be "running and ready" + Oct 13 09:01:45.498: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396872ms + Oct 13 09:01:45.498: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:01:47.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.00865432s + Oct 13 09:01:47.502: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:01:49.504: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010158183s + Oct 13 09:01:49.504: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:01:51.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008446874s + Oct 13 09:01:51.502: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:01:53.505: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.011554011s + Oct 13 09:01:53.505: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:01:55.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.008674372s + Oct 13 09:01:55.503: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:01:57.502: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.008440867s + Oct 13 09:01:57.502: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Oct 13 09:01:57.502: INFO: Pod "netserver-0" satisfied condition "running and ready" + Oct 13 09:01:57.506: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4271" to be "running and ready" + Oct 13 09:01:57.511: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.71762ms + Oct 13 09:01:57.511: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Oct 13 09:01:59.515: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.008809068s + Oct 13 09:01:59.515: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Oct 13 09:02:01.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.010318732s + Oct 13 09:02:01.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Oct 13 09:02:03.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.010370243s + Oct 13 09:02:03.516: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Oct 13 09:02:05.520: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.014108595s + Oct 13 09:02:05.520: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Oct 13 09:02:07.516: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 10.010314146s + Oct 13 09:02:07.516: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Oct 13 09:02:07.516: INFO: Pod "netserver-1" satisfied condition "running and ready" + Oct 13 09:02:07.520: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4271" to be "running and ready" + Oct 13 09:02:07.523: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.973244ms + Oct 13 09:02:07.523: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Oct 13 09:02:07.523: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 10/13/23 09:02:07.526 + Oct 13 09:02:07.537: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4271" to be "running" + Oct 13 09:02:07.540: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739969ms + Oct 13 09:02:09.545: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008128339s + Oct 13 09:02:09.545: INFO: Pod "test-container-pod" satisfied condition "running" + Oct 13 09:02:09.548: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-4271" to be "running" + Oct 13 09:02:09.551: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.097099ms + Oct 13 09:02:09.551: INFO: Pod "host-test-container-pod" satisfied condition "running" + Oct 13 09:02:09.554: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Oct 13 09:02:09.554: INFO: Going to poll 10.244.0.41 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:02:09.556: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.0.41:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:02:09.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:02:09.557: INFO: ExecWithOptions: Clientset creation + Oct 13 09:02:09.557: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.0.41%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:02:09.612: INFO: Found all 1 expected endpoints: [netserver-0] + Oct 13 09:02:09.612: INFO: Going to poll 10.244.1.193 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:02:09.615: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.193:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:02:09.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:02:09.616: INFO: ExecWithOptions: Clientset creation + Oct 13 09:02:09.616: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.1.193%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:02:09.668: INFO: Found all 1 expected endpoints: [netserver-1] + Oct 13 09:02:09.668: INFO: Going to poll 10.244.2.118 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:02:09.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.118:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4271 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:02:09.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:02:09.671: INFO: ExecWithOptions: Clientset creation + Oct 13 09:02:09.671: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4271/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.244.2.118%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:02:09.724: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + 
test/e2e/framework/node/init/init.go:32 + Oct 13 09:02:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-4271" for this suite. 10/13/23 09:02:09.728 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:690 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:02:09.735 +Oct 13 09:02:09.735: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:02:09.736 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:09.751 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:09.754 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:690 +STEP: Creating a ResourceQuota with terminating scope 10/13/23 09:02:09.756 +STEP: Ensuring ResourceQuota status is calculated 10/13/23 09:02:09.76 +STEP: Creating a ResourceQuota with not terminating scope 10/13/23 09:02:11.766 +STEP: Ensuring ResourceQuota status is calculated 10/13/23 09:02:11.775 +STEP: Creating a long running pod 10/13/23 09:02:13.782 +STEP: Ensuring resource quota with not terminating scope captures the pod usage 10/13/23 09:02:13.798 +STEP: Ensuring resource quota with terminating scope ignored the pod usage 10/13/23 09:02:15.802 +STEP: Deleting the pod 10/13/23 09:02:17.808 +STEP: Ensuring resource quota status released the pod usage 10/13/23 09:02:17.817 +STEP: Creating a terminating pod 10/13/23 09:02:19.824 +STEP: Ensuring resource quota with terminating scope captures the pod usage 10/13/23 09:02:19.838 +STEP: Ensuring resource quota with not terminating scope ignored the pod usage 10/13/23 09:02:21.844 +STEP: Deleting the pod 10/13/23 09:02:23.849 +STEP: Ensuring resource quota status released the pod usage 10/13/23 09:02:23.861 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:02:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-4577" for this suite. 10/13/23 09:02:25.872 +------------------------------ +• [SLOW TEST] [16.142 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:690 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:02:09.735 + Oct 13 09:02:09.735: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:02:09.736 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:09.751 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:09.754 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:690 + STEP: Creating a ResourceQuota with terminating scope 10/13/23 09:02:09.756 + STEP: Ensuring ResourceQuota status is calculated 10/13/23 09:02:09.76 + STEP: Creating a ResourceQuota with not terminating scope 10/13/23 09:02:11.766 + STEP: Ensuring ResourceQuota status is calculated 10/13/23 09:02:11.775 + STEP: Creating a long running pod 10/13/23 09:02:13.782 + STEP: Ensuring resource quota with not terminating scope captures the pod usage 10/13/23 09:02:13.798 + STEP: Ensuring resource quota with terminating scope ignored the pod usage 10/13/23 09:02:15.802 + STEP: Deleting the pod 10/13/23 09:02:17.808 + STEP: Ensuring resource quota status released the pod usage 10/13/23 09:02:17.817 + STEP: Creating a terminating pod 10/13/23 09:02:19.824 + STEP: Ensuring resource quota with terminating scope captures the pod usage 10/13/23 09:02:19.838 + STEP: Ensuring resource quota with not terminating scope ignored the pod usage 10/13/23 09:02:21.844 + STEP: Deleting the pod 10/13/23 09:02:23.849 + STEP: Ensuring resource quota status released the pod usage 10/13/23 09:02:23.861 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:02:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-4577" for this suite. 
10/13/23 09:02:25.872 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:02:25.879 +Oct 13 09:02:25.879: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:02:25.88 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:25.897 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:25.899 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:02:25.915 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:02:26.258 +STEP: Deploying the webhook pod 10/13/23 09:02:26.266 +STEP: Wait for the deployment to be ready 10/13/23 09:02:26.274 +Oct 13 09:02:26.281: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:02:28.293 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:02:28.305 +Oct 13 09:02:29.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +Oct 13 09:02:29.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Registering the custom resource webhook via the AdmissionRegistration API 10/13/23 09:02:29.82 +STEP: Creating a custom resource that should be denied by the webhook 10/13/23 09:02:29.844 +STEP: Creating a custom resource whose deletion would be denied by the webhook 10/13/23 09:02:31.889 +STEP: Updating the custom resource with disallowed data should be denied 10/13/23 09:02:31.897 +STEP: Deleting the custom resource should be denied 10/13/23 09:02:31.908 +STEP: Remove the offending key and value from the custom resource data 10/13/23 09:02:31.913 +STEP: Deleting the updated custom resource should be successful 10/13/23 09:02:31.921 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:02:32.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9323" for this suite. 10/13/23 09:02:32.48 +STEP: Destroying namespace "webhook-9323-markers" for this suite. 
10/13/23 09:02:32.487 +------------------------------ +• [SLOW TEST] [6.614 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:02:25.879 + Oct 13 09:02:25.879: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:02:25.88 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:25.897 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:25.899 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:02:25.915 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:02:26.258 + STEP: Deploying the webhook pod 10/13/23 09:02:26.266 + STEP: Wait for the deployment to be ready 10/13/23 09:02:26.274 + Oct 13 09:02:26.281: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:02:28.293 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:02:28.305 + Oct 13 09:02:29.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 + Oct 13 09:02:29.310: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Registering the custom resource webhook via the AdmissionRegistration API 10/13/23 09:02:29.82 + STEP: Creating a custom resource that should be denied by the webhook 10/13/23 09:02:29.844 + STEP: Creating a custom resource whose deletion would be denied by the webhook 10/13/23 09:02:31.889 + STEP: Updating the custom resource with disallowed data should be denied 10/13/23 09:02:31.897 + STEP: Deleting the custom resource should be denied 10/13/23 09:02:31.908 + STEP: Remove the offending key and value from the custom resource data 10/13/23 09:02:31.913 + STEP: Deleting the updated custom resource should be successful 10/13/23 09:02:31.921 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:02:32.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9323" for this suite. 10/13/23 09:02:32.48 + STEP: Destroying namespace "webhook-9323-markers" for this suite. 
10/13/23 09:02:32.487 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:02:32.493 +Oct 13 09:02:32.493: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 09:02:32.494 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:32.516 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:32.52 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +STEP: creating service in namespace services-5453 10/13/23 09:02:32.523 +STEP: creating service affinity-clusterip in namespace services-5453 10/13/23 09:02:32.523 +STEP: creating replication controller affinity-clusterip in namespace services-5453 10/13/23 09:02:32.534 +I1013 09:02:32.540759 23 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-5453, replica count: 3 +I1013 09:02:35.592047 23 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 09:02:35.597: INFO: Creating new exec pod +Oct 13 09:02:35.603: INFO: Waiting up to 5m0s for pod "execpod-affinitylmqq8" in namespace "services-5453" to be "running" +Oct 13 09:02:35.606: INFO: Pod "execpod-affinitylmqq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.776888ms +Oct 13 09:02:37.610: INFO: Pod "execpod-affinitylmqq8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006927885s
+Oct 13 09:02:37.610: INFO: Pod "execpod-affinitylmqq8" satisfied condition "running"
+Oct 13 09:02:38.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-5453 exec execpod-affinitylmqq8 -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80'
+Oct 13 09:02:38.756: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
+Oct 13 09:02:38.756: INFO: stdout: ""
+Oct 13 09:02:38.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-5453 exec execpod-affinitylmqq8 -- /bin/sh -x -c nc -v -z -w 2 10.107.184.82 80'
+Oct 13 09:02:38.883: INFO: stderr: "+ nc -v -z -w 2 10.107.184.82 80\nConnection to 10.107.184.82 80 port [tcp/http] succeeded!\n"
+Oct 13 09:02:38.884: INFO: stdout: ""
+Oct 13 09:02:38.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-5453 exec execpod-affinitylmqq8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.184.82:80/ ; done'
+Oct 13 09:02:39.079: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.184.82:80/\n"
+Oct 13 09:02:39.079: INFO: stdout: "\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2\naffinity-clusterip-s7pr2"
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Received response from host: affinity-clusterip-s7pr2
+Oct 13 09:02:39.079: INFO: Cleaning up the exec pod
+STEP: deleting ReplicationController affinity-clusterip in namespace services-5453, will wait for the garbage collector to delete the pods 10/13/23 09:02:39.089
+Oct 13 09:02:39.152: INFO: Deleting ReplicationController affinity-clusterip took: 5.311391ms
+Oct 13 09:02:39.252: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.398089ms
+[AfterEach] [sig-network] Services
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:02:41.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] Services
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] Services
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] Services
+ tear down framework | framework.go:193
+STEP: Destroying namespace "services-5453" for this suite. 10/13/23 09:02:41.17
+------------------------------
+• [SLOW TEST] [8.682 seconds]
+[sig-network] Services
+test/e2e/network/common/framework.go:23
+ should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
+ test/e2e/network/service.go:2191
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSS
+------------------------------
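For readers reproducing the session-affinity check outside the suite, here is a minimal sketch of a ClientIP-affinity Service of the kind this spec exercises; the namespace, labels, and target port are illustrative, not taken from this log:

```
cat <<EOF | kubectl apply -n affinity-demo -f -
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  selector:
    app: affinity-clusterip     # assumed pod label
  ports:
  - port: 80
    targetPort: 9376            # assumed container port
  sessionAffinity: ClientIP     # repeated requests from one client hit one backend
EOF
```

With `sessionAffinity: ClientIP`, a loop of requests from a single exec pod should keep returning the same backend hostname, which is exactly what the sixteen identical `affinity-clusterip-s7pr2` responses above demonstrate.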
+[sig-storage] ConfigMap
+ binary data should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:175
+[BeforeEach] [sig-storage] ConfigMap
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:02:41.175
+Oct 13 09:02:41.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename configmap 10/13/23 09:02:41.176
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:41.19
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:41.193
+[BeforeEach] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:31
+[It] binary data should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:175
+STEP: Creating configMap with name configmap-test-upd-85ee8f93-55d6-4815-9db3-9936f82dcd31 10/13/23 09:02:41.199
+STEP: Creating the pod 10/13/23 09:02:41.203
+Oct 13 09:02:41.209: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cb9e294-0c39-45af-80f5-e321336cdd75" in namespace "configmap-7033" to be "running"
+Oct 13 09:02:41.212: INFO: Pod "pod-configmaps-6cb9e294-0c39-45af-80f5-e321336cdd75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819211ms
+Oct 13 09:02:43.216: INFO: Pod "pod-configmaps-6cb9e294-0c39-45af-80f5-e321336cdd75": Phase="Running", Reason="", readiness=false. Elapsed: 2.006739808s
+Oct 13 09:02:43.216: INFO: Pod "pod-configmaps-6cb9e294-0c39-45af-80f5-e321336cdd75" satisfied condition "running"
+STEP: Waiting for pod with text data 10/13/23 09:02:43.216
+STEP: Waiting for pod with binary data 10/13/23 09:02:43.23
+[AfterEach] [sig-storage] ConfigMap
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:02:43.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ tear down framework | framework.go:193
+STEP: Destroying namespace "configmap-7033" for this suite. 10/13/23 09:02:43.24
+------------------------------
+• [2.073 seconds]
+[sig-storage] ConfigMap
+test/e2e/common/storage/framework.go:23
+ binary data should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:175
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSS
+------------------------------
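The "binary data" in this spec refers to the ConfigMap `binaryData` field. A quick way to see the same behavior by hand, with assumed names and an assumed local payload file:

```
printf '\x01\x02\x03' > payload.bin     # assumed binary payload
kubectl create configmap demo-binary \
  --from-literal=text-key=hello \
  --from-file=binary-key=payload.bin
kubectl get configmap demo-binary -o yaml   # binary-key appears under .binaryData, base64-encoded
```

Mounted as a volume, both keys become files in the pod; the test verifies that the binary file's bytes round-trip exactly.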
+[sig-apps] ReplicaSet
+ Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+[BeforeEach] [sig-apps] ReplicaSet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:02:43.249
+Oct 13 09:02:43.249: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename replicaset 10/13/23 09:02:43.25
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:43.268
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:43.271
+[BeforeEach] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:31
+[It] Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+Oct 13 09:02:43.282: INFO: Pod name sample-pod: Found 0 pods out of 1
+Oct 13 09:02:48.288: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running 10/13/23 09:02:48.288
+STEP: Scaling up "test-rs" replicaset 10/13/23 09:02:48.288
+Oct 13 09:02:48.300: INFO: Updating replica set "test-rs"
+STEP: patching the ReplicaSet 10/13/23 09:02:48.3
+W1013 09:02:48.306867 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds"
+Oct 13 09:02:48.308: INFO: observed ReplicaSet test-rs in namespace replicaset-4325 with ReadyReplicas 1, AvailableReplicas 1
+Oct 13 09:02:48.327: INFO: observed ReplicaSet test-rs in namespace replicaset-4325 with ReadyReplicas 1, AvailableReplicas 1
+Oct 13 09:02:48.354: INFO: observed ReplicaSet test-rs in namespace replicaset-4325 with ReadyReplicas 1, AvailableReplicas 1
+Oct 13 09:02:48.364: INFO: observed ReplicaSet test-rs in namespace replicaset-4325 with ReadyReplicas 1, AvailableReplicas 1
+Oct 13 09:02:49.020: INFO: observed ReplicaSet test-rs in namespace replicaset-4325 with ReadyReplicas 2, AvailableReplicas 2
+Oct 13 09:02:49.141: INFO: observed Replicaset test-rs in namespace replicaset-4325 with ReadyReplicas 3 found true
+[AfterEach] [sig-apps] ReplicaSet
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:02:49.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "replicaset-4325" for this suite. 10/13/23 09:02:49.144
+------------------------------
+• [SLOW TEST] [5.900 seconds]
+[sig-apps] ReplicaSet
+test/e2e/apps/framework.go:23
+ Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSS
+------------------------------
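The scale-up and patch steps above have straightforward kubectl equivalents; a minimal sketch with assumed names:

```
kubectl scale replicaset test-rs --replicas=3          # the "Scaling up" step
kubectl patch replicaset test-rs --type=strategic \
  -p '{"metadata":{"labels":{"patched":"true"}}}'      # a well-formed patch payload
```

The `W1013 ... unknown field "spec.template.spec.TerminationGracePeriodSeconds"` warning is harmless here: it appears to stem from a mis-cased field name in the test's patch body (JSON field names are lower camel case, `terminationGracePeriodSeconds`), so the server ignores that field and warns.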
+[sig-cli] Kubectl client Kubectl server-side dry-run
+ should check if kubectl can dry-run update Pods [Conformance]
+ test/e2e/kubectl/kubectl.go:962
+[BeforeEach] [sig-cli] Kubectl client
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:02:49.149
+Oct 13 09:02:49.149: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename kubectl 10/13/23 09:02:49.15
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:49.162
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:49.165
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/kubectl/kubectl.go:274
+[It] should check if kubectl can dry-run update Pods [Conformance]
+ test/e2e/kubectl/kubectl.go:962
+STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 09:02:49.167
+Oct 13 09:02:49.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5735 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
+Oct 13 09:02:49.231: INFO: stderr: ""
+Oct 13 09:02:49.232: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
+STEP: replace the image in the pod with server-side dry-run 10/13/23 09:02:49.232
+Oct 13 09:02:49.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5735 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server'
+Oct 13 09:02:49.882: INFO: stderr: ""
+Oct 13 09:02:49.882: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
+STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 09:02:49.882
+Oct 13 09:02:49.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5735 delete pods e2e-test-httpd-pod'
+Oct 13 09:02:51.034: INFO: stderr: ""
+Oct 13 09:02:51.034: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:02:51.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ tear down framework | framework.go:193
+STEP: Destroying namespace "kubectl-5735" for this suite. 10/13/23 09:02:51.038
+------------------------------
+• [1.893 seconds]
+[sig-cli] Kubectl client
+test/e2e/kubectl/framework.go:23
+ Kubectl server-side dry-run
+ test/e2e/kubectl/kubectl.go:956
+ should check if kubectl can dry-run update Pods [Conformance]
+ test/e2e/kubectl/kubectl.go:962
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
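The point of `--dry-run=server` is that the request goes through full server-side admission and validation but is never persisted. A minimal sketch of the same check, with assumed names:

```
kubectl -n demo run web --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4
kubectl -n demo patch pod web --dry-run=server \
  -p '{"spec":{"containers":[{"name":"web","image":"registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}}'
kubectl -n demo get pod web -o jsonpath='{.spec.containers[0].image}'   # still the httpd image
```

The patch reports `pod/web patched`, yet the live object is unchanged, which is exactly what the verification step above asserts.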
+[sig-apps] Job
+ should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+ test/e2e/apps/job.go:426
+[BeforeEach] [sig-apps] Job
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:02:51.043
+Oct 13 09:02:51.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename job 10/13/23 09:02:51.044
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:02:51.058
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:02:51.06
+[BeforeEach] [sig-apps] Job
+ test/e2e/framework/metrics/init/init.go:31
+[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+ test/e2e/apps/job.go:426
+STEP: Creating a job 10/13/23 09:02:51.062
+STEP: Ensuring job reaches completions 10/13/23 09:02:51.067
+[AfterEach] [sig-apps] Job
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:03:01.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] Job
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] Job
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] Job
+ tear down framework | framework.go:193
+STEP: Destroying namespace "job-5749" for this suite. 10/13/23 09:03:01.079
+------------------------------
+• [SLOW TEST] [10.041 seconds]
+[sig-apps] Job
+test/e2e/apps/framework.go:23
+ should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+ test/e2e/apps/job.go:426
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SS
+------------------------------
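"Locally restarted" means the kubelet restarts the failed container in place rather than the Job controller creating a new pod, which is the behavior `restartPolicy: OnFailure` gives you. A minimal sketch of a Job of that shape (the name is assumed; the real test's containers fail probabilistically, simplified here):

```
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-demo
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # failed containers are retried on the same node
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "exit 0"]
EOF
```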
+[sig-apps] ReplicationController
+ should surface a failure condition on a common issue like exceeded quota [Conformance]
+ test/e2e/apps/rc.go:83
+[BeforeEach] [sig-apps] ReplicationController
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:03:01.085
+Oct 13 09:03:01.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename replication-controller 10/13/23 09:03:01.086
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:03:01.102
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:03:01.105
+[BeforeEach] [sig-apps] ReplicationController
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] ReplicationController
+ test/e2e/apps/rc.go:57
+[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
+ test/e2e/apps/rc.go:83
+Oct 13 09:03:01.107: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
+STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 10/13/23 09:03:02.117
+STEP: Checking rc "condition-test" has the desired failure condition set 10/13/23 09:03:02.124
+STEP: Scaling down rc "condition-test" to satisfy pod quota 10/13/23 09:03:03.131
+Oct 13 09:03:03.138: INFO: Updating replication controller "condition-test"
+STEP: Checking rc "condition-test" has no failure condition set 10/13/23 09:03:03.138
+[AfterEach] [sig-apps] ReplicationController
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:03:04.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] ReplicationController
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] ReplicationController
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] ReplicationController
+ tear down framework | framework.go:193
+STEP: Destroying namespace "replication-controller-711" for this suite. 10/13/23 09:03:04.15
+------------------------------
+• [3.071 seconds]
+[sig-apps] ReplicationController
+test/e2e/apps/framework.go:23
+ should surface a failure condition on a common issue like exceeded quota [Conformance]
+ test/e2e/apps/rc.go:83
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
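The quota created by this test has a direct kubectl equivalent, and the failure condition it looks for can be read from the RC's status. A sketch under assumed names (the condition type for quota-blocked pod creation is, to the best of my knowledge, `ReplicaFailure`):

```
kubectl -n demo create quota condition-test --hard=pods=2
# ... create an RC named condition-test asking for 3 replicas ...
kubectl -n demo get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
```

Scaling the RC back under the quota clears the condition, which is the final check in the run above.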
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ works for multiple CRDs of different groups [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:276
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:03:04.156
+Oct 13 09:03:04.156: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:03:04.157
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:03:04.178
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:03:04.181
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+[It] works for multiple CRDs of different groups [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:276
+STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 10/13/23 09:03:04.184
+Oct 13 09:03:04.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+Oct 13 09:03:06.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:03:12.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "crd-publish-openapi-5320" for this suite. 10/13/23 09:03:12.915
+------------------------------
+• [SLOW TEST] [8.765 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+test/e2e/apimachinery/framework.go:23
+ works for multiple CRDs of different groups [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:276
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSS
+------------------------------
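What "show up in OpenAPI documentation" means: once a CRD is established, its schema is merged into the API server's aggregated OpenAPI document. A sketch with an illustrative group and kind (the real test generates random groups):

```
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Foo'   # definition key format assumed
```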
+[sig-apps] Daemon set [Serial]
+ should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+ test/e2e/apps/daemon_set.go:374
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:03:12.922
+Oct 13 09:03:12.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename daemonsets 10/13/23 09:03:12.923
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:03:12.941
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:03:12.943
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] Daemon set [Serial]
+ test/e2e/apps/daemon_set.go:146
+[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+ test/e2e/apps/daemon_set.go:374
+Oct 13 09:03:12.960: INFO: Creating simple daemon set daemon-set
+STEP: Check that daemon pods launch on every node of the cluster. 10/13/23 09:03:12.964
+Oct 13 09:03:12.975: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
+Oct 13 09:03:12.975: INFO: Node node1 is running 0 daemon pod, expected 1
+Oct 13 09:03:13.984: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
+Oct 13 09:03:13.984: INFO: Node node1 is running 0 daemon pod, expected 1
+Oct 13 09:03:14.983: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3
+Oct 13 09:03:14.983: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set
+STEP: Update daemon pods image. 10/13/23 09:03:15.002
+STEP: Check that daemon pods images are updated. 10/13/23 09:03:15.015
+Oct 13 09:03:15.018: INFO: Wrong image for pod: daemon-set-2wpp5. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:15.018: INFO: Wrong image for pod: daemon-set-54d2v. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:15.018: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:16.028: INFO: Wrong image for pod: daemon-set-54d2v. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:16.028: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:17.030: INFO: Wrong image for pod: daemon-set-54d2v. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:17.030: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:18.026: INFO: Wrong image for pod: daemon-set-54d2v. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:18.026: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:18.026: INFO: Pod daemon-set-jrh56 is not available
+Oct 13 09:03:19.028: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:20.029: INFO: Wrong image for pod: daemon-set-9n9bc. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4.
+Oct 13 09:03:20.029: INFO: Pod daemon-set-fh7xg is not available
+Oct 13 09:03:22.030: INFO: Pod daemon-set-c7k6l is not available
+STEP: Check that daemon pods are still running on every node of the cluster. 10/13/23 09:03:22.036
+Oct 13 09:03:22.043: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
+Oct 13 09:03:22.043: INFO: Node node2 is running 0 daemon pod, expected 1
+Oct 13 09:03:23.051: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3
+Oct 13 09:03:23.051: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set
+[AfterEach] [sig-apps] Daemon set [Serial]
+ test/e2e/apps/daemon_set.go:111
+STEP: Deleting DaemonSet "daemon-set" 10/13/23 09:03:23.067
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3201, will wait for the garbage collector to delete the pods 10/13/23 09:03:23.067
+Oct 13 09:03:23.128: INFO: Deleting DaemonSet.extensions daemon-set took: 6.710817ms
+Oct 13 09:03:23.228: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.355168ms
+Oct 13 09:03:25.232: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
+Oct 13 09:03:25.232: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
+Oct 13 09:03:25.236: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"26410"},"items":null}
+
+Oct 13 09:03:25.238: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"26410"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:03:25.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "daemonsets-3201" for this suite. 10/13/23 09:03:25.252
+------------------------------
+• [SLOW TEST] [12.337 seconds]
+[sig-apps] Daemon set [Serial]
+test/e2e/apps/framework.go:23
+ should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+ test/e2e/apps/daemon_set.go:374
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
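The "Wrong image ... Expected: agnhost ... got: httpd" lines above are the expected churn of a RollingUpdate in progress: old httpd pods are replaced node by node with agnhost pods. Triggered by hand it would look roughly like this (the container name `app` is assumed):

```
kubectl -n demo set image daemonset/daemon-set app=registry.k8s.io/e2e-test-images/agnhost:2.43
kubectl -n demo rollout status daemonset/daemon-set   # blocks until every node runs the new image
```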
+[sig-api-machinery] ResourceQuota
+ should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:03:25.26
+Oct 13 09:03:25.260: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename resourcequota 10/13/23 09:03:25.261
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:03:25.274
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:03:25.276
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:31
+[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+STEP: Counting existing ResourceQuota 10/13/23 09:03:42.282
+STEP: Creating a ResourceQuota 10/13/23 09:03:47.288
+STEP: Ensuring resource quota status is calculated 10/13/23 09:03:47.295
+STEP: Creating a ConfigMap 10/13/23 09:03:49.301
+STEP: Ensuring resource quota status captures configMap creation 10/13/23 09:03:49.317
+STEP: Deleting a ConfigMap 10/13/23 09:03:51.323
+STEP: Ensuring resource quota status released usage 10/13/23 09:03:51.33
+[AfterEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:03:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ tear down framework | framework.go:193
+STEP: Destroying namespace "resourcequota-3546" for this suite. 10/13/23 09:03:53.341
+------------------------------
+• [SLOW TEST] [28.091 seconds]
+[sig-api-machinery] ResourceQuota
+test/e2e/apimachinery/framework.go:23
+ should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+S
+------------------------------
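"Capturing the life of a configMap" means the quota's usage counter rises when the object is created and drops when it is deleted. The same lifecycle, sketched with assumed names:

```
kubectl -n demo create quota demo-quota --hard=count/configmaps=2
kubectl -n demo create configmap demo-cm --from-literal=k=v
kubectl -n demo describe quota demo-quota     # Used shows count/configmaps: 1
kubectl -n demo delete configmap demo-cm
kubectl -n demo describe quota demo-quota     # usage is released back to 0
```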
+[sig-apps] CronJob
+ should schedule multiple jobs concurrently [Conformance]
+ test/e2e/apps/cronjob.go:69
+[BeforeEach] [sig-apps] CronJob
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:03:53.35
+Oct 13 09:03:53.350: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename cronjob 10/13/23 09:03:53.351
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:03:53.365
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:03:53.367
+[BeforeEach] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:31
+[It] should schedule multiple jobs concurrently [Conformance]
+ test/e2e/apps/cronjob.go:69
+STEP: Creating a cronjob 10/13/23 09:03:53.369
+STEP: Ensuring more than one job is running at a time 10/13/23 09:03:53.376
+STEP: Ensuring at least two running jobs exists by listing jobs explicitly 10/13/23 09:05:01.381
+STEP: Removing cronjob 10/13/23 09:05:01.386
+[AfterEach] [sig-apps] CronJob
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:05:01.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] CronJob
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] CronJob
+ tear down framework | framework.go:193
+STEP: Destroying namespace "cronjob-1336" for this suite. 10/13/23 09:05:01.397
+------------------------------
+• [SLOW TEST] [68.056 seconds]
+[sig-apps] CronJob
+test/e2e/apps/framework.go:23
+ should schedule multiple jobs concurrently [Conformance]
+ test/e2e/apps/cronjob.go:69
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
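Concurrent scheduling follows from `concurrencyPolicy: Allow` combined with job runs that outlast the schedule interval. A minimal sketch (name, image, and sleep duration are illustrative):

```
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow      # the default; overlapping Jobs are permitted
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: busybox
            command: ["sleep", "300"]   # long-running, so runs overlap
EOF
```

After a couple of minutes, `kubectl get jobs` should list at least two active Jobs, which is the assertion this spec makes before removing the CronJob.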
+[sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
+ test/e2e/common/node/sysctl.go:77
+[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ test/e2e/common/node/sysctl.go:37
+[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 10/13/23 09:05:01.406
+Oct 13 09:05:01.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798
+STEP: Building a namespace api object, basename sysctl 10/13/23 09:05:01.407
+STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:01.428
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:01.432
+[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ test/e2e/common/node/sysctl.go:67
+[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
+ test/e2e/common/node/sysctl.go:77
+STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 10/13/23 09:05:01.436
+STEP: Watching for error events or started pod 10/13/23 09:05:01.445
+STEP: Waiting for pod completion 10/13/23 09:05:03.451
+Oct 13 09:05:03.451: INFO: Waiting up to 3m0s for pod "sysctl-637e758f-224c-423c-9df4-4c09fc173c8a" in namespace "sysctl-1131" to be "completed"
+Oct 13 09:05:03.455: INFO: Pod "sysctl-637e758f-224c-423c-9df4-4c09fc173c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10927ms
+Oct 13 09:05:05.461: INFO: Pod "sysctl-637e758f-224c-423c-9df4-4c09fc173c8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009479387s
+Oct 13 09:05:05.461: INFO: Pod "sysctl-637e758f-224c-423c-9df4-4c09fc173c8a" satisfied condition "completed"
+STEP: Checking that the pod succeeded 10/13/23 09:05:05.464
+STEP: Getting logs from the pod 10/13/23 09:05:05.465
+STEP: Checking that the sysctl is actually updated 10/13/23 09:05:05.473
+[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ test/e2e/framework/node/init/init.go:32
+Oct 13 09:05:05.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "sysctl-1131" for this suite. 10/13/23 09:05:05.476
+------------------------------
+• [4.076 seconds]
+[sig-node] Sysctls [LinuxOnly] [NodeConformance]
+test/e2e/common/node/framework.go:23
+ should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
+ test/e2e/common/node/sysctl.go:77
+
+ Begin Captured GinkgoWriter Output >>
+ [captured output omitted: verbatim duplicate of the run log above]
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
10/13/23 09:05:05.476 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:05.484 +Oct 13 09:05:05.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:05:05.485 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:05.497 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:05.501 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:05:05.516 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:05:05.895 +STEP: Deploying the webhook pod 10/13/23 09:05:05.904 +STEP: Wait for the deployment to be ready 10/13/23 09:05:05.916 +Oct 13 09:05:05.923: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:05:07.935 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:05:07.953 +Oct 13 09:05:08.955: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 10/13/23 09:05:08.958 +STEP: create a namespace for the webhook 10/13/23 09:05:08.977 +STEP: create a configmap should be unconditionally rejected by the webhook 10/13/23 09:05:08.983 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-4098" for this suite. 10/13/23 09:05:09.056 +STEP: Destroying namespace "webhook-4098-markers" for this suite. 
10/13/23 09:05:09.063 +------------------------------ +• [3.589 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:05.484 + Oct 13 09:05:05.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:05:05.485 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:05.497 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:05.501 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:05:05.516 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:05:05.895 + STEP: Deploying the webhook pod 10/13/23 09:05:05.904 + STEP: Wait for the deployment to be ready 10/13/23 09:05:05.916 + Oct 13 09:05:05.923: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:05:07.935 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:05:07.953 + Oct 13 09:05:08.955: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 10/13/23 09:05:08.958 + STEP: create a namespace for the webhook 10/13/23 09:05:08.977 + STEP: create a configmap should be unconditionally rejected by the webhook 10/13/23 09:05:08.983 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-4098" for this suite. 10/13/23 09:05:09.056 + STEP: Destroying namespace "webhook-4098-markers" for this suite. 
10/13/23 09:05:09.063 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:09.074 +Oct 13 09:05:09.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 09:05:09.075 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.092 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.095 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +STEP: creating a Service 10/13/23 09:05:09.1 +STEP: watching for the Service to be added 10/13/23 09:05:09.11 +Oct 13 09:05:09.112: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Oct 13 09:05:09.112: INFO: Service test-service-7rxkj created +STEP: Getting /status 10/13/23 09:05:09.112 +Oct 13 09:05:09.115: INFO: Service test-service-7rxkj has LoadBalancer: {[]} +STEP: patching the ServiceStatus 10/13/23 09:05:09.115 +STEP: watching for the Service to be patched 10/13/23 09:05:09.12 +Oct 13 09:05:09.122: INFO: observed Service test-service-7rxkj in namespace services-4589 with annotations: map[] & LoadBalancer: {[]} +Oct 13 09:05:09.122: INFO: Found Service test-service-7rxkj in namespace services-4589 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Oct 13 09:05:09.122: INFO: Service test-service-7rxkj has service status patched +STEP: updating the ServiceStatus 10/13/23 09:05:09.122 +Oct 13 09:05:09.130: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated 10/13/23 09:05:09.13 +Oct 13 09:05:09.132: INFO: Observed Service test-service-7rxkj in namespace services-4589 with annotations: map[] & Conditions: {[]} +Oct 13 09:05:09.132: INFO: Observed event: &Service{ObjectMeta:{test-service-7rxkj services-4589 8d9f845e-99c2-45b6-bc42-22d6832042e3 26789 0 2023-10-13 09:05:09 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-10-13 09:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-10-13 09:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.105.115.36,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.115.36],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Oct 13 09:05:09.132: INFO: Found Service test-service-7rxkj in namespace services-4589 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 13 09:05:09.132: INFO: Service test-service-7rxkj has service status updated +STEP: patching the service 10/13/23 09:05:09.132 +STEP: watching for the Service to be patched 10/13/23 09:05:09.144 +Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] +Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] +Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] +Oct 13 09:05:09.145: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service:patched test-service-static:true] +Oct 13 09:05:09.145: INFO: Service test-service-7rxkj patched +STEP: deleting the service 10/13/23 09:05:09.145 +STEP: watching for the Service to be deleted 10/13/23 09:05:09.158 +Oct 13 09:05:09.160: INFO: Observed event: ADDED +Oct 13 09:05:09.160: INFO: Observed event: MODIFIED +Oct 13 09:05:09.160: INFO: Observed event: MODIFIED +Oct 13 09:05:09.160: INFO: Observed event: MODIFIED +Oct 13 09:05:09.160: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Oct 13 09:05:09.160: INFO: Service test-service-7rxkj deleted +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-4589" for this suite. 
10/13/23 09:05:09.163 +------------------------------ +• [0.095 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:09.074 + Oct 13 09:05:09.074: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 09:05:09.075 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.092 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.095 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 + STEP: creating a Service 10/13/23 09:05:09.1 + STEP: watching for the Service to be added 10/13/23 09:05:09.11 + Oct 13 09:05:09.112: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] + Oct 13 09:05:09.112: INFO: Service test-service-7rxkj created + STEP: Getting /status 10/13/23 09:05:09.112 + Oct 13 09:05:09.115: INFO: Service test-service-7rxkj has LoadBalancer: {[]} + STEP: patching the ServiceStatus 10/13/23 09:05:09.115 + STEP: watching for the Service to be patched 10/13/23 09:05:09.12 + Oct 13 09:05:09.122: INFO: observed Service test-service-7rxkj in namespace services-4589 with annotations: map[] & LoadBalancer: {[]} + Oct 13 09:05:09.122: INFO: Found Service test-service-7rxkj in namespace services-4589 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} + Oct 13 09:05:09.122: INFO: Service test-service-7rxkj has service status patched + STEP: updating the ServiceStatus 10/13/23 09:05:09.122 + Oct 13 09:05:09.130: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Service to be updated 10/13/23 09:05:09.13 + Oct 13 09:05:09.132: INFO: Observed Service test-service-7rxkj in namespace services-4589 with annotations: map[] & Conditions: {[]} + Oct 13 09:05:09.132: INFO: Observed event: &Service{ObjectMeta:{test-service-7rxkj services-4589 8d9f845e-99c2-45b6-bc42-22d6832042e3 26789 0 2023-10-13 09:05:09 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-10-13 09:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-10-13 09:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.105.115.36,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.115.36],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} + Oct 13 09:05:09.132: INFO: Found Service test-service-7rxkj in namespace services-4589 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Oct 13 09:05:09.132: INFO: Service test-service-7rxkj has service status updated + STEP: patching the service 10/13/23 09:05:09.132 + STEP: watching for the Service to be patched 10/13/23 09:05:09.144 + Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] + Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] + Oct 13 09:05:09.145: INFO: observed Service test-service-7rxkj in namespace services-4589 with labels: map[test-service-static:true] + Oct 13 09:05:09.145: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service:patched test-service-static:true] + Oct 13 09:05:09.145: INFO: Service test-service-7rxkj patched + STEP: deleting the service 10/13/23 09:05:09.145 + STEP: watching for the Service to be deleted 10/13/23 09:05:09.158 + Oct 13 09:05:09.160: INFO: Observed event: ADDED + Oct 13 09:05:09.160: INFO: Observed event: MODIFIED + Oct 13 09:05:09.160: INFO: Observed event: MODIFIED + Oct 13 09:05:09.160: INFO: Observed event: MODIFIED + Oct 13 09:05:09.160: INFO: Found Service test-service-7rxkj in namespace services-4589 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] + Oct 13 09:05:09.160: INFO: Service test-service-7rxkj deleted + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-4589" for this suite. 
10/13/23 09:05:09.163 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:09.171 +Oct 13 09:05:09.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename endpointslice 10/13/23 09:05:09.172 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.189 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.192 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +Oct 13 09:05:09.202: INFO: Endpoints addresses: [10.253.8.110 10.253.8.111 10.253.8.112] , ports: [6443] +Oct 13 09:05:09.202: INFO: EndpointSlices addresses: [10.253.8.110 10.253.8.111 10.253.8.112] , ports: [6443] +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:09.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-2646" for this suite. 
10/13/23 09:05:09.205 +------------------------------ +• [0.041 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:09.171 + Oct 13 09:05:09.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename endpointslice 10/13/23 09:05:09.172 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.189 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.192 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 + Oct 13 09:05:09.202: INFO: Endpoints addresses: [10.253.8.110 10.253.8.111 10.253.8.112] , ports: [6443] + Oct 13 09:05:09.202: INFO: EndpointSlices addresses: [10.253.8.110 10.253.8.111 10.253.8.112] , ports: [6443] + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:09.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-2646" for this suite. 
10/13/23 09:05:09.205 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:09.218 +Oct 13 09:05:09.218: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename watch 10/13/23 09:05:09.218 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.232 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.234 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +STEP: getting a starting resourceVersion 10/13/23 09:05:09.237 +STEP: starting a background goroutine to produce watch events 10/13/23 09:05:09.24 +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 10/13/23 09:05:09.24 +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:12.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-7817" for this suite. 
10/13/23 09:05:12.073 +------------------------------ +• [2.908 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:09.218 + Oct 13 09:05:09.218: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename watch 10/13/23 09:05:09.218 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:09.232 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:09.234 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + STEP: getting a starting resourceVersion 10/13/23 09:05:09.237 + STEP: starting a background goroutine to produce watch events 10/13/23 09:05:09.24 + STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 10/13/23 09:05:09.24 + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:12.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-7817" for this suite. 10/13/23 09:05:12.073 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Ephemeral Containers [NodeConformance] + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:12.127 +Oct 13 09:05:12.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename ephemeral-containers-test 10/13/23 09:05:12.128 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:12.14 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:12.143 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 +[It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +STEP: creating a target pod 10/13/23 09:05:12.145 +Oct 13 09:05:12.151: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1686" to be "running and ready" +Oct 13 09:05:12.156: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.074786ms +Oct 13 09:05:12.156: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:05:14.177: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.026337076s +Oct 13 09:05:14.177: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) +Oct 13 09:05:14.177: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" +STEP: adding an ephemeral container 10/13/23 09:05:14.182 +Oct 13 09:05:14.196: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1686" to be "container debugger running" +Oct 13 09:05:14.200: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.04786ms +Oct 13 09:05:16.204: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007900221s +Oct 13 09:05:18.205: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.008840384s +Oct 13 09:05:18.205: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" +STEP: checking pod container endpoints 10/13/23 09:05:18.205 +Oct 13 09:05:18.205: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1686 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:05:18.205: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:05:18.206: INFO: ExecWithOptions: Clientset creation +Oct 13 09:05:18.206: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/ephemeral-containers-test-1686/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) +Oct 13 09:05:18.257: INFO: Exec stderr: "" +[AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:18.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "ephemeral-containers-test-1686" for this suite. 
10/13/23 09:05:18.267 +------------------------------ +• [SLOW TEST] [6.145 seconds] +[sig-node] Ephemeral Containers [NodeConformance] +test/e2e/common/node/framework.go:23 + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:12.127 + Oct 13 09:05:12.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename ephemeral-containers-test 10/13/23 09:05:12.128 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:12.14 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:12.143 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 + [It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + STEP: creating a target pod 10/13/23 09:05:12.145 + Oct 13 09:05:12.151: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1686" to be "running and ready" + Oct 13 09:05:12.156: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 5.074786ms + Oct 13 09:05:12.156: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:05:14.177: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.026337076s + Oct 13 09:05:14.177: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) + Oct 13 09:05:14.177: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" + STEP: adding an ephemeral container 10/13/23 09:05:14.182 + Oct 13 09:05:14.196: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1686" to be "container debugger running" + Oct 13 09:05:14.200: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.04786ms + Oct 13 09:05:16.204: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007900221s + Oct 13 09:05:18.205: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.008840384s + Oct 13 09:05:18.205: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" + STEP: checking pod container endpoints 10/13/23 09:05:18.205 + Oct 13 09:05:18.205: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1686 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:05:18.205: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:05:18.206: INFO: ExecWithOptions: Clientset creation + Oct 13 09:05:18.206: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/ephemeral-containers-test-1686/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) + Oct 13 09:05:18.257: INFO: Exec stderr: "" + [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:18.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "ephemeral-containers-test-1686" for this suite. 10/13/23 09:05:18.267 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:18.272 +Oct 13 09:05:18.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:05:18.273 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:18.283 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:18.286 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 +STEP: Creating the pod 10/13/23 09:05:18.288 +Oct 13 09:05:18.297: INFO: Waiting up to 5m0s for pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" in namespace "projected-1709" to be "running and ready" +Oct 13 09:05:18.301: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857637ms +Oct 13 09:05:18.302: INFO: The phase of Pod labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:05:20.308: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011392896s +Oct 13 09:05:20.308: INFO: The phase of Pod labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d is Running (Ready = true) +Oct 13 09:05:20.308: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" satisfied condition "running and ready" +Oct 13 09:05:20.828: INFO: Successfully updated pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:24.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1709" for this suite. 10/13/23 09:05:24.858 +------------------------------ +• [SLOW TEST] [6.593 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:18.272 + Oct 13 09:05:18.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:05:18.273 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:18.283 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:18.286 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 + STEP: Creating the pod 10/13/23 09:05:18.288 + Oct 13 09:05:18.297: INFO: Waiting up to 5m0s for pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" in namespace "projected-1709" to be "running and ready" + Oct 13 09:05:18.301: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857637ms + Oct 13 09:05:18.302: INFO: The phase of Pod labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:05:20.308: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011392896s + Oct 13 09:05:20.308: INFO: The phase of Pod labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d is Running (Ready = true) + Oct 13 09:05:20.308: INFO: Pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" satisfied condition "running and ready" + Oct 13 09:05:20.828: INFO: Successfully updated pod "labelsupdatef8ab543e-4272-4faa-829e-e7a76dcb520d" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:24.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1709" for this suite. 10/13/23 09:05:24.858 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:24.866 +Oct 13 09:05:24.866: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:05:24.866 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:24.88 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:24.882 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 +STEP: Creating configMap with name configmap-test-upd-014e8266-6032-4f6a-8a40-61b8a52acd09 10/13/23 09:05:24.888 +STEP: Creating the pod 10/13/23 09:05:24.892 +Oct 13 09:05:24.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce" in namespace "configmap-5357" to be "running and ready" +Oct 13 09:05:24.903: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378875ms +Oct 13 09:05:24.903: INFO: The phase of Pod pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:05:26.907: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008141817s +Oct 13 09:05:26.907: INFO: The phase of Pod pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce is Running (Ready = true) +Oct 13 09:05:26.907: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce" satisfied condition "running and ready" +STEP: Updating configmap configmap-test-upd-014e8266-6032-4f6a-8a40-61b8a52acd09 10/13/23 09:05:26.916 +STEP: waiting to observe update in volume 10/13/23 09:05:26.921 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:30.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-5357" for this suite. 10/13/23 09:05:30.946 +------------------------------ +• [SLOW TEST] [6.090 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:24.866 + Oct 13 09:05:24.866: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:05:24.866 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:24.88 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:24.882 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:124 + STEP: Creating configMap with name configmap-test-upd-014e8266-6032-4f6a-8a40-61b8a52acd09 10/13/23 09:05:24.888 + STEP: Creating the pod 10/13/23 09:05:24.892 + Oct 13 09:05:24.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce" in namespace "configmap-5357" to be "running and ready" + Oct 13 09:05:24.903: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378875ms + Oct 13 09:05:24.903: INFO: The phase of Pod pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:05:26.907: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008141817s + Oct 13 09:05:26.907: INFO: The phase of Pod pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce is Running (Ready = true) + Oct 13 09:05:26.907: INFO: Pod "pod-configmaps-7b0c80c8-5ed2-4833-a966-41dc9a232fce" satisfied condition "running and ready" + STEP: Updating configmap configmap-test-upd-014e8266-6032-4f6a-8a40-61b8a52acd09 10/13/23 09:05:26.916 + STEP: waiting to observe update in volume 10/13/23 09:05:26.921 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:30.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-5357" for this suite. 10/13/23 09:05:30.946 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Pods + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:30.956 +Oct 13 09:05:30.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:05:30.957 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:30.968 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:30.97 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 +STEP: Create a pod 10/13/23 09:05:30.973 +Oct 13 09:05:30.980: INFO: Waiting up to 5m0s for pod "pod-x59fj" in namespace "pods-1180" to be "running" +Oct 13 09:05:30.983: INFO: Pod "pod-x59fj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.018727ms +Oct 13 09:05:32.988: INFO: Pod "pod-x59fj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007415284s +Oct 13 09:05:32.988: INFO: Pod "pod-x59fj" satisfied condition "running" +STEP: patching /status 10/13/23 09:05:32.988 +Oct 13 09:05:32.996: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:32.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-1180" for this suite. 
10/13/23 09:05:33 +------------------------------ +• [2.050 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:30.956 + Oct 13 09:05:30.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:05:30.957 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:30.968 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:30.97 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 + STEP: Create a pod 10/13/23 09:05:30.973 + Oct 13 09:05:30.980: INFO: Waiting up to 5m0s for pod "pod-x59fj" in namespace "pods-1180" to be "running" + Oct 13 09:05:30.983: INFO: Pod "pod-x59fj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.018727ms + Oct 13 09:05:32.988: INFO: Pod "pod-x59fj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007415284s + Oct 13 09:05:32.988: INFO: Pod "pod-x59fj" satisfied condition "running" + STEP: patching /status 10/13/23 09:05:32.988 + Oct 13 09:05:32.996: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:32.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-1180" for this suite. 10/13/23 09:05:33 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:33.006 +Oct 13 09:05:33.006: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:05:33.008 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:33.02 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:33.023 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +STEP: Creating a pod to test emptydir 0666 on tmpfs 10/13/23 09:05:33.025 +Oct 13 09:05:33.032: INFO: Waiting up to 5m0s for pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f" in namespace "emptydir-8253" to be "Succeeded or Failed" +Oct 13 09:05:33.037: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.244555ms +Oct 13 09:05:35.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009985118s +Oct 13 09:05:37.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010013562s +STEP: Saw pod success 10/13/23 09:05:37.042 +Oct 13 09:05:37.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f" satisfied condition "Succeeded or Failed" +Oct 13 09:05:37.046: INFO: Trying to get logs from node node2 pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f container test-container: +STEP: delete the pod 10/13/23 09:05:37.051 +Oct 13 09:05:37.067: INFO: Waiting for pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f to disappear +Oct 13 09:05:37.070: INFO: Pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:37.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-8253" for this suite. 10/13/23 09:05:37.073 +------------------------------ +• [4.073 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:33.006 + Oct 13 09:05:33.006: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:05:33.008 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:33.02 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:33.023 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 + STEP: Creating a pod to test emptydir 0666 on tmpfs 10/13/23 09:05:33.025 + Oct 13 09:05:33.032: INFO: Waiting up to 5m0s for pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f" in namespace "emptydir-8253" to be "Succeeded or Failed" + Oct 13 09:05:33.037: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.244555ms + Oct 13 09:05:35.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009985118s + Oct 13 09:05:37.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010013562s + STEP: Saw pod success 10/13/23 09:05:37.042 + Oct 13 09:05:37.042: INFO: Pod "pod-fccb5522-790a-4cf3-951d-fcb52b89d34f" satisfied condition "Succeeded or Failed" + Oct 13 09:05:37.046: INFO: Trying to get logs from node node2 pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f container test-container: + STEP: delete the pod 10/13/23 09:05:37.051 + Oct 13 09:05:37.067: INFO: Waiting for pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f to disappear + Oct 13 09:05:37.070: INFO: Pod pod-fccb5522-790a-4cf3-951d-fcb52b89d34f no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:37.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-8253" for this suite. 10/13/23 09:05:37.073 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:37.079 +Oct 13 09:05:37.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:05:37.08 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:37.092 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:37.094 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +STEP: Creating a pod to test emptydir volume type on tmpfs 10/13/23 09:05:37.096 +Oct 13 09:05:37.105: INFO: Waiting up to 5m0s for pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666" in namespace "emptydir-4656" to be "Succeeded or Failed" +Oct 13 09:05:37.113: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Pending", Reason="", readiness=false. Elapsed: 7.980623ms +Oct 13 09:05:39.118: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013286895s +Oct 13 09:05:41.117: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012311015s +STEP: Saw pod success 10/13/23 09:05:41.117 +Oct 13 09:05:41.117: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666" satisfied condition "Succeeded or Failed" +Oct 13 09:05:41.120: INFO: Trying to get logs from node node2 pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 container test-container: +STEP: delete the pod 10/13/23 09:05:41.125 +Oct 13 09:05:41.135: INFO: Waiting for pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 to disappear +Oct 13 09:05:41.137: INFO: Pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-4656" for this suite. 10/13/23 09:05:41.141 +------------------------------ +• [4.066 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:37.079 + Oct 13 09:05:37.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:05:37.08 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:37.092 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:37.094 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 + STEP: Creating a pod to test emptydir volume type on tmpfs 10/13/23 09:05:37.096 + Oct 13 09:05:37.105: INFO: Waiting up to 5m0s for pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666" in namespace "emptydir-4656" to be "Succeeded or Failed" + Oct 13 09:05:37.113: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Pending", Reason="", readiness=false. Elapsed: 7.980623ms + Oct 13 09:05:39.118: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013286895s + Oct 13 09:05:41.117: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012311015s + STEP: Saw pod success 10/13/23 09:05:41.117 + Oct 13 09:05:41.117: INFO: Pod "pod-efa26a43-7312-4f8e-8f97-6b06eae47666" satisfied condition "Succeeded or Failed" + Oct 13 09:05:41.120: INFO: Trying to get logs from node node2 pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 container test-container: + STEP: delete the pod 10/13/23 09:05:41.125 + Oct 13 09:05:41.135: INFO: Waiting for pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 to disappear + Oct 13 09:05:41.137: INFO: Pod pod-efa26a43-7312-4f8e-8f97-6b06eae47666 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:41.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-4656" for this suite. 10/13/23 09:05:41.141 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:41.146 +Oct 13 09:05:41.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:05:41.147 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:41.158 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:41.16 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +STEP: Creating projection with secret that has name projected-secret-test-4a6c9e73-9f40-4a6c-8e29-18a100c83550 10/13/23 09:05:41.162 +STEP: Creating a pod to test consume secrets 10/13/23 09:05:41.166 +Oct 13 09:05:41.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5" in namespace "projected-6634" to be "Succeeded or Failed" +Oct 13 09:05:41.177: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159789ms +Oct 13 09:05:43.181: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008320442s +Oct 13 09:05:45.180: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007612813s +STEP: Saw pod success 10/13/23 09:05:45.18 +Oct 13 09:05:45.181: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5" satisfied condition "Succeeded or Failed" +Oct 13 09:05:45.183: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 container projected-secret-volume-test: +STEP: delete the pod 10/13/23 09:05:45.189 +Oct 13 09:05:45.202: INFO: Waiting for pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 to disappear +Oct 13 09:05:45.205: INFO: Pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:45.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-6634" for this suite. 10/13/23 09:05:45.208 +------------------------------ +• [4.067 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:41.146 + Oct 13 09:05:41.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:05:41.147 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:41.158 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:41.16 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 + STEP: Creating projection with secret that has name projected-secret-test-4a6c9e73-9f40-4a6c-8e29-18a100c83550 10/13/23 09:05:41.162 + STEP: Creating a pod to test consume secrets 10/13/23 09:05:41.166 + Oct 13 09:05:41.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5" in namespace "projected-6634" to be "Succeeded or Failed" + Oct 13 09:05:41.177: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159789ms + Oct 13 09:05:43.181: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008320442s + Oct 13 09:05:45.180: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007612813s + STEP: Saw pod success 10/13/23 09:05:45.18 + Oct 13 09:05:45.181: INFO: Pod "pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5" satisfied condition "Succeeded or Failed" + Oct 13 09:05:45.183: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 container projected-secret-volume-test: + STEP: delete the pod 10/13/23 09:05:45.189 + Oct 13 09:05:45.202: INFO: Waiting for pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 to disappear + Oct 13 09:05:45.205: INFO: Pod pod-projected-secrets-5bf8b881-47f2-439d-8350-0e45ec09c7a5 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:45.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-6634" for this suite. 10/13/23 09:05:45.208 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:45.214 +Oct 13 09:05:45.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:05:45.215 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:45.226 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:45.228 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 +STEP: Creating a pod to test service account token: 10/13/23 09:05:45.23 +Oct 13 09:05:45.237: INFO: Waiting up to 5m0s for pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae" in namespace "svcaccounts-6300" to be "Succeeded or Failed" +Oct 13 09:05:45.240: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826486ms +Oct 13 09:05:47.244: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006723919s +Oct 13 09:05:49.245: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008136978s +STEP: Saw pod success 10/13/23 09:05:49.246 +Oct 13 09:05:49.246: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae" satisfied condition "Succeeded or Failed" +Oct 13 09:05:49.250: INFO: Trying to get logs from node node2 pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae container agnhost-container: +STEP: delete the pod 10/13/23 09:05:49.258 +Oct 13 09:05:49.273: INFO: Waiting for pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae to disappear +Oct 13 09:05:49.276: INFO: Pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae no longer exists +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:49.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-6300" for this suite. 10/13/23 09:05:49.279 +------------------------------ +• [4.074 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:45.214 + Oct 13 09:05:45.214: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:05:45.215 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:45.226 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:45.228 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 + STEP: Creating a pod to test service account token: 10/13/23 09:05:45.23 + Oct 13 09:05:45.237: INFO: Waiting up to 5m0s for pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae" in namespace "svcaccounts-6300" to be "Succeeded or Failed" + Oct 13 09:05:45.240: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826486ms + Oct 13 09:05:47.244: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006723919s + Oct 13 09:05:49.245: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008136978s + STEP: Saw pod success 10/13/23 09:05:49.246 + Oct 13 09:05:49.246: INFO: Pod "test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae" satisfied condition "Succeeded or Failed" + Oct 13 09:05:49.250: INFO: Trying to get logs from node node2 pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae container agnhost-container: + STEP: delete the pod 10/13/23 09:05:49.258 + Oct 13 09:05:49.273: INFO: Waiting for pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae to disappear + Oct 13 09:05:49.276: INFO: Pod test-pod-963e5957-0ffc-4816-a3bb-1a6c5af716ae no longer exists + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:49.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-6300" for this suite. 10/13/23 09:05:49.279 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:49.288 +Oct 13 09:05:49.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-pred 10/13/23 09:05:49.291 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:49.302 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:49.304 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Oct 13 09:05:49.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 13 09:05:49.314: INFO: Waiting for terminating namespaces to be deleted... 
+Oct 13 09:05:49.317: INFO: +Logging pods the apiserver thinks is on node node1 before test +Oct 13 09:05:49.324: INFO: ephemeral-containers-target-pod from ephemeral-containers-test-1686 started at 2023-10-13 09:05:12 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container test-container-1 ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container coredns ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container etcd ready: true, restart count 8 +Oct 13 09:05:49.324: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container haproxy ready: true, restart count 3 +Oct 13 09:05:49.324: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container keepalived ready: true, restart count 9 +Oct 13 09:05:49.324: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-apiserver ready: true, restart count 8 +Oct 13 09:05:49.324: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-controller-manager ready: true, restart count 8 +Oct 13 09:05:49.324: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:05:49.324: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-scheduler ready: true, restart count 11 +Oct 13 09:05:49.324: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container kube-sonobuoy ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:05:49.324: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 09:05:49.324: INFO: +Logging pods the apiserver thinks is on node node2 before test +Oct 13 09:05:49.333: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:05:49.333: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container etcd ready: true, restart count 1 +Oct 13 09:05:49.333: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container haproxy ready: true, restart 
count 1 +Oct 13 09:05:49.333: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container keepalived ready: true, restart count 1 +Oct 13 09:05:49.333: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container kube-apiserver ready: true, restart count 2 +Oct 13 09:05:49.333: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 09:05:49.333: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:05:49.333: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 09:05:49.333: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:05:49.333: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:05:49.333: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 09:05:49.333: INFO: +Logging pods the apiserver thinks is on node node3 before test +Oct 13 09:05:49.342: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:05:49.342: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container coredns ready: true, restart count 0 +Oct 13 09:05:49.342: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container etcd ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container haproxy ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container keepalived ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container kube-apiserver ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 09:05:49.342: INFO: 
sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container e2e ready: true, restart count 0 +Oct 13 09:05:49.342: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:05:49.342: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:05:49.342: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:05:49.342: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +STEP: verifying the node has the label node node1 10/13/23 09:05:49.358 +STEP: verifying the node has the label node node2 10/13/23 09:05:49.373 +STEP: verifying the node has the label node node3 10/13/23 09:05:49.393 +Oct 13 09:05:49.411: INFO: Pod ephemeral-containers-target-pod requesting resource cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-5qldx requesting resource cpu=100m on Node node2 +Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-dzrwh requesting resource cpu=100m on Node node3 +Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-jtxbm requesting resource cpu=100m on Node node1 +Oct 13 09:05:49.411: INFO: Pod coredns-787d4945fb-5dqqv requesting resource cpu=100m on Node node3 +Oct 13 09:05:49.411: INFO: Pod coredns-787d4945fb-89krv requesting resource cpu=100m on Node node1 +Oct 13 09:05:49.411: INFO: Pod etcd-node1 requesting resource cpu=100m on Node node1 +Oct 13 09:05:49.411: INFO: Pod etcd-node2 requesting resource cpu=100m on Node node2 +Oct 13 09:05:49.411: INFO: Pod etcd-node3 requesting resource cpu=100m on Node node3 +Oct 13 09:05:49.411: INFO: Pod haproxy-node1 requesting resource cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod haproxy-node2 requesting resource cpu=0m on Node node2 +Oct 13 09:05:49.411: INFO: Pod haproxy-node3 requesting resource cpu=0m on Node node3 +Oct 13 09:05:49.411: INFO: Pod keepalived-node1 requesting resource cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod keepalived-node2 requesting resource cpu=0m on Node node2 +Oct 13 09:05:49.411: INFO: Pod keepalived-node3 requesting resource cpu=0m on Node node3 +Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node1 requesting resource cpu=250m on Node node1 +Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node2 requesting resource cpu=250m on Node node2 +Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node3 requesting resource cpu=250m on Node node3 +Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node1 requesting resource cpu=200m on Node node1 +Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node2 requesting resource cpu=200m on Node node2 +Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node3 requesting resource cpu=200m on Node node3 +Oct 13 09:05:49.411: INFO: Pod kube-proxy-dkrp7 requesting resource cpu=0m on Node node3 +Oct 13 09:05:49.411: INFO: Pod kube-proxy-dqr76 requesting resource cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod kube-proxy-tkvwh requesting resource cpu=0m on Node node2 +Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node1 requesting resource cpu=100m on Node node1 +Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node2 requesting resource cpu=100m on Node node2 +Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node3 requesting resource cpu=100m on Node node3 +Oct 13 09:05:49.411: INFO: Pod sonobuoy requesting resource 
cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod sonobuoy-e2e-job-bfbf16dda205467f requesting resource cpu=0m on Node node3 +Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt requesting resource cpu=0m on Node node1 +Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd requesting resource cpu=0m on Node node2 +Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 requesting resource cpu=0m on Node node3 +STEP: Starting Pods to consume most of the cluster CPU. 10/13/23 09:05:49.411 +Oct 13 09:05:49.411: INFO: Creating a pod which consumes cpu=5005m on Node node3 +Oct 13 09:05:49.424: INFO: Creating a pod which consumes cpu=5005m on Node node1 +Oct 13 09:05:49.435: INFO: Creating a pod which consumes cpu=5075m on Node node2 +Oct 13 09:05:49.444: INFO: Waiting up to 5m0s for pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d" in namespace "sched-pred-3157" to be "running" +Oct 13 09:05:49.455: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423578ms +Oct 13 09:05:51.459: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d": Phase="Running", Reason="", readiness=true. Elapsed: 2.014514821s +Oct 13 09:05:51.459: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d" satisfied condition "running" +Oct 13 09:05:51.459: INFO: Waiting up to 5m0s for pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3" in namespace "sched-pred-3157" to be "running" +Oct 13 09:05:51.462: INFO: Pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3": Phase="Running", Reason="", readiness=true. Elapsed: 2.793936ms +Oct 13 09:05:51.462: INFO: Pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3" satisfied condition "running" +Oct 13 09:05:51.462: INFO: Waiting up to 5m0s for pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270" in namespace "sched-pred-3157" to be "running" +Oct 13 09:05:51.464: INFO: Pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270": Phase="Running", Reason="", readiness=true. Elapsed: 2.201522ms +Oct 13 09:05:51.464: INFO: Pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270" satisfied condition "running" +STEP: Creating another pod that requires unavailable amount of CPU. 
10/13/23 09:05:51.464 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72c1e160dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270 to node2] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72f9d3fd4e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72fba7695f], Reason = [Created], Message = [Created container filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f7302f64113], Reason = [Started], Message = [Started container filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72c0fa484e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3 to node1] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e055271b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e1d54d8d], Reason = [Created], Message = [Created container filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e8b86a98], Reason = [Started], Message = [Started container filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72c064d971], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d to node3] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72c903a82b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72cac5d199], Reason = [Created], Message = [Created container filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72d1d174bc], Reason = [Started], Message = [Started container filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d] 10/13/23 09:05:51.467 +STEP: Considering event: +Type = [Warning], Name = [additional-pod.178d9f7339c5596c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 
10/13/23 09:05:51.479 +STEP: removing the label node off the node node3 10/13/23 09:05:52.478 +STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.494 +STEP: removing the label node off the node node1 10/13/23 09:05:52.499 +STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.512 +STEP: removing the label node off the node node2 10/13/23 09:05:52.516 +STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.53 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:52.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-3157" for this suite. 10/13/23 09:05:52.54 +------------------------------ +• [3.261 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:49.288 + Oct 13 09:05:49.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-pred 10/13/23 09:05:49.291 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:49.302 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:49.304 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Oct 13 09:05:49.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Oct 13 09:05:49.314: INFO: Waiting for terminating namespaces to be deleted... 
+ Oct 13 09:05:49.317: INFO: + Logging pods the apiserver thinks is on node node1 before test + Oct 13 09:05:49.324: INFO: ephemeral-containers-target-pod from ephemeral-containers-test-1686 started at 2023-10-13 09:05:12 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container test-container-1 ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container coredns ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container etcd ready: true, restart count 8 + Oct 13 09:05:49.324: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container haproxy ready: true, restart count 3 + Oct 13 09:05:49.324: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container keepalived ready: true, restart count 9 + Oct 13 09:05:49.324: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-apiserver ready: true, restart count 8 + Oct 13 09:05:49.324: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-controller-manager ready: true, restart count 8 + Oct 13 09:05:49.324: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:05:49.324: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-scheduler ready: true, restart count 11 + Oct 13 09:05:49.324: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container kube-sonobuoy ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:05:49.324: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 09:05:49.324: INFO: + Logging pods the apiserver thinks is on node node2 before test + Oct 13 09:05:49.333: INFO: kube-flannel-ds-5qldx from kube-flannel started at 2023-10-13 08:19:34 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:05:49.333: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container etcd ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: 
Container haproxy ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container keepalived ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container kube-apiserver ready: true, restart count 2 + Oct 13 09:05:49.333: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 09:05:49.333: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:05:49.333: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:05:49.333: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 09:05:49.333: INFO: + Logging pods the apiserver thinks is on node node3 before test + Oct 13 09:05:49.342: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:05:49.342: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container coredns ready: true, restart count 0 + Oct 13 09:05:49.342: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container etcd ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container haproxy ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container keepalived ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container kube-apiserver ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:05:49.342: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container kube-scheduler ready: true, 
restart count 1 + Oct 13 09:05:49.342: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container e2e ready: true, restart count 0 + Oct 13 09:05:49.342: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:05:49.342: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:05:49.342: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:05:49.342: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 + STEP: verifying the node has the label node node1 10/13/23 09:05:49.358 + STEP: verifying the node has the label node node2 10/13/23 09:05:49.373 + STEP: verifying the node has the label node node3 10/13/23 09:05:49.393 + Oct 13 09:05:49.411: INFO: Pod ephemeral-containers-target-pod requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-5qldx requesting resource cpu=100m on Node node2 + Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-dzrwh requesting resource cpu=100m on Node node3 + Oct 13 09:05:49.411: INFO: Pod kube-flannel-ds-jtxbm requesting resource cpu=100m on Node node1 + Oct 13 09:05:49.411: INFO: Pod coredns-787d4945fb-5dqqv requesting resource cpu=100m on Node node3 + Oct 13 09:05:49.411: INFO: Pod coredns-787d4945fb-89krv requesting resource cpu=100m on Node node1 + Oct 13 09:05:49.411: INFO: Pod etcd-node1 requesting resource cpu=100m on Node node1 + Oct 13 09:05:49.411: INFO: Pod etcd-node2 requesting resource cpu=100m on Node node2 + Oct 13 09:05:49.411: INFO: Pod etcd-node3 requesting resource cpu=100m on Node node3 + Oct 13 09:05:49.411: INFO: Pod haproxy-node1 requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod haproxy-node2 requesting resource cpu=0m on Node node2 + Oct 13 09:05:49.411: INFO: Pod haproxy-node3 requesting resource cpu=0m on Node node3 + Oct 13 09:05:49.411: INFO: Pod keepalived-node1 requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod keepalived-node2 requesting resource cpu=0m on Node node2 + Oct 13 09:05:49.411: INFO: Pod keepalived-node3 requesting resource cpu=0m on Node node3 + Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node1 requesting resource cpu=250m on Node node1 + Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node2 requesting resource cpu=250m on Node node2 + Oct 13 09:05:49.411: INFO: Pod kube-apiserver-node3 requesting resource cpu=250m on Node node3 + Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node1 requesting resource cpu=200m on Node node1 + Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node2 requesting resource cpu=200m on Node node2 + Oct 13 09:05:49.411: INFO: Pod kube-controller-manager-node3 requesting resource cpu=200m on Node node3 + Oct 13 09:05:49.411: INFO: Pod kube-proxy-dkrp7 requesting resource cpu=0m on Node node3 + Oct 13 09:05:49.411: INFO: Pod kube-proxy-dqr76 requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod kube-proxy-tkvwh requesting resource cpu=0m on Node node2 + Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node1 requesting resource cpu=100m on Node node1 + Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node2 requesting resource cpu=100m on Node node2 + Oct 13 09:05:49.411: INFO: Pod kube-scheduler-node3 requesting resource 
cpu=100m on Node node3 + Oct 13 09:05:49.411: INFO: Pod sonobuoy requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod sonobuoy-e2e-job-bfbf16dda205467f requesting resource cpu=0m on Node node3 + Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt requesting resource cpu=0m on Node node1 + Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd requesting resource cpu=0m on Node node2 + Oct 13 09:05:49.411: INFO: Pod sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 requesting resource cpu=0m on Node node3 + STEP: Starting Pods to consume most of the cluster CPU. 10/13/23 09:05:49.411 + Oct 13 09:05:49.411: INFO: Creating a pod which consumes cpu=5005m on Node node3 + Oct 13 09:05:49.424: INFO: Creating a pod which consumes cpu=5005m on Node node1 + Oct 13 09:05:49.435: INFO: Creating a pod which consumes cpu=5075m on Node node2 + Oct 13 09:05:49.444: INFO: Waiting up to 5m0s for pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d" in namespace "sched-pred-3157" to be "running" + Oct 13 09:05:49.455: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423578ms + Oct 13 09:05:51.459: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d": Phase="Running", Reason="", readiness=true. Elapsed: 2.014514821s + Oct 13 09:05:51.459: INFO: Pod "filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d" satisfied condition "running" + Oct 13 09:05:51.459: INFO: Waiting up to 5m0s for pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3" in namespace "sched-pred-3157" to be "running" + Oct 13 09:05:51.462: INFO: Pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3": Phase="Running", Reason="", readiness=true. Elapsed: 2.793936ms + Oct 13 09:05:51.462: INFO: Pod "filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3" satisfied condition "running" + Oct 13 09:05:51.462: INFO: Waiting up to 5m0s for pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270" in namespace "sched-pred-3157" to be "running" + Oct 13 09:05:51.464: INFO: Pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270": Phase="Running", Reason="", readiness=true. Elapsed: 2.201522ms + Oct 13 09:05:51.464: INFO: Pod "filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270" satisfied condition "running" + STEP: Creating another pod that requires unavailable amount of CPU. 
10/13/23 09:05:51.464 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72c1e160dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270 to node2] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72f9d3fd4e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f72fba7695f], Reason = [Created], Message = [Created container filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270.178d9f7302f64113], Reason = [Started], Message = [Started container filler-pod-1e0378d3-556f-425b-b8ae-c4269e2cb270] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72c0fa484e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3 to node1] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e055271b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e1d54d8d], Reason = [Created], Message = [Created container filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3.178d9f72e8b86a98], Reason = [Started], Message = [Started container filler-pod-36cb1e1f-fda2-4d77-8a72-0c26092caae3] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72c064d971], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3157/filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d to node3] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72c903a82b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72cac5d199], Reason = [Created], Message = [Created container filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d.178d9f72d1d174bc], Reason = [Started], Message = [Started container filler-pod-95f604d0-f04e-48ce-a7e1-a932c173a33d] 10/13/23 09:05:51.467 + STEP: Considering event: + Type = [Warning], Name = [additional-pod.178d9f7339c5596c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..] 
10/13/23 09:05:51.479 + STEP: removing the label node off the node node3 10/13/23 09:05:52.478 + STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.494 + STEP: removing the label node off the node node1 10/13/23 09:05:52.499 + STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.512 + STEP: removing the label node off the node node2 10/13/23 09:05:52.516 + STEP: verifying the node doesn't have the label node 10/13/23 09:05:52.53 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:52.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-3157" for this suite. 10/13/23 09:05:52.54 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:52.549 +Oct 13 09:05:52.549: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 09:05:52.55 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.567 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.57 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +STEP: Creating projection with secret that has name secret-emptykey-test-05115523-a5d4-43e9-babb-dc606db3f66a 10/13/23 09:05:52.572 +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-4166" for this suite. 
10/13/23 09:05:52.579 +------------------------------ +• [0.036 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:52.549 + Oct 13 09:05:52.549: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 09:05:52.55 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.567 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.57 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 + STEP: Creating projection with secret that has name secret-emptykey-test-05115523-a5d4-43e9-babb-dc606db3f66a 10/13/23 09:05:52.572 + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-4166" for this suite. 10/13/23 09:05:52.579 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:52.586 +Oct 13 09:05:52.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:05:52.587 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.598 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.6 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-4415" for this suite. 
10/13/23 09:05:52.612 +------------------------------ +• [0.031 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:52.586 + Oct 13 09:05:52.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:05:52.587 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.598 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.6 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-4415" for this suite. 10/13/23 09:05:52.612 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:52.617 +Oct 13 09:05:52.617: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename job 10/13/23 09:05:52.618 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.631 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.633 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +STEP: Creating a job 10/13/23 09:05:52.635 +STEP: Ensuring active pods == parallelism 10/13/23 09:05:52.643 +STEP: Orphaning one of the Job's Pods 10/13/23 09:05:54.647 +Oct 13 09:05:55.167: INFO: Successfully updated pod "adopt-release-gprtx" +STEP: Checking that the Job readopts the Pod 10/13/23 09:05:55.167 +Oct 13 09:05:55.167: INFO: Waiting up to 15m0s for pod "adopt-release-gprtx" in namespace "job-8322" to be "adopted" +Oct 13 09:05:55.171: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 3.508945ms +Oct 13 09:05:57.175: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008157358s +Oct 13 09:05:57.175: INFO: Pod "adopt-release-gprtx" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod 10/13/23 09:05:57.175 +Oct 13 09:05:57.688: INFO: Successfully updated pod "adopt-release-gprtx" +STEP: Checking that the Job releases the Pod 10/13/23 09:05:57.688 +Oct 13 09:05:57.688: INFO: Waiting up to 15m0s for pod "adopt-release-gprtx" in namespace "job-8322" to be "released" +Oct 13 09:05:57.692: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 3.370581ms +Oct 13 09:05:59.696: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 2.00813546s +Oct 13 09:05:59.696: INFO: Pod "adopt-release-gprtx" satisfied condition "released" +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Oct 13 09:05:59.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-8322" for this suite. 10/13/23 09:05:59.705 +------------------------------ +• [SLOW TEST] [7.093 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:52.617 + Oct 13 09:05:52.617: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename job 10/13/23 09:05:52.618 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:52.631 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:52.633 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 + STEP: Creating a job 10/13/23 09:05:52.635 + STEP: Ensuring active pods == parallelism 10/13/23 09:05:52.643 + STEP: Orphaning one of the Job's Pods 10/13/23 09:05:54.647 + Oct 13 09:05:55.167: INFO: Successfully updated pod "adopt-release-gprtx" + STEP: Checking that the Job readopts the Pod 10/13/23 09:05:55.167 + Oct 13 09:05:55.167: INFO: Waiting up to 15m0s for pod "adopt-release-gprtx" in namespace "job-8322" to be "adopted" + Oct 13 09:05:55.171: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 3.508945ms + Oct 13 09:05:57.175: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 2.008157358s + Oct 13 09:05:57.175: INFO: Pod "adopt-release-gprtx" satisfied condition "adopted" + STEP: Removing the labels from the Job's Pod 10/13/23 09:05:57.175 + Oct 13 09:05:57.688: INFO: Successfully updated pod "adopt-release-gprtx" + STEP: Checking that the Job releases the Pod 10/13/23 09:05:57.688 + Oct 13 09:05:57.688: INFO: Waiting up to 15m0s for pod "adopt-release-gprtx" in namespace "job-8322" to be "released" + Oct 13 09:05:57.692: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. Elapsed: 3.370581ms + Oct 13 09:05:59.696: INFO: Pod "adopt-release-gprtx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00813546s + Oct 13 09:05:59.696: INFO: Pod "adopt-release-gprtx" satisfied condition "released" + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Oct 13 09:05:59.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-8322" for this suite. 10/13/23 09:05:59.705 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:05:59.711 +Oct 13 09:05:59.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:05:59.712 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:59.723 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:59.725 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +STEP: Creating configMap configmap-66/configmap-test-851b0586-cadc-4100-b68f-bf636e1462e0 10/13/23 09:05:59.727 +STEP: Creating a pod to test consume configMaps 10/13/23 09:05:59.731 +Oct 13 09:05:59.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d" in namespace "configmap-66" to be "Succeeded or Failed" +Oct 13 09:05:59.743: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032689ms +Oct 13 09:06:01.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008969363s +Oct 13 09:06:03.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008521813s +STEP: Saw pod success 10/13/23 09:06:03.747 +Oct 13 09:06:03.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d" satisfied condition "Succeeded or Failed" +Oct 13 09:06:03.749: INFO: Trying to get logs from node node2 pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d container env-test: +STEP: delete the pod 10/13/23 09:06:03.754 +Oct 13 09:06:03.773: INFO: Waiting for pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d to disappear +Oct 13 09:06:03.786: INFO: Pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:03.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-66" for this suite. 
10/13/23 09:06:03.801 +------------------------------ +• [4.103 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:05:59.711 + Oct 13 09:05:59.711: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:05:59.712 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:05:59.723 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:05:59.725 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 + STEP: Creating configMap configmap-66/configmap-test-851b0586-cadc-4100-b68f-bf636e1462e0 10/13/23 09:05:59.727 + STEP: Creating a pod to test consume configMaps 10/13/23 09:05:59.731 + Oct 13 09:05:59.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d" in namespace "configmap-66" to be "Succeeded or Failed" + Oct 13 09:05:59.743: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032689ms + Oct 13 09:06:01.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008969363s + Oct 13 09:06:03.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008521813s + STEP: Saw pod success 10/13/23 09:06:03.747 + Oct 13 09:06:03.747: INFO: Pod "pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d" satisfied condition "Succeeded or Failed" + Oct 13 09:06:03.749: INFO: Trying to get logs from node node2 pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d container env-test: + STEP: delete the pod 10/13/23 09:06:03.754 + Oct 13 09:06:03.773: INFO: Waiting for pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d to disappear + Oct 13 09:06:03.786: INFO: Pod pod-configmaps-70cf27d0-28b9-4466-9a37-05aec6ae138d no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:03.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-66" for this suite. 
10/13/23 09:06:03.801 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:03.815 +Oct 13 09:06:03.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:06:03.817 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:03.829 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:03.831 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +Oct 13 09:06:03.847: INFO: created pod pod-service-account-defaultsa +Oct 13 09:06:03.847: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Oct 13 09:06:03.852: INFO: created pod pod-service-account-mountsa +Oct 13 09:06:03.852: INFO: pod pod-service-account-mountsa service account token volume mount: true +Oct 13 09:06:03.861: INFO: created pod pod-service-account-nomountsa +Oct 13 09:06:03.861: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Oct 13 09:06:03.870: INFO: created pod pod-service-account-defaultsa-mountspec +Oct 13 09:06:03.870: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Oct 13 09:06:03.878: INFO: created pod pod-service-account-mountsa-mountspec +Oct 13 09:06:03.878: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Oct 13 09:06:03.886: INFO: created pod pod-service-account-nomountsa-mountspec +Oct 13 09:06:03.886: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Oct 13 09:06:03.898: INFO: created pod pod-service-account-defaultsa-nomountspec +Oct 13 09:06:03.898: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Oct 13 09:06:03.906: INFO: created pod pod-service-account-mountsa-nomountspec +Oct 13 09:06:03.906: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Oct 13 09:06:03.913: INFO: created pod pod-service-account-nomountsa-nomountspec +Oct 13 09:06:03.913: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:03.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-894" for this suite. 
10/13/23 09:06:03.921 +------------------------------ +• [0.118 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:03.815 + Oct 13 09:06:03.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:06:03.817 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:03.829 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:03.831 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 + Oct 13 09:06:03.847: INFO: created pod pod-service-account-defaultsa + Oct 13 09:06:03.847: INFO: pod pod-service-account-defaultsa service account token volume mount: true + Oct 13 09:06:03.852: INFO: created pod pod-service-account-mountsa + Oct 13 09:06:03.852: INFO: pod pod-service-account-mountsa service account token volume mount: true + Oct 13 09:06:03.861: INFO: created pod pod-service-account-nomountsa + Oct 13 09:06:03.861: INFO: pod pod-service-account-nomountsa service account token volume mount: false + Oct 13 09:06:03.870: INFO: created pod pod-service-account-defaultsa-mountspec + Oct 13 09:06:03.870: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true + Oct 13 09:06:03.878: INFO: created pod pod-service-account-mountsa-mountspec + Oct 13 09:06:03.878: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true + Oct 13 09:06:03.886: INFO: created pod pod-service-account-nomountsa-mountspec + Oct 13 09:06:03.886: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true + Oct 13 09:06:03.898: INFO: created pod pod-service-account-defaultsa-nomountspec + Oct 13 09:06:03.898: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false + Oct 13 09:06:03.906: INFO: created pod pod-service-account-mountsa-nomountspec + Oct 13 09:06:03.906: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false + Oct 13 09:06:03.913: INFO: created pod pod-service-account-nomountsa-nomountspec + Oct 13 09:06:03.913: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:03.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-894" for this suite. 
10/13/23 09:06:03.921 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:03.934 +Oct 13 09:06:03.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 09:06:03.935 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:03.962 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:03.965 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 +[It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 09:06:03.974 +Oct 13 09:06:03.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Oct 13 09:06:04.055: INFO: stderr: "" +Oct 13 09:06:04.055: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running 10/13/23 09:06:04.055 +STEP: verifying the pod e2e-test-httpd-pod was created 10/13/23 09:06:09.107 +Oct 13 09:06:09.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 get pod e2e-test-httpd-pod -o json' +Oct 13 09:06:09.188: INFO: stderr: "" +Oct 13 09:06:09.188: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-10-13T09:06:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5377\",\n \"resourceVersion\": \"27512\",\n \"uid\": \"1c1ba6ea-cf86-4c1c-9496-795e95ea846f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-7qg9p\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-7qg9p\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n 
\"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:04Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://4414e3725f366f2b5a65889455bfad88818dea872e1ab1d6d804dba6d5287f94\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-10-13T09:06:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.253.8.111\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.221\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.221\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-10-13T09:06:04Z\"\n }\n}\n" +STEP: replace the image in the pod 10/13/23 09:06:09.188 +Oct 13 09:06:09.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 replace -f -' +Oct 13 09:06:09.635: INFO: stderr: "" +Oct 13 09:06:09.635: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 10/13/23 09:06:09.635 +[AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 +Oct 13 09:06:09.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 delete pods e2e-test-httpd-pod' +Oct 13 09:06:11.656: INFO: stderr: "" +Oct 13 09:06:11.656: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:11.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-5377" for this suite. 
10/13/23 09:06:11.659 +------------------------------ +• [SLOW TEST] [7.733 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl replace + test/e2e/kubectl/kubectl.go:1731 + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:03.934 + Oct 13 09:06:03.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 09:06:03.935 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:03.962 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:03.965 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 + [It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 10/13/23 09:06:03.974 + Oct 13 09:06:03.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Oct 13 09:06:04.055: INFO: stderr: "" + Oct 13 09:06:04.055: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod is running 10/13/23 09:06:04.055 + STEP: verifying the pod e2e-test-httpd-pod was created 10/13/23 09:06:09.107 + Oct 13 09:06:09.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 get pod e2e-test-httpd-pod -o json' + Oct 13 09:06:09.188: INFO: stderr: "" + Oct 13 09:06:09.188: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-10-13T09:06:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5377\",\n \"resourceVersion\": \"27512\",\n \"uid\": \"1c1ba6ea-cf86-4c1c-9496-795e95ea846f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-7qg9p\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"kube-api-access-7qg9p\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:04Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-10-13T09:06:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://4414e3725f366f2b5a65889455bfad88818dea872e1ab1d6d804dba6d5287f94\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-10-13T09:06:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.253.8.111\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.221\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.221\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-10-13T09:06:04Z\"\n }\n}\n" + STEP: replace the image in the pod 10/13/23 09:06:09.188 + Oct 13 09:06:09.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 replace -f -' + Oct 13 09:06:09.635: INFO: stderr: "" + Oct 13 09:06:09.635: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 10/13/23 09:06:09.635 + [AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 + Oct 13 09:06:09.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-5377 delete pods e2e-test-httpd-pod' + Oct 13 09:06:11.656: INFO: stderr: "" + Oct 13 09:06:11.656: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:11.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-5377" for this suite. 
10/13/23 09:06:11.659 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:11.667 +Oct 13 09:06:11.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-runtime 10/13/23 09:06:11.668 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:11.679 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:11.682 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +STEP: create the container 10/13/23 09:06:11.684 +STEP: wait for the container to reach Succeeded 10/13/23 09:06:11.695 +STEP: get the container status 10/13/23 09:06:14.71 +STEP: the container should be terminated 10/13/23 09:06:14.713 +STEP: the termination message should be set 10/13/23 09:06:14.713 +Oct 13 09:06:14.713: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container 10/13/23 09:06:14.713 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:14.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-8168" for this suite. 
10/13/23 09:06:14.728 +------------------------------ +• [3.067 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:11.667 + Oct 13 09:06:11.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-runtime 10/13/23 09:06:11.668 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:11.679 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:11.682 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 + STEP: create the container 10/13/23 09:06:11.684 + STEP: wait for the container to reach Succeeded 10/13/23 09:06:11.695 + STEP: get the container status 10/13/23 09:06:14.71 + STEP: the container should be terminated 10/13/23 09:06:14.713 + STEP: the termination message should be set 10/13/23 09:06:14.713 + Oct 13 09:06:14.713: INFO: Expected: &{} to match Container's Termination Message: -- + STEP: delete the container 10/13/23 09:06:14.713 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:14.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-8168" for this suite. 
10/13/23 09:06:14.728 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:14.735 +Oct 13 09:06:14.735: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context-test 10/13/23 09:06:14.736 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:14.748 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:14.75 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +Oct 13 09:06:14.759: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5" in namespace "security-context-test-9350" to be "Succeeded or Failed" +Oct 13 09:06:14.766: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34184ms +Oct 13 09:06:16.772: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012402553s +Oct 13 09:06:18.769: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009962688s +Oct 13 09:06:18.769: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-9350" for this suite. 
10/13/23 09:06:18.779 +------------------------------ +• [4.050 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + test/e2e/common/node/security_context.go:555 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:14.735 + Oct 13 09:06:14.735: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context-test 10/13/23 09:06:14.736 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:14.748 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:14.75 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 + Oct 13 09:06:14.759: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5" in namespace "security-context-test-9350" to be "Succeeded or Failed" + Oct 13 09:06:14.766: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34184ms + Oct 13 09:06:16.772: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012402553s + Oct 13 09:06:18.769: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009962688s + Oct 13 09:06:18.769: INFO: Pod "alpine-nnp-false-035534b5-e17c-4221-b89c-d0dfccb172f5" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-9350" for this suite. 
10/13/23 09:06:18.779 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +[BeforeEach] [sig-node] Pods Extended + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:18.785 +Oct 13 09:06:18.786: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:06:18.786 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:18.797 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:18.799 +[BeforeEach] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +STEP: creating the pod 10/13/23 09:06:18.801 +STEP: submitting the pod to kubernetes 10/13/23 09:06:18.801 +STEP: verifying QOS class is set on the pod 10/13/23 09:06:18.81 +[AfterEach] [sig-node] Pods Extended + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:18.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods Extended + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods Extended + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-7675" for this suite. 10/13/23 09:06:18.835 +------------------------------ +• [0.056 seconds] +[sig-node] Pods Extended +test/e2e/node/framework.go:23 + Pods Set QOS Class + test/e2e/node/pods.go:150 + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods Extended + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:18.785 + Oct 13 09:06:18.786: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:06:18.786 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:18.797 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:18.799 + [BeforeEach] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 + [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + STEP: creating the pod 10/13/23 09:06:18.801 + STEP: submitting the pod to kubernetes 10/13/23 09:06:18.801 + STEP: verifying QOS class is set on the pod 10/13/23 09:06:18.81 + [AfterEach] [sig-node] Pods Extended + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:18.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods Extended + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods Extended + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-7675" for this suite. 
10/13/23 09:06:18.835 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:18.842 +Oct 13 09:06:18.842: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:06:18.843 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:18.854 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:18.856 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +STEP: Creating configMap with name projected-configmap-test-volume-map-49a69157-fedf-4c66-9331-ad003fe7c1b4 10/13/23 09:06:18.858 +STEP: Creating a pod to test consume configMaps 10/13/23 09:06:18.862 +Oct 13 09:06:18.871: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703" in namespace "projected-7520" to be "Succeeded or Failed" +Oct 13 09:06:18.873: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722178ms +Oct 13 09:06:20.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007471667s +Oct 13 09:06:22.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007592075s +STEP: Saw pod success 10/13/23 09:06:22.878 +Oct 13 09:06:22.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703" satisfied condition "Succeeded or Failed" +Oct 13 09:06:22.882: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 container agnhost-container: +STEP: delete the pod 10/13/23 09:06:22.887 +Oct 13 09:06:22.897: INFO: Waiting for pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 to disappear +Oct 13 09:06:22.899: INFO: Pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-7520" for this suite. 
10/13/23 09:06:22.903 +------------------------------ +• [4.066 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:18.842 + Oct 13 09:06:18.842: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:06:18.843 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:18.854 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:18.856 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 + STEP: Creating configMap with name projected-configmap-test-volume-map-49a69157-fedf-4c66-9331-ad003fe7c1b4 10/13/23 09:06:18.858 + STEP: Creating a pod to test consume configMaps 10/13/23 09:06:18.862 + Oct 13 09:06:18.871: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703" in namespace "projected-7520" to be "Succeeded or Failed" + Oct 13 09:06:18.873: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722178ms + Oct 13 09:06:20.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007471667s + Oct 13 09:06:22.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007592075s + STEP: Saw pod success 10/13/23 09:06:22.878 + Oct 13 09:06:22.878: INFO: Pod "pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703" satisfied condition "Succeeded or Failed" + Oct 13 09:06:22.882: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 container agnhost-container: + STEP: delete the pod 10/13/23 09:06:22.887 + Oct 13 09:06:22.897: INFO: Waiting for pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 to disappear + Oct 13 09:06:22.899: INFO: Pod pod-projected-configmaps-82a77351-6044-47dc-8e6a-d5d703413703 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-7520" for this suite. 
10/13/23 09:06:22.903 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:22.908 +Oct 13 09:06:22.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:06:22.909 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:22.921 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:22.924 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +STEP: Creating configMap with name configmap-test-volume-map-74cb3d80-1443-49b6-a01c-6e140771174f 10/13/23 09:06:22.926 +STEP: Creating a pod to test consume configMaps 10/13/23 09:06:22.93 +Oct 13 09:06:22.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11" in namespace "configmap-7161" to be "Succeeded or Failed" +Oct 13 09:06:22.943: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377214ms +Oct 13 09:06:24.947: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008348068s +Oct 13 09:06:26.948: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009365917s +STEP: Saw pod success 10/13/23 09:06:26.948 +Oct 13 09:06:26.948: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11" satisfied condition "Succeeded or Failed" +Oct 13 09:06:26.954: INFO: Trying to get logs from node node3 pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 container agnhost-container: +STEP: delete the pod 10/13/23 09:06:26.96 +Oct 13 09:06:26.971: INFO: Waiting for pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 to disappear +Oct 13 09:06:26.974: INFO: Pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:06:26.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-7161" for this suite. 
10/13/23 09:06:26.977 +------------------------------ +• [4.074 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:22.908 + Oct 13 09:06:22.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:06:22.909 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:22.921 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:22.924 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 + STEP: Creating configMap with name configmap-test-volume-map-74cb3d80-1443-49b6-a01c-6e140771174f 10/13/23 09:06:22.926 + STEP: Creating a pod to test consume configMaps 10/13/23 09:06:22.93 + Oct 13 09:06:22.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11" in namespace "configmap-7161" to be "Succeeded or Failed" + Oct 13 09:06:22.943: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377214ms + Oct 13 09:06:24.947: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008348068s + Oct 13 09:06:26.948: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009365917s + STEP: Saw pod success 10/13/23 09:06:26.948 + Oct 13 09:06:26.948: INFO: Pod "pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11" satisfied condition "Succeeded or Failed" + Oct 13 09:06:26.954: INFO: Trying to get logs from node node3 pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 container agnhost-container: + STEP: delete the pod 10/13/23 09:06:26.96 + Oct 13 09:06:26.971: INFO: Waiting for pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 to disappear + Oct 13 09:06:26.974: INFO: Pod pod-configmaps-357e3377-d358-4dea-9f4b-173132052d11 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:06:26.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-7161" for this suite. 
10/13/23 09:06:26.977 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:06:26.984 +Oct 13 09:06:26.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename taint-multiple-pods 10/13/23 09:06:26.985 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:26.997 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:26.999 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 +Oct 13 09:06:27.001: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 09:07:27.029: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +Oct 13 09:07:27.033: INFO: Starting informer... +STEP: Starting pods... 10/13/23 09:07:27.033 +Oct 13 09:07:27.252: INFO: Pod1 is running on node2. Tainting Node +Oct 13 09:07:27.465: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-9872" to be "running" +Oct 13 09:07:27.474: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80597ms +Oct 13 09:07:29.479: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.013735188s +Oct 13 09:07:29.479: INFO: Pod "taint-eviction-b1" satisfied condition "running" +Oct 13 09:07:29.479: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-9872" to be "running" +Oct 13 09:07:29.482: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 3.588319ms +Oct 13 09:07:29.482: INFO: Pod "taint-eviction-b2" satisfied condition "running" +Oct 13 09:07:29.482: INFO: Pod2 is running on node2. Tainting Node +STEP: Trying to apply a taint on the Node 10/13/23 09:07:29.482 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 09:07:29.5 +STEP: Waiting for Pod1 and Pod2 to be deleted 10/13/23 09:07:29.504 +Oct 13 09:07:34.865: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Oct 13 09:07:54.925: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 09:07:54.938 +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:07:54.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "taint-multiple-pods-9872" for this suite. 
10/13/23 09:07:54.947 +------------------------------ +• [SLOW TEST] [87.972 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:06:26.984 + Oct 13 09:06:26.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename taint-multiple-pods 10/13/23 09:06:26.985 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:06:26.997 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:06:26.999 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 + Oct 13 09:06:27.001: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 09:07:27.029: INFO: Waiting for terminating namespaces to be deleted... + [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 + Oct 13 09:07:27.033: INFO: Starting informer... + STEP: Starting pods... 10/13/23 09:07:27.033 + Oct 13 09:07:27.252: INFO: Pod1 is running on node2. Tainting Node + Oct 13 09:07:27.465: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-9872" to be "running" + Oct 13 09:07:27.474: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80597ms + Oct 13 09:07:29.479: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.013735188s + Oct 13 09:07:29.479: INFO: Pod "taint-eviction-b1" satisfied condition "running" + Oct 13 09:07:29.479: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-9872" to be "running" + Oct 13 09:07:29.482: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 3.588319ms + Oct 13 09:07:29.482: INFO: Pod "taint-eviction-b2" satisfied condition "running" + Oct 13 09:07:29.482: INFO: Pod2 is running on node2. Tainting Node + STEP: Trying to apply a taint on the Node 10/13/23 09:07:29.482 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 09:07:29.5 + STEP: Waiting for Pod1 and Pod2 to be deleted 10/13/23 09:07:29.504 + Oct 13 09:07:34.865: INFO: Noticed Pod "taint-eviction-b1" gets evicted. + Oct 13 09:07:54.925: INFO: Noticed Pod "taint-eviction-b2" gets evicted. + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 10/13/23 09:07:54.938 + [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:07:54.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "taint-multiple-pods-9872" for this suite. 
10/13/23 09:07:54.947 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:07:54.956 +Oct 13 09:07:54.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:07:54.957 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:07:54.975 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:07:54.978 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +STEP: Creating resourceQuota "e2e-rq-status-hrvdb" 10/13/23 09:07:54.988 +Oct 13 09:07:54.995: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard cpu limit of 500m +Oct 13 09:07:54.996: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard memory limit of 500Mi +STEP: Updating resourceQuota "e2e-rq-status-hrvdb" /status 10/13/23 09:07:54.996 +STEP: Confirm /status for "e2e-rq-status-hrvdb" resourceQuota via watch 10/13/23 09:07:55.004 +Oct 13 09:07:55.006: INFO: observed resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList(nil) +Oct 13 09:07:55.006: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Oct 13 09:07:55.006: INFO: ResourceQuota "e2e-rq-status-hrvdb" /status was updated +STEP: Patching hard spec values for cpu & memory 10/13/23 09:07:55.009 +Oct 13 09:07:55.015: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard cpu limit of 1 +Oct 13 09:07:55.015: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard memory limit of 1Gi +STEP: Patching "e2e-rq-status-hrvdb" /status 10/13/23 09:07:55.015 +STEP: Confirm /status for "e2e-rq-status-hrvdb" resourceQuota via watch 10/13/23 09:07:55.022 +Oct 13 09:07:55.024: INFO: observed resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Oct 13 09:07:55.024: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} +Oct 13 09:07:55.024: INFO: ResourceQuota "e2e-rq-status-hrvdb" /status was patched +STEP: Get "e2e-rq-status-hrvdb" /status 10/13/23 09:07:55.024 
+Oct 13 09:07:55.028: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard cpu of 1 +Oct 13 09:07:55.028: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard memory of 1Gi +STEP: Repatching "e2e-rq-status-hrvdb" /status before checking Spec is unchanged 10/13/23 09:07:55.03 +Oct 13 09:07:55.035: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard cpu of 2 +Oct 13 09:07:55.035: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard memory of 2Gi +Oct 13 09:07:55.037: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} +Oct 13 09:12:20.049: INFO: ResourceQuota "e2e-rq-status-hrvdb" Spec was unchanged and /status reset +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:20.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-3783" for this suite. 10/13/23 09:12:20.056 +------------------------------ +• [SLOW TEST] [265.109 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:07:54.956 + Oct 13 09:07:54.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:07:54.957 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:07:54.975 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:07:54.978 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 + STEP: Creating resourceQuota "e2e-rq-status-hrvdb" 10/13/23 09:07:54.988 + Oct 13 09:07:54.995: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard cpu limit of 500m + Oct 13 09:07:54.996: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard memory limit of 500Mi + STEP: Updating resourceQuota "e2e-rq-status-hrvdb" /status 10/13/23 09:07:54.996 + STEP: Confirm /status for "e2e-rq-status-hrvdb" resourceQuota via watch 10/13/23 09:07:55.004 + Oct 13 09:07:55.006: INFO: observed resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList(nil) + Oct 13 09:07:55.006: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Oct 13 09:07:55.006: INFO: ResourceQuota "e2e-rq-status-hrvdb" /status was updated + STEP: Patching hard spec values for cpu & memory 10/13/23 09:07:55.009 + Oct 13 09:07:55.015: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard cpu limit of 1 + Oct 13 09:07:55.015: INFO: Resource quota "e2e-rq-status-hrvdb" reports spec: hard memory limit of 1Gi + STEP: Patching "e2e-rq-status-hrvdb" /status 10/13/23 09:07:55.015 + STEP: Confirm /status for "e2e-rq-status-hrvdb" resourceQuota via watch 10/13/23 09:07:55.022 + Oct 13 09:07:55.024: INFO: observed resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Oct 13 09:07:55.024: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} + Oct 13 09:07:55.024: INFO: ResourceQuota "e2e-rq-status-hrvdb" /status was patched + STEP: Get "e2e-rq-status-hrvdb" /status 10/13/23 09:07:55.024 + Oct 13 09:07:55.028: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard cpu of 1 + Oct 13 09:07:55.028: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard memory of 1Gi + STEP: Repatching "e2e-rq-status-hrvdb" /status before checking Spec is unchanged 10/13/23 09:07:55.03 + Oct 13 09:07:55.035: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard cpu of 2 + Oct 13 09:07:55.035: INFO: Resourcequota "e2e-rq-status-hrvdb" reports status: hard memory of 2Gi + Oct 13 09:07:55.037: INFO: Found resourceQuota "e2e-rq-status-hrvdb" in namespace "resourcequota-3783" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} + Oct 13 09:12:20.049: INFO: ResourceQuota "e2e-rq-status-hrvdb" Spec was unchanged and /status reset + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:20.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-3783" for this suite. 
10/13/23 09:12:20.056 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:20.067 +Oct 13 09:12:20.067: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:12:20.069 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:20.082 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:20.085 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +STEP: Creating a pod to test downward api env vars 10/13/23 09:12:20.087 +Oct 13 09:12:20.097: INFO: Waiting up to 5m0s for pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c" in namespace "downward-api-6134" to be "Succeeded or Failed" +Oct 13 09:12:20.102: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.541345ms +Oct 13 09:12:22.108: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01149945s +Oct 13 09:12:24.109: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012591648s +STEP: Saw pod success 10/13/23 09:12:24.109 +Oct 13 09:12:24.110: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c" satisfied condition "Succeeded or Failed" +Oct 13 09:12:24.114: INFO: Trying to get logs from node node2 pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c container dapi-container: +STEP: delete the pod 10/13/23 09:12:24.125 +Oct 13 09:12:24.135: INFO: Waiting for pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c to disappear +Oct 13 09:12:24.138: INFO: Pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:24.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-6134" for this suite. 
10/13/23 09:12:24.142 +------------------------------ +• [4.082 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:20.067 + Oct 13 09:12:20.067: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:12:20.069 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:20.082 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:20.085 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 + STEP: Creating a pod to test downward api env vars 10/13/23 09:12:20.087 + Oct 13 09:12:20.097: INFO: Waiting up to 5m0s for pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c" in namespace "downward-api-6134" to be "Succeeded or Failed" + Oct 13 09:12:20.102: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.541345ms + Oct 13 09:12:22.108: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01149945s + Oct 13 09:12:24.109: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012591648s + STEP: Saw pod success 10/13/23 09:12:24.109 + Oct 13 09:12:24.110: INFO: Pod "downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c" satisfied condition "Succeeded or Failed" + Oct 13 09:12:24.114: INFO: Trying to get logs from node node2 pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c container dapi-container: + STEP: delete the pod 10/13/23 09:12:24.125 + Oct 13 09:12:24.135: INFO: Waiting for pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c to disappear + Oct 13 09:12:24.138: INFO: Pod downward-api-ac4055d4-da8c-4773-92c6-709802e5dd6c no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:24.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-6134" for this suite. 
10/13/23 09:12:24.142 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:24.151 +Oct 13 09:12:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 09:12:24.152 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:24.165 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:24.168 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 +STEP: Creating a test namespace 10/13/23 09:12:24.17 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:24.181 +STEP: Creating a pod in the namespace 10/13/23 09:12:24.184 +STEP: Waiting for the pod to have running status 10/13/23 09:12:24.191 +Oct 13 09:12:24.192: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-5057" to be "running" +Oct 13 09:12:24.196: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066693ms +Oct 13 09:12:26.202: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.01013691s +Oct 13 09:12:26.202: INFO: Pod "test-pod" satisfied condition "running" +STEP: Deleting the namespace 10/13/23 09:12:26.202 +STEP: Waiting for the namespace to be removed. 10/13/23 09:12:26.21 +STEP: Recreating the namespace 10/13/23 09:12:37.215 +STEP: Verifying there are no pods in the namespace 10/13/23 09:12:37.228 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:37.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-1474" for this suite. 10/13/23 09:12:37.234 +STEP: Destroying namespace "nsdeletetest-5057" for this suite. 10/13/23 09:12:37.239 +Oct 13 09:12:37.242: INFO: Namespace nsdeletetest-5057 was already deleted +STEP: Destroying namespace "nsdeletetest-5311" for this suite. 
10/13/23 09:12:37.242 +------------------------------ +• [SLOW TEST] [13.095 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:24.151 + Oct 13 09:12:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 09:12:24.152 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:24.165 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:24.168 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 + STEP: Creating a test namespace 10/13/23 09:12:24.17 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:24.181 + STEP: Creating a pod in the namespace 10/13/23 09:12:24.184 + STEP: Waiting for the pod to have running status 10/13/23 09:12:24.191 + Oct 13 09:12:24.192: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-5057" to be "running" + Oct 13 09:12:24.196: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066693ms + Oct 13 09:12:26.202: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.01013691s + Oct 13 09:12:26.202: INFO: Pod "test-pod" satisfied condition "running" + STEP: Deleting the namespace 10/13/23 09:12:26.202 + STEP: Waiting for the namespace to be removed. 10/13/23 09:12:26.21 + STEP: Recreating the namespace 10/13/23 09:12:37.215 + STEP: Verifying there are no pods in the namespace 10/13/23 09:12:37.228 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:37.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-1474" for this suite. 10/13/23 09:12:37.234 + STEP: Destroying namespace "nsdeletetest-5057" for this suite. 10/13/23 09:12:37.239 + Oct 13 09:12:37.242: INFO: Namespace nsdeletetest-5057 was already deleted + STEP: Destroying namespace "nsdeletetest-5311" for this suite. 
10/13/23 09:12:37.242 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:37.247 +Oct 13 09:12:37.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:12:37.248 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:37.262 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:37.265 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 +STEP: Creating a pod to test emptydir 0666 on tmpfs 10/13/23 09:12:37.267 +Oct 13 09:12:37.275: INFO: Waiting up to 5m0s for pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c" in namespace "emptydir-9478" to be "Succeeded or Failed" +Oct 13 09:12:37.278: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.170743ms +Oct 13 09:12:39.282: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007655185s +Oct 13 09:12:41.283: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008343022s +STEP: Saw pod success 10/13/23 09:12:41.283 +Oct 13 09:12:41.283: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c" satisfied condition "Succeeded or Failed" +Oct 13 09:12:41.287: INFO: Trying to get logs from node node2 pod pod-3a5a7302-928d-4871-9826-5b968c44f31c container test-container: +STEP: delete the pod 10/13/23 09:12:41.294 +Oct 13 09:12:41.305: INFO: Waiting for pod pod-3a5a7302-928d-4871-9826-5b968c44f31c to disappear +Oct 13 09:12:41.308: INFO: Pod pod-3a5a7302-928d-4871-9826-5b968c44f31c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:41.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-9478" for this suite. 
10/13/23 09:12:41.311 +------------------------------ +• [4.070 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:37.247 + Oct 13 09:12:37.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:12:37.248 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:37.262 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:37.265 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:107 + STEP: Creating a pod to test emptydir 0666 on tmpfs 10/13/23 09:12:37.267 + Oct 13 09:12:37.275: INFO: Waiting up to 5m0s for pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c" in namespace "emptydir-9478" to be "Succeeded or Failed" + Oct 13 09:12:37.278: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.170743ms + Oct 13 09:12:39.282: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007655185s + Oct 13 09:12:41.283: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008343022s + STEP: Saw pod success 10/13/23 09:12:41.283 + Oct 13 09:12:41.283: INFO: Pod "pod-3a5a7302-928d-4871-9826-5b968c44f31c" satisfied condition "Succeeded or Failed" + Oct 13 09:12:41.287: INFO: Trying to get logs from node node2 pod pod-3a5a7302-928d-4871-9826-5b968c44f31c container test-container: + STEP: delete the pod 10/13/23 09:12:41.294 + Oct 13 09:12:41.305: INFO: Waiting for pod pod-3a5a7302-928d-4871-9826-5b968c44f31c to disappear + Oct 13 09:12:41.308: INFO: Pod pod-3a5a7302-928d-4871-9826-5b968c44f31c no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:41.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-9478" for this suite. 
10/13/23 09:12:41.311 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:41.317 +Oct 13 09:12:41.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 09:12:41.318 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:41.335 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:41.337 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 +STEP: Read namespace status 10/13/23 09:12:41.339 +Oct 13 09:12:41.343: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} +STEP: Patch namespace status 10/13/23 09:12:41.343 +Oct 13 09:12:41.350: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} +STEP: Update namespace status 10/13/23 09:12:41.35 +Oct 13 09:12:41.357: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-6545" for this suite. 
10/13/23 09:12:41.362 +------------------------------ +• [0.050 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:41.317 + Oct 13 09:12:41.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 09:12:41.318 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:41.335 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:41.337 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:299 + STEP: Read namespace status 10/13/23 09:12:41.339 + Oct 13 09:12:41.343: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} + STEP: Patch namespace status 10/13/23 09:12:41.343 + Oct 13 09:12:41.350: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} + STEP: Update namespace status 10/13/23 09:12:41.35 + Oct 13 09:12:41.357: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-6545" for this suite. 
10/13/23 09:12:41.362 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:41.37 +Oct 13 09:12:41.370: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:12:41.371 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:41.386 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:41.388 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:12:41.39 +Oct 13 09:12:41.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799" in namespace "projected-3770" to be "Succeeded or Failed" +Oct 13 09:12:41.401: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Pending", Reason="", readiness=false. Elapsed: 3.510124ms +Oct 13 09:12:43.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008635016s +Oct 13 09:12:45.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007920851s +STEP: Saw pod success 10/13/23 09:12:45.406 +Oct 13 09:12:45.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799" satisfied condition "Succeeded or Failed" +Oct 13 09:12:45.409: INFO: Trying to get logs from node node2 pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 container client-container: +STEP: delete the pod 10/13/23 09:12:45.415 +Oct 13 09:12:45.429: INFO: Waiting for pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 to disappear +Oct 13 09:12:45.433: INFO: Pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:45.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-3770" for this suite. 
10/13/23 09:12:45.437 +------------------------------ +• [4.073 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:41.37 + Oct 13 09:12:41.370: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:12:41.371 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:41.386 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:41.388 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:12:41.39 + Oct 13 09:12:41.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799" in namespace "projected-3770" to be "Succeeded or Failed" + Oct 13 09:12:41.401: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Pending", Reason="", readiness=false. Elapsed: 3.510124ms + Oct 13 09:12:43.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008635016s + Oct 13 09:12:45.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007920851s + STEP: Saw pod success 10/13/23 09:12:45.406 + Oct 13 09:12:45.406: INFO: Pod "downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799" satisfied condition "Succeeded or Failed" + Oct 13 09:12:45.409: INFO: Trying to get logs from node node2 pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 container client-container: + STEP: delete the pod 10/13/23 09:12:45.415 + Oct 13 09:12:45.429: INFO: Waiting for pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 to disappear + Oct 13 09:12:45.433: INFO: Pod downwardapi-volume-d4a3c578-70d7-4e0f-9b94-117aac8f1799 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:45.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-3770" for this suite. 
10/13/23 09:12:45.437 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:45.445 +Oct 13 09:12:45.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replicaset 10/13/23 09:12:45.446 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:45.457 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:45.459 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 10/13/23 09:12:45.461 +Oct 13 09:12:45.469: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 13 09:12:50.474: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 10/13/23 09:12:50.474 +STEP: getting scale subresource 10/13/23 09:12:50.474 +STEP: updating a scale subresource 10/13/23 09:12:50.478 +STEP: verifying the replicaset Spec.Replicas was modified 10/13/23 09:12:50.484 +STEP: Patch a scale subresource 10/13/23 09:12:50.488 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:50.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-9463" for this suite. 
10/13/23 09:12:50.507 +------------------------------ +• [SLOW TEST] [5.073 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:45.445 + Oct 13 09:12:45.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replicaset 10/13/23 09:12:45.446 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:45.457 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:45.459 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 10/13/23 09:12:45.461 + Oct 13 09:12:45.469: INFO: Pod name sample-pod: Found 0 pods out of 1 + Oct 13 09:12:50.474: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 10/13/23 09:12:50.474 + STEP: getting scale subresource 10/13/23 09:12:50.474 + STEP: updating a scale subresource 10/13/23 09:12:50.478 + STEP: verifying the replicaset Spec.Replicas was modified 10/13/23 09:12:50.484 + STEP: Patch a scale subresource 10/13/23 09:12:50.488 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:50.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-9463" for this suite. 
10/13/23 09:12:50.507 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:50.518 +Oct 13 09:12:50.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:12:50.52 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:50.534 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:50.537 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:12:50.554 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:12:50.94 +STEP: Deploying the webhook pod 10/13/23 09:12:50.948 +STEP: Wait for the deployment to be ready 10/13/23 09:12:50.958 +Oct 13 09:12:50.970: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:12:52.983 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:12:53.006 +Oct 13 09:12:54.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 10/13/23 09:12:54.011 +STEP: create a configmap that should be updated by the webhook 10/13/23 09:12:54.028 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:54.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9678" for this suite. 10/13/23 09:12:54.098 +STEP: Destroying namespace "webhook-9678-markers" for this suite. 
10/13/23 09:12:54.106 +------------------------------ +• [3.597 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:50.518 + Oct 13 09:12:50.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:12:50.52 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:50.534 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:50.537 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:12:50.554 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:12:50.94 + STEP: Deploying the webhook pod 10/13/23 09:12:50.948 + STEP: Wait for the deployment to be ready 10/13/23 09:12:50.958 + Oct 13 09:12:50.970: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:12:52.983 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:12:53.006 + Oct 13 09:12:54.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 + STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 10/13/23 09:12:54.011 + STEP: create a configmap that should be updated by the webhook 10/13/23 09:12:54.028 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:54.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9678" for this suite. 10/13/23 09:12:54.098 + STEP: Destroying namespace "webhook-9678-markers" for this suite. 
10/13/23 09:12:54.106 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:54.116 +Oct 13 09:12:54.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 09:12:54.117 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:54.13 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:54.133 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +STEP: Creating secret with name secret-test-72ba2715-6ad0-4074-9678-d13c8159d0f9 10/13/23 09:12:54.15 +STEP: Creating a pod to test consume secrets 10/13/23 09:12:54.154 +Oct 13 09:12:54.161: INFO: Waiting up to 5m0s for pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d" in namespace "secrets-751" to be "Succeeded or Failed" +Oct 13 09:12:54.165: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936526ms +Oct 13 09:12:56.169: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007893346s +Oct 13 09:12:58.171: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009456872s +STEP: Saw pod success 10/13/23 09:12:58.171 +Oct 13 09:12:58.171: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d" satisfied condition "Succeeded or Failed" +Oct 13 09:12:58.176: INFO: Trying to get logs from node node2 pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d container secret-volume-test: +STEP: delete the pod 10/13/23 09:12:58.184 +Oct 13 09:12:58.194: INFO: Waiting for pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d to disappear +Oct 13 09:12:58.197: INFO: Pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 09:12:58.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-751" for this suite. 10/13/23 09:12:58.201 +STEP: Destroying namespace "secret-namespace-46" for this suite. 
10/13/23 09:12:58.206 +------------------------------ +• [4.096 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:54.116 + Oct 13 09:12:54.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 09:12:54.117 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:54.13 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:54.133 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 + STEP: Creating secret with name secret-test-72ba2715-6ad0-4074-9678-d13c8159d0f9 10/13/23 09:12:54.15 + STEP: Creating a pod to test consume secrets 10/13/23 09:12:54.154 + Oct 13 09:12:54.161: INFO: Waiting up to 5m0s for pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d" in namespace "secrets-751" to be "Succeeded or Failed" + Oct 13 09:12:54.165: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936526ms + Oct 13 09:12:56.169: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007893346s + Oct 13 09:12:58.171: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009456872s + STEP: Saw pod success 10/13/23 09:12:58.171 + Oct 13 09:12:58.171: INFO: Pod "pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d" satisfied condition "Succeeded or Failed" + Oct 13 09:12:58.176: INFO: Trying to get logs from node node2 pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d container secret-volume-test: + STEP: delete the pod 10/13/23 09:12:58.184 + Oct 13 09:12:58.194: INFO: Waiting for pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d to disappear + Oct 13 09:12:58.197: INFO: Pod pod-secrets-34d44269-3291-4c3a-a21f-14f49a32129d no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 09:12:58.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-751" for this suite. 10/13/23 09:12:58.201 + STEP: Destroying namespace "secret-namespace-46" for this suite. 
10/13/23 09:12:58.206 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:12:58.212 +Oct 13 09:12:58.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:12:58.214 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:58.226 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:58.229 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +STEP: Creating configMap with name configmap-test-volume-map-4c455051-df3f-4914-b7e0-d46aeaf696a7 10/13/23 09:12:58.231 +STEP: Creating a pod to test consume configMaps 10/13/23 09:12:58.235 +Oct 13 09:12:58.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7" in namespace "configmap-1714" to be "Succeeded or Failed" +Oct 13 09:12:58.246: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153833ms +Oct 13 09:13:00.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007423385s +Oct 13 09:13:02.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007333918s +STEP: Saw pod success 10/13/23 09:13:02.25 +Oct 13 09:13:02.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7" satisfied condition "Succeeded or Failed" +Oct 13 09:13:02.254: INFO: Trying to get logs from node node2 pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 container agnhost-container: +STEP: delete the pod 10/13/23 09:13:02.26 +Oct 13 09:13:02.270: INFO: Waiting for pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 to disappear +Oct 13 09:13:02.273: INFO: Pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:13:02.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-1714" for this suite. 
10/13/23 09:13:02.276 +------------------------------ +• [4.069 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:12:58.212 + Oct 13 09:12:58.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:12:58.214 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:12:58.226 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:12:58.229 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 + STEP: Creating configMap with name configmap-test-volume-map-4c455051-df3f-4914-b7e0-d46aeaf696a7 10/13/23 09:12:58.231 + STEP: Creating a pod to test consume configMaps 10/13/23 09:12:58.235 + Oct 13 09:12:58.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7" in namespace "configmap-1714" to be "Succeeded or Failed" + Oct 13 09:12:58.246: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153833ms + Oct 13 09:13:00.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007423385s + Oct 13 09:13:02.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007333918s + STEP: Saw pod success 10/13/23 09:13:02.25 + Oct 13 09:13:02.250: INFO: Pod "pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7" satisfied condition "Succeeded or Failed" + Oct 13 09:13:02.254: INFO: Trying to get logs from node node2 pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 container agnhost-container: + STEP: delete the pod 10/13/23 09:13:02.26 + Oct 13 09:13:02.270: INFO: Waiting for pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 to disappear + Oct 13 09:13:02.273: INFO: Pod pod-configmaps-f6944adf-39e1-4578-bcb0-d329398713a7 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:13:02.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-1714" for this suite. 
10/13/23 09:13:02.276 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:13:02.283 +Oct 13 09:13:02.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:13:02.284 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:02.296 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:02.298 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 10/13/23 09:13:02.3 +Oct 13 09:13:02.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 10/13/23 09:13:08.983 +Oct 13 09:13:08.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:13:10.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:13:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-9030" for this suite. 
10/13/23 09:13:17.992 +------------------------------ +• [SLOW TEST] [15.715 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:13:02.283 + Oct 13 09:13:02.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:13:02.284 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:02.296 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:02.298 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 + STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 10/13/23 09:13:02.3 + Oct 13 09:13:02.301: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 10/13/23 09:13:08.983 + Oct 13 09:13:08.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:13:10.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:13:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-9030" for this suite. 
10/13/23 09:13:17.992 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:13:18 +Oct 13 09:13:18.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:13:18 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:18.016 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:18.018 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +STEP: Performing setup for networking test in namespace pod-network-test-6789 10/13/23 09:13:18.021 +STEP: creating a selector 10/13/23 09:13:18.021 +STEP: Creating the service pods in kubernetes 10/13/23 09:13:18.021 +Oct 13 09:13:18.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 13 09:13:18.045: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-6789" to be "running and ready" +Oct 13 09:13:18.049: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876787ms +Oct 13 09:13:18.049: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:13:20.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.009731806s +Oct 13 09:13:20.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:22.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.009027246s +Oct 13 09:13:22.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:24.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.011250805s +Oct 13 09:13:24.056: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:26.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009939347s +Oct 13 09:13:26.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:28.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.008818821s +Oct 13 09:13:28.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:30.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.00923086s +Oct 13 09:13:30.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:32.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.010177622s +Oct 13 09:13:32.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:34.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.011469773s +Oct 13 09:13:34.056: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:36.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.01023469s +Oct 13 09:13:36.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:38.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.00895396s +Oct 13 09:13:38.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:13:40.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.010599042s +Oct 13 09:13:40.056: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Oct 13 09:13:40.056: INFO: Pod "netserver-0" satisfied condition "running and ready" +Oct 13 09:13:40.060: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-6789" to be "running and ready" +Oct 13 09:13:40.064: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.100539ms +Oct 13 09:13:40.064: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Oct 13 09:13:40.064: INFO: Pod "netserver-1" satisfied condition "running and ready" +Oct 13 09:13:40.068: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-6789" to be "running and ready" +Oct 13 09:13:40.071: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 3.434924ms +Oct 13 09:13:40.071: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Oct 13 09:13:40.071: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 10/13/23 09:13:40.075 +Oct 13 09:13:40.093: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-6789" to be "running" +Oct 13 09:13:40.096: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.481857ms +Oct 13 09:13:42.101: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007986184s +Oct 13 09:13:42.101: INFO: Pod "test-container-pod" satisfied condition "running" +Oct 13 09:13:42.104: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-6789" to be "running" +Oct 13 09:13:42.108: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.395137ms +Oct 13 09:13:42.108: INFO: Pod "host-test-container-pod" satisfied condition "running" +Oct 13 09:13:42.112: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Oct 13 09:13:42.112: INFO: Going to poll 10.244.0.54 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:13:42.115: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.0.54 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:13:42.115: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:13:42.116: INFO: ExecWithOptions: Clientset creation +Oct 13 09:13:42.116: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.0.54+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:13:43.188: INFO: Found all 1 expected endpoints: [netserver-0] +Oct 13 09:13:43.188: INFO: Going to poll 10.244.1.233 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:13:43.192: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:13:43.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:13:43.192: INFO: ExecWithOptions: Clientset creation +Oct 13 09:13:43.192: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.1.233+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:13:44.270: INFO: Found all 1 expected endpoints: [netserver-1] +Oct 13 09:13:44.270: INFO: Going to poll 10.244.2.129 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Oct 13 09:13:44.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.129 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:13:44.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:13:44.276: INFO: ExecWithOptions: Clientset creation +Oct 13 09:13:44.276: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.2.129+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Oct 13 09:13:45.352: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Oct 13 09:13:45.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 
+[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-6789" for this suite. 10/13/23 09:13:45.358 +------------------------------ +• [SLOW TEST] [27.367 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:13:18 + Oct 13 09:13:18.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:13:18 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:18.016 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:18.018 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + STEP: Performing setup for networking test in namespace pod-network-test-6789 10/13/23 09:13:18.021 + STEP: creating a selector 10/13/23 09:13:18.021 + STEP: Creating the service pods in kubernetes 10/13/23 09:13:18.021 + Oct 13 09:13:18.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Oct 13 09:13:18.045: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-6789" to be "running and ready" + Oct 13 09:13:18.049: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876787ms + Oct 13 09:13:18.049: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:13:20.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.009731806s + Oct 13 09:13:20.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:22.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.009027246s + Oct 13 09:13:22.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:24.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.011250805s + Oct 13 09:13:24.056: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:26.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009939347s + Oct 13 09:13:26.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:28.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.008818821s + Oct 13 09:13:28.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:30.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.00923086s + Oct 13 09:13:30.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:32.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.010177622s + Oct 13 09:13:32.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:34.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.011469773s + Oct 13 09:13:34.056: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:36.055: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.01023469s + Oct 13 09:13:36.055: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:38.054: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.00895396s + Oct 13 09:13:38.054: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:13:40.056: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.010599042s + Oct 13 09:13:40.056: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Oct 13 09:13:40.056: INFO: Pod "netserver-0" satisfied condition "running and ready" + Oct 13 09:13:40.060: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-6789" to be "running and ready" + Oct 13 09:13:40.064: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.100539ms + Oct 13 09:13:40.064: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Oct 13 09:13:40.064: INFO: Pod "netserver-1" satisfied condition "running and ready" + Oct 13 09:13:40.068: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-6789" to be "running and ready" + Oct 13 09:13:40.071: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 3.434924ms + Oct 13 09:13:40.071: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Oct 13 09:13:40.071: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 10/13/23 09:13:40.075 + Oct 13 09:13:40.093: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-6789" to be "running" + Oct 13 09:13:40.096: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.481857ms + Oct 13 09:13:42.101: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007986184s + Oct 13 09:13:42.101: INFO: Pod "test-container-pod" satisfied condition "running" + Oct 13 09:13:42.104: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-6789" to be "running" + Oct 13 09:13:42.108: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.395137ms + Oct 13 09:13:42.108: INFO: Pod "host-test-container-pod" satisfied condition "running" + Oct 13 09:13:42.112: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Oct 13 09:13:42.112: INFO: Going to poll 10.244.0.54 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:13:42.115: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.0.54 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:13:42.115: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:13:42.116: INFO: ExecWithOptions: Clientset creation + Oct 13 09:13:42.116: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.0.54+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:13:43.188: INFO: Found all 1 expected endpoints: [netserver-0] + Oct 13 09:13:43.188: INFO: Going to poll 10.244.1.233 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:13:43.192: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:13:43.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:13:43.192: INFO: ExecWithOptions: Clientset creation + Oct 13 09:13:43.192: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.1.233+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:13:44.270: INFO: Found all 1 expected endpoints: [netserver-1] + Oct 13 09:13:44.270: INFO: Going to poll 10.244.2.129 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Oct 13 09:13:44.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.129 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6789 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:13:44.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:13:44.276: INFO: ExecWithOptions: Clientset creation + Oct 13 09:13:44.276: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-6789/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.244.2.129+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Oct 13 09:13:45.352: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Oct 13 09:13:45.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump 
namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-6789" for this suite. 10/13/23 09:13:45.358 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:13:45.367 +Oct 13 09:13:45.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-pred 10/13/23 09:13:45.369 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:45.384 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:45.386 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Oct 13 09:13:45.388: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Oct 13 09:13:45.395: INFO: Waiting for terminating namespaces to be deleted... +Oct 13 09:13:45.397: INFO: +Logging pods the apiserver thinks is on node node1 before test +Oct 13 09:13:45.403: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container coredns ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container etcd ready: true, restart count 8 +Oct 13 09:13:45.403: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container haproxy ready: true, restart count 3 +Oct 13 09:13:45.403: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container keepalived ready: true, restart count 9 +Oct 13 09:13:45.403: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-apiserver ready: true, restart count 8 +Oct 13 09:13:45.403: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-controller-manager ready: true, restart count 8 +Oct 13 09:13:45.403: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:13:45.403: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-scheduler ready: true, restart count 11 +Oct 13 09:13:45.403: INFO: netserver-0 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: 
INFO: Container webserver ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container kube-sonobuoy ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:13:45.403: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 09:13:45.403: INFO: +Logging pods the apiserver thinks is on node node2 before test +Oct 13 09:13:45.409: INFO: kube-flannel-ds-6t9lq from kube-flannel started at 2023-10-13 09:07:55 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container etcd ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container haproxy ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container keepalived ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container kube-apiserver ready: true, restart count 2 +Oct 13 09:13:45.409: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 09:13:45.409: INFO: host-test-container-pod from pod-network-test-6789 started at 2023-10-13 09:13:40 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container agnhost-container ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: netserver-1 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container webserver ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: test-container-pod from pod-network-test-6789 started at 2023-10-13 09:13:40 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container webserver ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:13:45.409: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: Container systemd-logs ready: true, restart count 0 +Oct 13 09:13:45.409: INFO: +Logging pods the apiserver thinks is on node node3 before test +Oct 13 09:13:45.417: INFO: 
kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container kube-flannel ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container coredns ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container etcd ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container haproxy ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container keepalived ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container kube-apiserver ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container kube-controller-manager ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container kube-proxy ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container kube-scheduler ready: true, restart count 1 +Oct 13 09:13:45.417: INFO: netserver-2 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container webserver ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container e2e ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) +Oct 13 09:13:45.417: INFO: Container sonobuoy-worker ready: true, restart count 0 +Oct 13 09:13:45.417: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 09:13:45.417 +Oct 13 09:13:45.425: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8096" to be "running" +Oct 13 09:13:45.428: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271036ms +Oct 13 09:13:47.435: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.009771093s +Oct 13 09:13:47.435: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 
10/13/23 09:13:47.439 +STEP: Trying to apply a random label on the found node. 10/13/23 09:13:47.451 +STEP: verifying the node has the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 42 10/13/23 09:13:47.46 +STEP: Trying to relaunch the pod, now with labels. 10/13/23 09:13:47.465 +Oct 13 09:13:47.470: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-8096" to be "not pending" +Oct 13 09:13:47.474: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303975ms +Oct 13 09:13:49.479: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.008118549s +Oct 13 09:13:49.479: INFO: Pod "with-labels" satisfied condition "not pending" +STEP: removing the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 off the node node1 10/13/23 09:13:49.483 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 10/13/23 09:13:49.502 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:13:49.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-8096" for this suite. 10/13/23 09:13:49.511 +------------------------------ +• [4.149 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:13:45.367 + Oct 13 09:13:45.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-pred 10/13/23 09:13:45.369 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:45.384 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:45.386 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Oct 13 09:13:45.388: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Oct 13 09:13:45.395: INFO: Waiting for terminating namespaces to be deleted... 
+ Oct 13 09:13:45.397: INFO: + Logging pods the apiserver thinks is on node node1 before test + Oct 13 09:13:45.403: INFO: kube-flannel-ds-jtxbm from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: coredns-787d4945fb-89krv from kube-system started at 2023-10-13 08:19:33 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container coredns ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: etcd-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container etcd ready: true, restart count 8 + Oct 13 09:13:45.403: INFO: haproxy-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container haproxy ready: true, restart count 3 + Oct 13 09:13:45.403: INFO: keepalived-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container keepalived ready: true, restart count 9 + Oct 13 09:13:45.403: INFO: kube-apiserver-node1 from kube-system started at 2023-10-13 07:05:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-apiserver ready: true, restart count 8 + Oct 13 09:13:45.403: INFO: kube-controller-manager-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-controller-manager ready: true, restart count 8 + Oct 13 09:13:45.403: INFO: kube-proxy-dqr76 from kube-system started at 2023-10-13 07:05:37 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:13:45.403: INFO: kube-scheduler-node1 from kube-system started at 2023-10-13 07:51:38 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-scheduler ready: true, restart count 11 + Oct 13 09:13:45.403: INFO: netserver-0 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container webserver ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: sonobuoy from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container kube-sonobuoy ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:13:45.403: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 09:13:45.403: INFO: + Logging pods the apiserver thinks is on node node2 before test + Oct 13 09:13:45.409: INFO: kube-flannel-ds-6t9lq from kube-flannel started at 2023-10-13 09:07:55 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: etcd-node2 from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container etcd ready: true, restart count 1 + Oct 13 09:13:45.409: INFO: haproxy-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container haproxy ready: true, restart 
count 1 + Oct 13 09:13:45.409: INFO: keepalived-node2 from kube-system started at 2023-10-13 07:09:12 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container keepalived ready: true, restart count 1 + Oct 13 09:13:45.409: INFO: kube-apiserver-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container kube-apiserver ready: true, restart count 2 + Oct 13 09:13:45.409: INFO: kube-controller-manager-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 09:13:45.409: INFO: kube-proxy-tkvwh from kube-system started at 2023-10-13 07:06:06 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:13:45.409: INFO: kube-scheduler-node2 from kube-system started at 2023-10-13 07:05:53 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 09:13:45.409: INFO: host-test-container-pod from pod-network-test-6789 started at 2023-10-13 09:13:40 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container agnhost-container ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: netserver-1 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container webserver ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: test-container-pod from pod-network-test-6789 started at 2023-10-13 09:13:40 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container webserver ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-rhwwd from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:13:45.409: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: Container systemd-logs ready: true, restart count 0 + Oct 13 09:13:45.409: INFO: + Logging pods the apiserver thinks is on node node3 before test + Oct 13 09:13:45.417: INFO: kube-flannel-ds-dzrwh from kube-flannel started at 2023-10-13 08:11:42 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container kube-flannel ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: coredns-787d4945fb-5dqqv from kube-system started at 2023-10-13 08:12:41 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container coredns ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: etcd-node3 from kube-system started at 2023-10-13 07:07:40 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container etcd ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: haproxy-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container haproxy ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: keepalived-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container keepalived ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: kube-apiserver-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container kube-apiserver ready: true, restart count 1 + Oct 13 
09:13:45.417: INFO: kube-controller-manager-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container kube-controller-manager ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: kube-proxy-dkrp7 from kube-system started at 2023-10-13 07:07:46 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container kube-proxy ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: kube-scheduler-node3 from kube-system started at 2023-10-13 07:07:26 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container kube-scheduler ready: true, restart count 1 + Oct 13 09:13:45.417: INFO: netserver-2 from pod-network-test-6789 started at 2023-10-13 09:13:18 +0000 UTC (1 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container webserver ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: sonobuoy-e2e-job-bfbf16dda205467f from sonobuoy started at 2023-10-13 08:13:29 +0000 UTC (2 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container e2e ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-xj7b7 from sonobuoy started at 2023-10-13 08:13:30 +0000 UTC (2 container statuses recorded) + Oct 13 09:13:45.417: INFO: Container sonobuoy-worker ready: true, restart count 0 + Oct 13 09:13:45.417: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 + STEP: Trying to launch a pod without a label to get a node which can launch it. 10/13/23 09:13:45.417 + Oct 13 09:13:45.425: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8096" to be "running" + Oct 13 09:13:45.428: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.271036ms + Oct 13 09:13:47.435: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.009771093s + Oct 13 09:13:47.435: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 10/13/23 09:13:47.439 + STEP: Trying to apply a random label on the found node. 10/13/23 09:13:47.451 + STEP: verifying the node has the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 42 10/13/23 09:13:47.46 + STEP: Trying to relaunch the pod, now with labels. 10/13/23 09:13:47.465 + Oct 13 09:13:47.470: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-8096" to be "not pending" + Oct 13 09:13:47.474: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303975ms + Oct 13 09:13:49.479: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008118549s + Oct 13 09:13:49.479: INFO: Pod "with-labels" satisfied condition "not pending" + STEP: removing the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 off the node node1 10/13/23 09:13:49.483 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-421bc7f7-92f4-40a0-9d73-5df9d98fdb99 10/13/23 09:13:49.502 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:13:49.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-8096" for this suite. 10/13/23 09:13:49.511 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:13:49.517 +Oct 13 09:13:49.517: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:13:49.518 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:49.533 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:49.536 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:13:49.538 +Oct 13 09:13:49.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b" in namespace "downward-api-2159" to be "Succeeded or Failed" +Oct 13 09:13:49.551: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.800963ms +Oct 13 09:13:51.555: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008233732s +Oct 13 09:13:53.556: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009218706s +STEP: Saw pod success 10/13/23 09:13:53.556 +Oct 13 09:13:53.556: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b" satisfied condition "Succeeded or Failed" +Oct 13 09:13:53.559: INFO: Trying to get logs from node node2 pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b container client-container: +STEP: delete the pod 10/13/23 09:13:53.571 +Oct 13 09:13:53.590: INFO: Waiting for pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b to disappear +Oct 13 09:13:53.593: INFO: Pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 09:13:53.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2159" for this suite. 10/13/23 09:13:53.597 +------------------------------ +• [4.086 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:13:49.517 + Oct 13 09:13:49.517: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:13:49.518 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:49.533 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:49.536 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:13:49.538 + Oct 13 09:13:49.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b" in namespace "downward-api-2159" to be "Succeeded or Failed" + Oct 13 09:13:49.551: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.800963ms + Oct 13 09:13:51.555: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008233732s + Oct 13 09:13:53.556: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009218706s + STEP: Saw pod success 10/13/23 09:13:53.556 + Oct 13 09:13:53.556: INFO: Pod "downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b" satisfied condition "Succeeded or Failed" + Oct 13 09:13:53.559: INFO: Trying to get logs from node node2 pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b container client-container: + STEP: delete the pod 10/13/23 09:13:53.571 + Oct 13 09:13:53.590: INFO: Waiting for pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b to disappear + Oct 13 09:13:53.593: INFO: Pod downwardapi-volume-27711404-f108-480a-9cf3-21d6159e965b no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 09:13:53.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2159" for this suite. 10/13/23 09:13:53.597 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:13:53.603 +Oct 13 09:13:53.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename cronjob 10/13/23 09:13:53.604 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:53.619 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:53.622 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +STEP: Creating a ReplaceConcurrent cronjob 10/13/23 09:13:53.624 +STEP: Ensuring a job is scheduled 10/13/23 09:13:53.63 +STEP: Ensuring exactly one is scheduled 10/13/23 09:14:01.636 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 10/13/23 09:14:01.639 +STEP: Ensuring the job is replaced with a new one 10/13/23 09:14:01.642 +STEP: Removing cronjob 10/13/23 09:15:01.649 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Oct 13 09:15:01.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-9757" for this suite. 
10/13/23 09:15:01.661 +------------------------------ +• [SLOW TEST] [68.065 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:13:53.603 + Oct 13 09:13:53.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename cronjob 10/13/23 09:13:53.604 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:13:53.619 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:13:53.622 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + STEP: Creating a ReplaceConcurrent cronjob 10/13/23 09:13:53.624 + STEP: Ensuring a job is scheduled 10/13/23 09:13:53.63 + STEP: Ensuring exactly one is scheduled 10/13/23 09:14:01.636 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 10/13/23 09:14:01.639 + STEP: Ensuring the job is replaced with a new one 10/13/23 09:14:01.642 + STEP: Removing cronjob 10/13/23 09:15:01.649 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Oct 13 09:15:01.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-9757" for this suite. 
10/13/23 09:15:01.661 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:15:01.669 +Oct 13 09:15:01.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 09:15:01.671 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.689 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.692 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +STEP: creating a collection of services 10/13/23 09:15:01.694 +Oct 13 09:15:01.694: INFO: Creating e2e-svc-a-m4bxl +Oct 13 09:15:01.704: INFO: Creating e2e-svc-b-kjpq9 +Oct 13 09:15:01.720: INFO: Creating e2e-svc-c-mlcpm +STEP: deleting service collection 10/13/23 09:15:01.738 +Oct 13 09:15:01.775: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 09:15:01.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-6022" for this suite. 
10/13/23 09:15:01.78 +------------------------------ +• [0.117 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:15:01.669 + Oct 13 09:15:01.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 09:15:01.671 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.689 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.692 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 + STEP: creating a collection of services 10/13/23 09:15:01.694 + Oct 13 09:15:01.694: INFO: Creating e2e-svc-a-m4bxl + Oct 13 09:15:01.704: INFO: Creating e2e-svc-b-kjpq9 + Oct 13 09:15:01.720: INFO: Creating e2e-svc-c-mlcpm + STEP: deleting service collection 10/13/23 09:15:01.738 + Oct 13 09:15:01.775: INFO: Collection of services has been deleted + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 09:15:01.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-6022" for this suite. 
10/13/23 09:15:01.78 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +[BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:15:01.787 +Oct 13 09:15:01.787: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename events 10/13/23 09:15:01.788 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.808 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.811 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +STEP: Create set of events 10/13/23 09:15:01.813 +Oct 13 09:15:01.820: INFO: created test-event-1 +Oct 13 09:15:01.825: INFO: created test-event-2 +Oct 13 09:15:01.830: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace 10/13/23 09:15:01.83 +STEP: delete collection of events 10/13/23 09:15:01.833 +Oct 13 09:15:01.833: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 10/13/23 09:15:01.858 +Oct 13 09:15:01.858: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 +Oct 13 09:15:01.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 +STEP: Destroying namespace "events-9030" for this suite. 
10/13/23 09:15:01.865 +------------------------------ +• [0.084 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:15:01.787 + Oct 13 09:15:01.787: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename events 10/13/23 09:15:01.788 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.808 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.811 + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + STEP: Create set of events 10/13/23 09:15:01.813 + Oct 13 09:15:01.820: INFO: created test-event-1 + Oct 13 09:15:01.825: INFO: created test-event-2 + Oct 13 09:15:01.830: INFO: created test-event-3 + STEP: get a list of Events with a label in the current namespace 10/13/23 09:15:01.83 + STEP: delete collection of events 10/13/23 09:15:01.833 + Oct 13 09:15:01.833: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 10/13/23 09:15:01.858 + Oct 13 09:15:01.858: INFO: requesting list of events to confirm quantity + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 + Oct 13 09:15:01.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 + STEP: Destroying namespace "events-9030" for this suite. 
10/13/23 09:15:01.865 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:15:01.872 +Oct 13 09:15:01.872: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 09:15:01.873 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.888 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.891 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +STEP: Creating pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 in namespace container-probe-2896 10/13/23 09:15:01.893 +Oct 13 09:15:01.900: INFO: Waiting up to 5m0s for pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456" in namespace "container-probe-2896" to be "not pending" +Oct 13 09:15:01.908: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456": Phase="Pending", Reason="", readiness=false. Elapsed: 7.735772ms +Oct 13 09:15:03.916: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456": Phase="Running", Reason="", readiness=true. Elapsed: 2.01585911s +Oct 13 09:15:03.916: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456" satisfied condition "not pending" +Oct 13 09:15:03.916: INFO: Started pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 in namespace container-probe-2896 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:15:03.916 +Oct 13 09:15:03.920: INFO: Initial restart count of pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 is 0 +STEP: deleting the pod 10/13/23 09:19:04.626 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 09:19:04.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-2896" for this suite. 
10/13/23 09:19:04.646 +------------------------------ +• [SLOW TEST] [242.780 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:15:01.872 + Oct 13 09:15:01.872: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 09:15:01.873 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:15:01.888 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:15:01.891 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 + STEP: Creating pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 in namespace container-probe-2896 10/13/23 09:15:01.893 + Oct 13 09:15:01.900: INFO: Waiting up to 5m0s for pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456" in namespace "container-probe-2896" to be "not pending" + Oct 13 09:15:01.908: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456": Phase="Pending", Reason="", readiness=false. Elapsed: 7.735772ms + Oct 13 09:15:03.916: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456": Phase="Running", Reason="", readiness=true. Elapsed: 2.01585911s + Oct 13 09:15:03.916: INFO: Pod "test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456" satisfied condition "not pending" + Oct 13 09:15:03.916: INFO: Started pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 in namespace container-probe-2896 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:15:03.916 + Oct 13 09:15:03.920: INFO: Initial restart count of pod test-webserver-1d81ec29-dd1b-4a0c-b9af-957622a53456 is 0 + STEP: deleting the pod 10/13/23 09:19:04.626 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 09:19:04.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-2896" for this suite. 
10/13/23 09:19:04.646 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:19:04.653 +Oct 13 09:19:04.653: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption 10/13/23 09:19:04.654 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:19:04.672 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:19:04.675 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 +STEP: creating the pdb 10/13/23 09:19:04.677 +STEP: Waiting for the pdb to be processed 10/13/23 09:19:04.687 +STEP: updating the pdb 10/13/23 09:19:06.695 +STEP: Waiting for the pdb to be processed 10/13/23 09:19:06.704 +STEP: patching the pdb 10/13/23 09:19:08.714 +STEP: Waiting for the pdb to be processed 10/13/23 09:19:08.729 +STEP: Waiting for the pdb to be deleted 10/13/23 09:19:10.742 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:19:10.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-8315" for this suite. 
10/13/23 09:19:10.749 +------------------------------ +• [SLOW TEST] [6.101 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:19:04.653 + Oct 13 09:19:04.653: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption 10/13/23 09:19:04.654 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:19:04.672 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:19:04.675 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:108 + STEP: creating the pdb 10/13/23 09:19:04.677 + STEP: Waiting for the pdb to be processed 10/13/23 09:19:04.687 + STEP: updating the pdb 10/13/23 09:19:06.695 + STEP: Waiting for the pdb to be processed 10/13/23 09:19:06.704 + STEP: patching the pdb 10/13/23 09:19:08.714 + STEP: Waiting for the pdb to be processed 10/13/23 09:19:08.729 + STEP: Waiting for the pdb to be deleted 10/13/23 09:19:10.742 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:19:10.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-8315" for this suite. 10/13/23 09:19:10.749 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:19:10.755 +Oct 13 09:19:10.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 09:19:10.756 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:19:10.77 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:19:10.772 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +STEP: Creating pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb in namespace container-probe-7315 10/13/23 09:19:10.775 +Oct 13 09:19:10.783: INFO: Waiting up to 5m0s for pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb" in namespace "container-probe-7315" to be "not pending" +Oct 13 09:19:10.787: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.446083ms +Oct 13 09:19:12.792: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb": Phase="Running", Reason="", readiness=true. Elapsed: 2.008096398s +Oct 13 09:19:12.792: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb" satisfied condition "not pending" +Oct 13 09:19:12.792: INFO: Started pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb in namespace container-probe-7315 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:19:12.792 +Oct 13 09:19:12.795: INFO: Initial restart count of pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb is 0 +STEP: deleting the pod 10/13/23 09:23:13.533 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:13.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-7315" for this suite. 10/13/23 09:23:13.559 +------------------------------ +• [SLOW TEST] [242.810 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:19:10.755 + Oct 13 09:19:10.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 09:19:10.756 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:19:10.77 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:19:10.772 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 + STEP: Creating pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb in namespace container-probe-7315 10/13/23 09:19:10.775 + Oct 13 09:19:10.783: INFO: Waiting up to 5m0s for pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb" in namespace "container-probe-7315" to be "not pending" + Oct 13 09:19:10.787: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.446083ms + Oct 13 09:19:12.792: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008096398s + Oct 13 09:19:12.792: INFO: Pod "liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb" satisfied condition "not pending" + Oct 13 09:19:12.792: INFO: Started pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb in namespace container-probe-7315 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:19:12.792 + Oct 13 09:19:12.795: INFO: Initial restart count of pod liveness-f9f759b4-750a-41d3-81c7-a1ed3906a2bb is 0 + STEP: deleting the pod 10/13/23 09:23:13.533 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:13.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-7315" for this suite. 10/13/23 09:23:13.559 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:13.566 +Oct 13 09:23:13.566: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:23:13.567 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:13.585 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:13.589 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +STEP: Creating configMap with name configmap-test-volume-06f54c30-1dab-4b77-b972-8e358e1fdc43 10/13/23 09:23:13.591 +STEP: Creating a pod to test consume configMaps 10/13/23 09:23:13.596 +Oct 13 09:23:13.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253" in namespace "configmap-9704" to be "Succeeded or Failed" +Oct 13 09:23:13.609: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194024ms +Oct 13 09:23:15.615: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010051399s +Oct 13 09:23:17.617: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01167873s +STEP: Saw pod success 10/13/23 09:23:17.617 +Oct 13 09:23:17.617: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253" satisfied condition "Succeeded or Failed" +Oct 13 09:23:17.621: INFO: Trying to get logs from node node2 pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 container configmap-volume-test: +STEP: delete the pod 10/13/23 09:23:17.64 +Oct 13 09:23:17.657: INFO: Waiting for pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 to disappear +Oct 13 09:23:17.661: INFO: Pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-9704" for this suite. 10/13/23 09:23:17.664 +------------------------------ +• [4.103 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:13.566 + Oct 13 09:23:13.566: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:23:13.567 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:13.585 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:13.589 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 + STEP: Creating configMap with name configmap-test-volume-06f54c30-1dab-4b77-b972-8e358e1fdc43 10/13/23 09:23:13.591 + STEP: Creating a pod to test consume configMaps 10/13/23 09:23:13.596 + Oct 13 09:23:13.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253" in namespace "configmap-9704" to be "Succeeded or Failed" + Oct 13 09:23:13.609: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194024ms + Oct 13 09:23:15.615: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010051399s + Oct 13 09:23:17.617: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01167873s + STEP: Saw pod success 10/13/23 09:23:17.617 + Oct 13 09:23:17.617: INFO: Pod "pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253" satisfied condition "Succeeded or Failed" + Oct 13 09:23:17.621: INFO: Trying to get logs from node node2 pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 container configmap-volume-test: + STEP: delete the pod 10/13/23 09:23:17.64 + Oct 13 09:23:17.657: INFO: Waiting for pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 to disappear + Oct 13 09:23:17.661: INFO: Pod pod-configmaps-f2f1a376-5d2e-413b-87dd-a53d7abb9253 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-9704" for this suite. 10/13/23 09:23:17.664 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:17.672 +Oct 13 09:23:17.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption 10/13/23 09:23:17.673 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:17.689 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:17.691 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:17.694 +Oct 13 09:23:17.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption-2 10/13/23 09:23:17.695 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:17.711 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:17.714 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +STEP: Waiting for the pdb to be processed 10/13/23 09:23:17.723 +STEP: Waiting for the pdb to be processed 10/13/23 09:23:19.735 +STEP: Waiting for the pdb to be processed 10/13/23 09:23:21.75 +STEP: listing a collection of PDBs across all namespaces 10/13/23 09:23:23.761 +STEP: listing a collection of PDBs in namespace disruption-7568 10/13/23 09:23:23.764 +STEP: deleting a collection of PDBs 10/13/23 09:23:23.767 +STEP: Waiting for the PDB collection to be deleted 10/13/23 09:23:23.777 +[AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:23.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-apps] 
DisruptionController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + dump namespaces | framework.go:196 +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-2-1710" for this suite. 10/13/23 09:23:23.786 +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-7568" for this suite. 10/13/23 09:23:23.792 +------------------------------ +• [SLOW TEST] [6.124 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + test/e2e/apps/disruption.go:78 + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:17.672 + Oct 13 09:23:17.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption 10/13/23 09:23:17.673 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:17.689 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:17.691 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:17.694 + Oct 13 09:23:17.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption-2 10/13/23 09:23:17.695 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:17.711 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:17.714 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 + [It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 + STEP: Waiting for the pdb to be processed 10/13/23 09:23:17.723 + STEP: Waiting for the pdb to be processed 10/13/23 09:23:19.735 + STEP: Waiting for the pdb to be processed 10/13/23 09:23:21.75 + STEP: listing a collection of PDBs across all namespaces 10/13/23 09:23:23.761 + STEP: listing a collection of PDBs in namespace disruption-7568 10/13/23 09:23:23.764 + STEP: deleting a collection of PDBs 10/13/23 09:23:23.767 + STEP: Waiting for the PDB collection to be deleted 10/13/23 09:23:23.777 + [AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:23.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + 
[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + dump namespaces | framework.go:196 + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-2-1710" for this suite. 10/13/23 09:23:23.786 + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-7568" for this suite. 10/13/23 09:23:23.792 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:23.797 +Oct 13 09:23:23.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:23:23.799 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:23.814 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:23.816 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:23:23.829 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:23:24.475 +STEP: Deploying the webhook pod 10/13/23 09:23:24.483 +STEP: Wait for the deployment to be ready 10/13/23 09:23:24.495 +Oct 13 09:23:24.504: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:23:26.517 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:23:26.531 +Oct 13 09:23:27.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 10/13/23 09:23:27.535 +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 10/13/23 09:23:27.55 +STEP: Creating a dummy validating-webhook-configuration object 10/13/23 09:23:27.561 +STEP: Deleting the validating-webhook-configuration, which should be possible to remove 10/13/23 09:23:27.568 +STEP: Creating a dummy mutating-webhook-configuration object 10/13/23 09:23:27.573 +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 10/13/23 09:23:27.581 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:27.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-3675" for this suite. 10/13/23 09:23:27.643 +STEP: Destroying namespace "webhook-3675-markers" for this suite. 10/13/23 09:23:27.658 +------------------------------ +• [3.868 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:23.797 + Oct 13 09:23:23.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:23:23.799 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:23.814 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:23.816 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:23:23.829 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:23:24.475 + STEP: Deploying the webhook pod 10/13/23 09:23:24.483 + STEP: Wait for the deployment to be ready 10/13/23 09:23:24.495 + Oct 13 09:23:24.504: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:23:26.517 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:23:26.531 + Oct 13 09:23:27.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 + STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 10/13/23 09:23:27.535 + STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 10/13/23 09:23:27.55 + STEP: Creating a dummy validating-webhook-configuration object 10/13/23 09:23:27.561 + STEP: Deleting the validating-webhook-configuration, which should be possible to remove 10/13/23 09:23:27.568 + STEP: Creating a dummy mutating-webhook-configuration object 10/13/23 09:23:27.573 + STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 10/13/23 09:23:27.581 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:27.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-3675" for this suite. 10/13/23 09:23:27.643 + STEP: Destroying namespace "webhook-3675-markers" for this suite. 10/13/23 09:23:27.658 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:27.665 +Oct 13 09:23:27.665: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename daemonsets 10/13/23 09:23:27.666 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:27.683 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:27.686 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +Oct 13 09:23:27.707: INFO: Create a RollingUpdate DaemonSet +Oct 13 09:23:27.712: INFO: Check that daemon pods launch on every node of the cluster +Oct 13 09:23:27.718: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 09:23:27.718: INFO: Node node1 is running 0 daemon pod, expected 1 +Oct 13 09:23:28.732: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Oct 13 09:23:28.732: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +Oct 13 09:23:28.732: INFO: Update the DaemonSet to trigger a rollout +Oct 13 09:23:28.741: INFO: Updating DaemonSet daemon-set +Oct 13 09:23:31.758: INFO: Roll back the DaemonSet before rollout is complete +Oct 13 09:23:31.767: INFO: Updating DaemonSet daemon-set +Oct 13 09:23:31.767: INFO: Make sure DaemonSet rollback is complete +Oct 13 09:23:31.770: INFO: Wrong image for pod: daemon-set-lfql4. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. 
+Oct 13 09:23:31.770: INFO: Pod daemon-set-lfql4 is not available +Oct 13 09:23:33.778: INFO: Pod daemon-set-b6dx7 is not available +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 10/13/23 09:23:33.787 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9278, will wait for the garbage collector to delete the pods 10/13/23 09:23:33.787 +Oct 13 09:23:33.846: INFO: Deleting DaemonSet.extensions daemon-set took: 6.95343ms +Oct 13 09:23:33.947: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.146532ms +Oct 13 09:23:35.751: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Oct 13 09:23:35.751: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Oct 13 09:23:35.753: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"30708"},"items":null} + +Oct 13 09:23:35.756: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"30708"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:35.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-9278" for this suite. 10/13/23 09:23:35.767 +------------------------------ +• [SLOW TEST] [8.107 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:27.665 + Oct 13 09:23:27.665: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename daemonsets 10/13/23 09:23:27.666 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:27.683 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:27.686 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 + Oct 13 09:23:27.707: INFO: Create a RollingUpdate DaemonSet + Oct 13 09:23:27.712: INFO: Check that daemon pods launch on every node of the cluster + Oct 13 09:23:27.718: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 09:23:27.718: INFO: Node node1 is running 0 daemon pod, expected 1 + Oct 13 09:23:28.732: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Oct 13 09:23:28.732: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + Oct 13 09:23:28.732: INFO: Update the DaemonSet to trigger a rollout + Oct 13 09:23:28.741: INFO: Updating DaemonSet daemon-set + Oct 13 09:23:31.758: INFO: Roll back the DaemonSet before rollout is complete + Oct 13 09:23:31.767: INFO: Updating DaemonSet daemon-set + Oct 13 09:23:31.767: INFO: Make sure DaemonSet rollback is complete + Oct 13 09:23:31.770: INFO: Wrong 
image for pod: daemon-set-lfql4. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. + Oct 13 09:23:31.770: INFO: Pod daemon-set-lfql4 is not available + Oct 13 09:23:33.778: INFO: Pod daemon-set-b6dx7 is not available + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 10/13/23 09:23:33.787 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9278, will wait for the garbage collector to delete the pods 10/13/23 09:23:33.787 + Oct 13 09:23:33.846: INFO: Deleting DaemonSet.extensions daemon-set took: 6.95343ms + Oct 13 09:23:33.947: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.146532ms + Oct 13 09:23:35.751: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Oct 13 09:23:35.751: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Oct 13 09:23:35.753: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"30708"},"items":null} + + Oct 13 09:23:35.756: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"30708"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:35.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-9278" for this suite. 10/13/23 09:23:35.767 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:35.773 +Oct 13 09:23:35.773: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:23:35.774 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:35.788 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:35.79 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +Oct 13 09:23:35.799: INFO: Waiting up to 2m0s for pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" in namespace "var-expansion-3144" to be "container 0 failed with reason CreateContainerConfigError" +Oct 13 09:23:35.803: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352512ms +Oct 13 09:23:37.809: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00947934s +Oct 13 09:23:37.809: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Oct 13 09:23:37.809: INFO: Deleting pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" in namespace "var-expansion-3144" +Oct 13 09:23:37.818: INFO: Wait up to 5m0s for pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-3144" for this suite. 10/13/23 09:23:39.84 +------------------------------ +• [4.075 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:35.773 + Oct 13 09:23:35.773: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:23:35.774 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:35.788 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:35.79 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 + Oct 13 09:23:35.799: INFO: Waiting up to 2m0s for pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" in namespace "var-expansion-3144" to be "container 0 failed with reason CreateContainerConfigError" + Oct 13 09:23:35.803: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352512ms + Oct 13 09:23:37.809: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00947934s + Oct 13 09:23:37.809: INFO: Pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Oct 13 09:23:37.809: INFO: Deleting pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" in namespace "var-expansion-3144" + Oct 13 09:23:37.818: INFO: Wait up to 5m0s for pod "var-expansion-7b89fa38-79b5-4c4c-85ba-45cd51d00c54" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-3144" for this suite. 
10/13/23 09:23:39.84 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:39.85 +Oct 13 09:23:39.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 09:23:39.851 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.867 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.869 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +STEP: creating a Namespace 10/13/23 09:23:39.872 +STEP: patching the Namespace 10/13/23 09:23:39.891 +STEP: get the Namespace and ensuring it has the label 10/13/23 09:23:39.896 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:39.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-2126" for this suite. 10/13/23 09:23:39.903 +STEP: Destroying namespace "nspatchtest-ae6bcefa-1e15-40f7-a8a3-40c87a606aaf-7520" for this suite. 10/13/23 09:23:39.908 +------------------------------ +• [0.064 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:39.85 + Oct 13 09:23:39.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 09:23:39.851 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.867 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.869 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 + STEP: creating a Namespace 10/13/23 09:23:39.872 + STEP: patching the Namespace 10/13/23 09:23:39.891 + STEP: get the Namespace and ensuring it has the label 10/13/23 09:23:39.896 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:39.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-2126" for this suite. 
10/13/23 09:23:39.903 + STEP: Destroying namespace "nspatchtest-ae6bcefa-1e15-40f7-a8a3-40c87a606aaf-7520" for this suite. 10/13/23 09:23:39.908 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:39.914 +Oct 13 09:23:39.914: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:23:39.915 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.929 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.932 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +STEP: Deleting RuntimeClass runtimeclass-7090-delete-me 10/13/23 09:23:39.941 +STEP: Waiting for the RuntimeClass to disappear 10/13/23 09:23:39.947 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:39.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-7090" for this suite. 10/13/23 09:23:39.961 +------------------------------ +• [0.053 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:39.914 + Oct 13 09:23:39.914: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:23:39.915 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.929 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.932 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 + STEP: Deleting RuntimeClass runtimeclass-7090-delete-me 10/13/23 09:23:39.941 + STEP: Waiting for the RuntimeClass to disappear 10/13/23 09:23:39.947 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:39.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-7090" for this suite. 
10/13/23 09:23:39.961 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +[BeforeEach] [sig-network] Service endpoints latency + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:39.969 +Oct 13 09:23:39.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svc-latency 10/13/23 09:23:39.97 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.986 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.989 +[BeforeEach] [sig-network] Service endpoints latency + test/e2e/framework/metrics/init/init.go:31 +[It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +Oct 13 09:23:39.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-8055 10/13/23 09:23:39.993 +I1013 09:23:39.998557 23 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8055, replica count: 1 +I1013 09:23:41.050032 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I1013 09:23:42.050935 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 09:23:42.166: INFO: Created: latency-svc-27h2r +Oct 13 09:23:42.172: INFO: Got endpoints: latency-svc-27h2r [21.544646ms] +Oct 13 09:23:42.194: INFO: Created: latency-svc-m7sm4 +Oct 13 09:23:42.203: INFO: Got endpoints: latency-svc-m7sm4 [30.379694ms] +Oct 13 09:23:42.207: INFO: Created: latency-svc-z8vcb +Oct 13 09:23:42.214: INFO: Got endpoints: latency-svc-z8vcb [41.72386ms] +Oct 13 09:23:42.219: INFO: Created: latency-svc-l5mx5 +Oct 13 09:23:42.228: INFO: Got endpoints: latency-svc-l5mx5 [55.668666ms] +Oct 13 09:23:42.230: INFO: Created: latency-svc-sn4tc +Oct 13 09:23:42.239: INFO: Got endpoints: latency-svc-sn4tc [66.295566ms] +Oct 13 09:23:42.243: INFO: Created: latency-svc-9snf7 +Oct 13 09:23:42.254: INFO: Got endpoints: latency-svc-9snf7 [81.012567ms] +Oct 13 09:23:42.257: INFO: Created: latency-svc-mflbf +Oct 13 09:23:42.266: INFO: Got endpoints: latency-svc-mflbf [93.176146ms] +Oct 13 09:23:42.270: INFO: Created: latency-svc-m4lxz +Oct 13 09:23:42.278: INFO: Got endpoints: latency-svc-m4lxz [104.871353ms] +Oct 13 09:23:42.280: INFO: Created: latency-svc-62kxc +Oct 13 09:23:42.289: INFO: Got endpoints: latency-svc-62kxc [115.885006ms] +Oct 13 09:23:42.299: INFO: Created: latency-svc-7cwn4 +Oct 13 09:23:42.306: INFO: Got endpoints: latency-svc-7cwn4 [133.478541ms] +Oct 13 09:23:42.310: INFO: Created: latency-svc-blkpl +Oct 13 09:23:42.319: INFO: Got endpoints: latency-svc-blkpl [146.550881ms] +Oct 13 09:23:42.322: INFO: Created: latency-svc-9qj9m +Oct 13 09:23:42.329: INFO: Got endpoints: latency-svc-9qj9m [156.241309ms] +Oct 13 09:23:42.333: INFO: Created: latency-svc-fb2q5 +Oct 13 09:23:42.335: INFO: Got endpoints: latency-svc-fb2q5 [162.845442ms] +Oct 13 09:23:42.341: INFO: Created: latency-svc-hr2ld +Oct 13 09:23:42.349: INFO: Got endpoints: latency-svc-hr2ld [175.831584ms] +Oct 13 09:23:42.351: INFO: Created: latency-svc-24z54 +Oct 13 09:23:42.356: INFO: Got endpoints: 
latency-svc-24z54 [183.474915ms] +Oct 13 09:23:42.361: INFO: Created: latency-svc-zcbdv +Oct 13 09:23:42.369: INFO: Got endpoints: latency-svc-zcbdv [195.950268ms] +Oct 13 09:23:42.372: INFO: Created: latency-svc-ctxm5 +Oct 13 09:23:42.379: INFO: Got endpoints: latency-svc-ctxm5 [175.945588ms] +Oct 13 09:23:42.381: INFO: Created: latency-svc-4tfmh +Oct 13 09:23:42.392: INFO: Got endpoints: latency-svc-4tfmh [177.121551ms] +Oct 13 09:23:42.395: INFO: Created: latency-svc-7mcz9 +Oct 13 09:23:42.402: INFO: Got endpoints: latency-svc-7mcz9 [173.520307ms] +Oct 13 09:23:42.405: INFO: Created: latency-svc-vrbcd +Oct 13 09:23:42.411: INFO: Got endpoints: latency-svc-vrbcd [172.548875ms] +Oct 13 09:23:42.418: INFO: Created: latency-svc-pb5ps +Oct 13 09:23:42.425: INFO: Got endpoints: latency-svc-pb5ps [171.419277ms] +Oct 13 09:23:42.429: INFO: Created: latency-svc-2vwcp +Oct 13 09:23:42.435: INFO: Got endpoints: latency-svc-2vwcp [169.132568ms] +Oct 13 09:23:42.438: INFO: Created: latency-svc-jcx5c +Oct 13 09:23:42.444: INFO: Got endpoints: latency-svc-jcx5c [166.743014ms] +Oct 13 09:23:42.448: INFO: Created: latency-svc-zznnb +Oct 13 09:23:42.454: INFO: Got endpoints: latency-svc-zznnb [165.003903ms] +Oct 13 09:23:42.456: INFO: Created: latency-svc-v7tkr +Oct 13 09:23:42.463: INFO: Got endpoints: latency-svc-v7tkr [156.323283ms] +Oct 13 09:23:42.465: INFO: Created: latency-svc-l4pjb +Oct 13 09:23:42.472: INFO: Got endpoints: latency-svc-l4pjb [152.158974ms] +Oct 13 09:23:42.478: INFO: Created: latency-svc-vnphm +Oct 13 09:23:42.484: INFO: Got endpoints: latency-svc-vnphm [154.900774ms] +Oct 13 09:23:42.488: INFO: Created: latency-svc-z5x7r +Oct 13 09:23:42.498: INFO: Got endpoints: latency-svc-z5x7r [162.572596ms] +Oct 13 09:23:42.501: INFO: Created: latency-svc-rhfgk +Oct 13 09:23:42.507: INFO: Got endpoints: latency-svc-rhfgk [158.585162ms] +Oct 13 09:23:42.510: INFO: Created: latency-svc-pgssl +Oct 13 09:23:42.517: INFO: Got endpoints: latency-svc-pgssl [160.676915ms] +Oct 13 09:23:42.519: INFO: Created: latency-svc-768l6 +Oct 13 09:23:42.526: INFO: Got endpoints: latency-svc-768l6 [157.193884ms] +Oct 13 09:23:42.528: INFO: Created: latency-svc-6xq4t +Oct 13 09:23:42.535: INFO: Got endpoints: latency-svc-6xq4t [155.690524ms] +Oct 13 09:23:42.538: INFO: Created: latency-svc-cgt6w +Oct 13 09:23:42.545: INFO: Got endpoints: latency-svc-cgt6w [153.040967ms] +Oct 13 09:23:42.547: INFO: Created: latency-svc-nplpc +Oct 13 09:23:42.553: INFO: Got endpoints: latency-svc-nplpc [151.498681ms] +Oct 13 09:23:42.557: INFO: Created: latency-svc-6cgxs +Oct 13 09:23:42.565: INFO: Got endpoints: latency-svc-6cgxs [153.217956ms] +Oct 13 09:23:42.568: INFO: Created: latency-svc-ndl2d +Oct 13 09:23:42.577: INFO: Got endpoints: latency-svc-ndl2d [151.668985ms] +Oct 13 09:23:42.580: INFO: Created: latency-svc-x6jw9 +Oct 13 09:23:42.587: INFO: Got endpoints: latency-svc-x6jw9 [151.37877ms] +Oct 13 09:23:42.589: INFO: Created: latency-svc-zsnmd +Oct 13 09:23:42.597: INFO: Got endpoints: latency-svc-zsnmd [152.981089ms] +Oct 13 09:23:42.605: INFO: Created: latency-svc-mwtdn +Oct 13 09:23:42.624: INFO: Created: latency-svc-pvnt2 +Oct 13 09:23:42.632: INFO: Got endpoints: latency-svc-mwtdn [177.927549ms] +Oct 13 09:23:42.642: INFO: Created: latency-svc-6rshl +Oct 13 09:23:42.650: INFO: Created: latency-svc-qlq59 +Oct 13 09:23:42.659: INFO: Created: latency-svc-hmppp +Oct 13 09:23:42.666: INFO: Created: latency-svc-j2z48 +Oct 13 09:23:42.675: INFO: Got endpoints: latency-svc-pvnt2 [212.002749ms] +Oct 13 09:23:42.677: INFO: 
Created: latency-svc-jfzbm +Oct 13 09:23:42.684: INFO: Created: latency-svc-gpvqq +Oct 13 09:23:42.693: INFO: Created: latency-svc-qlt84 +Oct 13 09:23:42.701: INFO: Created: latency-svc-x5twv +Oct 13 09:23:42.716: INFO: Created: latency-svc-t7c84 +Oct 13 09:23:42.722: INFO: Got endpoints: latency-svc-6rshl [250.384174ms] +Oct 13 09:23:42.725: INFO: Created: latency-svc-k5vpf +Oct 13 09:23:42.733: INFO: Created: latency-svc-jkjl2 +Oct 13 09:23:42.740: INFO: Created: latency-svc-c5tff +Oct 13 09:23:42.747: INFO: Created: latency-svc-4rbvt +Oct 13 09:23:42.757: INFO: Created: latency-svc-v9jwr +Oct 13 09:23:42.764: INFO: Created: latency-svc-vmcdz +Oct 13 09:23:42.771: INFO: Created: latency-svc-l5p9j +Oct 13 09:23:42.773: INFO: Got endpoints: latency-svc-qlq59 [288.652778ms] +Oct 13 09:23:42.784: INFO: Created: latency-svc-56p44 +Oct 13 09:23:42.822: INFO: Got endpoints: latency-svc-hmppp [323.916064ms] +Oct 13 09:23:42.836: INFO: Created: latency-svc-j9xhv +Oct 13 09:23:42.874: INFO: Got endpoints: latency-svc-j2z48 [366.269599ms] +Oct 13 09:23:42.885: INFO: Created: latency-svc-c9gtg +Oct 13 09:23:42.923: INFO: Got endpoints: latency-svc-jfzbm [406.26641ms] +Oct 13 09:23:42.935: INFO: Created: latency-svc-8gfgq +Oct 13 09:23:42.973: INFO: Got endpoints: latency-svc-gpvqq [447.295797ms] +Oct 13 09:23:42.989: INFO: Created: latency-svc-6chn8 +Oct 13 09:23:43.025: INFO: Got endpoints: latency-svc-qlt84 [490.725456ms] +Oct 13 09:23:43.039: INFO: Created: latency-svc-6rbp9 +Oct 13 09:23:43.072: INFO: Got endpoints: latency-svc-x5twv [527.762529ms] +Oct 13 09:23:43.083: INFO: Created: latency-svc-2kfw8 +Oct 13 09:23:43.122: INFO: Got endpoints: latency-svc-t7c84 [568.614542ms] +Oct 13 09:23:43.136: INFO: Created: latency-svc-qqv6c +Oct 13 09:23:43.173: INFO: Got endpoints: latency-svc-k5vpf [608.217754ms] +Oct 13 09:23:43.190: INFO: Created: latency-svc-lc2mh +Oct 13 09:23:43.224: INFO: Got endpoints: latency-svc-jkjl2 [647.382806ms] +Oct 13 09:23:43.237: INFO: Created: latency-svc-ghhjc +Oct 13 09:23:43.272: INFO: Got endpoints: latency-svc-c5tff [685.31686ms] +Oct 13 09:23:43.283: INFO: Created: latency-svc-xggsh +Oct 13 09:23:43.322: INFO: Got endpoints: latency-svc-4rbvt [724.291481ms] +Oct 13 09:23:43.335: INFO: Created: latency-svc-jfcvh +Oct 13 09:23:43.372: INFO: Got endpoints: latency-svc-v9jwr [740.251058ms] +Oct 13 09:23:43.383: INFO: Created: latency-svc-8vpsr +Oct 13 09:23:43.422: INFO: Got endpoints: latency-svc-vmcdz [747.602928ms] +Oct 13 09:23:43.438: INFO: Created: latency-svc-xlwj2 +Oct 13 09:23:43.472: INFO: Got endpoints: latency-svc-l5p9j [750.039792ms] +Oct 13 09:23:43.483: INFO: Created: latency-svc-jjvhc +Oct 13 09:23:43.522: INFO: Got endpoints: latency-svc-56p44 [749.39757ms] +Oct 13 09:23:43.533: INFO: Created: latency-svc-skp2q +Oct 13 09:23:43.573: INFO: Got endpoints: latency-svc-j9xhv [751.181379ms] +Oct 13 09:23:43.584: INFO: Created: latency-svc-ndfw9 +Oct 13 09:23:43.622: INFO: Got endpoints: latency-svc-c9gtg [748.077453ms] +Oct 13 09:23:43.633: INFO: Created: latency-svc-nhlhn +Oct 13 09:23:43.673: INFO: Got endpoints: latency-svc-8gfgq [749.232692ms] +Oct 13 09:23:43.684: INFO: Created: latency-svc-2sdv8 +Oct 13 09:23:43.722: INFO: Got endpoints: latency-svc-6chn8 [748.426102ms] +Oct 13 09:23:43.733: INFO: Created: latency-svc-dkpcl +Oct 13 09:23:43.772: INFO: Got endpoints: latency-svc-6rbp9 [746.619376ms] +Oct 13 09:23:43.785: INFO: Created: latency-svc-76tgr +Oct 13 09:23:43.822: INFO: Got endpoints: latency-svc-2kfw8 [749.427964ms] +Oct 13 09:23:43.834: 
INFO: Created: latency-svc-7rlc7 +Oct 13 09:23:43.873: INFO: Got endpoints: latency-svc-qqv6c [750.368765ms] +Oct 13 09:23:43.884: INFO: Created: latency-svc-8r5wd +Oct 13 09:23:43.923: INFO: Got endpoints: latency-svc-lc2mh [749.937391ms] +Oct 13 09:23:43.935: INFO: Created: latency-svc-s4fw5 +Oct 13 09:23:43.972: INFO: Got endpoints: latency-svc-ghhjc [748.126065ms] +Oct 13 09:23:43.985: INFO: Created: latency-svc-57t98 +Oct 13 09:23:44.021: INFO: Got endpoints: latency-svc-xggsh [749.414356ms] +Oct 13 09:23:44.036: INFO: Created: latency-svc-vw5cs +Oct 13 09:23:44.072: INFO: Got endpoints: latency-svc-jfcvh [749.647786ms] +Oct 13 09:23:44.084: INFO: Created: latency-svc-sh88n +Oct 13 09:23:44.122: INFO: Got endpoints: latency-svc-8vpsr [750.071677ms] +Oct 13 09:23:44.135: INFO: Created: latency-svc-jcc4n +Oct 13 09:23:44.173: INFO: Got endpoints: latency-svc-xlwj2 [750.303336ms] +Oct 13 09:23:44.190: INFO: Created: latency-svc-95qbj +Oct 13 09:23:44.224: INFO: Got endpoints: latency-svc-jjvhc [751.631578ms] +Oct 13 09:23:44.238: INFO: Created: latency-svc-nrls8 +Oct 13 09:23:44.272: INFO: Got endpoints: latency-svc-skp2q [749.723313ms] +Oct 13 09:23:44.285: INFO: Created: latency-svc-bsf72 +Oct 13 09:23:44.324: INFO: Got endpoints: latency-svc-ndfw9 [750.259836ms] +Oct 13 09:23:44.336: INFO: Created: latency-svc-j8ljh +Oct 13 09:23:44.373: INFO: Got endpoints: latency-svc-nhlhn [750.986769ms] +Oct 13 09:23:44.386: INFO: Created: latency-svc-2v7wp +Oct 13 09:23:44.423: INFO: Got endpoints: latency-svc-2sdv8 [750.132067ms] +Oct 13 09:23:44.434: INFO: Created: latency-svc-fjmrn +Oct 13 09:23:44.474: INFO: Got endpoints: latency-svc-dkpcl [752.252883ms] +Oct 13 09:23:44.486: INFO: Created: latency-svc-z6x8p +Oct 13 09:23:44.522: INFO: Got endpoints: latency-svc-76tgr [749.901973ms] +Oct 13 09:23:44.533: INFO: Created: latency-svc-bmbjm +Oct 13 09:23:44.572: INFO: Got endpoints: latency-svc-7rlc7 [750.050531ms] +Oct 13 09:23:44.585: INFO: Created: latency-svc-z69hd +Oct 13 09:23:44.622: INFO: Got endpoints: latency-svc-8r5wd [749.317406ms] +Oct 13 09:23:44.636: INFO: Created: latency-svc-jscgr +Oct 13 09:23:44.673: INFO: Got endpoints: latency-svc-s4fw5 [749.609848ms] +Oct 13 09:23:44.685: INFO: Created: latency-svc-9lwwd +Oct 13 09:23:44.723: INFO: Got endpoints: latency-svc-57t98 [750.498604ms] +Oct 13 09:23:44.735: INFO: Created: latency-svc-fj2f5 +Oct 13 09:23:44.772: INFO: Got endpoints: latency-svc-vw5cs [750.720376ms] +Oct 13 09:23:44.789: INFO: Created: latency-svc-9r9g7 +Oct 13 09:23:44.823: INFO: Got endpoints: latency-svc-sh88n [751.258544ms] +Oct 13 09:23:44.834: INFO: Created: latency-svc-k72fs +Oct 13 09:23:44.872: INFO: Got endpoints: latency-svc-jcc4n [749.597501ms] +Oct 13 09:23:44.883: INFO: Created: latency-svc-mx2tl +Oct 13 09:23:44.921: INFO: Got endpoints: latency-svc-95qbj [748.106608ms] +Oct 13 09:23:44.934: INFO: Created: latency-svc-25x4m +Oct 13 09:23:44.973: INFO: Got endpoints: latency-svc-nrls8 [749.214405ms] +Oct 13 09:23:44.989: INFO: Created: latency-svc-zhj2h +Oct 13 09:23:45.021: INFO: Got endpoints: latency-svc-bsf72 [748.975803ms] +Oct 13 09:23:45.037: INFO: Created: latency-svc-86qhj +Oct 13 09:23:45.072: INFO: Got endpoints: latency-svc-j8ljh [748.249222ms] +Oct 13 09:23:45.107: INFO: Created: latency-svc-nmznq +Oct 13 09:23:45.122: INFO: Got endpoints: latency-svc-2v7wp [749.120643ms] +Oct 13 09:23:45.135: INFO: Created: latency-svc-9j4bg +Oct 13 09:23:45.172: INFO: Got endpoints: latency-svc-fjmrn [748.82874ms] +Oct 13 09:23:45.185: INFO: Created: 
latency-svc-gjkx4 +Oct 13 09:23:45.223: INFO: Got endpoints: latency-svc-z6x8p [748.513547ms] +Oct 13 09:23:45.236: INFO: Created: latency-svc-qnwtp +Oct 13 09:23:45.272: INFO: Got endpoints: latency-svc-bmbjm [749.907265ms] +Oct 13 09:23:45.284: INFO: Created: latency-svc-4kh8p +Oct 13 09:23:45.322: INFO: Got endpoints: latency-svc-z69hd [750.098965ms] +Oct 13 09:23:45.334: INFO: Created: latency-svc-j74rb +Oct 13 09:23:45.373: INFO: Got endpoints: latency-svc-jscgr [751.014374ms] +Oct 13 09:23:45.384: INFO: Created: latency-svc-r84cz +Oct 13 09:23:45.421: INFO: Got endpoints: latency-svc-9lwwd [748.291517ms] +Oct 13 09:23:45.435: INFO: Created: latency-svc-2bmmb +Oct 13 09:23:45.472: INFO: Got endpoints: latency-svc-fj2f5 [749.075944ms] +Oct 13 09:23:45.483: INFO: Created: latency-svc-n42q8 +Oct 13 09:23:45.522: INFO: Got endpoints: latency-svc-9r9g7 [749.799014ms] +Oct 13 09:23:45.533: INFO: Created: latency-svc-ls26j +Oct 13 09:23:45.572: INFO: Got endpoints: latency-svc-k72fs [749.595674ms] +Oct 13 09:23:45.584: INFO: Created: latency-svc-wnd75 +Oct 13 09:23:45.623: INFO: Got endpoints: latency-svc-mx2tl [751.232612ms] +Oct 13 09:23:45.634: INFO: Created: latency-svc-rj9zp +Oct 13 09:23:45.674: INFO: Got endpoints: latency-svc-25x4m [752.960087ms] +Oct 13 09:23:45.685: INFO: Created: latency-svc-cd9zs +Oct 13 09:23:45.722: INFO: Got endpoints: latency-svc-zhj2h [749.256647ms] +Oct 13 09:23:45.735: INFO: Created: latency-svc-nkmkd +Oct 13 09:23:45.771: INFO: Got endpoints: latency-svc-86qhj [750.567893ms] +Oct 13 09:23:45.783: INFO: Created: latency-svc-vr2j5 +Oct 13 09:23:45.826: INFO: Got endpoints: latency-svc-nmznq [753.993433ms] +Oct 13 09:23:45.838: INFO: Created: latency-svc-bxfjx +Oct 13 09:23:45.872: INFO: Got endpoints: latency-svc-9j4bg [749.849944ms] +Oct 13 09:23:45.883: INFO: Created: latency-svc-gggfn +Oct 13 09:23:45.921: INFO: Got endpoints: latency-svc-gjkx4 [749.203825ms] +Oct 13 09:23:45.932: INFO: Created: latency-svc-s9lzm +Oct 13 09:23:45.972: INFO: Got endpoints: latency-svc-qnwtp [749.18384ms] +Oct 13 09:23:45.982: INFO: Created: latency-svc-jjxb6 +Oct 13 09:23:46.021: INFO: Got endpoints: latency-svc-4kh8p [749.376287ms] +Oct 13 09:23:46.035: INFO: Created: latency-svc-d7l9x +Oct 13 09:23:46.072: INFO: Got endpoints: latency-svc-j74rb [750.229717ms] +Oct 13 09:23:46.084: INFO: Created: latency-svc-hmf9z +Oct 13 09:23:46.122: INFO: Got endpoints: latency-svc-r84cz [748.924676ms] +Oct 13 09:23:46.139: INFO: Created: latency-svc-ntgl5 +Oct 13 09:23:46.173: INFO: Got endpoints: latency-svc-2bmmb [751.75429ms] +Oct 13 09:23:46.188: INFO: Created: latency-svc-2j6rj +Oct 13 09:23:46.224: INFO: Got endpoints: latency-svc-n42q8 [752.000094ms] +Oct 13 09:23:46.243: INFO: Created: latency-svc-gjqc8 +Oct 13 09:23:46.273: INFO: Got endpoints: latency-svc-ls26j [751.133481ms] +Oct 13 09:23:46.285: INFO: Created: latency-svc-vtvzj +Oct 13 09:23:46.323: INFO: Got endpoints: latency-svc-wnd75 [750.045415ms] +Oct 13 09:23:46.335: INFO: Created: latency-svc-vxclm +Oct 13 09:23:46.373: INFO: Got endpoints: latency-svc-rj9zp [750.150446ms] +Oct 13 09:23:46.384: INFO: Created: latency-svc-x7xl5 +Oct 13 09:23:46.423: INFO: Got endpoints: latency-svc-cd9zs [749.0355ms] +Oct 13 09:23:46.434: INFO: Created: latency-svc-4th75 +Oct 13 09:23:46.472: INFO: Got endpoints: latency-svc-nkmkd [749.347312ms] +Oct 13 09:23:46.483: INFO: Created: latency-svc-7r6dd +Oct 13 09:23:46.522: INFO: Got endpoints: latency-svc-vr2j5 [750.179979ms] +Oct 13 09:23:46.533: INFO: Created: latency-svc-jm7dh 
+Oct 13 09:23:46.573: INFO: Got endpoints: latency-svc-bxfjx [746.940438ms] +Oct 13 09:23:46.586: INFO: Created: latency-svc-9546m +Oct 13 09:23:46.623: INFO: Got endpoints: latency-svc-gggfn [750.922013ms] +Oct 13 09:23:46.636: INFO: Created: latency-svc-ll58t +Oct 13 09:23:46.672: INFO: Got endpoints: latency-svc-s9lzm [751.361995ms] +Oct 13 09:23:46.684: INFO: Created: latency-svc-mrgnn +Oct 13 09:23:46.722: INFO: Got endpoints: latency-svc-jjxb6 [750.421062ms] +Oct 13 09:23:46.733: INFO: Created: latency-svc-lsxfj +Oct 13 09:23:46.772: INFO: Got endpoints: latency-svc-d7l9x [751.104759ms] +Oct 13 09:23:46.784: INFO: Created: latency-svc-d9tfr +Oct 13 09:23:46.822: INFO: Got endpoints: latency-svc-hmf9z [749.723544ms] +Oct 13 09:23:46.834: INFO: Created: latency-svc-z9x9h +Oct 13 09:23:46.871: INFO: Got endpoints: latency-svc-ntgl5 [748.802729ms] +Oct 13 09:23:46.882: INFO: Created: latency-svc-tfhd5 +Oct 13 09:23:46.923: INFO: Got endpoints: latency-svc-2j6rj [749.782874ms] +Oct 13 09:23:46.934: INFO: Created: latency-svc-zfsvp +Oct 13 09:23:46.973: INFO: Got endpoints: latency-svc-gjqc8 [749.422001ms] +Oct 13 09:23:46.986: INFO: Created: latency-svc-vgrwx +Oct 13 09:23:47.023: INFO: Got endpoints: latency-svc-vtvzj [749.991172ms] +Oct 13 09:23:47.036: INFO: Created: latency-svc-mncsc +Oct 13 09:23:47.072: INFO: Got endpoints: latency-svc-vxclm [749.250092ms] +Oct 13 09:23:47.083: INFO: Created: latency-svc-7bxxf +Oct 13 09:23:47.123: INFO: Got endpoints: latency-svc-x7xl5 [749.793107ms] +Oct 13 09:23:47.135: INFO: Created: latency-svc-mqnjm +Oct 13 09:23:47.175: INFO: Got endpoints: latency-svc-4th75 [752.138892ms] +Oct 13 09:23:47.189: INFO: Created: latency-svc-b5zzw +Oct 13 09:23:47.224: INFO: Got endpoints: latency-svc-7r6dd [751.998398ms] +Oct 13 09:23:47.238: INFO: Created: latency-svc-dbsdb +Oct 13 09:23:47.273: INFO: Got endpoints: latency-svc-jm7dh [751.246779ms] +Oct 13 09:23:47.287: INFO: Created: latency-svc-n228h +Oct 13 09:23:47.322: INFO: Got endpoints: latency-svc-9546m [749.127229ms] +Oct 13 09:23:47.334: INFO: Created: latency-svc-qgn6n +Oct 13 09:23:47.372: INFO: Got endpoints: latency-svc-ll58t [749.641469ms] +Oct 13 09:23:47.386: INFO: Created: latency-svc-h7ctr +Oct 13 09:23:47.422: INFO: Got endpoints: latency-svc-mrgnn [750.109296ms] +Oct 13 09:23:47.435: INFO: Created: latency-svc-l5vn4 +Oct 13 09:23:47.474: INFO: Got endpoints: latency-svc-lsxfj [751.135942ms] +Oct 13 09:23:47.492: INFO: Created: latency-svc-qgsnm +Oct 13 09:23:47.522: INFO: Got endpoints: latency-svc-d9tfr [749.802769ms] +Oct 13 09:23:47.534: INFO: Created: latency-svc-lt925 +Oct 13 09:23:47.572: INFO: Got endpoints: latency-svc-z9x9h [749.721201ms] +Oct 13 09:23:47.583: INFO: Created: latency-svc-7xmpl +Oct 13 09:23:47.623: INFO: Got endpoints: latency-svc-tfhd5 [752.525289ms] +Oct 13 09:23:47.638: INFO: Created: latency-svc-flv7k +Oct 13 09:23:47.672: INFO: Got endpoints: latency-svc-zfsvp [749.034562ms] +Oct 13 09:23:47.684: INFO: Created: latency-svc-c26kh +Oct 13 09:23:47.723: INFO: Got endpoints: latency-svc-vgrwx [749.044428ms] +Oct 13 09:23:47.734: INFO: Created: latency-svc-bz8qw +Oct 13 09:23:47.773: INFO: Got endpoints: latency-svc-mncsc [749.707197ms] +Oct 13 09:23:47.787: INFO: Created: latency-svc-dv9pm +Oct 13 09:23:47.822: INFO: Got endpoints: latency-svc-7bxxf [749.938751ms] +Oct 13 09:23:47.837: INFO: Created: latency-svc-tr9pb +Oct 13 09:23:47.872: INFO: Got endpoints: latency-svc-mqnjm [749.430544ms] +Oct 13 09:23:47.883: INFO: Created: latency-svc-hb5d9 +Oct 13 
09:23:47.923: INFO: Got endpoints: latency-svc-b5zzw [747.83553ms] +Oct 13 09:23:47.934: INFO: Created: latency-svc-8k8xw +Oct 13 09:23:47.973: INFO: Got endpoints: latency-svc-dbsdb [749.620639ms] +Oct 13 09:23:47.984: INFO: Created: latency-svc-cp7nn +Oct 13 09:23:48.022: INFO: Got endpoints: latency-svc-n228h [749.322145ms] +Oct 13 09:23:48.033: INFO: Created: latency-svc-f7654 +Oct 13 09:23:48.072: INFO: Got endpoints: latency-svc-qgn6n [750.188377ms] +Oct 13 09:23:48.084: INFO: Created: latency-svc-5kc28 +Oct 13 09:23:48.124: INFO: Got endpoints: latency-svc-h7ctr [751.183443ms] +Oct 13 09:23:48.152: INFO: Created: latency-svc-4675l +Oct 13 09:23:48.172: INFO: Got endpoints: latency-svc-l5vn4 [749.765501ms] +Oct 13 09:23:48.189: INFO: Created: latency-svc-gmd4d +Oct 13 09:23:48.222: INFO: Got endpoints: latency-svc-qgsnm [748.524023ms] +Oct 13 09:23:48.234: INFO: Created: latency-svc-qn22q +Oct 13 09:23:48.272: INFO: Got endpoints: latency-svc-lt925 [750.030731ms] +Oct 13 09:23:48.284: INFO: Created: latency-svc-ltjtm +Oct 13 09:23:48.323: INFO: Got endpoints: latency-svc-7xmpl [751.250542ms] +Oct 13 09:23:48.335: INFO: Created: latency-svc-npzt2 +Oct 13 09:23:48.372: INFO: Got endpoints: latency-svc-flv7k [748.46198ms] +Oct 13 09:23:48.383: INFO: Created: latency-svc-pjljk +Oct 13 09:23:48.423: INFO: Got endpoints: latency-svc-c26kh [750.963061ms] +Oct 13 09:23:48.434: INFO: Created: latency-svc-lxnlt +Oct 13 09:23:48.472: INFO: Got endpoints: latency-svc-bz8qw [749.213566ms] +Oct 13 09:23:48.483: INFO: Created: latency-svc-4xbkl +Oct 13 09:23:48.524: INFO: Got endpoints: latency-svc-dv9pm [751.169412ms] +Oct 13 09:23:48.536: INFO: Created: latency-svc-9pk8g +Oct 13 09:23:48.573: INFO: Got endpoints: latency-svc-tr9pb [750.747921ms] +Oct 13 09:23:48.584: INFO: Created: latency-svc-2c7bz +Oct 13 09:23:48.622: INFO: Got endpoints: latency-svc-hb5d9 [749.968404ms] +Oct 13 09:23:48.640: INFO: Created: latency-svc-l6kks +Oct 13 09:23:48.672: INFO: Got endpoints: latency-svc-8k8xw [749.241464ms] +Oct 13 09:23:48.700: INFO: Created: latency-svc-vtdg8 +Oct 13 09:23:48.736: INFO: Got endpoints: latency-svc-cp7nn [762.188806ms] +Oct 13 09:23:48.769: INFO: Created: latency-svc-6b8cf +Oct 13 09:23:48.772: INFO: Got endpoints: latency-svc-f7654 [749.994463ms] +Oct 13 09:23:48.791: INFO: Created: latency-svc-z8fxv +Oct 13 09:23:48.831: INFO: Got endpoints: latency-svc-5kc28 [758.21013ms] +Oct 13 09:23:48.868: INFO: Created: latency-svc-hrtvz +Oct 13 09:23:48.871: INFO: Got endpoints: latency-svc-4675l [747.675817ms] +Oct 13 09:23:48.882: INFO: Created: latency-svc-b5bw2 +Oct 13 09:23:48.921: INFO: Got endpoints: latency-svc-gmd4d [748.769868ms] +Oct 13 09:23:48.932: INFO: Created: latency-svc-r7zqs +Oct 13 09:23:48.973: INFO: Got endpoints: latency-svc-qn22q [751.001529ms] +Oct 13 09:23:48.985: INFO: Created: latency-svc-494mb +Oct 13 09:23:49.022: INFO: Got endpoints: latency-svc-ltjtm [749.579495ms] +Oct 13 09:23:49.035: INFO: Created: latency-svc-tv7pn +Oct 13 09:23:49.073: INFO: Got endpoints: latency-svc-npzt2 [749.691054ms] +Oct 13 09:23:49.085: INFO: Created: latency-svc-8zhrl +Oct 13 09:23:49.122: INFO: Got endpoints: latency-svc-pjljk [749.933278ms] +Oct 13 09:23:49.133: INFO: Created: latency-svc-5cm5w +Oct 13 09:23:49.173: INFO: Got endpoints: latency-svc-lxnlt [750.223755ms] +Oct 13 09:23:49.188: INFO: Created: latency-svc-qgsjx +Oct 13 09:23:49.222: INFO: Got endpoints: latency-svc-4xbkl [750.601439ms] +Oct 13 09:23:49.234: INFO: Created: latency-svc-kvv9h +Oct 13 09:23:49.273: INFO: 
Got endpoints: latency-svc-9pk8g [748.37263ms] +Oct 13 09:23:49.284: INFO: Created: latency-svc-pq96b +Oct 13 09:23:49.323: INFO: Got endpoints: latency-svc-2c7bz [750.38374ms] +Oct 13 09:23:49.334: INFO: Created: latency-svc-lghpj +Oct 13 09:23:49.373: INFO: Got endpoints: latency-svc-l6kks [750.132838ms] +Oct 13 09:23:49.384: INFO: Created: latency-svc-rzmgj +Oct 13 09:23:49.422: INFO: Got endpoints: latency-svc-vtdg8 [749.797566ms] +Oct 13 09:23:49.434: INFO: Created: latency-svc-c7htg +Oct 13 09:23:49.474: INFO: Got endpoints: latency-svc-6b8cf [738.768889ms] +Oct 13 09:23:49.485: INFO: Created: latency-svc-89gkv +Oct 13 09:23:49.524: INFO: Got endpoints: latency-svc-z8fxv [751.684292ms] +Oct 13 09:23:49.536: INFO: Created: latency-svc-hzztn +Oct 13 09:23:49.572: INFO: Got endpoints: latency-svc-hrtvz [741.438647ms] +Oct 13 09:23:49.585: INFO: Created: latency-svc-h4chf +Oct 13 09:23:49.622: INFO: Got endpoints: latency-svc-b5bw2 [750.82168ms] +Oct 13 09:23:49.633: INFO: Created: latency-svc-wddrf +Oct 13 09:23:49.672: INFO: Got endpoints: latency-svc-r7zqs [751.17256ms] +Oct 13 09:23:49.683: INFO: Created: latency-svc-85rlg +Oct 13 09:23:49.722: INFO: Got endpoints: latency-svc-494mb [748.876595ms] +Oct 13 09:23:49.735: INFO: Created: latency-svc-cxqms +Oct 13 09:23:49.771: INFO: Got endpoints: latency-svc-tv7pn [749.378842ms] +Oct 13 09:23:49.784: INFO: Created: latency-svc-rhzlq +Oct 13 09:23:49.823: INFO: Got endpoints: latency-svc-8zhrl [750.037399ms] +Oct 13 09:23:49.835: INFO: Created: latency-svc-6hbmr +Oct 13 09:23:49.873: INFO: Got endpoints: latency-svc-5cm5w [750.736304ms] +Oct 13 09:23:49.883: INFO: Created: latency-svc-gj7w9 +Oct 13 09:23:49.922: INFO: Got endpoints: latency-svc-qgsjx [749.215332ms] +Oct 13 09:23:49.933: INFO: Created: latency-svc-nxstg +Oct 13 09:23:49.971: INFO: Got endpoints: latency-svc-kvv9h [749.012761ms] +Oct 13 09:23:49.986: INFO: Created: latency-svc-pb54v +Oct 13 09:23:50.022: INFO: Got endpoints: latency-svc-pq96b [749.875674ms] +Oct 13 09:23:50.073: INFO: Got endpoints: latency-svc-lghpj [749.737394ms] +Oct 13 09:23:50.122: INFO: Got endpoints: latency-svc-rzmgj [749.817858ms] +Oct 13 09:23:50.173: INFO: Got endpoints: latency-svc-c7htg [750.688059ms] +Oct 13 09:23:50.223: INFO: Got endpoints: latency-svc-89gkv [748.276394ms] +Oct 13 09:23:50.273: INFO: Got endpoints: latency-svc-hzztn [749.286254ms] +Oct 13 09:23:50.323: INFO: Got endpoints: latency-svc-h4chf [750.609846ms] +Oct 13 09:23:50.374: INFO: Got endpoints: latency-svc-wddrf [751.451691ms] +Oct 13 09:23:50.423: INFO: Got endpoints: latency-svc-85rlg [750.822164ms] +Oct 13 09:23:50.472: INFO: Got endpoints: latency-svc-cxqms [750.186597ms] +Oct 13 09:23:50.522: INFO: Got endpoints: latency-svc-rhzlq [750.905241ms] +Oct 13 09:23:50.572: INFO: Got endpoints: latency-svc-6hbmr [749.201544ms] +Oct 13 09:23:50.623: INFO: Got endpoints: latency-svc-gj7w9 [750.019587ms] +Oct 13 09:23:50.672: INFO: Got endpoints: latency-svc-nxstg [750.100001ms] +Oct 13 09:23:50.722: INFO: Got endpoints: latency-svc-pb54v [750.72455ms] +Oct 13 09:23:50.722: INFO: Latencies: [30.379694ms 41.72386ms 55.668666ms 66.295566ms 81.012567ms 93.176146ms 104.871353ms 115.885006ms 133.478541ms 146.550881ms 151.37877ms 151.498681ms 151.668985ms 152.158974ms 152.981089ms 153.040967ms 153.217956ms 154.900774ms 155.690524ms 156.241309ms 156.323283ms 157.193884ms 158.585162ms 160.676915ms 162.572596ms 162.845442ms 165.003903ms 166.743014ms 169.132568ms 171.419277ms 172.548875ms 173.520307ms 175.831584ms 175.945588ms 
177.121551ms 177.927549ms 183.474915ms 195.950268ms 212.002749ms 250.384174ms 288.652778ms 323.916064ms 366.269599ms 406.26641ms 447.295797ms 490.725456ms 527.762529ms 568.614542ms 608.217754ms 647.382806ms 685.31686ms 724.291481ms 738.768889ms 740.251058ms 741.438647ms 746.619376ms 746.940438ms 747.602928ms 747.675817ms 747.83553ms 748.077453ms 748.106608ms 748.126065ms 748.249222ms 748.276394ms 748.291517ms 748.37263ms 748.426102ms 748.46198ms 748.513547ms 748.524023ms 748.769868ms 748.802729ms 748.82874ms 748.876595ms 748.924676ms 748.975803ms 749.012761ms 749.034562ms 749.0355ms 749.044428ms 749.075944ms 749.120643ms 749.127229ms 749.18384ms 749.201544ms 749.203825ms 749.213566ms 749.214405ms 749.215332ms 749.232692ms 749.241464ms 749.250092ms 749.256647ms 749.286254ms 749.317406ms 749.322145ms 749.347312ms 749.376287ms 749.378842ms 749.39757ms 749.414356ms 749.422001ms 749.427964ms 749.430544ms 749.579495ms 749.595674ms 749.597501ms 749.609848ms 749.620639ms 749.641469ms 749.647786ms 749.691054ms 749.707197ms 749.721201ms 749.723313ms 749.723544ms 749.737394ms 749.765501ms 749.782874ms 749.793107ms 749.797566ms 749.799014ms 749.802769ms 749.817858ms 749.849944ms 749.875674ms 749.901973ms 749.907265ms 749.933278ms 749.937391ms 749.938751ms 749.968404ms 749.991172ms 749.994463ms 750.019587ms 750.030731ms 750.037399ms 750.039792ms 750.045415ms 750.050531ms 750.071677ms 750.098965ms 750.100001ms 750.109296ms 750.132067ms 750.132838ms 750.150446ms 750.179979ms 750.186597ms 750.188377ms 750.223755ms 750.229717ms 750.259836ms 750.303336ms 750.368765ms 750.38374ms 750.421062ms 750.498604ms 750.567893ms 750.601439ms 750.609846ms 750.688059ms 750.720376ms 750.72455ms 750.736304ms 750.747921ms 750.82168ms 750.822164ms 750.905241ms 750.922013ms 750.963061ms 750.986769ms 751.001529ms 751.014374ms 751.104759ms 751.133481ms 751.135942ms 751.169412ms 751.17256ms 751.181379ms 751.183443ms 751.232612ms 751.246779ms 751.250542ms 751.258544ms 751.361995ms 751.451691ms 751.631578ms 751.684292ms 751.75429ms 751.998398ms 752.000094ms 752.138892ms 752.252883ms 752.525289ms 752.960087ms 753.993433ms 758.21013ms 762.188806ms] +Oct 13 09:23:50.722: INFO: 50 %ile: 749.39757ms +Oct 13 09:23:50.722: INFO: 90 %ile: 751.181379ms +Oct 13 09:23:50.722: INFO: 99 %ile: 758.21013ms +Oct 13 09:23:50.722: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:50.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Service endpoints latency + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Service endpoints latency + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Service endpoints latency + tear down framework | framework.go:193 +STEP: Destroying namespace "svc-latency-8055" for this suite. 
10/13/23 09:23:50.727 +------------------------------ +• [SLOW TEST] [10.763 seconds] +[sig-network] Service endpoints latency +test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Service endpoints latency + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:39.969 + Oct 13 09:23:39.969: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svc-latency 10/13/23 09:23:39.97 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:39.986 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:39.989 + [BeforeEach] [sig-network] Service endpoints latency + test/e2e/framework/metrics/init/init.go:31 + [It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 + Oct 13 09:23:39.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: creating replication controller svc-latency-rc in namespace svc-latency-8055 10/13/23 09:23:39.993 + I1013 09:23:39.998557 23 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8055, replica count: 1 + I1013 09:23:41.050032 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I1013 09:23:42.050935 23 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 09:23:42.166: INFO: Created: latency-svc-27h2r + Oct 13 09:23:42.172: INFO: Got endpoints: latency-svc-27h2r [21.544646ms] + Oct 13 09:23:42.194: INFO: Created: latency-svc-m7sm4 + Oct 13 09:23:42.203: INFO: Got endpoints: latency-svc-m7sm4 [30.379694ms] + Oct 13 09:23:42.207: INFO: Created: latency-svc-z8vcb + Oct 13 09:23:42.214: INFO: Got endpoints: latency-svc-z8vcb [41.72386ms] + Oct 13 09:23:42.219: INFO: Created: latency-svc-l5mx5 + Oct 13 09:23:42.228: INFO: Got endpoints: latency-svc-l5mx5 [55.668666ms] + Oct 13 09:23:42.230: INFO: Created: latency-svc-sn4tc + Oct 13 09:23:42.239: INFO: Got endpoints: latency-svc-sn4tc [66.295566ms] + Oct 13 09:23:42.243: INFO: Created: latency-svc-9snf7 + Oct 13 09:23:42.254: INFO: Got endpoints: latency-svc-9snf7 [81.012567ms] + Oct 13 09:23:42.257: INFO: Created: latency-svc-mflbf + Oct 13 09:23:42.266: INFO: Got endpoints: latency-svc-mflbf [93.176146ms] + Oct 13 09:23:42.270: INFO: Created: latency-svc-m4lxz + Oct 13 09:23:42.278: INFO: Got endpoints: latency-svc-m4lxz [104.871353ms] + Oct 13 09:23:42.280: INFO: Created: latency-svc-62kxc + Oct 13 09:23:42.289: INFO: Got endpoints: latency-svc-62kxc [115.885006ms] + Oct 13 09:23:42.299: INFO: Created: latency-svc-7cwn4 + Oct 13 09:23:42.306: INFO: Got endpoints: latency-svc-7cwn4 [133.478541ms] + Oct 13 09:23:42.310: INFO: Created: latency-svc-blkpl + Oct 13 09:23:42.319: INFO: Got endpoints: latency-svc-blkpl [146.550881ms] + Oct 13 09:23:42.322: INFO: Created: latency-svc-9qj9m + Oct 13 09:23:42.329: INFO: Got endpoints: latency-svc-9qj9m [156.241309ms] + Oct 13 09:23:42.333: INFO: Created: latency-svc-fb2q5 + Oct 13 09:23:42.335: INFO: Got endpoints: latency-svc-fb2q5 [162.845442ms] + Oct 13 09:23:42.341: INFO: Created: latency-svc-hr2ld + Oct 13 09:23:42.349: INFO: Got endpoints: latency-svc-hr2ld [175.831584ms] + Oct 13 09:23:42.351: INFO: Created: 
latency-svc-24z54 + Oct 13 09:23:42.356: INFO: Got endpoints: latency-svc-24z54 [183.474915ms] + Oct 13 09:23:42.361: INFO: Created: latency-svc-zcbdv + Oct 13 09:23:42.369: INFO: Got endpoints: latency-svc-zcbdv [195.950268ms] + Oct 13 09:23:42.372: INFO: Created: latency-svc-ctxm5 + Oct 13 09:23:42.379: INFO: Got endpoints: latency-svc-ctxm5 [175.945588ms] + Oct 13 09:23:42.381: INFO: Created: latency-svc-4tfmh + Oct 13 09:23:42.392: INFO: Got endpoints: latency-svc-4tfmh [177.121551ms] + Oct 13 09:23:42.395: INFO: Created: latency-svc-7mcz9 + Oct 13 09:23:42.402: INFO: Got endpoints: latency-svc-7mcz9 [173.520307ms] + Oct 13 09:23:42.405: INFO: Created: latency-svc-vrbcd + Oct 13 09:23:42.411: INFO: Got endpoints: latency-svc-vrbcd [172.548875ms] + Oct 13 09:23:42.418: INFO: Created: latency-svc-pb5ps + Oct 13 09:23:42.425: INFO: Got endpoints: latency-svc-pb5ps [171.419277ms] + Oct 13 09:23:42.429: INFO: Created: latency-svc-2vwcp + Oct 13 09:23:42.435: INFO: Got endpoints: latency-svc-2vwcp [169.132568ms] + Oct 13 09:23:42.438: INFO: Created: latency-svc-jcx5c + Oct 13 09:23:42.444: INFO: Got endpoints: latency-svc-jcx5c [166.743014ms] + Oct 13 09:23:42.448: INFO: Created: latency-svc-zznnb + Oct 13 09:23:42.454: INFO: Got endpoints: latency-svc-zznnb [165.003903ms] + Oct 13 09:23:42.456: INFO: Created: latency-svc-v7tkr + Oct 13 09:23:42.463: INFO: Got endpoints: latency-svc-v7tkr [156.323283ms] + Oct 13 09:23:42.465: INFO: Created: latency-svc-l4pjb + Oct 13 09:23:42.472: INFO: Got endpoints: latency-svc-l4pjb [152.158974ms] + Oct 13 09:23:42.478: INFO: Created: latency-svc-vnphm + Oct 13 09:23:42.484: INFO: Got endpoints: latency-svc-vnphm [154.900774ms] + Oct 13 09:23:42.488: INFO: Created: latency-svc-z5x7r + Oct 13 09:23:42.498: INFO: Got endpoints: latency-svc-z5x7r [162.572596ms] + Oct 13 09:23:42.501: INFO: Created: latency-svc-rhfgk + Oct 13 09:23:42.507: INFO: Got endpoints: latency-svc-rhfgk [158.585162ms] + Oct 13 09:23:42.510: INFO: Created: latency-svc-pgssl + Oct 13 09:23:42.517: INFO: Got endpoints: latency-svc-pgssl [160.676915ms] + Oct 13 09:23:42.519: INFO: Created: latency-svc-768l6 + Oct 13 09:23:42.526: INFO: Got endpoints: latency-svc-768l6 [157.193884ms] + Oct 13 09:23:42.528: INFO: Created: latency-svc-6xq4t + Oct 13 09:23:42.535: INFO: Got endpoints: latency-svc-6xq4t [155.690524ms] + Oct 13 09:23:42.538: INFO: Created: latency-svc-cgt6w + Oct 13 09:23:42.545: INFO: Got endpoints: latency-svc-cgt6w [153.040967ms] + Oct 13 09:23:42.547: INFO: Created: latency-svc-nplpc + Oct 13 09:23:42.553: INFO: Got endpoints: latency-svc-nplpc [151.498681ms] + Oct 13 09:23:42.557: INFO: Created: latency-svc-6cgxs + Oct 13 09:23:42.565: INFO: Got endpoints: latency-svc-6cgxs [153.217956ms] + Oct 13 09:23:42.568: INFO: Created: latency-svc-ndl2d + Oct 13 09:23:42.577: INFO: Got endpoints: latency-svc-ndl2d [151.668985ms] + Oct 13 09:23:42.580: INFO: Created: latency-svc-x6jw9 + Oct 13 09:23:42.587: INFO: Got endpoints: latency-svc-x6jw9 [151.37877ms] + Oct 13 09:23:42.589: INFO: Created: latency-svc-zsnmd + Oct 13 09:23:42.597: INFO: Got endpoints: latency-svc-zsnmd [152.981089ms] + Oct 13 09:23:42.605: INFO: Created: latency-svc-mwtdn + Oct 13 09:23:42.624: INFO: Created: latency-svc-pvnt2 + Oct 13 09:23:42.632: INFO: Got endpoints: latency-svc-mwtdn [177.927549ms] + Oct 13 09:23:42.642: INFO: Created: latency-svc-6rshl + Oct 13 09:23:42.650: INFO: Created: latency-svc-qlq59 + Oct 13 09:23:42.659: INFO: Created: latency-svc-hmppp + Oct 13 09:23:42.666: INFO: Created: 
latency-svc-j2z48 + Oct 13 09:23:42.675: INFO: Got endpoints: latency-svc-pvnt2 [212.002749ms] + Oct 13 09:23:42.677: INFO: Created: latency-svc-jfzbm + Oct 13 09:23:42.684: INFO: Created: latency-svc-gpvqq + Oct 13 09:23:42.693: INFO: Created: latency-svc-qlt84 + Oct 13 09:23:42.701: INFO: Created: latency-svc-x5twv + Oct 13 09:23:42.716: INFO: Created: latency-svc-t7c84 + Oct 13 09:23:42.722: INFO: Got endpoints: latency-svc-6rshl [250.384174ms] + Oct 13 09:23:42.725: INFO: Created: latency-svc-k5vpf + Oct 13 09:23:42.733: INFO: Created: latency-svc-jkjl2 + Oct 13 09:23:42.740: INFO: Created: latency-svc-c5tff + Oct 13 09:23:42.747: INFO: Created: latency-svc-4rbvt + Oct 13 09:23:42.757: INFO: Created: latency-svc-v9jwr + Oct 13 09:23:42.764: INFO: Created: latency-svc-vmcdz + Oct 13 09:23:42.771: INFO: Created: latency-svc-l5p9j + Oct 13 09:23:42.773: INFO: Got endpoints: latency-svc-qlq59 [288.652778ms] + Oct 13 09:23:42.784: INFO: Created: latency-svc-56p44 + Oct 13 09:23:42.822: INFO: Got endpoints: latency-svc-hmppp [323.916064ms] + Oct 13 09:23:42.836: INFO: Created: latency-svc-j9xhv + Oct 13 09:23:42.874: INFO: Got endpoints: latency-svc-j2z48 [366.269599ms] + Oct 13 09:23:42.885: INFO: Created: latency-svc-c9gtg + Oct 13 09:23:42.923: INFO: Got endpoints: latency-svc-jfzbm [406.26641ms] + Oct 13 09:23:42.935: INFO: Created: latency-svc-8gfgq + Oct 13 09:23:42.973: INFO: Got endpoints: latency-svc-gpvqq [447.295797ms] + Oct 13 09:23:42.989: INFO: Created: latency-svc-6chn8 + Oct 13 09:23:43.025: INFO: Got endpoints: latency-svc-qlt84 [490.725456ms] + Oct 13 09:23:43.039: INFO: Created: latency-svc-6rbp9 + Oct 13 09:23:43.072: INFO: Got endpoints: latency-svc-x5twv [527.762529ms] + Oct 13 09:23:43.083: INFO: Created: latency-svc-2kfw8 + Oct 13 09:23:43.122: INFO: Got endpoints: latency-svc-t7c84 [568.614542ms] + Oct 13 09:23:43.136: INFO: Created: latency-svc-qqv6c + Oct 13 09:23:43.173: INFO: Got endpoints: latency-svc-k5vpf [608.217754ms] + Oct 13 09:23:43.190: INFO: Created: latency-svc-lc2mh + Oct 13 09:23:43.224: INFO: Got endpoints: latency-svc-jkjl2 [647.382806ms] + Oct 13 09:23:43.237: INFO: Created: latency-svc-ghhjc + Oct 13 09:23:43.272: INFO: Got endpoints: latency-svc-c5tff [685.31686ms] + Oct 13 09:23:43.283: INFO: Created: latency-svc-xggsh + Oct 13 09:23:43.322: INFO: Got endpoints: latency-svc-4rbvt [724.291481ms] + Oct 13 09:23:43.335: INFO: Created: latency-svc-jfcvh + Oct 13 09:23:43.372: INFO: Got endpoints: latency-svc-v9jwr [740.251058ms] + Oct 13 09:23:43.383: INFO: Created: latency-svc-8vpsr + Oct 13 09:23:43.422: INFO: Got endpoints: latency-svc-vmcdz [747.602928ms] + Oct 13 09:23:43.438: INFO: Created: latency-svc-xlwj2 + Oct 13 09:23:43.472: INFO: Got endpoints: latency-svc-l5p9j [750.039792ms] + Oct 13 09:23:43.483: INFO: Created: latency-svc-jjvhc + Oct 13 09:23:43.522: INFO: Got endpoints: latency-svc-56p44 [749.39757ms] + Oct 13 09:23:43.533: INFO: Created: latency-svc-skp2q + Oct 13 09:23:43.573: INFO: Got endpoints: latency-svc-j9xhv [751.181379ms] + Oct 13 09:23:43.584: INFO: Created: latency-svc-ndfw9 + Oct 13 09:23:43.622: INFO: Got endpoints: latency-svc-c9gtg [748.077453ms] + Oct 13 09:23:43.633: INFO: Created: latency-svc-nhlhn + Oct 13 09:23:43.673: INFO: Got endpoints: latency-svc-8gfgq [749.232692ms] + Oct 13 09:23:43.684: INFO: Created: latency-svc-2sdv8 + Oct 13 09:23:43.722: INFO: Got endpoints: latency-svc-6chn8 [748.426102ms] + Oct 13 09:23:43.733: INFO: Created: latency-svc-dkpcl + Oct 13 09:23:43.772: INFO: Got endpoints: 
latency-svc-6rbp9 [746.619376ms] + Oct 13 09:23:43.785: INFO: Created: latency-svc-76tgr + Oct 13 09:23:43.822: INFO: Got endpoints: latency-svc-2kfw8 [749.427964ms] + Oct 13 09:23:43.834: INFO: Created: latency-svc-7rlc7 + Oct 13 09:23:43.873: INFO: Got endpoints: latency-svc-qqv6c [750.368765ms] + Oct 13 09:23:43.884: INFO: Created: latency-svc-8r5wd + Oct 13 09:23:43.923: INFO: Got endpoints: latency-svc-lc2mh [749.937391ms] + Oct 13 09:23:43.935: INFO: Created: latency-svc-s4fw5 + Oct 13 09:23:43.972: INFO: Got endpoints: latency-svc-ghhjc [748.126065ms] + Oct 13 09:23:43.985: INFO: Created: latency-svc-57t98 + Oct 13 09:23:44.021: INFO: Got endpoints: latency-svc-xggsh [749.414356ms] + Oct 13 09:23:44.036: INFO: Created: latency-svc-vw5cs + Oct 13 09:23:44.072: INFO: Got endpoints: latency-svc-jfcvh [749.647786ms] + Oct 13 09:23:44.084: INFO: Created: latency-svc-sh88n + Oct 13 09:23:44.122: INFO: Got endpoints: latency-svc-8vpsr [750.071677ms] + Oct 13 09:23:44.135: INFO: Created: latency-svc-jcc4n + Oct 13 09:23:44.173: INFO: Got endpoints: latency-svc-xlwj2 [750.303336ms] + Oct 13 09:23:44.190: INFO: Created: latency-svc-95qbj + Oct 13 09:23:44.224: INFO: Got endpoints: latency-svc-jjvhc [751.631578ms] + Oct 13 09:23:44.238: INFO: Created: latency-svc-nrls8 + Oct 13 09:23:44.272: INFO: Got endpoints: latency-svc-skp2q [749.723313ms] + Oct 13 09:23:44.285: INFO: Created: latency-svc-bsf72 + Oct 13 09:23:44.324: INFO: Got endpoints: latency-svc-ndfw9 [750.259836ms] + Oct 13 09:23:44.336: INFO: Created: latency-svc-j8ljh + Oct 13 09:23:44.373: INFO: Got endpoints: latency-svc-nhlhn [750.986769ms] + Oct 13 09:23:44.386: INFO: Created: latency-svc-2v7wp + Oct 13 09:23:44.423: INFO: Got endpoints: latency-svc-2sdv8 [750.132067ms] + Oct 13 09:23:44.434: INFO: Created: latency-svc-fjmrn + Oct 13 09:23:44.474: INFO: Got endpoints: latency-svc-dkpcl [752.252883ms] + Oct 13 09:23:44.486: INFO: Created: latency-svc-z6x8p + Oct 13 09:23:44.522: INFO: Got endpoints: latency-svc-76tgr [749.901973ms] + Oct 13 09:23:44.533: INFO: Created: latency-svc-bmbjm + Oct 13 09:23:44.572: INFO: Got endpoints: latency-svc-7rlc7 [750.050531ms] + Oct 13 09:23:44.585: INFO: Created: latency-svc-z69hd + Oct 13 09:23:44.622: INFO: Got endpoints: latency-svc-8r5wd [749.317406ms] + Oct 13 09:23:44.636: INFO: Created: latency-svc-jscgr + Oct 13 09:23:44.673: INFO: Got endpoints: latency-svc-s4fw5 [749.609848ms] + Oct 13 09:23:44.685: INFO: Created: latency-svc-9lwwd + Oct 13 09:23:44.723: INFO: Got endpoints: latency-svc-57t98 [750.498604ms] + Oct 13 09:23:44.735: INFO: Created: latency-svc-fj2f5 + Oct 13 09:23:44.772: INFO: Got endpoints: latency-svc-vw5cs [750.720376ms] + Oct 13 09:23:44.789: INFO: Created: latency-svc-9r9g7 + Oct 13 09:23:44.823: INFO: Got endpoints: latency-svc-sh88n [751.258544ms] + Oct 13 09:23:44.834: INFO: Created: latency-svc-k72fs + Oct 13 09:23:44.872: INFO: Got endpoints: latency-svc-jcc4n [749.597501ms] + Oct 13 09:23:44.883: INFO: Created: latency-svc-mx2tl + Oct 13 09:23:44.921: INFO: Got endpoints: latency-svc-95qbj [748.106608ms] + Oct 13 09:23:44.934: INFO: Created: latency-svc-25x4m + Oct 13 09:23:44.973: INFO: Got endpoints: latency-svc-nrls8 [749.214405ms] + Oct 13 09:23:44.989: INFO: Created: latency-svc-zhj2h + Oct 13 09:23:45.021: INFO: Got endpoints: latency-svc-bsf72 [748.975803ms] + Oct 13 09:23:45.037: INFO: Created: latency-svc-86qhj + Oct 13 09:23:45.072: INFO: Got endpoints: latency-svc-j8ljh [748.249222ms] + Oct 13 09:23:45.107: INFO: Created: latency-svc-nmznq + Oct 
13 09:23:45.122: INFO: Got endpoints: latency-svc-2v7wp [749.120643ms] + Oct 13 09:23:45.135: INFO: Created: latency-svc-9j4bg + Oct 13 09:23:45.172: INFO: Got endpoints: latency-svc-fjmrn [748.82874ms] + Oct 13 09:23:45.185: INFO: Created: latency-svc-gjkx4 + Oct 13 09:23:45.223: INFO: Got endpoints: latency-svc-z6x8p [748.513547ms] + Oct 13 09:23:45.236: INFO: Created: latency-svc-qnwtp + Oct 13 09:23:45.272: INFO: Got endpoints: latency-svc-bmbjm [749.907265ms] + Oct 13 09:23:45.284: INFO: Created: latency-svc-4kh8p + Oct 13 09:23:45.322: INFO: Got endpoints: latency-svc-z69hd [750.098965ms] + Oct 13 09:23:45.334: INFO: Created: latency-svc-j74rb + Oct 13 09:23:45.373: INFO: Got endpoints: latency-svc-jscgr [751.014374ms] + Oct 13 09:23:45.384: INFO: Created: latency-svc-r84cz + Oct 13 09:23:45.421: INFO: Got endpoints: latency-svc-9lwwd [748.291517ms] + Oct 13 09:23:45.435: INFO: Created: latency-svc-2bmmb + Oct 13 09:23:45.472: INFO: Got endpoints: latency-svc-fj2f5 [749.075944ms] + Oct 13 09:23:45.483: INFO: Created: latency-svc-n42q8 + Oct 13 09:23:45.522: INFO: Got endpoints: latency-svc-9r9g7 [749.799014ms] + Oct 13 09:23:45.533: INFO: Created: latency-svc-ls26j + Oct 13 09:23:45.572: INFO: Got endpoints: latency-svc-k72fs [749.595674ms] + Oct 13 09:23:45.584: INFO: Created: latency-svc-wnd75 + Oct 13 09:23:45.623: INFO: Got endpoints: latency-svc-mx2tl [751.232612ms] + Oct 13 09:23:45.634: INFO: Created: latency-svc-rj9zp + Oct 13 09:23:45.674: INFO: Got endpoints: latency-svc-25x4m [752.960087ms] + Oct 13 09:23:45.685: INFO: Created: latency-svc-cd9zs + Oct 13 09:23:45.722: INFO: Got endpoints: latency-svc-zhj2h [749.256647ms] + Oct 13 09:23:45.735: INFO: Created: latency-svc-nkmkd + Oct 13 09:23:45.771: INFO: Got endpoints: latency-svc-86qhj [750.567893ms] + Oct 13 09:23:45.783: INFO: Created: latency-svc-vr2j5 + Oct 13 09:23:45.826: INFO: Got endpoints: latency-svc-nmznq [753.993433ms] + Oct 13 09:23:45.838: INFO: Created: latency-svc-bxfjx + Oct 13 09:23:45.872: INFO: Got endpoints: latency-svc-9j4bg [749.849944ms] + Oct 13 09:23:45.883: INFO: Created: latency-svc-gggfn + Oct 13 09:23:45.921: INFO: Got endpoints: latency-svc-gjkx4 [749.203825ms] + Oct 13 09:23:45.932: INFO: Created: latency-svc-s9lzm + Oct 13 09:23:45.972: INFO: Got endpoints: latency-svc-qnwtp [749.18384ms] + Oct 13 09:23:45.982: INFO: Created: latency-svc-jjxb6 + Oct 13 09:23:46.021: INFO: Got endpoints: latency-svc-4kh8p [749.376287ms] + Oct 13 09:23:46.035: INFO: Created: latency-svc-d7l9x + Oct 13 09:23:46.072: INFO: Got endpoints: latency-svc-j74rb [750.229717ms] + Oct 13 09:23:46.084: INFO: Created: latency-svc-hmf9z + Oct 13 09:23:46.122: INFO: Got endpoints: latency-svc-r84cz [748.924676ms] + Oct 13 09:23:46.139: INFO: Created: latency-svc-ntgl5 + Oct 13 09:23:46.173: INFO: Got endpoints: latency-svc-2bmmb [751.75429ms] + Oct 13 09:23:46.188: INFO: Created: latency-svc-2j6rj + Oct 13 09:23:46.224: INFO: Got endpoints: latency-svc-n42q8 [752.000094ms] + Oct 13 09:23:46.243: INFO: Created: latency-svc-gjqc8 + Oct 13 09:23:46.273: INFO: Got endpoints: latency-svc-ls26j [751.133481ms] + Oct 13 09:23:46.285: INFO: Created: latency-svc-vtvzj + Oct 13 09:23:46.323: INFO: Got endpoints: latency-svc-wnd75 [750.045415ms] + Oct 13 09:23:46.335: INFO: Created: latency-svc-vxclm + Oct 13 09:23:46.373: INFO: Got endpoints: latency-svc-rj9zp [750.150446ms] + Oct 13 09:23:46.384: INFO: Created: latency-svc-x7xl5 + Oct 13 09:23:46.423: INFO: Got endpoints: latency-svc-cd9zs [749.0355ms] + Oct 13 09:23:46.434: INFO: 
Created: latency-svc-4th75 + Oct 13 09:23:46.472: INFO: Got endpoints: latency-svc-nkmkd [749.347312ms] + Oct 13 09:23:46.483: INFO: Created: latency-svc-7r6dd + Oct 13 09:23:46.522: INFO: Got endpoints: latency-svc-vr2j5 [750.179979ms] + Oct 13 09:23:46.533: INFO: Created: latency-svc-jm7dh + Oct 13 09:23:46.573: INFO: Got endpoints: latency-svc-bxfjx [746.940438ms] + Oct 13 09:23:46.586: INFO: Created: latency-svc-9546m + Oct 13 09:23:46.623: INFO: Got endpoints: latency-svc-gggfn [750.922013ms] + Oct 13 09:23:46.636: INFO: Created: latency-svc-ll58t + Oct 13 09:23:46.672: INFO: Got endpoints: latency-svc-s9lzm [751.361995ms] + Oct 13 09:23:46.684: INFO: Created: latency-svc-mrgnn + Oct 13 09:23:46.722: INFO: Got endpoints: latency-svc-jjxb6 [750.421062ms] + Oct 13 09:23:46.733: INFO: Created: latency-svc-lsxfj + Oct 13 09:23:46.772: INFO: Got endpoints: latency-svc-d7l9x [751.104759ms] + Oct 13 09:23:46.784: INFO: Created: latency-svc-d9tfr + Oct 13 09:23:46.822: INFO: Got endpoints: latency-svc-hmf9z [749.723544ms] + Oct 13 09:23:46.834: INFO: Created: latency-svc-z9x9h + Oct 13 09:23:46.871: INFO: Got endpoints: latency-svc-ntgl5 [748.802729ms] + Oct 13 09:23:46.882: INFO: Created: latency-svc-tfhd5 + Oct 13 09:23:46.923: INFO: Got endpoints: latency-svc-2j6rj [749.782874ms] + Oct 13 09:23:46.934: INFO: Created: latency-svc-zfsvp + Oct 13 09:23:46.973: INFO: Got endpoints: latency-svc-gjqc8 [749.422001ms] + Oct 13 09:23:46.986: INFO: Created: latency-svc-vgrwx + Oct 13 09:23:47.023: INFO: Got endpoints: latency-svc-vtvzj [749.991172ms] + Oct 13 09:23:47.036: INFO: Created: latency-svc-mncsc + Oct 13 09:23:47.072: INFO: Got endpoints: latency-svc-vxclm [749.250092ms] + Oct 13 09:23:47.083: INFO: Created: latency-svc-7bxxf + Oct 13 09:23:47.123: INFO: Got endpoints: latency-svc-x7xl5 [749.793107ms] + Oct 13 09:23:47.135: INFO: Created: latency-svc-mqnjm + Oct 13 09:23:47.175: INFO: Got endpoints: latency-svc-4th75 [752.138892ms] + Oct 13 09:23:47.189: INFO: Created: latency-svc-b5zzw + Oct 13 09:23:47.224: INFO: Got endpoints: latency-svc-7r6dd [751.998398ms] + Oct 13 09:23:47.238: INFO: Created: latency-svc-dbsdb + Oct 13 09:23:47.273: INFO: Got endpoints: latency-svc-jm7dh [751.246779ms] + Oct 13 09:23:47.287: INFO: Created: latency-svc-n228h + Oct 13 09:23:47.322: INFO: Got endpoints: latency-svc-9546m [749.127229ms] + Oct 13 09:23:47.334: INFO: Created: latency-svc-qgn6n + Oct 13 09:23:47.372: INFO: Got endpoints: latency-svc-ll58t [749.641469ms] + Oct 13 09:23:47.386: INFO: Created: latency-svc-h7ctr + Oct 13 09:23:47.422: INFO: Got endpoints: latency-svc-mrgnn [750.109296ms] + Oct 13 09:23:47.435: INFO: Created: latency-svc-l5vn4 + Oct 13 09:23:47.474: INFO: Got endpoints: latency-svc-lsxfj [751.135942ms] + Oct 13 09:23:47.492: INFO: Created: latency-svc-qgsnm + Oct 13 09:23:47.522: INFO: Got endpoints: latency-svc-d9tfr [749.802769ms] + Oct 13 09:23:47.534: INFO: Created: latency-svc-lt925 + Oct 13 09:23:47.572: INFO: Got endpoints: latency-svc-z9x9h [749.721201ms] + Oct 13 09:23:47.583: INFO: Created: latency-svc-7xmpl + Oct 13 09:23:47.623: INFO: Got endpoints: latency-svc-tfhd5 [752.525289ms] + Oct 13 09:23:47.638: INFO: Created: latency-svc-flv7k + Oct 13 09:23:47.672: INFO: Got endpoints: latency-svc-zfsvp [749.034562ms] + Oct 13 09:23:47.684: INFO: Created: latency-svc-c26kh + Oct 13 09:23:47.723: INFO: Got endpoints: latency-svc-vgrwx [749.044428ms] + Oct 13 09:23:47.734: INFO: Created: latency-svc-bz8qw + Oct 13 09:23:47.773: INFO: Got endpoints: latency-svc-mncsc 
[749.707197ms] + Oct 13 09:23:47.787: INFO: Created: latency-svc-dv9pm + Oct 13 09:23:47.822: INFO: Got endpoints: latency-svc-7bxxf [749.938751ms] + Oct 13 09:23:47.837: INFO: Created: latency-svc-tr9pb + Oct 13 09:23:47.872: INFO: Got endpoints: latency-svc-mqnjm [749.430544ms] + Oct 13 09:23:47.883: INFO: Created: latency-svc-hb5d9 + Oct 13 09:23:47.923: INFO: Got endpoints: latency-svc-b5zzw [747.83553ms] + Oct 13 09:23:47.934: INFO: Created: latency-svc-8k8xw + Oct 13 09:23:47.973: INFO: Got endpoints: latency-svc-dbsdb [749.620639ms] + Oct 13 09:23:47.984: INFO: Created: latency-svc-cp7nn + Oct 13 09:23:48.022: INFO: Got endpoints: latency-svc-n228h [749.322145ms] + Oct 13 09:23:48.033: INFO: Created: latency-svc-f7654 + Oct 13 09:23:48.072: INFO: Got endpoints: latency-svc-qgn6n [750.188377ms] + Oct 13 09:23:48.084: INFO: Created: latency-svc-5kc28 + Oct 13 09:23:48.124: INFO: Got endpoints: latency-svc-h7ctr [751.183443ms] + Oct 13 09:23:48.152: INFO: Created: latency-svc-4675l + Oct 13 09:23:48.172: INFO: Got endpoints: latency-svc-l5vn4 [749.765501ms] + Oct 13 09:23:48.189: INFO: Created: latency-svc-gmd4d + Oct 13 09:23:48.222: INFO: Got endpoints: latency-svc-qgsnm [748.524023ms] + Oct 13 09:23:48.234: INFO: Created: latency-svc-qn22q + Oct 13 09:23:48.272: INFO: Got endpoints: latency-svc-lt925 [750.030731ms] + Oct 13 09:23:48.284: INFO: Created: latency-svc-ltjtm + Oct 13 09:23:48.323: INFO: Got endpoints: latency-svc-7xmpl [751.250542ms] + Oct 13 09:23:48.335: INFO: Created: latency-svc-npzt2 + Oct 13 09:23:48.372: INFO: Got endpoints: latency-svc-flv7k [748.46198ms] + Oct 13 09:23:48.383: INFO: Created: latency-svc-pjljk + Oct 13 09:23:48.423: INFO: Got endpoints: latency-svc-c26kh [750.963061ms] + Oct 13 09:23:48.434: INFO: Created: latency-svc-lxnlt + Oct 13 09:23:48.472: INFO: Got endpoints: latency-svc-bz8qw [749.213566ms] + Oct 13 09:23:48.483: INFO: Created: latency-svc-4xbkl + Oct 13 09:23:48.524: INFO: Got endpoints: latency-svc-dv9pm [751.169412ms] + Oct 13 09:23:48.536: INFO: Created: latency-svc-9pk8g + Oct 13 09:23:48.573: INFO: Got endpoints: latency-svc-tr9pb [750.747921ms] + Oct 13 09:23:48.584: INFO: Created: latency-svc-2c7bz + Oct 13 09:23:48.622: INFO: Got endpoints: latency-svc-hb5d9 [749.968404ms] + Oct 13 09:23:48.640: INFO: Created: latency-svc-l6kks + Oct 13 09:23:48.672: INFO: Got endpoints: latency-svc-8k8xw [749.241464ms] + Oct 13 09:23:48.700: INFO: Created: latency-svc-vtdg8 + Oct 13 09:23:48.736: INFO: Got endpoints: latency-svc-cp7nn [762.188806ms] + Oct 13 09:23:48.769: INFO: Created: latency-svc-6b8cf + Oct 13 09:23:48.772: INFO: Got endpoints: latency-svc-f7654 [749.994463ms] + Oct 13 09:23:48.791: INFO: Created: latency-svc-z8fxv + Oct 13 09:23:48.831: INFO: Got endpoints: latency-svc-5kc28 [758.21013ms] + Oct 13 09:23:48.868: INFO: Created: latency-svc-hrtvz + Oct 13 09:23:48.871: INFO: Got endpoints: latency-svc-4675l [747.675817ms] + Oct 13 09:23:48.882: INFO: Created: latency-svc-b5bw2 + Oct 13 09:23:48.921: INFO: Got endpoints: latency-svc-gmd4d [748.769868ms] + Oct 13 09:23:48.932: INFO: Created: latency-svc-r7zqs + Oct 13 09:23:48.973: INFO: Got endpoints: latency-svc-qn22q [751.001529ms] + Oct 13 09:23:48.985: INFO: Created: latency-svc-494mb + Oct 13 09:23:49.022: INFO: Got endpoints: latency-svc-ltjtm [749.579495ms] + Oct 13 09:23:49.035: INFO: Created: latency-svc-tv7pn + Oct 13 09:23:49.073: INFO: Got endpoints: latency-svc-npzt2 [749.691054ms] + Oct 13 09:23:49.085: INFO: Created: latency-svc-8zhrl + Oct 13 09:23:49.122: INFO: 
Got endpoints: latency-svc-pjljk [749.933278ms] + Oct 13 09:23:49.133: INFO: Created: latency-svc-5cm5w + Oct 13 09:23:49.173: INFO: Got endpoints: latency-svc-lxnlt [750.223755ms] + Oct 13 09:23:49.188: INFO: Created: latency-svc-qgsjx + Oct 13 09:23:49.222: INFO: Got endpoints: latency-svc-4xbkl [750.601439ms] + Oct 13 09:23:49.234: INFO: Created: latency-svc-kvv9h + Oct 13 09:23:49.273: INFO: Got endpoints: latency-svc-9pk8g [748.37263ms] + Oct 13 09:23:49.284: INFO: Created: latency-svc-pq96b + Oct 13 09:23:49.323: INFO: Got endpoints: latency-svc-2c7bz [750.38374ms] + Oct 13 09:23:49.334: INFO: Created: latency-svc-lghpj + Oct 13 09:23:49.373: INFO: Got endpoints: latency-svc-l6kks [750.132838ms] + Oct 13 09:23:49.384: INFO: Created: latency-svc-rzmgj + Oct 13 09:23:49.422: INFO: Got endpoints: latency-svc-vtdg8 [749.797566ms] + Oct 13 09:23:49.434: INFO: Created: latency-svc-c7htg + Oct 13 09:23:49.474: INFO: Got endpoints: latency-svc-6b8cf [738.768889ms] + Oct 13 09:23:49.485: INFO: Created: latency-svc-89gkv + Oct 13 09:23:49.524: INFO: Got endpoints: latency-svc-z8fxv [751.684292ms] + Oct 13 09:23:49.536: INFO: Created: latency-svc-hzztn + Oct 13 09:23:49.572: INFO: Got endpoints: latency-svc-hrtvz [741.438647ms] + Oct 13 09:23:49.585: INFO: Created: latency-svc-h4chf + Oct 13 09:23:49.622: INFO: Got endpoints: latency-svc-b5bw2 [750.82168ms] + Oct 13 09:23:49.633: INFO: Created: latency-svc-wddrf + Oct 13 09:23:49.672: INFO: Got endpoints: latency-svc-r7zqs [751.17256ms] + Oct 13 09:23:49.683: INFO: Created: latency-svc-85rlg + Oct 13 09:23:49.722: INFO: Got endpoints: latency-svc-494mb [748.876595ms] + Oct 13 09:23:49.735: INFO: Created: latency-svc-cxqms + Oct 13 09:23:49.771: INFO: Got endpoints: latency-svc-tv7pn [749.378842ms] + Oct 13 09:23:49.784: INFO: Created: latency-svc-rhzlq + Oct 13 09:23:49.823: INFO: Got endpoints: latency-svc-8zhrl [750.037399ms] + Oct 13 09:23:49.835: INFO: Created: latency-svc-6hbmr + Oct 13 09:23:49.873: INFO: Got endpoints: latency-svc-5cm5w [750.736304ms] + Oct 13 09:23:49.883: INFO: Created: latency-svc-gj7w9 + Oct 13 09:23:49.922: INFO: Got endpoints: latency-svc-qgsjx [749.215332ms] + Oct 13 09:23:49.933: INFO: Created: latency-svc-nxstg + Oct 13 09:23:49.971: INFO: Got endpoints: latency-svc-kvv9h [749.012761ms] + Oct 13 09:23:49.986: INFO: Created: latency-svc-pb54v + Oct 13 09:23:50.022: INFO: Got endpoints: latency-svc-pq96b [749.875674ms] + Oct 13 09:23:50.073: INFO: Got endpoints: latency-svc-lghpj [749.737394ms] + Oct 13 09:23:50.122: INFO: Got endpoints: latency-svc-rzmgj [749.817858ms] + Oct 13 09:23:50.173: INFO: Got endpoints: latency-svc-c7htg [750.688059ms] + Oct 13 09:23:50.223: INFO: Got endpoints: latency-svc-89gkv [748.276394ms] + Oct 13 09:23:50.273: INFO: Got endpoints: latency-svc-hzztn [749.286254ms] + Oct 13 09:23:50.323: INFO: Got endpoints: latency-svc-h4chf [750.609846ms] + Oct 13 09:23:50.374: INFO: Got endpoints: latency-svc-wddrf [751.451691ms] + Oct 13 09:23:50.423: INFO: Got endpoints: latency-svc-85rlg [750.822164ms] + Oct 13 09:23:50.472: INFO: Got endpoints: latency-svc-cxqms [750.186597ms] + Oct 13 09:23:50.522: INFO: Got endpoints: latency-svc-rhzlq [750.905241ms] + Oct 13 09:23:50.572: INFO: Got endpoints: latency-svc-6hbmr [749.201544ms] + Oct 13 09:23:50.623: INFO: Got endpoints: latency-svc-gj7w9 [750.019587ms] + Oct 13 09:23:50.672: INFO: Got endpoints: latency-svc-nxstg [750.100001ms] + Oct 13 09:23:50.722: INFO: Got endpoints: latency-svc-pb54v [750.72455ms] + Oct 13 09:23:50.722: INFO: Latencies: 
[30.379694ms 41.72386ms 55.668666ms 66.295566ms 81.012567ms 93.176146ms 104.871353ms 115.885006ms 133.478541ms 146.550881ms 151.37877ms 151.498681ms 151.668985ms 152.158974ms 152.981089ms 153.040967ms 153.217956ms 154.900774ms 155.690524ms 156.241309ms 156.323283ms 157.193884ms 158.585162ms 160.676915ms 162.572596ms 162.845442ms 165.003903ms 166.743014ms 169.132568ms 171.419277ms 172.548875ms 173.520307ms 175.831584ms 175.945588ms 177.121551ms 177.927549ms 183.474915ms 195.950268ms 212.002749ms 250.384174ms 288.652778ms 323.916064ms 366.269599ms 406.26641ms 447.295797ms 490.725456ms 527.762529ms 568.614542ms 608.217754ms 647.382806ms 685.31686ms 724.291481ms 738.768889ms 740.251058ms 741.438647ms 746.619376ms 746.940438ms 747.602928ms 747.675817ms 747.83553ms 748.077453ms 748.106608ms 748.126065ms 748.249222ms 748.276394ms 748.291517ms 748.37263ms 748.426102ms 748.46198ms 748.513547ms 748.524023ms 748.769868ms 748.802729ms 748.82874ms 748.876595ms 748.924676ms 748.975803ms 749.012761ms 749.034562ms 749.0355ms 749.044428ms 749.075944ms 749.120643ms 749.127229ms 749.18384ms 749.201544ms 749.203825ms 749.213566ms 749.214405ms 749.215332ms 749.232692ms 749.241464ms 749.250092ms 749.256647ms 749.286254ms 749.317406ms 749.322145ms 749.347312ms 749.376287ms 749.378842ms 749.39757ms 749.414356ms 749.422001ms 749.427964ms 749.430544ms 749.579495ms 749.595674ms 749.597501ms 749.609848ms 749.620639ms 749.641469ms 749.647786ms 749.691054ms 749.707197ms 749.721201ms 749.723313ms 749.723544ms 749.737394ms 749.765501ms 749.782874ms 749.793107ms 749.797566ms 749.799014ms 749.802769ms 749.817858ms 749.849944ms 749.875674ms 749.901973ms 749.907265ms 749.933278ms 749.937391ms 749.938751ms 749.968404ms 749.991172ms 749.994463ms 750.019587ms 750.030731ms 750.037399ms 750.039792ms 750.045415ms 750.050531ms 750.071677ms 750.098965ms 750.100001ms 750.109296ms 750.132067ms 750.132838ms 750.150446ms 750.179979ms 750.186597ms 750.188377ms 750.223755ms 750.229717ms 750.259836ms 750.303336ms 750.368765ms 750.38374ms 750.421062ms 750.498604ms 750.567893ms 750.601439ms 750.609846ms 750.688059ms 750.720376ms 750.72455ms 750.736304ms 750.747921ms 750.82168ms 750.822164ms 750.905241ms 750.922013ms 750.963061ms 750.986769ms 751.001529ms 751.014374ms 751.104759ms 751.133481ms 751.135942ms 751.169412ms 751.17256ms 751.181379ms 751.183443ms 751.232612ms 751.246779ms 751.250542ms 751.258544ms 751.361995ms 751.451691ms 751.631578ms 751.684292ms 751.75429ms 751.998398ms 752.000094ms 752.138892ms 752.252883ms 752.525289ms 752.960087ms 753.993433ms 758.21013ms 762.188806ms] + Oct 13 09:23:50.722: INFO: 50 %ile: 749.39757ms + Oct 13 09:23:50.722: INFO: 90 %ile: 751.181379ms + Oct 13 09:23:50.722: INFO: 99 %ile: 758.21013ms + Oct 13 09:23:50.722: INFO: Total sample count: 200 + [AfterEach] [sig-network] Service endpoints latency + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:50.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Service endpoints latency + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Service endpoints latency + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Service endpoints latency + tear down framework | framework.go:193 + STEP: Destroying namespace "svc-latency-8055" for this suite. 
10/13/23 09:23:50.727 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:50.734 +Oct 13 09:23:50.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:23:50.735 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:50.75 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:50.752 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:23:50.754 +Oct 13 09:23:50.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f" in namespace "downward-api-6966" to be "Succeeded or Failed" +Oct 13 09:23:50.764: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.804471ms +Oct 13 09:23:52.768: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007037534s +Oct 13 09:23:54.767: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00674013s +STEP: Saw pod success 10/13/23 09:23:54.767 +Oct 13 09:23:54.768: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f" satisfied condition "Succeeded or Failed" +Oct 13 09:23:54.770: INFO: Trying to get logs from node node2 pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f container client-container: +STEP: delete the pod 10/13/23 09:23:54.775 +Oct 13 09:23:54.787: INFO: Waiting for pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f to disappear +Oct 13 09:23:54.790: INFO: Pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:54.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-6966" for this suite. 
10/13/23 09:23:54.793 +------------------------------ +• [4.064 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:50.734 + Oct 13 09:23:50.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:23:50.735 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:50.75 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:50.752 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:23:50.754 + Oct 13 09:23:50.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f" in namespace "downward-api-6966" to be "Succeeded or Failed" + Oct 13 09:23:50.764: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.804471ms + Oct 13 09:23:52.768: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007037534s + Oct 13 09:23:54.767: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00674013s + STEP: Saw pod success 10/13/23 09:23:54.767 + Oct 13 09:23:54.768: INFO: Pod "downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f" satisfied condition "Succeeded or Failed" + Oct 13 09:23:54.770: INFO: Trying to get logs from node node2 pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f container client-container: + STEP: delete the pod 10/13/23 09:23:54.775 + Oct 13 09:23:54.787: INFO: Waiting for pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f to disappear + Oct 13 09:23:54.790: INFO: Pod downwardapi-volume-e22e6c0b-2743-4013-80da-95c77a10be1f no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:54.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-6966" for this suite. 
10/13/23 09:23:54.793 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:54.801 +Oct 13 09:23:54.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replicaset 10/13/23 09:23:54.802 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:54.814 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:54.816 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +STEP: Create a Replicaset 10/13/23 09:23:54.821 +STEP: Verify that the required pods have come up. 10/13/23 09:23:54.825 +Oct 13 09:23:54.828: INFO: Pod name sample-pod: Found 0 pods out of 1 +Oct 13 09:23:59.834: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 10/13/23 09:23:59.834 +STEP: Getting /status 10/13/23 09:23:59.834 +Oct 13 09:23:59.838: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status 10/13/23 09:23:59.838 +Oct 13 09:23:59.848: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated 10/13/23 09:23:59.848 +Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: ADDED +Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.851: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.851: INFO: Found replicaset test-rs in namespace replicaset-5234 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Oct 13 09:23:59.851: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status 10/13/23 09:23:59.851 +Oct 13 09:23:59.851: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Oct 13 09:23:59.857: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched 10/13/23 09:23:59.857 +Oct 13 09:23:59.859: INFO: Observed &ReplicaSet event: ADDED +Oct 13 09:23:59.859: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.859: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.860: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.860: INFO: Observed replicaset test-rs in namespace replicaset-5234 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Oct 13 09:23:59.860: INFO: Observed &ReplicaSet event: MODIFIED +Oct 13 09:23:59.860: INFO: Found replicaset test-rs in namespace replicaset-5234 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 
+0000 UTC } +Oct 13 09:23:59.860: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:23:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-5234" for this suite. 10/13/23 09:23:59.865 +------------------------------ +• [SLOW TEST] [5.069 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:54.801 + Oct 13 09:23:54.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replicaset 10/13/23 09:23:54.802 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:54.814 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:54.816 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + STEP: Create a Replicaset 10/13/23 09:23:54.821 + STEP: Verify that the required pods have come up. 10/13/23 09:23:54.825 + Oct 13 09:23:54.828: INFO: Pod name sample-pod: Found 0 pods out of 1 + Oct 13 09:23:59.834: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 10/13/23 09:23:59.834 + STEP: Getting /status 10/13/23 09:23:59.834 + Oct 13 09:23:59.838: INFO: Replicaset test-rs has Conditions: [] + STEP: updating the Replicaset Status 10/13/23 09:23:59.838 + Oct 13 09:23:59.848: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the ReplicaSet status to be updated 10/13/23 09:23:59.848 + Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: ADDED + Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.850: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.851: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.851: INFO: Found replicaset test-rs in namespace replicaset-5234 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Oct 13 09:23:59.851: INFO: Replicaset test-rs has an updated status + STEP: patching the Replicaset Status 10/13/23 09:23:59.851 + Oct 13 09:23:59.851: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Oct 13 09:23:59.857: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Replicaset status to be patched 10/13/23 09:23:59.857 + Oct 13 09:23:59.859: INFO: Observed &ReplicaSet event: ADDED + Oct 13 09:23:59.859: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.859: INFO: Observed 
&ReplicaSet event: MODIFIED + Oct 13 09:23:59.860: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.860: INFO: Observed replicaset test-rs in namespace replicaset-5234 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Oct 13 09:23:59.860: INFO: Observed &ReplicaSet event: MODIFIED + Oct 13 09:23:59.860: INFO: Found replicaset test-rs in namespace replicaset-5234 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } + Oct 13 09:23:59.860: INFO: Replicaset test-rs has a patched status + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:23:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-5234" for this suite. 10/13/23 09:23:59.865 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:23:59.874 +Oct 13 09:23:59.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:23:59.875 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:59.891 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:59.894 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 +Oct 13 09:23:59.905: INFO: Waiting up to 5m0s for pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1" in namespace "pods-1223" to be "running and ready" +Oct 13 09:23:59.911: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395837ms +Oct 13 09:23:59.911: INFO: The phase of Pod server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:24:01.915: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1": Phase="Running", Reason="", readiness=true. Elapsed: 2.010075704s +Oct 13 09:24:01.915: INFO: The phase of Pod server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1 is Running (Ready = true) +Oct 13 09:24:01.915: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1" satisfied condition "running and ready" +Oct 13 09:24:01.931: INFO: Waiting up to 5m0s for pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0" in namespace "pods-1223" to be "Succeeded or Failed" +Oct 13 09:24:01.935: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352877ms +Oct 13 09:24:03.939: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.00708351s +Oct 13 09:24:05.938: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007042637s +STEP: Saw pod success 10/13/23 09:24:05.938 +Oct 13 09:24:05.939: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0" satisfied condition "Succeeded or Failed" +Oct 13 09:24:05.941: INFO: Trying to get logs from node node2 pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 container env3cont: +STEP: delete the pod 10/13/23 09:24:05.946 +Oct 13 09:24:05.955: INFO: Waiting for pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 to disappear +Oct 13 09:24:05.958: INFO: Pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 no longer exists +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:05.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-1223" for this suite. 10/13/23 09:24:05.961 +------------------------------ +• [SLOW TEST] [6.091 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:23:59.874 + Oct 13 09:23:59.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:23:59.875 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:23:59.891 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:23:59.894 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:444 + Oct 13 09:23:59.905: INFO: Waiting up to 5m0s for pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1" in namespace "pods-1223" to be "running and ready" + Oct 13 09:23:59.911: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395837ms + Oct 13 09:23:59.911: INFO: The phase of Pod server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:24:01.915: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1": Phase="Running", Reason="", readiness=true. Elapsed: 2.010075704s + Oct 13 09:24:01.915: INFO: The phase of Pod server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1 is Running (Ready = true) + Oct 13 09:24:01.915: INFO: Pod "server-envvars-f44b76c0-fc7b-461f-892a-d010a972e8a1" satisfied condition "running and ready" + Oct 13 09:24:01.931: INFO: Waiting up to 5m0s for pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0" in namespace "pods-1223" to be "Succeeded or Failed" + Oct 13 09:24:01.935: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352877ms + Oct 13 09:24:03.939: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.00708351s + Oct 13 09:24:05.938: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007042637s + STEP: Saw pod success 10/13/23 09:24:05.938 + Oct 13 09:24:05.939: INFO: Pod "client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0" satisfied condition "Succeeded or Failed" + Oct 13 09:24:05.941: INFO: Trying to get logs from node node2 pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 container env3cont: + STEP: delete the pod 10/13/23 09:24:05.946 + Oct 13 09:24:05.955: INFO: Waiting for pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 to disappear + Oct 13 09:24:05.958: INFO: Pod client-envvars-6460ee93-14e2-4acf-b597-74ec914e85a0 no longer exists + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:05.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-1223" for this suite. 10/13/23 09:24:05.961 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:05.966 +Oct 13 09:24:05.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubelet-test 10/13/23 09:24:05.967 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:05.979 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:05.981 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +STEP: Waiting for pod completion 10/13/23 09:24:05.99 +Oct 13 09:24:05.990: INFO: Waiting up to 3m0s for pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c" in namespace "kubelet-test-1191" to be "completed" +Oct 13 09:24:05.992: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389634ms +Oct 13 09:24:07.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005850113s +Oct 13 09:24:09.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00606285s +Oct 13 09:24:09.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c" satisfied condition "completed" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-1191" for this suite. 
10/13/23 09:24:10.008 +------------------------------ +• [4.052 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling an agnhost Pod with hostAliases + test/e2e/common/node/kubelet.go:140 + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:05.966 + Oct 13 09:24:05.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubelet-test 10/13/23 09:24:05.967 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:05.979 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:05.981 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + STEP: Waiting for pod completion 10/13/23 09:24:05.99 + Oct 13 09:24:05.990: INFO: Waiting up to 3m0s for pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c" in namespace "kubelet-test-1191" to be "completed" + Oct 13 09:24:05.992: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389634ms + Oct 13 09:24:07.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005850113s + Oct 13 09:24:09.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00606285s + Oct 13 09:24:09.996: INFO: Pod "agnhost-host-aliasesa0804e54-cc05-4213-a1d4-81ee1f24d11c" satisfied condition "completed" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-1191" for this suite. 
10/13/23 09:24:10.008 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:10.02 +Oct 13 09:24:10.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:24:10.022 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:10.037 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:10.039 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +STEP: Creating configMap with name configmap-test-volume-map-d380627e-1caa-4b53-8c2b-1725c9260e63 10/13/23 09:24:10.041 +STEP: Creating a pod to test consume configMaps 10/13/23 09:24:10.045 +Oct 13 09:24:10.053: INFO: Waiting up to 5m0s for pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e" in namespace "configmap-8960" to be "Succeeded or Failed" +Oct 13 09:24:10.057: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428769ms +Oct 13 09:24:12.060: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006630778s +Oct 13 09:24:14.061: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007886648s +STEP: Saw pod success 10/13/23 09:24:14.061 +Oct 13 09:24:14.061: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e" satisfied condition "Succeeded or Failed" +Oct 13 09:24:14.065: INFO: Trying to get logs from node node2 pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e container agnhost-container: +STEP: delete the pod 10/13/23 09:24:14.071 +Oct 13 09:24:14.085: INFO: Waiting for pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e to disappear +Oct 13 09:24:14.088: INFO: Pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:14.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-8960" for this suite. 
10/13/23 09:24:14.091 +------------------------------ +• [4.076 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:10.02 + Oct 13 09:24:10.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:24:10.022 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:10.037 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:10.039 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 + STEP: Creating configMap with name configmap-test-volume-map-d380627e-1caa-4b53-8c2b-1725c9260e63 10/13/23 09:24:10.041 + STEP: Creating a pod to test consume configMaps 10/13/23 09:24:10.045 + Oct 13 09:24:10.053: INFO: Waiting up to 5m0s for pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e" in namespace "configmap-8960" to be "Succeeded or Failed" + Oct 13 09:24:10.057: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428769ms + Oct 13 09:24:12.060: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006630778s + Oct 13 09:24:14.061: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007886648s + STEP: Saw pod success 10/13/23 09:24:14.061 + Oct 13 09:24:14.061: INFO: Pod "pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e" satisfied condition "Succeeded or Failed" + Oct 13 09:24:14.065: INFO: Trying to get logs from node node2 pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e container agnhost-container: + STEP: delete the pod 10/13/23 09:24:14.071 + Oct 13 09:24:14.085: INFO: Waiting for pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e to disappear + Oct 13 09:24:14.088: INFO: Pod pod-configmaps-cde724ff-2e84-4f57-b5e7-34710abb254e no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:14.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-8960" for this suite. 
10/13/23 09:24:14.091 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:14.097 +Oct 13 09:24:14.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-runtime 10/13/23 09:24:14.098 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:14.113 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:14.115 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 +STEP: create the container 10/13/23 09:24:14.117 +STEP: wait for the container to reach Succeeded 10/13/23 09:24:14.125 +STEP: get the container status 10/13/23 09:24:18.146 +STEP: the container should be terminated 10/13/23 09:24:18.149 +STEP: the termination message should be set 10/13/23 09:24:18.149 +Oct 13 09:24:18.149: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container 10/13/23 09:24:18.149 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:18.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-2668" for this suite. 
10/13/23 09:24:18.167 +------------------------------ +• [4.074 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:14.097 + Oct 13 09:24:14.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-runtime 10/13/23 09:24:14.098 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:14.113 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:14.115 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:248 + STEP: create the container 10/13/23 09:24:14.117 + STEP: wait for the container to reach Succeeded 10/13/23 09:24:14.125 + STEP: get the container status 10/13/23 09:24:18.146 + STEP: the container should be terminated 10/13/23 09:24:18.149 + STEP: the termination message should be set 10/13/23 09:24:18.149 + Oct 13 09:24:18.149: INFO: Expected: &{OK} to match Container's Termination Message: OK -- + STEP: delete the container 10/13/23 09:24:18.149 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:18.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-2668" for this suite. 
10/13/23 09:24:18.167 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:18.172 +Oct 13 09:24:18.173: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename containers 10/13/23 09:24:18.173 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:18.188 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:18.19 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +STEP: Creating a pod to test override arguments 10/13/23 09:24:18.192 +Oct 13 09:24:18.199: INFO: Waiting up to 5m0s for pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a" in namespace "containers-8791" to be "Succeeded or Failed" +Oct 13 09:24:18.202: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00376ms +Oct 13 09:24:20.206: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007208296s +Oct 13 09:24:22.207: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007769196s +STEP: Saw pod success 10/13/23 09:24:22.207 +Oct 13 09:24:22.207: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a" satisfied condition "Succeeded or Failed" +Oct 13 09:24:22.212: INFO: Trying to get logs from node node2 pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a container agnhost-container: +STEP: delete the pod 10/13/23 09:24:22.219 +Oct 13 09:24:22.232: INFO: Waiting for pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a to disappear +Oct 13 09:24:22.236: INFO: Pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:22.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-8791" for this suite. 
10/13/23 09:24:22.239 +------------------------------ +• [4.072 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:18.172 + Oct 13 09:24:18.173: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename containers 10/13/23 09:24:18.173 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:18.188 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:18.19 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 + STEP: Creating a pod to test override arguments 10/13/23 09:24:18.192 + Oct 13 09:24:18.199: INFO: Waiting up to 5m0s for pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a" in namespace "containers-8791" to be "Succeeded or Failed" + Oct 13 09:24:18.202: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00376ms + Oct 13 09:24:20.206: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007208296s + Oct 13 09:24:22.207: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007769196s + STEP: Saw pod success 10/13/23 09:24:22.207 + Oct 13 09:24:22.207: INFO: Pod "client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a" satisfied condition "Succeeded or Failed" + Oct 13 09:24:22.212: INFO: Trying to get logs from node node2 pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a container agnhost-container: + STEP: delete the pod 10/13/23 09:24:22.219 + Oct 13 09:24:22.232: INFO: Waiting for pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a to disappear + Oct 13 09:24:22.236: INFO: Pod client-containers-a049d81e-68a7-4ae4-ae84-c27b942e3b8a no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:22.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-8791" for this suite. 
10/13/23 09:24:22.239 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:22.245 +Oct 13 09:24:22.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 09:24:22.246 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:22.262 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:22.265 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +Oct 13 09:24:22.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:23.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-3359" for this suite. 
10/13/23 09:24:23.295 +------------------------------ +• [1.056 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:22.245 + Oct 13 09:24:22.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 09:24:22.246 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:22.262 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:22.265 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + Oct 13 09:24:22.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:23.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-3359" for this suite. 
10/13/23 09:24:23.295 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:23.301 +Oct 13 09:24:23.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:24:23.303 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:23.317 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:23.32 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 +Oct 13 09:24:23.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 09:24:25.174 +Oct 13 09:24:25.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 create -f -' +Oct 13 09:24:25.884: INFO: stderr: "" +Oct 13 09:24:25.884: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 13 09:24:25.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 delete e2e-test-crd-publish-openapi-5584-crds test-cr' +Oct 13 09:24:25.989: INFO: stderr: "" +Oct 13 09:24:25.989: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Oct 13 09:24:25.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 apply -f -' +Oct 13 09:24:26.160: INFO: stderr: "" +Oct 13 09:24:26.160: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Oct 13 09:24:26.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 delete e2e-test-crd-publish-openapi-5584-crds test-cr' +Oct 13 09:24:26.259: INFO: stderr: "" +Oct 13 09:24:26.259: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 10/13/23 09:24:26.259 +Oct 13 09:24:26.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 explain e2e-test-crd-publish-openapi-5584-crds' +Oct 13 09:24:26.422: INFO: stderr: "" +Oct 13 09:24:26.422: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5584-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<>\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:28.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-1026" for this suite. 10/13/23 09:24:28.253 +------------------------------ +• [4.959 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:23.301 + Oct 13 09:24:23.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:24:23.303 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:23.317 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:23.32 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:236 + Oct 13 09:24:23.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 10/13/23 09:24:25.174 + Oct 13 09:24:25.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 create -f -' + Oct 13 09:24:25.884: INFO: stderr: "" + Oct 13 09:24:25.884: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Oct 13 09:24:25.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 delete e2e-test-crd-publish-openapi-5584-crds test-cr' + Oct 13 09:24:25.989: INFO: stderr: "" + Oct 13 09:24:25.989: INFO: stdout:
"e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + Oct 13 09:24:25.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 apply -f -' + Oct 13 09:24:26.160: INFO: stderr: "" + Oct 13 09:24:26.160: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Oct 13 09:24:26.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 --namespace=crd-publish-openapi-1026 delete e2e-test-crd-publish-openapi-5584-crds test-cr' + Oct 13 09:24:26.259: INFO: stderr: "" + Oct 13 09:24:26.259: INFO: stdout: "e2e-test-crd-publish-openapi-5584-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 10/13/23 09:24:26.259 + Oct 13 09:24:26.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=crd-publish-openapi-1026 explain e2e-test-crd-publish-openapi-5584-crds' + Oct 13 09:24:26.422: INFO: stderr: "" + Oct 13 09:24:26.422: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5584-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:28.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-1026" for this suite. 
10/13/23 09:24:28.253 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:28.261 +Oct 13 09:24:28.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:24:28.262 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:28.273 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:28.276 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +STEP: Creating a pod to test downward api env vars 10/13/23 09:24:28.278 +Oct 13 09:24:28.285: INFO: Waiting up to 5m0s for pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120" in namespace "downward-api-3277" to be "Succeeded or Failed" +Oct 13 09:24:28.288: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402535ms +Oct 13 09:24:30.293: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007858026s +Oct 13 09:24:32.292: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007186825s +STEP: Saw pod success 10/13/23 09:24:32.292 +Oct 13 09:24:32.292: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120" satisfied condition "Succeeded or Failed" +Oct 13 09:24:32.295: INFO: Trying to get logs from node node2 pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 container dapi-container: +STEP: delete the pod 10/13/23 09:24:32.302 +Oct 13 09:24:32.314: INFO: Waiting for pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 to disappear +Oct 13 09:24:32.317: INFO: Pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-3277" for this suite. 
10/13/23 09:24:32.321 +------------------------------ +• [4.064 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:28.261 + Oct 13 09:24:28.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:24:28.262 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:28.273 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:28.276 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 + STEP: Creating a pod to test downward api env vars 10/13/23 09:24:28.278 + Oct 13 09:24:28.285: INFO: Waiting up to 5m0s for pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120" in namespace "downward-api-3277" to be "Succeeded or Failed" + Oct 13 09:24:28.288: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402535ms + Oct 13 09:24:30.293: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007858026s + Oct 13 09:24:32.292: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007186825s + STEP: Saw pod success 10/13/23 09:24:32.292 + Oct 13 09:24:32.292: INFO: Pod "downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120" satisfied condition "Succeeded or Failed" + Oct 13 09:24:32.295: INFO: Trying to get logs from node node2 pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 container dapi-container: + STEP: delete the pod 10/13/23 09:24:32.302 + Oct 13 09:24:32.314: INFO: Waiting for pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 to disappear + Oct 13 09:24:32.317: INFO: Pod downward-api-5ce24ad4-ab64-4f06-8898-f6351e378120 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-3277" for this suite. 
10/13/23 09:24:32.321 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:32.326 +Oct 13 09:24:32.327: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context-test 10/13/23 09:24:32.327 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:32.339 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:32.341 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 +Oct 13 09:24:32.350: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849" in namespace "security-context-test-8081" to be "Succeeded or Failed" +Oct 13 09:24:32.353: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92392ms +Oct 13 09:24:34.358: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007093927s +Oct 13 09:24:36.359: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008430018s +Oct 13 09:24:36.359: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849" satisfied condition "Succeeded or Failed" +Oct 13 09:24:36.366: INFO: Got logs for pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:36.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-8081" for this suite. 
10/13/23 09:24:36.37 +------------------------------ +• [4.056 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with privileged + test/e2e/common/node/security_context.go:491 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:32.326 + Oct 13 09:24:32.327: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context-test 10/13/23 09:24:32.327 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:32.339 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:32.341 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 + Oct 13 09:24:32.350: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849" in namespace "security-context-test-8081" to be "Succeeded or Failed" + Oct 13 09:24:32.353: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92392ms + Oct 13 09:24:34.358: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007093927s + Oct 13 09:24:36.359: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008430018s + Oct 13 09:24:36.359: INFO: Pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849" satisfied condition "Succeeded or Failed" + Oct 13 09:24:36.366: INFO: Got logs for pod "busybox-privileged-false-f9326879-560a-4fce-aaff-190a95fb9849": "ip: RTNETLINK answers: Operation not permitted\n" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:36.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-8081" for this suite. 
10/13/23 09:24:36.37 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:36.385 +Oct 13 09:24:36.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 09:24:36.386 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:36.398 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:36.401 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +STEP: creating a Deployment 10/13/23 09:24:36.406 +STEP: waiting for Deployment to be created 10/13/23 09:24:36.411 +STEP: waiting for all Replicas to be Ready 10/13/23 09:24:36.414 +Oct 13 09:24:36.416: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.416: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.422: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.422: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.436: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.436: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.462: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:36.462: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Oct 13 09:24:37.753: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 13 09:24:37.753: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Oct 13 09:24:37.831: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment 10/13/23 09:24:37.831 +W1013 09:24:37.838382 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Oct 13 09:24:37.840: INFO: observed event type ADDED +STEP: waiting for Replicas to scale 10/13/23 09:24:37.84 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed 
Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.849: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.849: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.870: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.871: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:37.877: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:37.877: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:37.889: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:37.889: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:38.759: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:38.759: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:38.778: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +STEP: listing Deployments 10/13/23 09:24:38.778 +Oct 13 09:24:38.781: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment 10/13/23 09:24:38.781 +Oct 13 09:24:38.803: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus 10/13/23 09:24:38.803 +Oct 13 09:24:38.810: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:38.814: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:38.830: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:38.856: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels 
map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:38.866: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:39.774: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:39.810: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:39.825: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Oct 13 09:24:40.871: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus 10/13/23 09:24:40.917 +STEP: fetching the DeploymentStatus 10/13/23 09:24:40.923 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 +Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 3 +STEP: deleting the Deployment 10/13/23 09:24:40.928 +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +Oct 13 09:24:40.937: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 09:24:40.941: INFO: Log out all the ReplicaSets if there is no deployment created +Oct 13 09:24:40.945: INFO: ReplicaSet "test-deployment-7b7876f9d6": +&ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-5307 5aad7da8-4b6e-4019-87d1-d546c66314f4 32966 2 2023-10-13 09:24:39 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9ce7 0xc0036c9ce8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9d80 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Oct 13 09:24:40.949: INFO: pod: "test-deployment-7b7876f9d6-69gzq": +&Pod{ObjectMeta:{test-deployment-7b7876f9d6-69gzq test-deployment-7b7876f9d6- deployment-5307 be8766ae-138e-4804-be6c-72040024dc53 32965 0 2023-10-13 09:24:40 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5aad7da8-4b6e-4019-87d1-d546c66314f4 0xc005258f57 0xc005258f58}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5aad7da8-4b6e-4019-87d1-d546c66314f4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x7hqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Ke
y:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.63,StartTime:2023-10-13 09:24:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://10c90e6ae660b82c98ae069800d49f36ccc4ec639cf0e94aaf0171a8545f5e08,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 13 09:24:40.949: INFO: pod: "test-deployment-7b7876f9d6-jtg29": +&Pod{ObjectMeta:{test-deployment-7b7876f9d6-jtg29 test-deployment-7b7876f9d6- deployment-5307 7794d4ac-d4dc-4477-ad1f-d7eb09e198c8 32931 0 2023-10-13 09:24:39 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5aad7da8-4b6e-4019-87d1-d546c66314f4 0xc005259167 0xc005259168}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5aad7da8-4b6e-4019-87d1-d546c66314f4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-stzgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stzgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key
:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.2,StartTime:2023-10-13 09:24:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://b4c48f7b8886bd2cb2c8c21a30a19c68f7377cd1aaeb16240014475e2d055849,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 13 09:24:40.950: INFO: ReplicaSet "test-deployment-7df74c55ff": +&ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-5307 f21ca95d-b314-4430-8217-02e8b08d04b9 32974 4 2023-10-13 09:24:38 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9de7 0xc0036c9de8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9e70 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Oct 13 09:24:40.954: INFO: pod: "test-deployment-7df74c55ff-5kz59": +&Pod{ObjectMeta:{test-deployment-7df74c55ff-5kz59 test-deployment-7df74c55ff- deployment-5307 d66cc47a-ed38-405f-9487-13dfe37608a3 32969 0 2023-10-13 09:24:38 +0000 UTC 2023-10-13 09:24:42 +0000 UTC 0xc0044ce5c8 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7df74c55ff f21ca95d-b314-4430-8217-02e8b08d04b9 0xc0044ce5f7 0xc0044ce5f8}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f21ca95d-b314-4430-8217-02e8b08d04b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.254\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4rhn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4rhn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.254,StartTime:2023-10-13 09:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,ContainerID:containerd://9377b434e0074c17f6c51f4e873c9a6c925abccb6d05d9196c4ef7101a682100,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 13 09:24:40.955: INFO: pod: "test-deployment-7df74c55ff-p96xj": +&Pod{ObjectMeta:{test-deployment-7df74c55ff-p96xj test-deployment-7df74c55ff- deployment-5307 b748d4ad-ee35-430b-898f-608492289802 32937 0 2023-10-13 09:24:39 +0000 UTC 2023-10-13 09:24:41 +0000 UTC 0xc0044ce7c0 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7df74c55ff f21ca95d-b314-4430-8217-02e8b08d04b9 0xc0044ce7f7 0xc0044ce7f8}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f21ca95d-b314-4430-8217-02e8b08d04b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.131\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ccvwt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccvwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.131,StartTime:2023-10-13 09:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,ContainerID:containerd://2327c98101907c8cfc288ba6f0e1a5c949e87aae63138a304154cf09a9626476,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Oct 13 09:24:40.955: INFO: ReplicaSet "test-deployment-f4dbc4647": +&ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-5307 3b4ee80f-76f9-4990-8287-954860ba1216 32899 3 2023-10-13 09:24:36 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9ed7 0xc0036c9ed8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9f60 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-5307" for this suite. 10/13/23 09:24:40.964 +------------------------------ +• [4.586 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:36.385 + Oct 13 09:24:36.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 09:24:36.386 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:36.398 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:36.401 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 + STEP: creating a Deployment 10/13/23 09:24:36.406 + STEP: waiting for Deployment to be created 10/13/23 09:24:36.411 + STEP: waiting for all Replicas to be Ready 10/13/23 09:24:36.414 + Oct 13 09:24:36.416: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.416: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.422: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.422: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.436: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.436: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.462: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:36.462: INFO: observed Deployment 
test-deployment in namespace deployment-5307 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Oct 13 09:24:37.753: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Oct 13 09:24:37.753: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Oct 13 09:24:37.831: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment-static:true] + STEP: patching the Deployment 10/13/23 09:24:37.831 + W1013 09:24:37.838382 23 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Oct 13 09:24:37.840: INFO: observed event type ADDED + STEP: waiting for Replicas to scale 10/13/23 09:24:37.84 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 0 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.841: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.849: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.849: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.870: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.871: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:37.877: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:37.877: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:37.889: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:37.889: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:38.759: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:38.759: INFO: observed 
Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:38.778: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + STEP: listing Deployments 10/13/23 09:24:38.778 + Oct 13 09:24:38.781: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] + STEP: updating the Deployment 10/13/23 09:24:38.781 + Oct 13 09:24:38.803: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + STEP: fetching the DeploymentStatus 10/13/23 09:24:38.803 + Oct 13 09:24:38.810: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:38.814: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:38.830: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:38.856: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:38.866: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:39.774: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:39.810: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:39.825: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Oct 13 09:24:40.871: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] + STEP: patching the DeploymentStatus 10/13/23 09:24:40.917 + STEP: fetching the DeploymentStatus 10/13/23 09:24:40.923 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 1 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 2 + Oct 13 09:24:40.928: INFO: observed Deployment test-deployment in namespace deployment-5307 with ReadyReplicas 3 + STEP: deleting the Deployment 10/13/23 09:24:40.928 + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type 
MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + Oct 13 09:24:40.937: INFO: observed event type MODIFIED + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 09:24:40.941: INFO: Log out all the ReplicaSets if there is no deployment created + Oct 13 09:24:40.945: INFO: ReplicaSet "test-deployment-7b7876f9d6": + &ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-5307 5aad7da8-4b6e-4019-87d1-d546c66314f4 32966 2 2023-10-13 09:24:39 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9ce7 0xc0036c9ce8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9d80 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + + Oct 13 09:24:40.949: INFO: pod: 
"test-deployment-7b7876f9d6-69gzq": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-69gzq test-deployment-7b7876f9d6- deployment-5307 be8766ae-138e-4804-be6c-72040024dc53 32965 0 2023-10-13 09:24:40 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5aad7da8-4b6e-4019-87d1-d546c66314f4 0xc005258f57 0xc005258f58}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5aad7da8-4b6e-4019-87d1-d546c66314f4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.0.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x7hqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Ru
nAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.110,PodIP:10.244.0.63,StartTime:2023-10-13 09:24:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://10c90e6ae660b82c98ae069800d49f36ccc4ec639cf0e94aaf0171a8545f5e08,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Oct 13 09:24:40.949: INFO: pod: "test-deployment-7b7876f9d6-jtg29": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-jtg29 test-deployment-7b7876f9d6- deployment-5307 7794d4ac-d4dc-4477-ad1f-d7eb09e198c8 32931 0 2023-10-13 09:24:39 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5aad7da8-4b6e-4019-87d1-d546c66314f4 0xc005259167 
0xc005259168}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5aad7da8-4b6e-4019-87d1-d546c66314f4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-stzgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stzgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSecond
s:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.2,StartTime:2023-10-13 09:24:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:sha256:987dae5a853a43c663ab2f902708e65874bd2c0189aa0bc57d81ffb57187d089,ContainerID:containerd://b4c48f7b8886bd2cb2c8c21a30a19c68f7377cd1aaeb16240014475e2d055849,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Oct 13 09:24:40.950: INFO: ReplicaSet "test-deployment-7df74c55ff": + &ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-5307 f21ca95d-b314-4430-8217-02e8b08d04b9 32974 4 2023-10-13 09:24:38 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9de7 0xc0036c9de8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:41 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9e70 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + Oct 13 09:24:40.954: INFO: pod: "test-deployment-7df74c55ff-5kz59": + &Pod{ObjectMeta:{test-deployment-7df74c55ff-5kz59 test-deployment-7df74c55ff- deployment-5307 d66cc47a-ed38-405f-9487-13dfe37608a3 32969 0 2023-10-13 09:24:38 +0000 UTC 2023-10-13 09:24:42 +0000 UTC 0xc0044ce5c8 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7df74c55ff f21ca95d-b314-4430-8217-02e8b08d04b9 0xc0044ce5f7 0xc0044ce5f8}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f21ca95d-b314-4430-8217-02e8b08d04b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.254\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4rhn2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4rhn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io
/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.254,StartTime:2023-10-13 09:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,ContainerID:containerd://9377b434e0074c17f6c51f4e873c9a6c925abccb6d05d9196c4ef7101a682100,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.254,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Oct 13 09:24:40.955: INFO: pod: "test-deployment-7df74c55ff-p96xj": + &Pod{ObjectMeta:{test-deployment-7df74c55ff-p96xj test-deployment-7df74c55ff- deployment-5307 b748d4ad-ee35-430b-898f-608492289802 32937 0 2023-10-13 09:24:39 +0000 UTC 2023-10-13 09:24:41 +0000 UTC 0xc0044ce7c0 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7df74c55ff f21ca95d-b314-4430-8217-02e8b08d04b9 0xc0044ce7f7 0xc0044ce7f8}] [] [{kube-controller-manager Update v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f21ca95d-b314-4430-8217-02e8b08d04b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:24:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.131\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ccvwt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccvwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io
/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.112,PodIP:10.244.2.131,StartTime:2023-10-13 09:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:24:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,ContainerID:containerd://2327c98101907c8cfc288ba6f0e1a5c949e87aae63138a304154cf09a9626476,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Oct 13 09:24:40.955: INFO: ReplicaSet "test-deployment-f4dbc4647": + &ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-5307 3b4ee80f-76f9-4990-8287-954860ba1216 32899 3 2023-10-13 09:24:36 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment d4497cef-8c50-439a-9572-879277cea549 0xc0036c9ed7 0xc0036c9ed8}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4497cef-8c50-439a-9572-879277cea549\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:24:39 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036c9f60 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-5307" for this suite. 10/13/23 09:24:40.964 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:40.971 +Oct 13 09:24:40.971: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 10/13/23 09:24:40.972 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:40.985 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:40.987 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:31 +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +STEP: Setting up the test 10/13/23 09:24:40.99 +STEP: Creating hostNetwork=false pod 10/13/23 09:24:40.99 +Oct 13 09:24:40.998: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-6772" to be "running and ready" +Oct 13 09:24:41.001: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164575ms +Oct 13 09:24:41.001: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:24:43.005: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006401746s +Oct 13 09:24:43.005: INFO: The phase of Pod test-pod is Running (Ready = true) +Oct 13 09:24:43.005: INFO: Pod "test-pod" satisfied condition "running and ready" +STEP: Creating hostNetwork=true pod 10/13/23 09:24:43.007 +Oct 13 09:24:43.014: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-6772" to be "running and ready" +Oct 13 09:24:43.020: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 5.957622ms +Oct 13 09:24:43.020: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:24:45.023: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.009224728s +Oct 13 09:24:45.023: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) +Oct 13 09:24:45.023: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" +STEP: Running the test 10/13/23 09:24:45.026 +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 10/13/23 09:24:45.026 +Oct 13 09:24:45.026: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.026: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.026: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Oct 13 09:24:45.076: INFO: Exec stderr: "" +Oct 13 09:24:45.076: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.077: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.077: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Oct 13 09:24:45.119: INFO: Exec stderr: "" +Oct 13 09:24:45.119: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.119: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.120: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.120: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Oct 13 09:24:45.169: INFO: Exec stderr: "" +Oct 13 09:24:45.169: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.169: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.170: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.170: INFO: ExecWithOptions: execute(POST 
https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Oct 13 09:24:45.212: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 10/13/23 09:24:45.212 +Oct 13 09:24:45.212: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.212: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.213: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.213: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Oct 13 09:24:45.259: INFO: Exec stderr: "" +Oct 13 09:24:45.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.260: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.260: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Oct 13 09:24:45.302: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 10/13/23 09:24:45.302 +Oct 13 09:24:45.302: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.302: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.302: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Oct 13 09:24:45.347: INFO: Exec stderr: "" +Oct 13 09:24:45.347: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.347: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.347: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.347: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Oct 13 09:24:45.389: INFO: Exec stderr: "" +Oct 13 09:24:45.389: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.390: INFO: 
ExecWithOptions: Clientset creation +Oct 13 09:24:45.390: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Oct 13 09:24:45.430: INFO: Exec stderr: "" +Oct 13 09:24:45.430: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:24:45.430: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:45.430: INFO: ExecWithOptions: Clientset creation +Oct 13 09:24:45.430: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Oct 13 09:24:45.473: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:45.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + tear down framework | framework.go:193 +STEP: Destroying namespace "e2e-kubelet-etc-hosts-6772" for this suite. 10/13/23 09:24:45.477 +------------------------------ +• [4.512 seconds] +[sig-node] KubeletManagedEtcHosts +test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] KubeletManagedEtcHosts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:40.971 + Oct 13 09:24:40.971: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 10/13/23 09:24:40.972 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:40.985 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:40.987 + [BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:31 + [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + STEP: Setting up the test 10/13/23 09:24:40.99 + STEP: Creating hostNetwork=false pod 10/13/23 09:24:40.99 + Oct 13 09:24:40.998: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-6772" to be "running and ready" + Oct 13 09:24:41.001: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164575ms + Oct 13 09:24:41.001: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:24:43.005: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006401746s + Oct 13 09:24:43.005: INFO: The phase of Pod test-pod is Running (Ready = true) + Oct 13 09:24:43.005: INFO: Pod "test-pod" satisfied condition "running and ready" + STEP: Creating hostNetwork=true pod 10/13/23 09:24:43.007 + Oct 13 09:24:43.014: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-6772" to be "running and ready" + Oct 13 09:24:43.020: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 5.957622ms + Oct 13 09:24:43.020: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:24:45.023: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.009224728s + Oct 13 09:24:45.023: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) + Oct 13 09:24:45.023: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" + STEP: Running the test 10/13/23 09:24:45.026 + STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 10/13/23 09:24:45.026 + Oct 13 09:24:45.026: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.026: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.026: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.026: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Oct 13 09:24:45.076: INFO: Exec stderr: "" + Oct 13 09:24:45.076: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.077: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.077: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Oct 13 09:24:45.119: INFO: Exec stderr: "" + Oct 13 09:24:45.119: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.119: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.120: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.120: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Oct 13 09:24:45.169: INFO: Exec stderr: "" + Oct 13 09:24:45.169: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.169: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.170: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.170: INFO: ExecWithOptions: execute(POST 
https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Oct 13 09:24:45.212: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 10/13/23 09:24:45.212 + Oct 13 09:24:45.212: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.212: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.213: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.213: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Oct 13 09:24:45.259: INFO: Exec stderr: "" + Oct 13 09:24:45.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.260: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.260: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Oct 13 09:24:45.302: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 10/13/23 09:24:45.302 + Oct 13 09:24:45.302: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.302: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.302: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Oct 13 09:24:45.347: INFO: Exec stderr: "" + Oct 13 09:24:45.347: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.347: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.347: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.347: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Oct 13 09:24:45.389: INFO: Exec stderr: "" + Oct 13 09:24:45.389: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 
09:24:45.390: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.390: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Oct 13 09:24:45.430: INFO: Exec stderr: "" + Oct 13 09:24:45.430: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6772 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:24:45.430: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:45.430: INFO: ExecWithOptions: Clientset creation + Oct 13 09:24:45.430: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6772/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Oct 13 09:24:45.473: INFO: Exec stderr: "" + [AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:45.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + tear down framework | framework.go:193 + STEP: Destroying namespace "e2e-kubelet-etc-hosts-6772" for this suite. 10/13/23 09:24:45.477 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:45.484 +Oct 13 09:24:45.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:24:45.484 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:45.498 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:45.5 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +STEP: Creating configMap with name configmap-test-volume-0f3e439b-b0ba-46c2-a3b3-d7d2ae8a50f9 10/13/23 09:24:45.502 +STEP: Creating a pod to test consume configMaps 10/13/23 09:24:45.506 +Oct 13 09:24:45.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280" in namespace "configmap-1338" to be "Succeeded or Failed" +Oct 13 09:24:45.518: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022803ms +Oct 13 09:24:47.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008594592s +Oct 13 09:24:49.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008708374s +STEP: Saw pod success 10/13/23 09:24:49.523 +Oct 13 09:24:49.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280" satisfied condition "Succeeded or Failed" +Oct 13 09:24:49.526: INFO: Trying to get logs from node node3 pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 container agnhost-container: +STEP: delete the pod 10/13/23 09:24:49.533 +Oct 13 09:24:49.548: INFO: Waiting for pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 to disappear +Oct 13 09:24:49.552: INFO: Pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:49.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-1338" for this suite. 10/13/23 09:24:49.556 +------------------------------ +• [4.078 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:45.484 + Oct 13 09:24:45.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:24:45.484 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:45.498 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:45.5 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 + STEP: Creating configMap with name configmap-test-volume-0f3e439b-b0ba-46c2-a3b3-d7d2ae8a50f9 10/13/23 09:24:45.502 + STEP: Creating a pod to test consume configMaps 10/13/23 09:24:45.506 + Oct 13 09:24:45.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280" in namespace "configmap-1338" to be "Succeeded or Failed" + Oct 13 09:24:45.518: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022803ms + Oct 13 09:24:47.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008594592s + Oct 13 09:24:49.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008708374s + STEP: Saw pod success 10/13/23 09:24:49.523 + Oct 13 09:24:49.523: INFO: Pod "pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280" satisfied condition "Succeeded or Failed" + Oct 13 09:24:49.526: INFO: Trying to get logs from node node3 pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 container agnhost-container: + STEP: delete the pod 10/13/23 09:24:49.533 + Oct 13 09:24:49.548: INFO: Waiting for pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 to disappear + Oct 13 09:24:49.552: INFO: Pod pod-configmaps-214f65c8-d8f5-46be-902b-6c5e348ae280 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:49.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-1338" for this suite. 10/13/23 09:24:49.556 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:49.569 +Oct 13 09:24:49.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:24:49.57 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:49.583 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:49.585 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 10/13/23 09:24:49.587 +Oct 13 09:24:49.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:24:51.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:24:58.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-7999" for this suite. 
10/13/23 09:24:58.612 +------------------------------ +• [SLOW TEST] [9.053 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:49.569 + Oct 13 09:24:49.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename crd-publish-openapi 10/13/23 09:24:49.57 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:49.583 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:49.585 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 + STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 10/13/23 09:24:49.587 + Oct 13 09:24:49.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:24:51.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:24:58.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-7999" for this suite. 
10/13/23 09:24:58.612 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:24:58.622 +Oct 13 09:24:58.623: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:24:58.624 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:58.642 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:58.644 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:24:58.646 +Oct 13 09:24:58.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134" in namespace "projected-1741" to be "Succeeded or Failed" +Oct 13 09:24:58.658: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208244ms +Oct 13 09:25:00.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007236008s +Oct 13 09:25:02.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007344783s +STEP: Saw pod success 10/13/23 09:25:02.662 +Oct 13 09:25:02.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134" satisfied condition "Succeeded or Failed" +Oct 13 09:25:02.666: INFO: Trying to get logs from node node2 pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 container client-container: +STEP: delete the pod 10/13/23 09:25:02.672 +Oct 13 09:25:02.683: INFO: Waiting for pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 to disappear +Oct 13 09:25:02.690: INFO: Pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 09:25:02.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1741" for this suite. 
10/13/23 09:25:02.695 +------------------------------ +• [4.078 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:24:58.622 + Oct 13 09:24:58.623: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:24:58.624 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:24:58.642 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:24:58.644 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:24:58.646 + Oct 13 09:24:58.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134" in namespace "projected-1741" to be "Succeeded or Failed" + Oct 13 09:24:58.658: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208244ms + Oct 13 09:25:00.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007236008s + Oct 13 09:25:02.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007344783s + STEP: Saw pod success 10/13/23 09:25:02.662 + Oct 13 09:25:02.662: INFO: Pod "downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134" satisfied condition "Succeeded or Failed" + Oct 13 09:25:02.666: INFO: Trying to get logs from node node2 pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 container client-container: + STEP: delete the pod 10/13/23 09:25:02.672 + Oct 13 09:25:02.683: INFO: Waiting for pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 to disappear + Oct 13 09:25:02.690: INFO: Pod downwardapi-volume-d94161ef-a3e9-4d58-a877-8ac590de7134 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 09:25:02.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1741" for this suite. 
10/13/23 09:25:02.695 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:25:02.702 +Oct 13 09:25:02.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:25:02.703 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:02.72 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:02.723 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:25:02.725 +Oct 13 09:25:02.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71" in namespace "downward-api-9539" to be "Succeeded or Failed" +Oct 13 09:25:02.736: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099926ms +Oct 13 09:25:04.742: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008898694s +Oct 13 09:25:06.740: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007519793s +STEP: Saw pod success 10/13/23 09:25:06.74 +Oct 13 09:25:06.740: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71" satisfied condition "Succeeded or Failed" +Oct 13 09:25:06.746: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 container client-container: +STEP: delete the pod 10/13/23 09:25:06.751 +Oct 13 09:25:06.763: INFO: Waiting for pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 to disappear +Oct 13 09:25:06.766: INFO: Pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 09:25:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9539" for this suite. 
10/13/23 09:25:06.769 +------------------------------ +• [4.072 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:25:02.702 + Oct 13 09:25:02.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:25:02.703 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:02.72 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:02.723 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:25:02.725 + Oct 13 09:25:02.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71" in namespace "downward-api-9539" to be "Succeeded or Failed" + Oct 13 09:25:02.736: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099926ms + Oct 13 09:25:04.742: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008898694s + Oct 13 09:25:06.740: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007519793s + STEP: Saw pod success 10/13/23 09:25:06.74 + Oct 13 09:25:06.740: INFO: Pod "downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71" satisfied condition "Succeeded or Failed" + Oct 13 09:25:06.746: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 container client-container: + STEP: delete the pod 10/13/23 09:25:06.751 + Oct 13 09:25:06.763: INFO: Waiting for pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 to disappear + Oct 13 09:25:06.766: INFO: Pod downwardapi-volume-7c554463-80a6-49b1-8ef6-a01ba6bbbf71 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 09:25:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9539" for this suite. 
10/13/23 09:25:06.769 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:25:06.775 +Oct 13 09:25:06.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replication-controller 10/13/23 09:25:06.776 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:06.791 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:06.793 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +STEP: Creating ReplicationController "e2e-rc-2rfb4" 10/13/23 09:25:06.795 +Oct 13 09:25:06.801: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas +Oct 13 09:25:07.805: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas +Oct 13 09:25:07.808: INFO: Found 1 replicas for "e2e-rc-2rfb4" replication controller +STEP: Getting scale subresource for ReplicationController "e2e-rc-2rfb4" 10/13/23 09:25:07.808 +STEP: Updating a scale subresource 10/13/23 09:25:07.812 +STEP: Verifying replicas where modified for replication controller "e2e-rc-2rfb4" 10/13/23 09:25:07.817 +Oct 13 09:25:07.817: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas +Oct 13 09:25:08.821: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas +Oct 13 09:25:08.825: INFO: Found 2 replicas for "e2e-rc-2rfb4" replication controller +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:25:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-8378" for this suite. 
10/13/23 09:25:08.829 +------------------------------ +• [2.059 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:25:06.775 + Oct 13 09:25:06.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replication-controller 10/13/23 09:25:06.776 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:06.791 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:06.793 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 + STEP: Creating ReplicationController "e2e-rc-2rfb4" 10/13/23 09:25:06.795 + Oct 13 09:25:06.801: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas + Oct 13 09:25:07.805: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas + Oct 13 09:25:07.808: INFO: Found 1 replicas for "e2e-rc-2rfb4" replication controller + STEP: Getting scale subresource for ReplicationController "e2e-rc-2rfb4" 10/13/23 09:25:07.808 + STEP: Updating a scale subresource 10/13/23 09:25:07.812 + STEP: Verifying replicas where modified for replication controller "e2e-rc-2rfb4" 10/13/23 09:25:07.817 + Oct 13 09:25:07.817: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas + Oct 13 09:25:08.821: INFO: Get Replication Controller "e2e-rc-2rfb4" to confirm replicas + Oct 13 09:25:08.825: INFO: Found 2 replicas for "e2e-rc-2rfb4" replication controller + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:25:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-8378" for this suite. 
10/13/23 09:25:08.829 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:25:08.836 +Oct 13 09:25:08.836: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 09:25:08.837 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:08.851 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:08.854 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-6141 10/13/23 09:25:08.856 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +STEP: Creating a new StatefulSet 10/13/23 09:25:08.86 +Oct 13 09:25:08.869: INFO: Found 0 stateful pods, waiting for 3 +Oct 13 09:25:18.876: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:25:18.876: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:25:18.876: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:25:18.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:25:19.095: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:25:19.095: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:25:19.095: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 10/13/23 09:25:29.113 +Oct 13 09:25:29.136: INFO: Updating stateful set ss2 +STEP: Creating a new revision 10/13/23 09:25:29.136 +STEP: Updating Pods in reverse ordinal order 10/13/23 09:25:39.159 +Oct 13 09:25:39.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:25:39.318: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:25:39.318: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:25:39.318: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision 10/13/23 09:25:49.347 +Oct 13 09:25:49.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:25:49.544: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:25:49.544: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:25:49.544: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 09:25:59.589: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order 10/13/23 09:26:09.607 +Oct 13 09:26:09.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:26:09.779: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:26:09.779: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:26:09.779: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 09:26:19.808: INFO: Deleting all statefulset in ns statefulset-6141 +Oct 13 09:26:19.812: INFO: Scaling statefulset ss2 to 0 +Oct 13 09:26:29.839: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:26:29.844: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:26:29.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-6141" for this suite. 
10/13/23 09:26:29.861 +------------------------------ +• [SLOW TEST] [81.032 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:25:08.836 + Oct 13 09:25:08.836: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 09:25:08.837 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:25:08.851 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:25:08.854 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-6141 10/13/23 09:25:08.856 + [It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 + STEP: Creating a new StatefulSet 10/13/23 09:25:08.86 + Oct 13 09:25:08.869: INFO: Found 0 stateful pods, waiting for 3 + Oct 13 09:25:18.876: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:25:18.876: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:25:18.876: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:25:18.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:25:19.095: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:25:19.095: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:25:19.095: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 10/13/23 09:25:29.113 + Oct 13 09:25:29.136: INFO: Updating stateful set ss2 + STEP: Creating a new revision 10/13/23 09:25:29.136 + STEP: Updating Pods in reverse ordinal order 10/13/23 09:25:39.159 + Oct 13 09:25:39.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:25:39.318: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:25:39.318: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:25:39.318: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + STEP: Rolling back to a previous revision 10/13/23 09:25:49.347 + Oct 13 09:25:49.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:25:49.544: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:25:49.544: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:25:49.544: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 09:25:59.589: INFO: Updating stateful set ss2 + STEP: Rolling back update in reverse ordinal order 10/13/23 09:26:09.607 + Oct 13 09:26:09.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-6141 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:26:09.779: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:26:09.779: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:26:09.779: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 09:26:19.808: INFO: Deleting all statefulset in ns statefulset-6141 + Oct 13 09:26:19.812: INFO: Scaling statefulset ss2 to 0 + Oct 13 09:26:29.839: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:26:29.844: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:26:29.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-6141" for this suite. 
10/13/23 09:26:29.861 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:26:29.868 +Oct 13 09:26:29.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 09:26:29.869 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:26:29.886 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:26:29.888 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +Oct 13 09:26:29.898: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Oct 13 09:26:34.905: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 10/13/23 09:26:34.905 +Oct 13 09:26:34.905: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Oct 13 09:26:36.910: INFO: Creating deployment "test-rollover-deployment" +Oct 13 09:26:36.921: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Oct 13 09:26:38.929: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Oct 13 09:26:38.937: INFO: Ensure that both replica sets have 1 created replica +Oct 13 09:26:38.942: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Oct 13 09:26:38.951: INFO: Updating deployment test-rollover-deployment +Oct 13 09:26:38.951: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Oct 13 09:26:40.959: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Oct 13 09:26:40.967: INFO: Make sure deployment "test-rollover-deployment" is complete +Oct 13 09:26:40.974: INFO: all replica sets need to contain the pod-template-hash label +Oct 13 09:26:40.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:26:42.984: INFO: all replica sets need to contain the pod-template-hash label +Oct 13 09:26:42.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:26:44.986: INFO: all replica sets need to contain the pod-template-hash label +Oct 13 09:26:44.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:26:46.985: INFO: all replica sets need to contain the pod-template-hash label +Oct 13 09:26:46.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:26:48.986: INFO: all replica sets need to contain the pod-template-hash label +Oct 13 09:26:48.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:26:50.985: INFO: +Oct 13 09:26:50.985: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 09:26:50.996: INFO: 
Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-8282 ab45c16b-3560-4bb1-bb07-98f488c9fd79 33978 2 2023-10-13 09:26:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-10-13 09:26:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005365bb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 09:26:37 +0000 UTC,LastTransitionTime:2023-10-13 09:26:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-10-13 09:26:50 +0000 UTC,LastTransitionTime:2023-10-13 09:26:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Oct 13 09:26:51.000: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": 
+&ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-8282 9dfbb5ca-a384-4c5e-ba1f-c3710016c576 33968 2 2023-10-13 09:26:39 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81137 0xc004d81138}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d811e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Oct 13 09:26:51.000: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Oct 13 09:26:51.000: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8282 1f1e4593-5e0b-415c-94e7-2c89f9b4611b 33977 2 2023-10-13 09:26:29 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81007 0xc004d81008}] [] [{e2e.test Update apps/v1 2023-10-13 09:26:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d810c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 09:26:51.000: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-8282 00ca1237-447d-4799-bc8a-9ef184e0c7be 33934 2 2023-10-13 09:26:37 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81257 0xc004d81258}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d81308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 09:26:51.004: INFO: Pod "test-rollover-deployment-6c6df9974f-fmh84" is available: +&Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-fmh84 test-rollover-deployment-6c6df9974f- deployment-8282 b477ebbc-e6f9-4620-8f80-000480a290bf 33945 0 2023-10-13 09:26:39 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 9dfbb5ca-a384-4c5e-ba1f-c3710016c576 0xc005e89967 0xc005e89968}] [] [{kube-controller-manager Update v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9dfbb5ca-a384-4c5e-ba1f-c3710016c576\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:26:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xb7xd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb7xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.12,StartTime:2023-10-13 09:26:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:26:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71,ContainerID:containerd://2886b7d9231e1dd821640ded7d8bacba4fc792ac8deeb314ba865cde373415a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 09:26:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-8282" for this suite. 
10/13/23 09:26:51.008 +------------------------------ +• [SLOW TEST] [21.145 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:26:29.868 + Oct 13 09:26:29.868: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 09:26:29.869 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:26:29.886 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:26:29.888 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + Oct 13 09:26:29.898: INFO: Pod name rollover-pod: Found 0 pods out of 1 + Oct 13 09:26:34.905: INFO: Pod name rollover-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 10/13/23 09:26:34.905 + Oct 13 09:26:34.905: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready + Oct 13 09:26:36.910: INFO: Creating deployment "test-rollover-deployment" + Oct 13 09:26:36.921: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations + Oct 13 09:26:38.929: INFO: Check revision of new replica set for deployment "test-rollover-deployment" + Oct 13 09:26:38.937: INFO: Ensure that both replica sets have 1 created replica + Oct 13 09:26:38.942: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update + Oct 13 09:26:38.951: INFO: Updating deployment test-rollover-deployment + Oct 13 09:26:38.951: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller + Oct 13 09:26:40.959: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 + Oct 13 09:26:40.967: INFO: Make sure deployment "test-rollover-deployment" is complete + Oct 13 09:26:40.974: INFO: all replica sets need to contain the pod-template-hash label + Oct 13 09:26:40.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:26:42.984: INFO: all replica sets need to contain the pod-template-hash label + Oct 13 09:26:42.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:26:44.986: INFO: all replica sets need to contain the pod-template-hash label + Oct 13 09:26:44.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:26:46.985: INFO: all replica sets need to contain the pod-template-hash label + Oct 13 09:26:46.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:26:48.986: INFO: all replica sets need to contain the pod-template-hash label + Oct 13 09:26:48.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 26, 40, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 26, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:26:50.985: INFO: + Oct 13 09:26:50.985: INFO: Ensure that both old replica sets have no replicas + [AfterEach] [sig-apps] Deployment + 
test/e2e/apps/deployment.go:84 + Oct 13 09:26:50.996: INFO: Deployment "test-rollover-deployment": + &Deployment{ObjectMeta:{test-rollover-deployment deployment-8282 ab45c16b-3560-4bb1-bb07-98f488c9fd79 33978 2 2023-10-13 09:26:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-10-13 09:26:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005365bb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-10-13 09:26:37 +0000 UTC,LastTransitionTime:2023-10-13 09:26:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-10-13 09:26:50 +0000 UTC,LastTransitionTime:2023-10-13 09:26:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Oct 13 09:26:51.000: INFO: New ReplicaSet 
"test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": + &ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-8282 9dfbb5ca-a384-4c5e-ba1f-c3710016c576 33968 2 2023-10-13 09:26:39 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81137 0xc004d81138}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d811e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Oct 13 09:26:51.000: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": + Oct 13 09:26:51.000: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8282 1f1e4593-5e0b-415c-94e7-2c89f9b4611b 33977 2 2023-10-13 09:26:29 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81007 0xc004d81008}] [] [{e2e.test Update apps/v1 2023-10-13 09:26:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:50 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d810c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 09:26:51.000: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-8282 00ca1237-447d-4799-bc8a-9ef184e0c7be 33934 2 2023-10-13 09:26:37 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ab45c16b-3560-4bb1-bb07-98f488c9fd79 0xc004d81257 0xc004d81258}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab45c16b-3560-4bb1-bb07-98f488c9fd79\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d81308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 09:26:51.004: INFO: Pod "test-rollover-deployment-6c6df9974f-fmh84" is available: + &Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-fmh84 test-rollover-deployment-6c6df9974f- deployment-8282 b477ebbc-e6f9-4620-8f80-000480a290bf 33945 0 2023-10-13 09:26:39 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 9dfbb5ca-a384-4c5e-ba1f-c3710016c576 0xc005e89967 0xc005e89968}] [] [{kube-controller-manager Update v1 2023-10-13 09:26:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9dfbb5ca-a384-4c5e-ba1f-c3710016c576\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:26:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xb7xd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb7xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Resource
Claims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:26:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:10.244.1.12,StartTime:2023-10-13 09:26:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-13 09:26:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71,ContainerID:containerd://2886b7d9231e1dd821640ded7d8bacba4fc792ac8deeb314ba865cde373415a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 09:26:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-8282" for this suite. 10/13/23 09:26:51.008 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:26:51.014 +Oct 13 09:26:51.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:26:51.015 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:26:51.03 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:26:51.032 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +STEP: Performing setup for networking test in namespace pod-network-test-5125 10/13/23 09:26:51.035 +STEP: creating a selector 10/13/23 09:26:51.035 +STEP: Creating the service pods in kubernetes 10/13/23 09:26:51.035 +Oct 13 09:26:51.035: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Oct 13 09:26:51.062: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-5125" to be "running and ready" +Oct 13 09:26:51.067: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.388113ms +Oct 13 09:26:51.067: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:26:53.072: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.009099919s +Oct 13 09:26:53.072: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:26:55.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010143846s +Oct 13 09:26:55.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:26:57.071: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008796908s +Oct 13 09:26:57.071: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:26:59.075: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.012817421s +Oct 13 09:26:59.075: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:01.075: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.012179799s +Oct 13 09:27:01.075: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:03.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.011118711s +Oct 13 09:27:03.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:05.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.010470705s +Oct 13 09:27:05.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:07.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.010504938s +Oct 13 09:27:07.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:09.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.011183391s +Oct 13 09:27:09.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:11.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.011539s +Oct 13 09:27:11.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Oct 13 09:27:13.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.010122012s +Oct 13 09:27:13.073: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Oct 13 09:27:13.073: INFO: Pod "netserver-0" satisfied condition "running and ready" +Oct 13 09:27:13.078: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-5125" to be "running and ready" +Oct 13 09:27:13.082: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.482269ms +Oct 13 09:27:13.082: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Oct 13 09:27:13.082: INFO: Pod "netserver-1" satisfied condition "running and ready" +Oct 13 09:27:13.086: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-5125" to be "running and ready" +Oct 13 09:27:13.091: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 4.518703ms +Oct 13 09:27:13.091: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Oct 13 09:27:13.091: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 10/13/23 09:27:13.096 +Oct 13 09:27:13.107: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-5125" to be "running" +Oct 13 09:27:13.112: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.828076ms +Oct 13 09:27:15.117: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.010104352s +Oct 13 09:27:15.117: INFO: Pod "test-container-pod" satisfied condition "running" +Oct 13 09:27:15.121: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Oct 13 09:27:15.121: INFO: Breadth first check of 10.244.0.68 on host 10.253.8.110... +Oct 13 09:27:15.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.0.68&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:27:15.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:27:15.126: INFO: ExecWithOptions: Clientset creation +Oct 13 09:27:15.126: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.0.68%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 09:27:15.192: INFO: Waiting for responses: map[] +Oct 13 09:27:15.192: INFO: reached 10.244.0.68 after 0/1 tries +Oct 13 09:27:15.192: INFO: Breadth first check of 10.244.1.13 on host 10.253.8.111... +Oct 13 09:27:15.197: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.1.13&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:27:15.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:27:15.198: INFO: ExecWithOptions: Clientset creation +Oct 13 09:27:15.199: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.1.13%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 09:27:15.286: INFO: Waiting for responses: map[] +Oct 13 09:27:15.286: INFO: reached 10.244.1.13 after 0/1 tries +Oct 13 09:27:15.286: INFO: Breadth first check of 10.244.2.136 on host 10.253.8.112... 
+Oct 13 09:27:15.291: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.2.136&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:27:15.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:27:15.292: INFO: ExecWithOptions: Clientset creation +Oct 13 09:27:15.292: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.2.136%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Oct 13 09:27:15.360: INFO: Waiting for responses: map[] +Oct 13 09:27:15.360: INFO: reached 10.244.2.136 after 0/1 tries +Oct 13 09:27:15.360: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Oct 13 09:27:15.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-5125" for this suite. 10/13/23 09:27:15.365 +------------------------------ +• [SLOW TEST] [24.358 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:26:51.014 + Oct 13 09:26:51.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pod-network-test 10/13/23 09:26:51.015 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:26:51.03 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:26:51.032 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + STEP: Performing setup for networking test in namespace pod-network-test-5125 10/13/23 09:26:51.035 + STEP: creating a selector 10/13/23 09:26:51.035 + STEP: Creating the service pods in kubernetes 10/13/23 09:26:51.035 + Oct 13 09:26:51.035: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Oct 13 09:26:51.062: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-5125" to be "running and ready" + Oct 13 09:26:51.067: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388113ms + Oct 13 09:26:51.067: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:26:53.072: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.009099919s + Oct 13 09:26:53.072: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:26:55.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010143846s + Oct 13 09:26:55.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:26:57.071: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008796908s + Oct 13 09:26:57.071: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:26:59.075: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.012817421s + Oct 13 09:26:59.075: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:01.075: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.012179799s + Oct 13 09:27:01.075: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:03.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.011118711s + Oct 13 09:27:03.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:05.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.010470705s + Oct 13 09:27:05.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:07.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.010504938s + Oct 13 09:27:07.073: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:09.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.011183391s + Oct 13 09:27:09.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:11.074: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.011539s + Oct 13 09:27:11.074: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Oct 13 09:27:13.073: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.010122012s + Oct 13 09:27:13.073: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Oct 13 09:27:13.073: INFO: Pod "netserver-0" satisfied condition "running and ready" + Oct 13 09:27:13.078: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-5125" to be "running and ready" + Oct 13 09:27:13.082: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.482269ms + Oct 13 09:27:13.082: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Oct 13 09:27:13.082: INFO: Pod "netserver-1" satisfied condition "running and ready" + Oct 13 09:27:13.086: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-5125" to be "running and ready" + Oct 13 09:27:13.091: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 4.518703ms + Oct 13 09:27:13.091: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Oct 13 09:27:13.091: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 10/13/23 09:27:13.096 + Oct 13 09:27:13.107: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-5125" to be "running" + Oct 13 09:27:13.112: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828076ms + Oct 13 09:27:15.117: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.010104352s + Oct 13 09:27:15.117: INFO: Pod "test-container-pod" satisfied condition "running" + Oct 13 09:27:15.121: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Oct 13 09:27:15.121: INFO: Breadth first check of 10.244.0.68 on host 10.253.8.110... + Oct 13 09:27:15.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.0.68&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:27:15.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:27:15.126: INFO: ExecWithOptions: Clientset creation + Oct 13 09:27:15.126: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.0.68%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 09:27:15.192: INFO: Waiting for responses: map[] + Oct 13 09:27:15.192: INFO: reached 10.244.0.68 after 0/1 tries + Oct 13 09:27:15.192: INFO: Breadth first check of 10.244.1.13 on host 10.253.8.111... + Oct 13 09:27:15.197: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.1.13&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:27:15.197: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:27:15.198: INFO: ExecWithOptions: Clientset creation + Oct 13 09:27:15.199: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.1.13%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 09:27:15.286: INFO: Waiting for responses: map[] + Oct 13 09:27:15.286: INFO: reached 10.244.1.13 after 0/1 tries + Oct 13 09:27:15.286: INFO: Breadth first check of 10.244.2.136 on host 10.253.8.112... 
+ Oct 13 09:27:15.291: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:9080/dial?request=hostname&protocol=http&host=10.244.2.136&port=8083&tries=1'] Namespace:pod-network-test-5125 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:27:15.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:27:15.292: INFO: ExecWithOptions: Clientset creation + Oct 13 09:27:15.292: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5125/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.244.1.14%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.244.2.136%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Oct 13 09:27:15.360: INFO: Waiting for responses: map[] + Oct 13 09:27:15.360: INFO: reached 10.244.2.136 after 0/1 tries + Oct 13 09:27:15.360: INFO: Going to retry 0 out of 3 pods.... + [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Oct 13 09:27:15.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-5125" for this suite. 10/13/23 09:27:15.365 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:27:15.374 +Oct 13 09:27:15.374: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:27:15.375 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:15.392 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:15.394 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:27:15.396 +Oct 13 09:27:15.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2" in namespace "projected-2928" to be "Succeeded or Failed" +Oct 13 09:27:15.410: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206574ms +Oct 13 09:27:17.416: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126983s +Oct 13 09:27:19.418: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010818518s +STEP: Saw pod success 10/13/23 09:27:19.418 +Oct 13 09:27:19.418: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2" satisfied condition "Succeeded or Failed" +Oct 13 09:27:19.423: INFO: Trying to get logs from node node2 pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 container client-container: +STEP: delete the pod 10/13/23 09:27:19.441 +Oct 13 09:27:19.453: INFO: Waiting for pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 to disappear +Oct 13 09:27:19.456: INFO: Pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 09:27:19.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-2928" for this suite. 10/13/23 09:27:19.46 +------------------------------ +• [4.092 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:27:15.374 + Oct 13 09:27:15.374: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:27:15.375 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:15.392 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:15.394 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:27:15.396 + Oct 13 09:27:15.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2" in namespace "projected-2928" to be "Succeeded or Failed" + Oct 13 09:27:15.410: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206574ms + Oct 13 09:27:17.416: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126983s + Oct 13 09:27:19.418: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010818518s + STEP: Saw pod success 10/13/23 09:27:19.418 + Oct 13 09:27:19.418: INFO: Pod "downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2" satisfied condition "Succeeded or Failed" + Oct 13 09:27:19.423: INFO: Trying to get logs from node node2 pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 container client-container: + STEP: delete the pod 10/13/23 09:27:19.441 + Oct 13 09:27:19.453: INFO: Waiting for pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 to disappear + Oct 13 09:27:19.456: INFO: Pod downwardapi-volume-71a7c7c6-6d21-48a6-af20-91cbdac11be2 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 09:27:19.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-2928" for this suite. 10/13/23 09:27:19.46 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:27:19.47 +Oct 13 09:27:19.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubectl 10/13/23 09:27:19.471 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:19.487 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:19.49 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 +Oct 13 09:27:19.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 create -f -' +Oct 13 09:27:20.119: INFO: stderr: "" +Oct 13 09:27:20.119: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Oct 13 09:27:20.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 create -f -' +Oct 13 09:27:20.697: INFO: stderr: "" +Oct 13 09:27:20.697: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 10/13/23 09:27:20.697 +Oct 13 09:27:21.701: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 09:27:21.701: INFO: Found 1 / 1 +Oct 13 09:27:21.701: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Oct 13 09:27:21.704: INFO: Selector matched 1 pods for map[app:agnhost] +Oct 13 09:27:21.704: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Oct 13 09:27:21.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe pod agnhost-primary-v7jb2' +Oct 13 09:27:21.776: INFO: stderr: "" +Oct 13 09:27:21.776: INFO: stdout: "Name: agnhost-primary-v7jb2\nNamespace: kubectl-4242\nPriority: 0\nService Account: default\nNode: node2/10.253.8.111\nStart Time: Fri, 13 Oct 2023 09:27:20 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.16\nIPs:\n IP: 10.244.1.16\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://bdfba4523342bc939bfad7c18ebddcdb172f7e87dfff4b283ec6af2186c93b1f\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 13 Oct 2023 09:27:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbmbz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-nbmbz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-4242/agnhost-primary-v7jb2 to node2\n Normal Pulled 0s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 0s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" +Oct 13 09:27:21.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe rc agnhost-primary' +Oct 13 09:27:21.867: INFO: stderr: "" +Oct 13 09:27:21.868: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4242\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-v7jb2\n" +Oct 13 09:27:21.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe service agnhost-primary' +Oct 13 09:27:21.955: INFO: stderr: "" +Oct 13 09:27:21.955: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4242\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.97.121.150\nIPs: 10.97.121.150\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.16:6379\nSession Affinity: None\nEvents: \n" +Oct 13 09:27:21.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe node 
node1' +Oct 13 09:27:22.050: INFO: stderr: "" +Oct 13 09:27:22.051: INFO: stdout: "Name: node1\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=node1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.253.8.110\n kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 Oct 2023 07:05:22 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: node1\n AcquireTime: \n RenewTime: Fri, 13 Oct 2023 09:27:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 13 Oct 2023 08:11:45 +0000 Fri, 13 Oct 2023 08:11:45 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:51:49 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.253.8.110\n Hostname: node1\nCapacity:\n cpu: 8\n ephemeral-storage: 207948592Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7880272Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nAllocatable:\n cpu: 8\n ephemeral-storage: 191645422070\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7777872Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: af240c3a805a4e3a9ff327df13b83729\n System UUID: 87564d56-8b73-9395-f67b-d4a5a7966da7\n Boot ID: 881f5a6b-fa49-4a0b-a8a7-bd664e2c1fd8\n Kernel Version: 4.18.0-485.el8.x86_64\n OS Image: CentOS Stream 8\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.7.7\n Kubelet Version: v1.26.5\n Kube-Proxy Version: v1.26.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-flannel kube-flannel-ds-jtxbm 100m (1%) 0 (0%) 50Mi (0%) 0 (0%) 75m\n kube-system coredns-787d4945fb-89krv 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 67m\n kube-system etcd-node1 100m (1%) 0 (0%) 100Mi (1%) 0 (0%) 141m\n kube-system haproxy-node1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system keepalived-node1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-apiserver-node1 250m (3%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-controller-manager-node1 200m (2%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-proxy-dqr76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-scheduler-node1 100m (1%) 0 (0%) 0 (0%) 0 (0%) 141m\n sonobuoy sonobuoy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73m\n sonobuoy sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73m\nAllocated 
resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (10%) 0 (0%)\n memory 220Mi (2%) 170Mi (2%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents: \n" +Oct 13 09:27:22.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe namespace kubectl-4242' +Oct 13 09:27:22.137: INFO: stderr: "" +Oct 13 09:27:22.137: INFO: stdout: "Name: kubectl-4242\nLabels: e2e-framework=kubectl\n e2e-run=bac244cc-4119-4800-a1cc-8eb31f68e1cb\n kubernetes.io/metadata.name=kubectl-4242\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Oct 13 09:27:22.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-4242" for this suite. 10/13/23 09:27:22.141 +------------------------------ +• [2.678 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl describe + test/e2e/kubectl/kubectl.go:1270 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:27:19.47 + Oct 13 09:27:19.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubectl 10/13/23 09:27:19.471 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:19.487 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:19.49 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 + Oct 13 09:27:19.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 create -f -' + Oct 13 09:27:20.119: INFO: stderr: "" + Oct 13 09:27:20.119: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + Oct 13 09:27:20.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 create -f -' + Oct 13 09:27:20.697: INFO: stderr: "" + Oct 13 09:27:20.697: INFO: stdout: "service/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 10/13/23 09:27:20.697 + Oct 13 09:27:21.701: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 09:27:21.701: INFO: Found 1 / 1 + Oct 13 09:27:21.701: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Oct 13 09:27:21.704: INFO: Selector matched 1 pods for map[app:agnhost] + Oct 13 09:27:21.704: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+ Oct 13 09:27:21.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe pod agnhost-primary-v7jb2' + Oct 13 09:27:21.776: INFO: stderr: "" + Oct 13 09:27:21.776: INFO: stdout: "Name: agnhost-primary-v7jb2\nNamespace: kubectl-4242\nPriority: 0\nService Account: default\nNode: node2/10.253.8.111\nStart Time: Fri, 13 Oct 2023 09:27:20 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.16\nIPs:\n IP: 10.244.1.16\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://bdfba4523342bc939bfad7c18ebddcdb172f7e87dfff4b283ec6af2186c93b1f\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: sha256:30e3d1b869f4d327bd612179ed37850611b462c8415691b3de9eb55787f81e71\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 13 Oct 2023 09:27:21 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbmbz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-nbmbz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-4242/agnhost-primary-v7jb2 to node2\n Normal Pulled 0s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 0s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" + Oct 13 09:27:21.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe rc agnhost-primary' + Oct 13 09:27:21.867: INFO: stderr: "" + Oct 13 09:27:21.868: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4242\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-v7jb2\n" + Oct 13 09:27:21.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe service agnhost-primary' + Oct 13 09:27:21.955: INFO: stderr: "" + Oct 13 09:27:21.955: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4242\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.97.121.150\nIPs: 10.97.121.150\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.16:6379\nSession Affinity: None\nEvents: \n" + Oct 13 09:27:21.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe 
node node1' + Oct 13 09:27:22.050: INFO: stderr: "" + Oct 13 09:27:22.051: INFO: stdout: "Name: node1\nRoles: control-plane\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=node1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.253.8.110\n kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 Oct 2023 07:05:22 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: node1\n AcquireTime: \n RenewTime: Fri, 13 Oct 2023 09:27:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 13 Oct 2023 08:11:45 +0000 Fri, 13 Oct 2023 08:11:45 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:05:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 13 Oct 2023 09:25:53 +0000 Fri, 13 Oct 2023 07:51:49 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.253.8.110\n Hostname: node1\nCapacity:\n cpu: 8\n ephemeral-storage: 207948592Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7880272Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nAllocatable:\n cpu: 8\n ephemeral-storage: 191645422070\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7777872Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: af240c3a805a4e3a9ff327df13b83729\n System UUID: 87564d56-8b73-9395-f67b-d4a5a7966da7\n Boot ID: 881f5a6b-fa49-4a0b-a8a7-bd664e2c1fd8\n Kernel Version: 4.18.0-485.el8.x86_64\n OS Image: CentOS Stream 8\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.7.7\n Kubelet Version: v1.26.5\n Kube-Proxy Version: v1.26.5\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-flannel kube-flannel-ds-jtxbm 100m (1%) 0 (0%) 50Mi (0%) 0 (0%) 75m\n kube-system coredns-787d4945fb-89krv 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 67m\n kube-system etcd-node1 100m (1%) 0 (0%) 100Mi (1%) 0 (0%) 141m\n kube-system haproxy-node1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system keepalived-node1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-apiserver-node1 250m (3%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-controller-manager-node1 200m (2%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-proxy-dqr76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 141m\n kube-system kube-scheduler-node1 100m (1%) 0 (0%) 0 (0%) 0 (0%) 141m\n sonobuoy sonobuoy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73m\n sonobuoy sonobuoy-systemd-logs-daemon-set-4d2891c1fd524ac6-jr4nt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
73m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (10%) 0 (0%)\n memory 220Mi (2%) 170Mi (2%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents: \n" + Oct 13 09:27:22.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=kubectl-4242 describe namespace kubectl-4242' + Oct 13 09:27:22.137: INFO: stderr: "" + Oct 13 09:27:22.137: INFO: stdout: "Name: kubectl-4242\nLabels: e2e-framework=kubectl\n e2e-run=bac244cc-4119-4800-a1cc-8eb31f68e1cb\n kubernetes.io/metadata.name=kubectl-4242\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Oct 13 09:27:22.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-4242" for this suite. 10/13/23 09:27:22.141 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:27:22.148 +Oct 13 09:27:22.148: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename cronjob 10/13/23 09:27:22.149 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:22.164 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:22.166 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +STEP: Creating a ForbidConcurrent cronjob 10/13/23 09:27:22.169 +STEP: Ensuring a job is scheduled 10/13/23 09:27:22.173 +STEP: Ensuring exactly one is scheduled 10/13/23 09:28:00.179 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 10/13/23 09:28:00.183 +STEP: Ensuring no more jobs are scheduled 10/13/23 09:28:00.188 +STEP: Removing cronjob 10/13/23 09:33:00.197 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:00.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-4984" for this suite. 
10/13/23 09:33:00.208 +------------------------------ +• [SLOW TEST] [338.067 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:27:22.148 + Oct 13 09:27:22.148: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename cronjob 10/13/23 09:27:22.149 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:27:22.164 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:27:22.166 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + STEP: Creating a ForbidConcurrent cronjob 10/13/23 09:27:22.169 + STEP: Ensuring a job is scheduled 10/13/23 09:27:22.173 + STEP: Ensuring exactly one is scheduled 10/13/23 09:28:00.179 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 10/13/23 09:28:00.183 + STEP: Ensuring no more jobs are scheduled 10/13/23 09:28:00.188 + STEP: Removing cronjob 10/13/23 09:33:00.197 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:00.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-4984" for this suite. 
10/13/23 09:33:00.208 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:00.215 +Oct 13 09:33:00.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 09:33:00.216 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.233 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.237 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 +STEP: Creating namespace "e2e-ns-lndrc" 10/13/23 09:33:00.24 +Oct 13 09:33:00.258: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes"} +STEP: Adding e2e finalizer to namespace "e2e-ns-lndrc-3269" 10/13/23 09:33:00.258 +Oct 13 09:33:00.265: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} +STEP: Removing e2e finalizer from namespace "e2e-ns-lndrc-3269" 10/13/23 09:33:00.265 +Oct 13 09:33:00.273: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:00.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-4188" for this suite. 10/13/23 09:33:00.276 +STEP: Destroying namespace "e2e-ns-lndrc-3269" for this suite. 
10/13/23 09:33:00.282 +------------------------------ +• [0.073 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:00.215 + Oct 13 09:33:00.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 09:33:00.216 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.233 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.237 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should apply a finalizer to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:394 + STEP: Creating namespace "e2e-ns-lndrc" 10/13/23 09:33:00.24 + Oct 13 09:33:00.258: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes"} + STEP: Adding e2e finalizer to namespace "e2e-ns-lndrc-3269" 10/13/23 09:33:00.258 + Oct 13 09:33:00.265: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} + STEP: Removing e2e finalizer from namespace "e2e-ns-lndrc-3269" 10/13/23 09:33:00.265 + Oct 13 09:33:00.273: INFO: Namespace "e2e-ns-lndrc-3269" has []v1.FinalizerName{"kubernetes"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:00.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-4188" for this suite. 10/13/23 09:33:00.276 + STEP: Destroying namespace "e2e-ns-lndrc-3269" for this suite. 
10/13/23 09:33:00.282 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:00.288 +Oct 13 09:33:00.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 09:33:00.289 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.304 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.308 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +STEP: fetching the /apis discovery document 10/13/23 09:33:00.311 +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 10/13/23 09:33:00.312 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 10/13/23 09:33:00.312 +STEP: fetching the /apis/apiextensions.k8s.io discovery document 10/13/23 09:33:00.312 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 10/13/23 09:33:00.313 +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 10/13/23 09:33:00.313 +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 10/13/23 09:33:00.314 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:00.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-91" for this suite. 
10/13/23 09:33:00.319 +------------------------------ +• [0.038 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:00.288 + Oct 13 09:33:00.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename custom-resource-definition 10/13/23 09:33:00.289 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.304 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.308 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + STEP: fetching the /apis discovery document 10/13/23 09:33:00.311 + STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 10/13/23 09:33:00.312 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 10/13/23 09:33:00.312 + STEP: fetching the /apis/apiextensions.k8s.io discovery document 10/13/23 09:33:00.312 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 10/13/23 09:33:00.313 + STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 10/13/23 09:33:00.313 + STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 10/13/23 09:33:00.314 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:00.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-91" for this suite. 
10/13/23 09:33:00.319 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:00.328 +Oct 13 09:33:00.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:33:00.329 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.343 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.346 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:33:00.362 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:33:00.825 +STEP: Deploying the webhook pod 10/13/23 09:33:00.84 +STEP: Wait for the deployment to be ready 10/13/23 09:33:00.853 +Oct 13 09:33:00.861: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 09:33:02.876 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:33:02.894 +Oct 13 09:33:03.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +STEP: Registering the crd webhook via the AdmissionRegistration API 10/13/23 09:33:03.898 +STEP: Creating a custom resource definition that should be denied by the webhook 10/13/23 09:33:03.916 +Oct 13 09:33:03.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:03.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-4015" for this suite. 10/13/23 09:33:03.984 +STEP: Destroying namespace "webhook-4015-markers" for this suite. 
10/13/23 09:33:03.994 +------------------------------ +• [3.673 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:00.328 + Oct 13 09:33:00.328: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:33:00.329 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:00.343 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:00.346 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:33:00.362 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:33:00.825 + STEP: Deploying the webhook pod 10/13/23 09:33:00.84 + STEP: Wait for the deployment to be ready 10/13/23 09:33:00.853 + Oct 13 09:33:00.861: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 09:33:02.876 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:33:02.894 + Oct 13 09:33:03.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 + STEP: Registering the crd webhook via the AdmissionRegistration API 10/13/23 09:33:03.898 + STEP: Creating a custom resource definition that should be denied by the webhook 10/13/23 09:33:03.916 + Oct 13 09:33:03.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:03.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-4015" for this suite. 10/13/23 09:33:03.984 + STEP: Destroying namespace "webhook-4015-markers" for this suite. 
10/13/23 09:33:03.994 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +[BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:04.002 +Oct 13 09:33:04.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename podtemplate 10/13/23 09:33:04.003 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:04.021 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:04.024 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 +[It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +STEP: Create a pod template 10/13/23 09:33:04.027 +STEP: Replace a pod template 10/13/23 09:33:04.031 +Oct 13 09:33:04.038: INFO: Found updated podtemplate annotation: "true" + +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:04.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 +STEP: Destroying namespace "podtemplate-2001" for this suite. 10/13/23 09:33:04.042 +------------------------------ +• [0.046 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:04.002 + Oct 13 09:33:04.002: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename podtemplate 10/13/23 09:33:04.003 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:04.021 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:04.024 + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 + [It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 + STEP: Create a pod template 10/13/23 09:33:04.027 + STEP: Replace a pod template 10/13/23 09:33:04.031 + Oct 13 09:33:04.038: INFO: Found updated podtemplate annotation: "true" + + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:04.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 + STEP: Destroying namespace "podtemplate-2001" for this suite. 
10/13/23 09:33:04.042 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:04.048 +Oct 13 09:33:04.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename downward-api 10/13/23 09:33:04.049 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:04.064 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:04.066 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:33:04.068 +Oct 13 09:33:04.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740" in namespace "downward-api-9394" to be "Succeeded or Failed" +Oct 13 09:33:04.080: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Pending", Reason="", readiness=false. Elapsed: 3.342871ms +Oct 13 09:33:06.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007145461s +Oct 13 09:33:08.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007684713s +STEP: Saw pod success 10/13/23 09:33:08.084 +Oct 13 09:33:08.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740" satisfied condition "Succeeded or Failed" +Oct 13 09:33:08.088: INFO: Trying to get logs from node node2 pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 container client-container: +STEP: delete the pod 10/13/23 09:33:08.102 +Oct 13 09:33:08.114: INFO: Waiting for pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 to disappear +Oct 13 09:33:08.117: INFO: Pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:08.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9394" for this suite. 
10/13/23 09:33:08.12 +------------------------------ +• [4.078 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:04.048 + Oct 13 09:33:04.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename downward-api 10/13/23 09:33:04.049 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:04.064 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:04.066 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:33:04.068 + Oct 13 09:33:04.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740" in namespace "downward-api-9394" to be "Succeeded or Failed" + Oct 13 09:33:04.080: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Pending", Reason="", readiness=false. Elapsed: 3.342871ms + Oct 13 09:33:06.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007145461s + Oct 13 09:33:08.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007684713s + STEP: Saw pod success 10/13/23 09:33:08.084 + Oct 13 09:33:08.084: INFO: Pod "downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740" satisfied condition "Succeeded or Failed" + Oct 13 09:33:08.088: INFO: Trying to get logs from node node2 pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 container client-container: + STEP: delete the pod 10/13/23 09:33:08.102 + Oct 13 09:33:08.114: INFO: Waiting for pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 to disappear + Oct 13 09:33:08.117: INFO: Pod downwardapi-volume-a7419b27-0d22-4ec6-815e-698079a92740 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:08.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9394" for this suite. 
10/13/23 09:33:08.12 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +[BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:08.127 +Oct 13 09:33:08.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename podtemplate 10/13/23 09:33:08.128 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:08.142 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:08.145 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +STEP: Create set of pod templates 10/13/23 09:33:08.147 +Oct 13 09:33:08.151: INFO: created test-podtemplate-1 +Oct 13 09:33:08.156: INFO: created test-podtemplate-2 +Oct 13 09:33:08.160: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace 10/13/23 09:33:08.16 +STEP: delete collection of pod templates 10/13/23 09:33:08.163 +Oct 13 09:33:08.163: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity 10/13/23 09:33:08.178 +Oct 13 09:33:08.178: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:08.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 +STEP: Destroying namespace "podtemplate-8801" for this suite. 
10/13/23 09:33:08.183 +------------------------------ +• [0.062 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:08.127 + Oct 13 09:33:08.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename podtemplate 10/13/23 09:33:08.128 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:08.142 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:08.145 + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + STEP: Create set of pod templates 10/13/23 09:33:08.147 + Oct 13 09:33:08.151: INFO: created test-podtemplate-1 + Oct 13 09:33:08.156: INFO: created test-podtemplate-2 + Oct 13 09:33:08.160: INFO: created test-podtemplate-3 + STEP: get a list of pod templates with a label in the current namespace 10/13/23 09:33:08.16 + STEP: delete collection of pod templates 10/13/23 09:33:08.163 + Oct 13 09:33:08.163: INFO: requesting DeleteCollection of pod templates + STEP: check that the list of pod templates matches the requested quantity 10/13/23 09:33:08.178 + Oct 13 09:33:08.178: INFO: requesting list of pod templates to confirm quantity + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:08.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 + STEP: Destroying namespace "podtemplate-8801" for this suite. 10/13/23 09:33:08.183 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:08.19 +Oct 13 09:33:08.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:33:08.19 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:08.206 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:08.208 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:100 +STEP: Counting existing ResourceQuota 10/13/23 09:33:08.211 +STEP: Creating a ResourceQuota 10/13/23 09:33:13.215 +STEP: Ensuring resource quota status is calculated 10/13/23 09:33:13.221 +STEP: Creating a Service 10/13/23 09:33:15.225 +STEP: Creating a NodePort Service 10/13/23 09:33:15.25 +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 10/13/23 09:33:15.276 +STEP: Ensuring resource quota status captures service creation 10/13/23 09:33:15.306 +STEP: Deleting Services 10/13/23 09:33:17.312 +STEP: Ensuring resource quota status released usage 10/13/23 09:33:17.354 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:19.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-7367" for this suite. 10/13/23 09:33:19.365 +------------------------------ +• [SLOW TEST] [11.182 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:08.19 + Oct 13 09:33:08.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:33:08.19 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:08.206 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:08.208 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 + STEP: Counting existing ResourceQuota 10/13/23 09:33:08.211 + STEP: Creating a ResourceQuota 10/13/23 09:33:13.215 + STEP: Ensuring resource quota status is calculated 10/13/23 09:33:13.221 + STEP: Creating a Service 10/13/23 09:33:15.225 + STEP: Creating a NodePort Service 10/13/23 09:33:15.25 + STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 10/13/23 09:33:15.276 + STEP: Ensuring resource quota status captures service creation 10/13/23 09:33:15.306 + STEP: Deleting Services 10/13/23 09:33:17.312 + STEP: Ensuring resource quota status released usage 10/13/23 09:33:17.354 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:19.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-7367" for this suite. 
10/13/23 09:33:19.365 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] CSIInlineVolumes + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +[BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:19.371 +Oct 13 09:33:19.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename csiinlinevolumes 10/13/23 09:33:19.372 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:19.389 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:19.392 +[BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +STEP: creating 10/13/23 09:33:19.394 +STEP: getting 10/13/23 09:33:19.408 +STEP: listing 10/13/23 09:33:19.413 +STEP: deleting 10/13/23 09:33:19.416 +[AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 +STEP: Destroying namespace "csiinlinevolumes-5482" for this suite. 10/13/23 09:33:19.434 +------------------------------ +• [0.068 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:19.371 + Oct 13 09:33:19.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename csiinlinevolumes 10/13/23 09:33:19.372 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:19.389 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:19.392 + [BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 + STEP: creating 10/13/23 09:33:19.394 + STEP: getting 10/13/23 09:33:19.408 + STEP: listing 10/13/23 09:33:19.413 + STEP: deleting 10/13/23 09:33:19.416 + [AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 + STEP: Destroying namespace "csiinlinevolumes-5482" for this suite. 
10/13/23 09:33:19.434 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:19.44 +Oct 13 09:33:19.440: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:33:19.441 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:19.455 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:19.457 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +Oct 13 09:33:19.471: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-9636 to be scheduled +Oct 13 09:33:19.474: INFO: 1 pods are not scheduled: [runtimeclass-9636/test-runtimeclass-runtimeclass-9636-preconfigured-handler-gmq5j(27e9c70b-1bf7-4d18-af4c-766c54dddb7d)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Oct 13 09:33:21.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-9636" for this suite. 
10/13/23 09:33:21.49 +------------------------------ +• [2.055 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:19.44 + Oct 13 09:33:19.440: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:33:19.441 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:19.455 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:19.457 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + Oct 13 09:33:19.471: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-9636 to be scheduled + Oct 13 09:33:19.474: INFO: 1 pods are not scheduled: [runtimeclass-9636/test-runtimeclass-runtimeclass-9636-preconfigured-handler-gmq5j(27e9c70b-1bf7-4d18-af4c-766c54dddb7d)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Oct 13 09:33:21.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-9636" for this suite. 10/13/23 09:33:21.49 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:33:21.496 +Oct 13 09:33:21.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:33:21.497 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:21.514 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:21.518 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +STEP: creating the pod with failed condition 10/13/23 09:33:21.521 +Oct 13 09:33:21.531: INFO: Waiting up to 2m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" to be "running" +Oct 13 09:33:21.535: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197166ms +Oct 13 09:33:23.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0082851s +Oct 13 09:33:25.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01026437s +Oct 13 09:33:27.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008329397s +Oct 13 09:33:29.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009632931s +Oct 13 09:33:31.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008134762s +Oct 13 09:33:33.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009805114s +Oct 13 09:33:35.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 14.00946454s +Oct 13 09:33:37.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 16.0103777s +Oct 13 09:33:39.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 18.009189252s +Oct 13 09:33:41.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 20.007641214s +Oct 13 09:33:43.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 22.009156346s +Oct 13 09:33:45.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 24.008897242s +Oct 13 09:33:47.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 26.010515588s +Oct 13 09:33:49.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 28.011393212s +Oct 13 09:33:51.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 30.008400142s +Oct 13 09:33:53.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 32.010532151s +Oct 13 09:33:55.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 34.010327602s +Oct 13 09:33:57.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 36.008859115s +Oct 13 09:33:59.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 38.010346129s +Oct 13 09:34:01.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 40.010485719s +Oct 13 09:34:03.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 42.009229783s +Oct 13 09:34:05.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010792987s +Oct 13 09:34:07.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009410629s +Oct 13 09:34:09.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.010073087s +Oct 13 09:34:11.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008936275s +Oct 13 09:34:13.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 52.008796483s +Oct 13 09:34:15.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 54.009907029s +Oct 13 09:34:17.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 56.00848206s +Oct 13 09:34:19.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 58.01125589s +Oct 13 09:34:21.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007390104s +Oct 13 09:34:23.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.011918193s +Oct 13 09:34:25.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.010871573s +Oct 13 09:34:27.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009085086s +Oct 13 09:34:29.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.010832612s +Oct 13 09:34:31.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.008557295s +Oct 13 09:34:33.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.009143921s +Oct 13 09:34:35.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.011153034s +Oct 13 09:34:37.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.009022209s +Oct 13 09:34:39.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008895196s +Oct 13 09:34:41.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009416729s +Oct 13 09:34:43.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009467528s +Oct 13 09:34:45.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.010962647s +Oct 13 09:34:47.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.009610446s +Oct 13 09:34:49.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.009705413s +Oct 13 09:34:51.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008963933s +Oct 13 09:34:53.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m32.009671767s +Oct 13 09:34:55.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.009350148s +Oct 13 09:34:57.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.010557969s +Oct 13 09:34:59.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.011301123s +Oct 13 09:35:01.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.009212551s +Oct 13 09:35:03.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.009736393s +Oct 13 09:35:05.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.010330441s +Oct 13 09:35:07.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.010901751s +Oct 13 09:35:09.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.008909543s +Oct 13 09:35:11.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.0089047s +Oct 13 09:35:13.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.009217868s +Oct 13 09:35:15.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.01005038s +Oct 13 09:35:17.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.009552863s +Oct 13 09:35:19.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.010472862s +Oct 13 09:35:21.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.009476396s +Oct 13 09:35:21.545: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.013998603s +STEP: updating the pod 10/13/23 09:35:21.545 +Oct 13 09:35:22.065: INFO: Successfully updated pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" +STEP: waiting for pod running 10/13/23 09:35:22.065 +Oct 13 09:35:22.065: INFO: Waiting up to 2m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" to be "running" +Oct 13 09:35:22.070: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 5.054077ms +Oct 13 09:35:24.076: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011003651s +Oct 13 09:35:24.076: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" satisfied condition "running" +STEP: deleting the pod gracefully 10/13/23 09:35:24.076 +Oct 13 09:35:24.076: INFO: Deleting pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" +Oct 13 09:35:24.084: INFO: Wait up to 5m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:35:56.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-3414" for this suite. 10/13/23 09:35:56.098 +------------------------------ +• [SLOW TEST] [154.609 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:33:21.496 + Oct 13 09:33:21.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:33:21.497 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:33:21.514 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:33:21.518 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 + STEP: creating the pod with failed condition 10/13/23 09:33:21.521 + Oct 13 09:33:21.531: INFO: Waiting up to 2m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" to be "running" + Oct 13 09:33:21.535: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197166ms + Oct 13 09:33:23.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0082851s + Oct 13 09:33:25.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01026437s + Oct 13 09:33:27.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008329397s + Oct 13 09:33:29.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009632931s + Oct 13 09:33:31.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008134762s + Oct 13 09:33:33.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009805114s + Oct 13 09:33:35.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.00946454s + Oct 13 09:33:37.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 16.0103777s + Oct 13 09:33:39.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 18.009189252s + Oct 13 09:33:41.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 20.007641214s + Oct 13 09:33:43.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 22.009156346s + Oct 13 09:33:45.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 24.008897242s + Oct 13 09:33:47.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 26.010515588s + Oct 13 09:33:49.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 28.011393212s + Oct 13 09:33:51.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 30.008400142s + Oct 13 09:33:53.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 32.010532151s + Oct 13 09:33:55.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 34.010327602s + Oct 13 09:33:57.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 36.008859115s + Oct 13 09:33:59.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 38.010346129s + Oct 13 09:34:01.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 40.010485719s + Oct 13 09:34:03.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 42.009229783s + Oct 13 09:34:05.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010792987s + Oct 13 09:34:07.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009410629s + Oct 13 09:34:09.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 48.010073087s + Oct 13 09:34:11.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008936275s + Oct 13 09:34:13.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 52.008796483s + Oct 13 09:34:15.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 54.009907029s + Oct 13 09:34:17.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 56.00848206s + Oct 13 09:34:19.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.01125589s + Oct 13 09:34:21.539: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007390104s + Oct 13 09:34:23.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.011918193s + Oct 13 09:34:25.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.010871573s + Oct 13 09:34:27.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009085086s + Oct 13 09:34:29.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.010832612s + Oct 13 09:34:31.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.008557295s + Oct 13 09:34:33.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.009143921s + Oct 13 09:34:35.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.011153034s + Oct 13 09:34:37.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.009022209s + Oct 13 09:34:39.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008895196s + Oct 13 09:34:41.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009416729s + Oct 13 09:34:43.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009467528s + Oct 13 09:34:45.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.010962647s + Oct 13 09:34:47.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.009610446s + Oct 13 09:34:49.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.009705413s + Oct 13 09:34:51.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008963933s + Oct 13 09:34:53.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.009671767s + Oct 13 09:34:55.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.009350148s + Oct 13 09:34:57.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.010557969s + Oct 13 09:34:59.543: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.011301123s + Oct 13 09:35:01.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.009212551s + Oct 13 09:35:03.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m42.009736393s + Oct 13 09:35:05.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.010330441s + Oct 13 09:35:07.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.010901751s + Oct 13 09:35:09.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.008909543s + Oct 13 09:35:11.540: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.0089047s + Oct 13 09:35:13.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.009217868s + Oct 13 09:35:15.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.01005038s + Oct 13 09:35:17.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.009552863s + Oct 13 09:35:19.542: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.010472862s + Oct 13 09:35:21.541: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.009476396s + Oct 13 09:35:21.545: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.013998603s + STEP: updating the pod 10/13/23 09:35:21.545 + Oct 13 09:35:22.065: INFO: Successfully updated pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" + STEP: waiting for pod running 10/13/23 09:35:22.065 + Oct 13 09:35:22.065: INFO: Waiting up to 2m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" to be "running" + Oct 13 09:35:22.070: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Pending", Reason="", readiness=false. Elapsed: 5.054077ms + Oct 13 09:35:24.076: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013": Phase="Running", Reason="", readiness=true. Elapsed: 2.011003651s + Oct 13 09:35:24.076: INFO: Pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" satisfied condition "running" + STEP: deleting the pod gracefully 10/13/23 09:35:24.076 + Oct 13 09:35:24.076: INFO: Deleting pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" in namespace "var-expansion-3414" + Oct 13 09:35:24.084: INFO: Wait up to 5m0s for pod "var-expansion-19b98950-188b-4233-8af1-3989c4059013" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:35:56.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-3414" for this suite. 
10/13/23 09:35:56.098 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-auth] ServiceAccounts + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:35:56.105 +Oct 13 09:35:56.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:35:56.106 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:35:56.131 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:35:56.137 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +STEP: Creating ServiceAccount "e2e-sa-5fj8w" 10/13/23 09:35:56.14 +Oct 13 09:35:56.145: INFO: AutomountServiceAccountToken: false +STEP: Updating ServiceAccount "e2e-sa-5fj8w" 10/13/23 09:35:56.145 +Oct 13 09:35:56.152: INFO: AutomountServiceAccountToken: true +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Oct 13 09:35:56.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-7743" for this suite. 10/13/23 09:35:56.156 +------------------------------ +• [0.058 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:35:56.105 + Oct 13 09:35:56.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename svcaccounts 10/13/23 09:35:56.106 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:35:56.131 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:35:56.137 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 + STEP: Creating ServiceAccount "e2e-sa-5fj8w" 10/13/23 09:35:56.14 + Oct 13 09:35:56.145: INFO: AutomountServiceAccountToken: false + STEP: Updating ServiceAccount "e2e-sa-5fj8w" 10/13/23 09:35:56.145 + Oct 13 09:35:56.152: INFO: AutomountServiceAccountToken: true + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Oct 13 09:35:56.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-7743" for this suite. 
10/13/23 09:35:56.156 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:35:56.163 +Oct 13 09:35:56.163: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:35:56.164 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:35:56.181 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:35:56.184 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +STEP: Creating a pod to test substitution in container's command 10/13/23 09:35:56.186 +Oct 13 09:35:56.195: INFO: Waiting up to 5m0s for pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac" in namespace "var-expansion-5127" to be "Succeeded or Failed" +Oct 13 09:35:56.198: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344128ms +Oct 13 09:35:58.203: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455116s +Oct 13 09:36:00.205: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01012376s +STEP: Saw pod success 10/13/23 09:36:00.205 +Oct 13 09:36:00.205: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac" satisfied condition "Succeeded or Failed" +Oct 13 09:36:00.210: INFO: Trying to get logs from node node2 pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac container dapi-container: +STEP: delete the pod 10/13/23 09:36:00.233 +Oct 13 09:36:00.247: INFO: Waiting for pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac to disappear +Oct 13 09:36:00.250: INFO: Pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:36:00.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-5127" for this suite. 
10/13/23 09:36:00.254 +------------------------------ +• [4.099 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:35:56.163 + Oct 13 09:35:56.163: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:35:56.164 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:35:56.181 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:35:56.184 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 + STEP: Creating a pod to test substitution in container's command 10/13/23 09:35:56.186 + Oct 13 09:35:56.195: INFO: Waiting up to 5m0s for pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac" in namespace "var-expansion-5127" to be "Succeeded or Failed" + Oct 13 09:35:56.198: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344128ms + Oct 13 09:35:58.203: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455116s + Oct 13 09:36:00.205: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01012376s + STEP: Saw pod success 10/13/23 09:36:00.205 + Oct 13 09:36:00.205: INFO: Pod "var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac" satisfied condition "Succeeded or Failed" + Oct 13 09:36:00.210: INFO: Trying to get logs from node node2 pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac container dapi-container: + STEP: delete the pod 10/13/23 09:36:00.233 + Oct 13 09:36:00.247: INFO: Waiting for pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac to disappear + Oct 13 09:36:00.250: INFO: Pod var-expansion-5d66619b-1930-4847-9e8b-bce340f284ac no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:36:00.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-5127" for this suite. 
10/13/23 09:36:00.254 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:36:00.263 +Oct 13 09:36:00.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sysctl 10/13/23 09:36:00.264 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:00.28 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:00.282 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +STEP: Creating a pod with one valid and two invalid sysctls 10/13/23 09:36:00.284 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:36:00.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "sysctl-1524" for this suite. 
10/13/23 09:36:00.294 +------------------------------ +• [0.038 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:36:00.263 + Oct 13 09:36:00.263: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sysctl 10/13/23 09:36:00.264 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:00.28 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:00.282 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + STEP: Creating a pod with one valid and two invalid sysctls 10/13/23 09:36:00.284 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:36:00.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "sysctl-1524" for this suite. 10/13/23 09:36:00.294 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:36:00.302 +Oct 13 09:36:00.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:36:00.303 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:00.318 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:00.321 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +STEP: creating the pod 10/13/23 09:36:00.324 +STEP: submitting the pod to kubernetes 10/13/23 09:36:00.324 +Oct 13 09:36:00.331: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" in namespace "pods-4110" to be "running and ready" +Oct 13 09:36:00.335: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.288634ms +Oct 13 09:36:00.335: INFO: The phase of Pod pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:36:02.341: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 2.009293171s +Oct 13 09:36:02.341: INFO: The phase of Pod pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99 is Running (Ready = true) +Oct 13 09:36:02.341: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 10/13/23 09:36:02.345 +STEP: updating the pod 10/13/23 09:36:02.349 +Oct 13 09:36:02.866: INFO: Successfully updated pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" +Oct 13 09:36:02.866: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" in namespace "pods-4110" to be "terminated with reason DeadlineExceeded" +Oct 13 09:36:02.869: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 3.285714ms +Oct 13 09:36:04.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 2.008798892s +Oct 13 09:36:06.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.00802964s +Oct 13 09:36:06.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" satisfied condition "terminated with reason DeadlineExceeded" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 09:36:06.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-4110" for this suite. 
10/13/23 09:36:06.878 +------------------------------ +• [SLOW TEST] [6.583 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:36:00.302 + Oct 13 09:36:00.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:36:00.303 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:00.318 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:00.321 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 + STEP: creating the pod 10/13/23 09:36:00.324 + STEP: submitting the pod to kubernetes 10/13/23 09:36:00.324 + Oct 13 09:36:00.331: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" in namespace "pods-4110" to be "running and ready" + Oct 13 09:36:00.335: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.288634ms + Oct 13 09:36:00.335: INFO: The phase of Pod pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:36:02.341: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 2.009293171s + Oct 13 09:36:02.341: INFO: The phase of Pod pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99 is Running (Ready = true) + Oct 13 09:36:02.341: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 10/13/23 09:36:02.345 + STEP: updating the pod 10/13/23 09:36:02.349 + Oct 13 09:36:02.866: INFO: Successfully updated pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" + Oct 13 09:36:02.866: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" in namespace "pods-4110" to be "terminated with reason DeadlineExceeded" + Oct 13 09:36:02.869: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 3.285714ms + Oct 13 09:36:04.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Running", Reason="", readiness=true. Elapsed: 2.008798892s + Oct 13 09:36:06.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 4.00802964s + Oct 13 09:36:06.874: INFO: Pod "pod-update-activedeadlineseconds-da35240b-3178-423e-abde-1a592a14be99" satisfied condition "terminated with reason DeadlineExceeded" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 09:36:06.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-4110" for this suite. 10/13/23 09:36:06.878 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:36:06.885 +Oct 13 09:36:06.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:36:06.887 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:06.902 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:06.905 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +STEP: Creating a pod to test emptydir 0644 on node default medium 10/13/23 09:36:06.908 +Oct 13 09:36:06.917: INFO: Waiting up to 5m0s for pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a" in namespace "emptydir-439" to be "Succeeded or Failed" +Oct 13 09:36:06.921: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026229ms +Oct 13 09:36:08.927: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009544236s +Oct 13 09:36:10.926: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008531078s +STEP: Saw pod success 10/13/23 09:36:10.926 +Oct 13 09:36:10.926: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a" satisfied condition "Succeeded or Failed" +Oct 13 09:36:10.930: INFO: Trying to get logs from node node2 pod pod-de329cc8-202a-424f-8052-3c89b264fa9a container test-container: +STEP: delete the pod 10/13/23 09:36:10.937 +Oct 13 09:36:10.946: INFO: Waiting for pod pod-de329cc8-202a-424f-8052-3c89b264fa9a to disappear +Oct 13 09:36:10.949: INFO: Pod pod-de329cc8-202a-424f-8052-3c89b264fa9a no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:36:10.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-439" for this suite. 
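+
+The run above creates a pod that writes a 0644 file into an emptyDir volume on the node's default (disk-backed) medium and verifies the mode from the container logs. A minimal sketch of an equivalent pod, assuming an illustrative pod name and the busybox image rather than the suite's test image:
+
+```
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
+        Spec: corev1.PodSpec{
+            RestartPolicy: corev1.RestartPolicyNever,
+            Volumes: []corev1.Volume{{
+                Name: "cache",
+                // An empty EmptyDirVolumeSource selects the default medium (node disk).
+                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
+            }},
+            Containers: []corev1.Container{{
+                Name:         "test-container",
+                Image:        "busybox", // illustrative image
+                Command:      []string{"sh", "-c", "touch /cache/f && ls -l /cache"},
+                VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
+            }},
+        },
+    }
+    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+```
+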
10/13/23 09:36:10.953 +------------------------------ +• [4.073 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:36:06.885 + Oct 13 09:36:06.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:36:06.887 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:06.902 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:06.905 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 + STEP: Creating a pod to test emptydir 0644 on node default medium 10/13/23 09:36:06.908 + Oct 13 09:36:06.917: INFO: Waiting up to 5m0s for pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a" in namespace "emptydir-439" to be "Succeeded or Failed" + Oct 13 09:36:06.921: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026229ms + Oct 13 09:36:08.927: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009544236s + Oct 13 09:36:10.926: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008531078s + STEP: Saw pod success 10/13/23 09:36:10.926 + Oct 13 09:36:10.926: INFO: Pod "pod-de329cc8-202a-424f-8052-3c89b264fa9a" satisfied condition "Succeeded or Failed" + Oct 13 09:36:10.930: INFO: Trying to get logs from node node2 pod pod-de329cc8-202a-424f-8052-3c89b264fa9a container test-container: + STEP: delete the pod 10/13/23 09:36:10.937 + Oct 13 09:36:10.946: INFO: Waiting for pod pod-de329cc8-202a-424f-8052-3c89b264fa9a to disappear + Oct 13 09:36:10.949: INFO: Pod pod-de329cc8-202a-424f-8052-3c89b264fa9a no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:36:10.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-439" for this suite. 10/13/23 09:36:10.953 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:448 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:36:10.96 +Oct 13 09:36:10.960: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:36:10.961 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:10.975 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:10.977 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 +STEP: Counting existing ResourceQuota 10/13/23 09:36:10.98 +STEP: Creating a ResourceQuota 10/13/23 09:36:15.985 +STEP: Ensuring resource quota status is calculated 10/13/23 09:36:15.995 +STEP: Creating a ReplicaSet 10/13/23 09:36:18.001 +STEP: Ensuring resource quota status captures replicaset creation 10/13/23 09:36:18.019 +STEP: Deleting a ReplicaSet 10/13/23 09:36:20.027 +STEP: Ensuring resource quota status released usage 10/13/23 09:36:20.033 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:36:22.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-4373" for this suite. 10/13/23 09:36:22.044 +------------------------------ +• [SLOW TEST] [11.092 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:36:10.96 + Oct 13 09:36:10.960: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:36:10.961 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:10.975 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:10.977 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:448 + STEP: Counting existing ResourceQuota 10/13/23 09:36:10.98 + STEP: Creating a ResourceQuota 10/13/23 09:36:15.985 + STEP: Ensuring resource quota status is calculated 10/13/23 09:36:15.995 + STEP: Creating a ReplicaSet 10/13/23 09:36:18.001 + STEP: Ensuring resource quota status captures replicaset creation 10/13/23 09:36:18.019 + STEP: Deleting a ReplicaSet 10/13/23 09:36:20.027 + STEP: Ensuring resource quota status released usage 10/13/23 09:36:20.033 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:36:22.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-4373" for this suite. 10/13/23 09:36:22.044 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:36:22.053 +Oct 13 09:36:22.053: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 09:36:22.054 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:22.074 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:22.077 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-5131" for this suite. 
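+
+The run above takes the full 60-second observation window because a pod whose readiness probe always fails stays Running but is never marked Ready and, unlike a pod with a failing liveness probe, never has its container restarted. A minimal sketch of such a pod, with illustrative names and image:
+
+```
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
+        Spec: corev1.PodSpec{
+            Containers: []corev1.Container{{
+                Name:    "c",
+                Image:   "busybox", // illustrative image
+                Command: []string{"sh", "-c", "sleep 3600"},
+                ReadinessProbe: &corev1.Probe{
+                    // /bin/false always exits non-zero, so every probe attempt fails.
+                    ProbeHandler:        corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
+                    InitialDelaySeconds: 5,
+                    PeriodSeconds:       5,
+                },
+            }},
+        },
+    }
+    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+```
+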
10/13/23 09:37:22.101 +------------------------------ +• [SLOW TEST] [60.060 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:36:22.053 + Oct 13 09:36:22.053: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 09:36:22.054 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:36:22.074 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:36:22.077 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-5131" for this suite. 10/13/23 09:37:22.101 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +[BeforeEach] [sig-api-machinery] Discovery + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:22.112 +Oct 13 09:37:22.112: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename discovery 10/13/23 09:37:22.113 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:22.133 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:22.136 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 +STEP: Setting up server cert 10/13/23 09:37:22.139 +[It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +Oct 13 09:37:22.757: INFO: Checking APIGroup: apiregistration.k8s.io +Oct 13 09:37:22.759: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Oct 13 09:37:22.759: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Oct 13 09:37:22.759: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Oct 13 09:37:22.759: INFO: Checking APIGroup: apps +Oct 13 09:37:22.760: INFO: PreferredVersion.GroupVersion: apps/v1 +Oct 13 09:37:22.760: INFO: Versions found [{apps/v1 v1}] +Oct 13 09:37:22.760: INFO: apps/v1 matches apps/v1 +Oct 13 09:37:22.760: INFO: Checking APIGroup: events.k8s.io +Oct 13 09:37:22.762: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Oct 13 09:37:22.762: INFO: Versions found [{events.k8s.io/v1 v1}] +Oct 13 09:37:22.762: INFO: 
events.k8s.io/v1 matches events.k8s.io/v1 +Oct 13 09:37:22.762: INFO: Checking APIGroup: authentication.k8s.io +Oct 13 09:37:22.763: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Oct 13 09:37:22.763: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Oct 13 09:37:22.763: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Oct 13 09:37:22.763: INFO: Checking APIGroup: authorization.k8s.io +Oct 13 09:37:22.765: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Oct 13 09:37:22.765: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Oct 13 09:37:22.765: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Oct 13 09:37:22.765: INFO: Checking APIGroup: autoscaling +Oct 13 09:37:22.766: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Oct 13 09:37:22.766: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] +Oct 13 09:37:22.766: INFO: autoscaling/v2 matches autoscaling/v2 +Oct 13 09:37:22.766: INFO: Checking APIGroup: batch +Oct 13 09:37:22.767: INFO: PreferredVersion.GroupVersion: batch/v1 +Oct 13 09:37:22.767: INFO: Versions found [{batch/v1 v1}] +Oct 13 09:37:22.767: INFO: batch/v1 matches batch/v1 +Oct 13 09:37:22.767: INFO: Checking APIGroup: certificates.k8s.io +Oct 13 09:37:22.768: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Oct 13 09:37:22.768: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Oct 13 09:37:22.768: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Oct 13 09:37:22.768: INFO: Checking APIGroup: networking.k8s.io +Oct 13 09:37:22.769: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Oct 13 09:37:22.769: INFO: Versions found [{networking.k8s.io/v1 v1}] +Oct 13 09:37:22.769: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Oct 13 09:37:22.769: INFO: Checking APIGroup: policy +Oct 13 09:37:22.770: INFO: PreferredVersion.GroupVersion: policy/v1 +Oct 13 09:37:22.770: INFO: Versions found [{policy/v1 v1}] +Oct 13 09:37:22.770: INFO: policy/v1 matches policy/v1 +Oct 13 09:37:22.770: INFO: Checking APIGroup: rbac.authorization.k8s.io +Oct 13 09:37:22.771: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Oct 13 09:37:22.771: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Oct 13 09:37:22.771: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Oct 13 09:37:22.771: INFO: Checking APIGroup: storage.k8s.io +Oct 13 09:37:22.772: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Oct 13 09:37:22.772: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Oct 13 09:37:22.772: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Oct 13 09:37:22.772: INFO: Checking APIGroup: admissionregistration.k8s.io +Oct 13 09:37:22.773: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Oct 13 09:37:22.773: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Oct 13 09:37:22.773: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Oct 13 09:37:22.773: INFO: Checking APIGroup: apiextensions.k8s.io +Oct 13 09:37:22.774: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Oct 13 09:37:22.774: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Oct 13 09:37:22.774: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Oct 13 09:37:22.774: INFO: Checking APIGroup: scheduling.k8s.io +Oct 13 09:37:22.775: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Oct 13 09:37:22.775: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Oct 13 
09:37:22.775: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Oct 13 09:37:22.775: INFO: Checking APIGroup: coordination.k8s.io +Oct 13 09:37:22.775: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Oct 13 09:37:22.775: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Oct 13 09:37:22.775: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Oct 13 09:37:22.775: INFO: Checking APIGroup: node.k8s.io +Oct 13 09:37:22.776: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Oct 13 09:37:22.776: INFO: Versions found [{node.k8s.io/v1 v1}] +Oct 13 09:37:22.776: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Oct 13 09:37:22.776: INFO: Checking APIGroup: discovery.k8s.io +Oct 13 09:37:22.776: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Oct 13 09:37:22.776: INFO: Versions found [{discovery.k8s.io/v1 v1}] +Oct 13 09:37:22.776: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Oct 13 09:37:22.776: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Oct 13 09:37:22.777: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 +Oct 13 09:37:22.777: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] +Oct 13 09:37:22.777: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 +[AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:22.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Discovery + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Discovery + tear down framework | framework.go:193 +STEP: Destroying namespace "discovery-5167" for this suite. 
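+
+The run above walks every API group returned by discovery and asserts that each group's PreferredVersion appears among the versions the group advertises. The same information is available from the client-go discovery client; a minimal sketch, assuming an illustrative kubeconfig path:
+
+```
+package main
+
+import (
+    "fmt"
+
+    "k8s.io/client-go/discovery"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
+    if err != nil {
+        panic(err)
+    }
+    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    groups, err := dc.ServerGroups()
+    if err != nil {
+        panic(err)
+    }
+    for _, g := range groups.Groups {
+        // PreferredVersion is the version the server serves by default for the group.
+        fmt.Printf("%s: preferred %s, versions %v\n", g.Name, g.PreferredVersion.GroupVersion, g.Versions)
+    }
+}
+```
+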
10/13/23 09:37:22.781 +------------------------------ +• [0.675 seconds] +[sig-api-machinery] Discovery +test/e2e/apimachinery/framework.go:23 + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Discovery + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:22.112 + Oct 13 09:37:22.112: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename discovery 10/13/23 09:37:22.113 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:22.133 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:22.136 + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 + STEP: Setting up server cert 10/13/23 09:37:22.139 + [It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + Oct 13 09:37:22.757: INFO: Checking APIGroup: apiregistration.k8s.io + Oct 13 09:37:22.759: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 + Oct 13 09:37:22.759: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] + Oct 13 09:37:22.759: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 + Oct 13 09:37:22.759: INFO: Checking APIGroup: apps + Oct 13 09:37:22.760: INFO: PreferredVersion.GroupVersion: apps/v1 + Oct 13 09:37:22.760: INFO: Versions found [{apps/v1 v1}] + Oct 13 09:37:22.760: INFO: apps/v1 matches apps/v1 + Oct 13 09:37:22.760: INFO: Checking APIGroup: events.k8s.io + Oct 13 09:37:22.762: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 + Oct 13 09:37:22.762: INFO: Versions found [{events.k8s.io/v1 v1}] + Oct 13 09:37:22.762: INFO: events.k8s.io/v1 matches events.k8s.io/v1 + Oct 13 09:37:22.762: INFO: Checking APIGroup: authentication.k8s.io + Oct 13 09:37:22.763: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 + Oct 13 09:37:22.763: INFO: Versions found [{authentication.k8s.io/v1 v1}] + Oct 13 09:37:22.763: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 + Oct 13 09:37:22.763: INFO: Checking APIGroup: authorization.k8s.io + Oct 13 09:37:22.765: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 + Oct 13 09:37:22.765: INFO: Versions found [{authorization.k8s.io/v1 v1}] + Oct 13 09:37:22.765: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 + Oct 13 09:37:22.765: INFO: Checking APIGroup: autoscaling + Oct 13 09:37:22.766: INFO: PreferredVersion.GroupVersion: autoscaling/v2 + Oct 13 09:37:22.766: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] + Oct 13 09:37:22.766: INFO: autoscaling/v2 matches autoscaling/v2 + Oct 13 09:37:22.766: INFO: Checking APIGroup: batch + Oct 13 09:37:22.767: INFO: PreferredVersion.GroupVersion: batch/v1 + Oct 13 09:37:22.767: INFO: Versions found [{batch/v1 v1}] + Oct 13 09:37:22.767: INFO: batch/v1 matches batch/v1 + Oct 13 09:37:22.767: INFO: Checking APIGroup: certificates.k8s.io + Oct 13 09:37:22.768: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 + Oct 13 09:37:22.768: INFO: Versions found [{certificates.k8s.io/v1 v1}] + Oct 13 09:37:22.768: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 + Oct 13 09:37:22.768: INFO: Checking APIGroup: networking.k8s.io + Oct 13 09:37:22.769: INFO: PreferredVersion.GroupVersion: 
networking.k8s.io/v1 + Oct 13 09:37:22.769: INFO: Versions found [{networking.k8s.io/v1 v1}] + Oct 13 09:37:22.769: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 + Oct 13 09:37:22.769: INFO: Checking APIGroup: policy + Oct 13 09:37:22.770: INFO: PreferredVersion.GroupVersion: policy/v1 + Oct 13 09:37:22.770: INFO: Versions found [{policy/v1 v1}] + Oct 13 09:37:22.770: INFO: policy/v1 matches policy/v1 + Oct 13 09:37:22.770: INFO: Checking APIGroup: rbac.authorization.k8s.io + Oct 13 09:37:22.771: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 + Oct 13 09:37:22.771: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] + Oct 13 09:37:22.771: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 + Oct 13 09:37:22.771: INFO: Checking APIGroup: storage.k8s.io + Oct 13 09:37:22.772: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 + Oct 13 09:37:22.772: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] + Oct 13 09:37:22.772: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 + Oct 13 09:37:22.772: INFO: Checking APIGroup: admissionregistration.k8s.io + Oct 13 09:37:22.773: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 + Oct 13 09:37:22.773: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] + Oct 13 09:37:22.773: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 + Oct 13 09:37:22.773: INFO: Checking APIGroup: apiextensions.k8s.io + Oct 13 09:37:22.774: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 + Oct 13 09:37:22.774: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] + Oct 13 09:37:22.774: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 + Oct 13 09:37:22.774: INFO: Checking APIGroup: scheduling.k8s.io + Oct 13 09:37:22.775: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 + Oct 13 09:37:22.775: INFO: Versions found [{scheduling.k8s.io/v1 v1}] + Oct 13 09:37:22.775: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 + Oct 13 09:37:22.775: INFO: Checking APIGroup: coordination.k8s.io + Oct 13 09:37:22.775: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 + Oct 13 09:37:22.775: INFO: Versions found [{coordination.k8s.io/v1 v1}] + Oct 13 09:37:22.775: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 + Oct 13 09:37:22.775: INFO: Checking APIGroup: node.k8s.io + Oct 13 09:37:22.776: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 + Oct 13 09:37:22.776: INFO: Versions found [{node.k8s.io/v1 v1}] + Oct 13 09:37:22.776: INFO: node.k8s.io/v1 matches node.k8s.io/v1 + Oct 13 09:37:22.776: INFO: Checking APIGroup: discovery.k8s.io + Oct 13 09:37:22.776: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 + Oct 13 09:37:22.776: INFO: Versions found [{discovery.k8s.io/v1 v1}] + Oct 13 09:37:22.776: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 + Oct 13 09:37:22.776: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io + Oct 13 09:37:22.777: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 + Oct 13 09:37:22.777: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] + Oct 13 09:37:22.777: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 + [AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:22.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Discovery + 
test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Discovery + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Discovery + tear down framework | framework.go:193 + STEP: Destroying namespace "discovery-5167" for this suite. 10/13/23 09:37:22.781 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:22.789 +Oct 13 09:37:22.789: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 09:37:22.79 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:22.805 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:22.807 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +STEP: Creating secret with name s-test-opt-del-9adec14c-7261-4cc3-8692-b2b52ce3ce43 10/13/23 09:37:22.814 +STEP: Creating secret with name s-test-opt-upd-19de06d5-29c4-418b-a0c9-edd6bbf233a5 10/13/23 09:37:22.818 +STEP: Creating the pod 10/13/23 09:37:22.822 +Oct 13 09:37:22.829: INFO: Waiting up to 5m0s for pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44" in namespace "secrets-9428" to be "running and ready" +Oct 13 09:37:22.832: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.666834ms +Oct 13 09:37:22.832: INFO: The phase of Pod pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:37:24.839: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44": Phase="Running", Reason="", readiness=true. Elapsed: 2.009911474s +Oct 13 09:37:24.839: INFO: The phase of Pod pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44 is Running (Ready = true) +Oct 13 09:37:24.839: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-9adec14c-7261-4cc3-8692-b2b52ce3ce43 10/13/23 09:37:24.866 +STEP: Updating secret s-test-opt-upd-19de06d5-29c4-418b-a0c9-edd6bbf233a5 10/13/23 09:37:24.872 +STEP: Creating secret with name s-test-opt-create-9c6f41cb-d4d5-41f5-8ca8-a42fc53c965f 10/13/23 09:37:24.88 +STEP: waiting to observe update in volume 10/13/23 09:37:24.885 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:26.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9428" for this suite. 
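+
+The run above relies on two properties of secret volumes: a source marked optional lets the pod start and keep running even while the named secret is absent, and the kubelet propagates later creates, updates, and deletes of the secret into the mounted files. A minimal sketch of a pod with such a volume, using illustrative names:
+
+```
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    optional := true
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "secret-vol-demo"},
+        Spec: corev1.PodSpec{
+            Volumes: []corev1.Volume{{
+                Name: "creds",
+                VolumeSource: corev1.VolumeSource{
+                    Secret: &corev1.SecretVolumeSource{
+                        SecretName: "maybe-missing", // illustrative secret name
+                        Optional:   &optional,       // pod starts even if the secret does not exist yet
+                    },
+                },
+            }},
+            Containers: []corev1.Container{{
+                Name:         "c",
+                Image:        "busybox", // illustrative image
+                Command:      []string{"sh", "-c", "sleep 3600"},
+                VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds", ReadOnly: true}},
+            }},
+        },
+    }
+    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+```
+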
10/13/23 09:37:26.924 +------------------------------ +• [4.144 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:22.789 + Oct 13 09:37:22.789: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 09:37:22.79 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:22.805 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:22.807 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 + STEP: Creating secret with name s-test-opt-del-9adec14c-7261-4cc3-8692-b2b52ce3ce43 10/13/23 09:37:22.814 + STEP: Creating secret with name s-test-opt-upd-19de06d5-29c4-418b-a0c9-edd6bbf233a5 10/13/23 09:37:22.818 + STEP: Creating the pod 10/13/23 09:37:22.822 + Oct 13 09:37:22.829: INFO: Waiting up to 5m0s for pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44" in namespace "secrets-9428" to be "running and ready" + Oct 13 09:37:22.832: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.666834ms + Oct 13 09:37:22.832: INFO: The phase of Pod pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:37:24.839: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44": Phase="Running", Reason="", readiness=true. Elapsed: 2.009911474s + Oct 13 09:37:24.839: INFO: The phase of Pod pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44 is Running (Ready = true) + Oct 13 09:37:24.839: INFO: Pod "pod-secrets-39fb9c1b-9c59-4089-9ab6-736bc3d2bc44" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-9adec14c-7261-4cc3-8692-b2b52ce3ce43 10/13/23 09:37:24.866 + STEP: Updating secret s-test-opt-upd-19de06d5-29c4-418b-a0c9-edd6bbf233a5 10/13/23 09:37:24.872 + STEP: Creating secret with name s-test-opt-create-9c6f41cb-d4d5-41f5-8ca8-a42fc53c965f 10/13/23 09:37:24.88 + STEP: waiting to observe update in volume 10/13/23 09:37:24.885 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:26.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9428" for this suite. 10/13/23 09:37:26.924 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:26.935 +Oct 13 09:37:26.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:37:26.936 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:26.953 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:26.956 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 +STEP: Counting existing ResourceQuota 10/13/23 09:37:26.958 +STEP: Creating a ResourceQuota 10/13/23 09:37:31.962 +STEP: Ensuring resource quota status is calculated 10/13/23 09:37:31.969 +STEP: Creating a ReplicationController 10/13/23 09:37:33.974 +STEP: Ensuring resource quota status captures replication controller creation 10/13/23 09:37:33.991 +STEP: Deleting a ReplicationController 10/13/23 09:37:35.997 +STEP: Ensuring resource quota status released usage 10/13/23 09:37:36.009 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:38.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-9427" for this suite. 10/13/23 09:37:38.017 +------------------------------ +• [SLOW TEST] [11.088 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:26.935 + Oct 13 09:37:26.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:37:26.936 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:26.953 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:26.956 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 + STEP: Counting existing ResourceQuota 10/13/23 09:37:26.958 + STEP: Creating a ResourceQuota 10/13/23 09:37:31.962 + STEP: Ensuring resource quota status is calculated 10/13/23 09:37:31.969 + STEP: Creating a ReplicationController 10/13/23 09:37:33.974 + STEP: Ensuring resource quota status captures replication controller creation 10/13/23 09:37:33.991 + STEP: Deleting a ReplicationController 10/13/23 09:37:35.997 + STEP: Ensuring resource quota status released usage 10/13/23 09:37:36.009 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:38.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-9427" for this suite. 10/13/23 09:37:38.017 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:38.024 +Oct 13 09:37:38.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename watch 10/13/23 09:37:38.025 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:38.039 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:38.041 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +STEP: creating a new configmap 10/13/23 09:37:38.043 +STEP: modifying the configmap once 10/13/23 09:37:38.049 +STEP: modifying the configmap a second time 10/13/23 09:37:38.055 +STEP: deleting the configmap 10/13/23 09:37:38.062 +STEP: creating a watch on configmaps from the resource version returned by the first update 10/13/23 09:37:38.068 +STEP: Expecting to observe notifications for all changes to the configmap after the first update 10/13/23 09:37:38.069 +Oct 13 09:37:38.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7368 7630eb41-3cbc-4906-b673-d92a81695dba 35898 0 2023-10-13 09:37:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-10-13 09:37:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Oct 13 09:37:38.069: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7368 7630eb41-3cbc-4906-b673-d92a81695dba 35899 0 2023-10-13 09:37:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-10-13 09:37:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + 
test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:38.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-7368" for this suite. 10/13/23 09:37:38.073 +------------------------------ +• [0.055 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:38.024 + Oct 13 09:37:38.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename watch 10/13/23 09:37:38.025 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:38.039 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:38.041 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + STEP: creating a new configmap 10/13/23 09:37:38.043 + STEP: modifying the configmap once 10/13/23 09:37:38.049 + STEP: modifying the configmap a second time 10/13/23 09:37:38.055 + STEP: deleting the configmap 10/13/23 09:37:38.062 + STEP: creating a watch on configmaps from the resource version returned by the first update 10/13/23 09:37:38.068 + STEP: Expecting to observe notifications for all changes to the configmap after the first update 10/13/23 09:37:38.069 + Oct 13 09:37:38.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7368 7630eb41-3cbc-4906-b673-d92a81695dba 35898 0 2023-10-13 09:37:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-10-13 09:37:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Oct 13 09:37:38.069: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7368 7630eb41-3cbc-4906-b673-d92a81695dba 35899 0 2023-10-13 09:37:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-10-13 09:37:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:38.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-7368" for this suite. 
10/13/23 09:37:38.073 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:38.079 +Oct 13 09:37:38.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 09:37:38.08 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:38.094 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:38.097 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +STEP: create the rc1 10/13/23 09:37:38.102 +STEP: create the rc2 10/13/23 09:37:38.107 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 10/13/23 09:37:43.118 +STEP: delete the rc simpletest-rc-to-be-deleted 10/13/23 09:37:43.512 +STEP: wait for the rc to be deleted 10/13/23 09:37:43.523 +Oct 13 09:37:48.537: INFO: 71 pods remaining +Oct 13 09:37:48.537: INFO: 71 pods has nil DeletionTimestamp +Oct 13 09:37:48.537: INFO: +STEP: Gathering metrics 10/13/23 09:37:53.533 +Oct 13 09:37:53.943: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 09:37:53.947: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.784474ms +Oct 13 09:37:53.947: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 09:37:53.947: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 09:37:54.367: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Oct 13 09:37:54.367: INFO: Deleting pod "simpletest-rc-to-be-deleted-2mtxt" in namespace "gc-6578" +Oct 13 09:37:54.377: INFO: Deleting pod "simpletest-rc-to-be-deleted-2wf65" in namespace "gc-6578" +Oct 13 09:37:54.388: INFO: Deleting pod "simpletest-rc-to-be-deleted-4ms8b" in namespace "gc-6578" +Oct 13 09:37:54.399: INFO: Deleting pod "simpletest-rc-to-be-deleted-4s4pj" in namespace "gc-6578" +Oct 13 09:37:54.408: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tp7q" in namespace "gc-6578" +Oct 13 09:37:54.418: INFO: Deleting pod "simpletest-rc-to-be-deleted-55wqj" in namespace "gc-6578" +Oct 13 09:37:54.432: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cvmb" in namespace "gc-6578" +Oct 13 09:37:54.447: INFO: Deleting pod "simpletest-rc-to-be-deleted-5fjzv" in namespace "gc-6578" +Oct 13 09:37:54.462: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mfxp" in namespace "gc-6578" +Oct 13 09:37:54.472: INFO: Deleting pod "simpletest-rc-to-be-deleted-62lpz" in namespace "gc-6578" +Oct 13 09:37:54.484: INFO: Deleting pod "simpletest-rc-to-be-deleted-66wbr" in namespace "gc-6578" +Oct 13 09:37:54.499: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fr9m" in namespace "gc-6578" +Oct 13 09:37:54.520: INFO: Deleting pod "simpletest-rc-to-be-deleted-6m8x8" in namespace "gc-6578" +Oct 13 09:37:54.537: INFO: Deleting pod "simpletest-rc-to-be-deleted-7556g" in namespace "gc-6578" +Oct 13 09:37:54.556: INFO: Deleting pod "simpletest-rc-to-be-deleted-8dw5c" in namespace "gc-6578" +Oct 13 09:37:54.572: INFO: Deleting pod "simpletest-rc-to-be-deleted-8ghnq" in namespace "gc-6578" +Oct 13 09:37:54.588: INFO: Deleting pod "simpletest-rc-to-be-deleted-94hkq" in namespace "gc-6578" +Oct 13 09:37:54.603: INFO: Deleting pod "simpletest-rc-to-be-deleted-96klr" in namespace "gc-6578" +Oct 13 09:37:54.620: INFO: Deleting pod "simpletest-rc-to-be-deleted-96pr5" in namespace "gc-6578" +Oct 13 09:37:54.636: INFO: Deleting pod "simpletest-rc-to-be-deleted-97cx7" in namespace "gc-6578" +Oct 13 09:37:54.655: INFO: Deleting pod "simpletest-rc-to-be-deleted-9dt7z" in namespace "gc-6578" +Oct 13 09:37:54.683: INFO: Deleting pod "simpletest-rc-to-be-deleted-9gst6" in namespace "gc-6578" +Oct 13 09:37:54.699: INFO: Deleting pod "simpletest-rc-to-be-deleted-9j2cr" in namespace "gc-6578" +Oct 13 09:37:54.712: INFO: Deleting pod "simpletest-rc-to-be-deleted-9kdsh" in 
namespace "gc-6578" +Oct 13 09:37:54.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lt7f" in namespace "gc-6578" +Oct 13 09:37:54.745: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjkhn" in namespace "gc-6578" +Oct 13 09:37:54.768: INFO: Deleting pod "simpletest-rc-to-be-deleted-c67sg" in namespace "gc-6578" +Oct 13 09:37:54.780: INFO: Deleting pod "simpletest-rc-to-be-deleted-cfqn7" in namespace "gc-6578" +Oct 13 09:37:54.800: INFO: Deleting pod "simpletest-rc-to-be-deleted-cvm6x" in namespace "gc-6578" +Oct 13 09:37:54.818: INFO: Deleting pod "simpletest-rc-to-be-deleted-d6kb2" in namespace "gc-6578" +Oct 13 09:37:54.838: INFO: Deleting pod "simpletest-rc-to-be-deleted-dlf27" in namespace "gc-6578" +Oct 13 09:37:54.858: INFO: Deleting pod "simpletest-rc-to-be-deleted-dlx4t" in namespace "gc-6578" +Oct 13 09:37:54.880: INFO: Deleting pod "simpletest-rc-to-be-deleted-f99bk" in namespace "gc-6578" +Oct 13 09:37:54.904: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkxtq" in namespace "gc-6578" +Oct 13 09:37:54.922: INFO: Deleting pod "simpletest-rc-to-be-deleted-g9tkn" in namespace "gc-6578" +Oct 13 09:37:54.944: INFO: Deleting pod "simpletest-rc-to-be-deleted-glf8h" in namespace "gc-6578" +Oct 13 09:37:54.965: INFO: Deleting pod "simpletest-rc-to-be-deleted-glhft" in namespace "gc-6578" +Oct 13 09:37:54.980: INFO: Deleting pod "simpletest-rc-to-be-deleted-gsdjf" in namespace "gc-6578" +Oct 13 09:37:54.994: INFO: Deleting pod "simpletest-rc-to-be-deleted-h54ls" in namespace "gc-6578" +Oct 13 09:37:55.011: INFO: Deleting pod "simpletest-rc-to-be-deleted-hgdbn" in namespace "gc-6578" +Oct 13 09:37:55.025: INFO: Deleting pod "simpletest-rc-to-be-deleted-hhlh8" in namespace "gc-6578" +Oct 13 09:37:55.040: INFO: Deleting pod "simpletest-rc-to-be-deleted-j24bj" in namespace "gc-6578" +Oct 13 09:37:55.059: INFO: Deleting pod "simpletest-rc-to-be-deleted-jbfvb" in namespace "gc-6578" +Oct 13 09:37:55.074: INFO: Deleting pod "simpletest-rc-to-be-deleted-jgfm8" in namespace "gc-6578" +Oct 13 09:37:55.095: INFO: Deleting pod "simpletest-rc-to-be-deleted-jpl9l" in namespace "gc-6578" +Oct 13 09:37:55.120: INFO: Deleting pod "simpletest-rc-to-be-deleted-jt7qs" in namespace "gc-6578" +Oct 13 09:37:55.135: INFO: Deleting pod "simpletest-rc-to-be-deleted-jxb4t" in namespace "gc-6578" +Oct 13 09:37:55.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-k22pl" in namespace "gc-6578" +Oct 13 09:37:55.172: INFO: Deleting pod "simpletest-rc-to-be-deleted-kg2zz" in namespace "gc-6578" +Oct 13 09:37:55.195: INFO: Deleting pod "simpletest-rc-to-be-deleted-kq7h2" in namespace "gc-6578" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 09:37:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-6578" for this suite. 
10/13/23 09:37:55.221 +------------------------------ +• [SLOW TEST] [17.152 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:38.079 + Oct 13 09:37:38.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 09:37:38.08 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:38.094 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:38.097 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + STEP: create the rc1 10/13/23 09:37:38.102 + STEP: create the rc2 10/13/23 09:37:38.107 + STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 10/13/23 09:37:43.118 + STEP: delete the rc simpletest-rc-to-be-deleted 10/13/23 09:37:43.512 + STEP: wait for the rc to be deleted 10/13/23 09:37:43.523 + Oct 13 09:37:48.537: INFO: 71 pods remaining + Oct 13 09:37:48.537: INFO: 71 pods has nil DeletionTimestamp + Oct 13 09:37:48.537: INFO: + STEP: Gathering metrics 10/13/23 09:37:53.533 + Oct 13 09:37:53.943: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 09:37:53.947: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.784474ms + Oct 13 09:37:53.947: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 09:37:53.947: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 09:37:54.367: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Oct 13 09:37:54.367: INFO: Deleting pod "simpletest-rc-to-be-deleted-2mtxt" in namespace "gc-6578" + Oct 13 09:37:54.377: INFO: Deleting pod "simpletest-rc-to-be-deleted-2wf65" in namespace "gc-6578" + Oct 13 09:37:54.388: INFO: Deleting pod "simpletest-rc-to-be-deleted-4ms8b" in namespace "gc-6578" + Oct 13 09:37:54.399: INFO: Deleting pod "simpletest-rc-to-be-deleted-4s4pj" in namespace "gc-6578" + Oct 13 09:37:54.408: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tp7q" in namespace "gc-6578" + Oct 13 09:37:54.418: INFO: Deleting pod "simpletest-rc-to-be-deleted-55wqj" in namespace "gc-6578" + Oct 13 09:37:54.432: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cvmb" in namespace "gc-6578" + Oct 13 09:37:54.447: INFO: Deleting pod "simpletest-rc-to-be-deleted-5fjzv" in namespace "gc-6578" + Oct 13 09:37:54.462: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mfxp" in namespace "gc-6578" + Oct 13 09:37:54.472: INFO: Deleting pod "simpletest-rc-to-be-deleted-62lpz" in namespace "gc-6578" + Oct 13 09:37:54.484: INFO: Deleting pod "simpletest-rc-to-be-deleted-66wbr" in namespace "gc-6578" + Oct 13 09:37:54.499: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fr9m" in namespace "gc-6578" + Oct 13 09:37:54.520: INFO: Deleting pod "simpletest-rc-to-be-deleted-6m8x8" in namespace "gc-6578" + Oct 13 09:37:54.537: INFO: Deleting pod "simpletest-rc-to-be-deleted-7556g" in namespace "gc-6578" + Oct 13 09:37:54.556: INFO: Deleting pod "simpletest-rc-to-be-deleted-8dw5c" in namespace "gc-6578" + Oct 13 09:37:54.572: INFO: Deleting pod "simpletest-rc-to-be-deleted-8ghnq" in namespace "gc-6578" + Oct 13 09:37:54.588: INFO: Deleting pod "simpletest-rc-to-be-deleted-94hkq" in namespace "gc-6578" + Oct 13 09:37:54.603: INFO: Deleting pod "simpletest-rc-to-be-deleted-96klr" in namespace "gc-6578" + Oct 13 09:37:54.620: INFO: Deleting pod "simpletest-rc-to-be-deleted-96pr5" in namespace "gc-6578" + Oct 13 09:37:54.636: INFO: Deleting pod "simpletest-rc-to-be-deleted-97cx7" in namespace "gc-6578" + Oct 13 09:37:54.655: INFO: Deleting pod "simpletest-rc-to-be-deleted-9dt7z" in namespace "gc-6578" + Oct 13 09:37:54.683: INFO: Deleting pod "simpletest-rc-to-be-deleted-9gst6" in namespace "gc-6578" + Oct 13 09:37:54.699: INFO: Deleting pod "simpletest-rc-to-be-deleted-9j2cr" in namespace "gc-6578" + Oct 13 09:37:54.712: INFO: 
Deleting pod "simpletest-rc-to-be-deleted-9kdsh" in namespace "gc-6578" + Oct 13 09:37:54.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lt7f" in namespace "gc-6578" + Oct 13 09:37:54.745: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjkhn" in namespace "gc-6578" + Oct 13 09:37:54.768: INFO: Deleting pod "simpletest-rc-to-be-deleted-c67sg" in namespace "gc-6578" + Oct 13 09:37:54.780: INFO: Deleting pod "simpletest-rc-to-be-deleted-cfqn7" in namespace "gc-6578" + Oct 13 09:37:54.800: INFO: Deleting pod "simpletest-rc-to-be-deleted-cvm6x" in namespace "gc-6578" + Oct 13 09:37:54.818: INFO: Deleting pod "simpletest-rc-to-be-deleted-d6kb2" in namespace "gc-6578" + Oct 13 09:37:54.838: INFO: Deleting pod "simpletest-rc-to-be-deleted-dlf27" in namespace "gc-6578" + Oct 13 09:37:54.858: INFO: Deleting pod "simpletest-rc-to-be-deleted-dlx4t" in namespace "gc-6578" + Oct 13 09:37:54.880: INFO: Deleting pod "simpletest-rc-to-be-deleted-f99bk" in namespace "gc-6578" + Oct 13 09:37:54.904: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkxtq" in namespace "gc-6578" + Oct 13 09:37:54.922: INFO: Deleting pod "simpletest-rc-to-be-deleted-g9tkn" in namespace "gc-6578" + Oct 13 09:37:54.944: INFO: Deleting pod "simpletest-rc-to-be-deleted-glf8h" in namespace "gc-6578" + Oct 13 09:37:54.965: INFO: Deleting pod "simpletest-rc-to-be-deleted-glhft" in namespace "gc-6578" + Oct 13 09:37:54.980: INFO: Deleting pod "simpletest-rc-to-be-deleted-gsdjf" in namespace "gc-6578" + Oct 13 09:37:54.994: INFO: Deleting pod "simpletest-rc-to-be-deleted-h54ls" in namespace "gc-6578" + Oct 13 09:37:55.011: INFO: Deleting pod "simpletest-rc-to-be-deleted-hgdbn" in namespace "gc-6578" + Oct 13 09:37:55.025: INFO: Deleting pod "simpletest-rc-to-be-deleted-hhlh8" in namespace "gc-6578" + Oct 13 09:37:55.040: INFO: Deleting pod "simpletest-rc-to-be-deleted-j24bj" in namespace "gc-6578" + Oct 13 09:37:55.059: INFO: Deleting pod "simpletest-rc-to-be-deleted-jbfvb" in namespace "gc-6578" + Oct 13 09:37:55.074: INFO: Deleting pod "simpletest-rc-to-be-deleted-jgfm8" in namespace "gc-6578" + Oct 13 09:37:55.095: INFO: Deleting pod "simpletest-rc-to-be-deleted-jpl9l" in namespace "gc-6578" + Oct 13 09:37:55.120: INFO: Deleting pod "simpletest-rc-to-be-deleted-jt7qs" in namespace "gc-6578" + Oct 13 09:37:55.135: INFO: Deleting pod "simpletest-rc-to-be-deleted-jxb4t" in namespace "gc-6578" + Oct 13 09:37:55.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-k22pl" in namespace "gc-6578" + Oct 13 09:37:55.172: INFO: Deleting pod "simpletest-rc-to-be-deleted-kg2zz" in namespace "gc-6578" + Oct 13 09:37:55.195: INFO: Deleting pod "simpletest-rc-to-be-deleted-kq7h2" in namespace "gc-6578" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 09:37:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-6578" for this suite. 
10/13/23 09:37:55.221 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:37:55.234 +Oct 13 09:37:55.234: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 09:37:55.236 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:55.28 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:55.284 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +STEP: Creating a test headless service 10/13/23 09:37:55.287 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local;sleep 1; done + 10/13/23 09:37:55.295 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local;sleep 1; done + 10/13/23 09:37:55.295 +STEP: creating a pod to probe DNS 10/13/23 09:37:55.295 +STEP: submitting the pod to kubernetes 10/13/23 09:37:55.295 +Oct 13 09:37:55.308: INFO: Waiting up to 15m0s for pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f" in namespace "dns-6979" to be "running" +Oct 13 09:37:55.313: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.32107ms +Oct 13 09:37:57.317: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009240788s +Oct 13 09:37:59.318: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.010328118s +Oct 13 09:37:59.318: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f" satisfied condition "running" +STEP: retrieving the pod 10/13/23 09:37:59.318 +STEP: looking for the results for each expected name from probers 10/13/23 09:37:59.321 +Oct 13 09:37:59.324: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.327: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.330: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.332: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.335: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.337: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.340: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.342: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:37:59.342: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:04.348: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.351: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod 
dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.354: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.357: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.360: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.363: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.369: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:04.369: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:09.348: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.352: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.355: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.358: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) 
+Oct 13 09:38:09.361: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.364: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.367: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.370: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:09.370: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:14.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.360: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod 
dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.372: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:14.372: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:19.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.360: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.363: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.373: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:19.373: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:24.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.354: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.361: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.368: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.371: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.374: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) +Oct 13 09:38:24.374: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + +Oct 13 09:38:29.381: INFO: DNS probes using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f succeeded + +STEP: deleting the pod 10/13/23 09:38:29.381 +STEP: deleting the test headless service 10/13/23 09:38:29.402 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:29.419: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-6979" for this suite. 10/13/23 09:38:29.424 +------------------------------ +• [SLOW TEST] [34.198 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:37:55.234 + Oct 13 09:37:55.234: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 09:37:55.236 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:37:55.28 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:37:55.284 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + STEP: Creating a test headless service 10/13/23 09:37:55.287 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local;sleep 1; done + 10/13/23 09:37:55.295 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6979.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local;sleep 1; done + 10/13/23 09:37:55.295 + STEP: creating a pod to probe DNS 10/13/23 09:37:55.295 + STEP: submitting the pod to kubernetes 10/13/23 09:37:55.295 + Oct 13 09:37:55.308: INFO: Waiting up to 15m0s for pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f" in namespace "dns-6979" to be "running" + Oct 13 09:37:55.313: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 5.32107ms + Oct 13 09:37:57.317: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009240788s + Oct 13 09:37:59.318: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f": Phase="Running", Reason="", readiness=true. Elapsed: 4.010328118s + Oct 13 09:37:59.318: INFO: Pod "dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f" satisfied condition "running" + STEP: retrieving the pod 10/13/23 09:37:59.318 + STEP: looking for the results for each expected name from probers 10/13/23 09:37:59.321 + Oct 13 09:37:59.324: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.327: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.330: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.332: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.335: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.337: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.340: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.342: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:37:59.342: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:04.348: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.351: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.354: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.357: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.360: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.363: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.369: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:04.369: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:09.348: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.352: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.355: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: 
the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.358: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.361: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.364: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.367: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.370: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:09.370: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:14.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.360: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.366: INFO: 
Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.372: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:14.372: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:19.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.360: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.363: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.373: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: 
the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:19.373: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:24.349: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.354: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.357: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.361: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.368: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.371: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.374: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local from pod dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f: the server could not find the requested resource (get pods dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f) + Oct 13 09:38:24.374: INFO: Lookups using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6979.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6979.svc.cluster.local jessie_udp@dns-test-service-2.dns-6979.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-6979.svc.cluster.local] + + Oct 13 09:38:29.381: INFO: DNS probes using dns-6979/dns-test-17ed5cd4-2c81-4625-9c77-c176494e9c0f succeeded + + STEP: deleting the pod 10/13/23 09:38:29.381 + STEP: deleting the test headless service 10/13/23 09:38:29.402 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:29.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-6979" for this suite. 10/13/23 09:38:29.424 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:29.434 +Oct 13 09:38:29.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:38:29.435 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:29.454 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:29.457 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +STEP: Creating a pod to test downward API volume plugin 10/13/23 09:38:29.459 +Oct 13 09:38:29.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2" in namespace "projected-8324" to be "Succeeded or Failed" +Oct 13 09:38:29.472: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135158ms +Oct 13 09:38:31.481: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012297269s +Oct 13 09:38:33.478: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010096324s +STEP: Saw pod success 10/13/23 09:38:33.478 +Oct 13 09:38:33.479: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2" satisfied condition "Succeeded or Failed" +Oct 13 09:38:33.484: INFO: Trying to get logs from node node2 pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 container client-container: +STEP: delete the pod 10/13/23 09:38:33.493 +Oct 13 09:38:33.515: INFO: Waiting for pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 to disappear +Oct 13 09:38:33.518: INFO: Pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:33.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8324" for this suite. 10/13/23 09:38:33.522 +------------------------------ +• [4.095 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:29.434 + Oct 13 09:38:29.434: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:38:29.435 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:29.454 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:29.457 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 + STEP: Creating a pod to test downward API volume plugin 10/13/23 09:38:29.459 + Oct 13 09:38:29.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2" in namespace "projected-8324" to be "Succeeded or Failed" + Oct 13 09:38:29.472: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135158ms + Oct 13 09:38:31.481: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012297269s + Oct 13 09:38:33.478: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010096324s + STEP: Saw pod success 10/13/23 09:38:33.478 + Oct 13 09:38:33.479: INFO: Pod "downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2" satisfied condition "Succeeded or Failed" + Oct 13 09:38:33.484: INFO: Trying to get logs from node node2 pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 container client-container: + STEP: delete the pod 10/13/23 09:38:33.493 + Oct 13 09:38:33.515: INFO: Waiting for pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 to disappear + Oct 13 09:38:33.518: INFO: Pod downwardapi-volume-27bcc25f-4660-40ba-a484-0b0e22fd06d2 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:33.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8324" for this suite. 10/13/23 09:38:33.522 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:33.529 +Oct 13 09:38:33.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:38:33.53 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:33.55 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:33.553 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +Oct 13 09:38:33.563: INFO: Waiting up to 2m0s for pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" in namespace "var-expansion-6288" to be "container 0 failed with reason CreateContainerConfigError" +Oct 13 09:38:33.566: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917401ms +Oct 13 09:38:35.570: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007113296s +Oct 13 09:38:35.570: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Oct 13 09:38:35.570: INFO: Deleting pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" in namespace "var-expansion-6288" +Oct 13 09:38:35.582: INFO: Wait up to 5m0s for pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:37.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-6288" for this suite. 10/13/23 09:38:37.598 +------------------------------ +• [4.078 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:33.529 + Oct 13 09:38:33.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:38:33.53 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:33.55 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:33.553 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 + Oct 13 09:38:33.563: INFO: Waiting up to 2m0s for pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" in namespace "var-expansion-6288" to be "container 0 failed with reason CreateContainerConfigError" + Oct 13 09:38:33.566: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917401ms + Oct 13 09:38:35.570: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007113296s + Oct 13 09:38:35.570: INFO: Pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Oct 13 09:38:35.570: INFO: Deleting pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" in namespace "var-expansion-6288" + Oct 13 09:38:35.582: INFO: Wait up to 5m0s for pod "var-expansion-34c2ae40-6e8c-420b-a8f0-62c2fcb7915b" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:37.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-6288" for this suite. 
10/13/23 09:38:37.598 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:37.607 +Oct 13 09:38:37.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:38:37.609 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:37.624 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:37.626 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 +STEP: Creating configMap with name projected-configmap-test-volume-map-eedba127-e94c-48fb-89fd-b1165eeb19cd 10/13/23 09:38:37.629 +STEP: Creating a pod to test consume configMaps 10/13/23 09:38:37.633 +Oct 13 09:38:37.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586" in namespace "projected-9255" to be "Succeeded or Failed" +Oct 13 09:38:37.645: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598263ms +Oct 13 09:38:39.650: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008095937s +Oct 13 09:38:41.651: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009227426s +STEP: Saw pod success 10/13/23 09:38:41.651 +Oct 13 09:38:41.651: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586" satisfied condition "Succeeded or Failed" +Oct 13 09:38:41.656: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 container agnhost-container: +STEP: delete the pod 10/13/23 09:38:41.663 +Oct 13 09:38:41.676: INFO: Waiting for pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 to disappear +Oct 13 09:38:41.679: INFO: Pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:41.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9255" for this suite. 
10/13/23 09:38:41.682 +------------------------------ +• [4.081 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:37.607 + Oct 13 09:38:37.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:38:37.609 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:37.624 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:37.626 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:99 + STEP: Creating configMap with name projected-configmap-test-volume-map-eedba127-e94c-48fb-89fd-b1165eeb19cd 10/13/23 09:38:37.629 + STEP: Creating a pod to test consume configMaps 10/13/23 09:38:37.633 + Oct 13 09:38:37.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586" in namespace "projected-9255" to be "Succeeded or Failed" + Oct 13 09:38:37.645: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598263ms + Oct 13 09:38:39.650: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008095937s + Oct 13 09:38:41.651: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009227426s + STEP: Saw pod success 10/13/23 09:38:41.651 + Oct 13 09:38:41.651: INFO: Pod "pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586" satisfied condition "Succeeded or Failed" + Oct 13 09:38:41.656: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 container agnhost-container: + STEP: delete the pod 10/13/23 09:38:41.663 + Oct 13 09:38:41.676: INFO: Waiting for pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 to disappear + Oct 13 09:38:41.679: INFO: Pod pod-projected-configmaps-ac5c7e8d-116d-415c-a5af-e6fbcc1fa586 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:41.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9255" for this suite. 
10/13/23 09:38:41.682 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:41.689 +Oct 13 09:38:41.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:38:41.69 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:41.704 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:41.707 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:38:41.721 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:38:42.142 +STEP: Deploying the webhook pod 10/13/23 09:38:42.149 +STEP: Wait for the deployment to be ready 10/13/23 09:38:42.161 +Oct 13 09:38:42.168: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:38:44.178 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:38:44.188 +Oct 13 09:38:45.188: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +STEP: Creating a validating webhook configuration 10/13/23 09:38:45.192 +STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.207 +STEP: Updating a validating webhook configuration's rules to not include the create operation 10/13/23 09:38:45.214 +STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.223 +STEP: Patching a validating webhook configuration's rules to include the create operation 10/13/23 09:38:45.231 +STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.237 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:45.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-2628" for this suite. 10/13/23 09:38:45.297 +STEP: Destroying namespace "webhook-2628-markers" for this suite. 
10/13/23 09:38:45.306 +------------------------------ +• [3.624 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:41.689 + Oct 13 09:38:41.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:38:41.69 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:41.704 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:41.707 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:38:41.721 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:38:42.142 + STEP: Deploying the webhook pod 10/13/23 09:38:42.149 + STEP: Wait for the deployment to be ready 10/13/23 09:38:42.161 + Oct 13 09:38:42.168: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:38:44.178 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:38:44.188 + Oct 13 09:38:45.188: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 + STEP: Creating a validating webhook configuration 10/13/23 09:38:45.192 + STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.207 + STEP: Updating a validating webhook configuration's rules to not include the create operation 10/13/23 09:38:45.214 + STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.223 + STEP: Patching a validating webhook configuration's rules to include the create operation 10/13/23 09:38:45.231 + STEP: Creating a configMap that does not comply to the validation webhook rules 10/13/23 09:38:45.237 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:45.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-2628" for this suite. 10/13/23 09:38:45.297 + STEP: Destroying namespace "webhook-2628-markers" for this suite. 
10/13/23 09:38:45.306 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:45.313 +Oct 13 09:38:45.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:38:45.314 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:45.33 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:45.332 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +STEP: Creating Pod 10/13/23 09:38:45.334 +Oct 13 09:38:45.342: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6" in namespace "emptydir-5714" to be "running" +Oct 13 09:38:45.345: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.765556ms +Oct 13 09:38:47.350: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6": Phase="Running", Reason="", readiness=false. Elapsed: 2.008394148s +Oct 13 09:38:47.350: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6" satisfied condition "running" +STEP: Reading file content from the nginx-container 10/13/23 09:38:47.35 +Oct 13 09:38:47.350: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5714 PodName:pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:38:47.350: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:38:47.351: INFO: ExecWithOptions: Clientset creation +Oct 13 09:38:47.351: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/emptydir-5714/pods/pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) +Oct 13 09:38:47.407: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:38:47.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-5714" for this suite. 
10/13/23 09:38:47.411 +------------------------------ +• [2.104 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:45.313 + Oct 13 09:38:45.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:38:45.314 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:45.33 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:45.332 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 + STEP: Creating Pod 10/13/23 09:38:45.334 + Oct 13 09:38:45.342: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6" in namespace "emptydir-5714" to be "running" + Oct 13 09:38:45.345: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.765556ms + Oct 13 09:38:47.350: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6": Phase="Running", Reason="", readiness=false. Elapsed: 2.008394148s + Oct 13 09:38:47.350: INFO: Pod "pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6" satisfied condition "running" + STEP: Reading file content from the nginx-container 10/13/23 09:38:47.35 + Oct 13 09:38:47.350: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5714 PodName:pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:38:47.350: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:38:47.351: INFO: ExecWithOptions: Clientset creation + Oct 13 09:38:47.351: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/emptydir-5714/pods/pod-sharedvolume-a5b02c48-641e-4bfb-bea3-168d25e378e6/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) + Oct 13 09:38:47.407: INFO: Exec stderr: "" + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:38:47.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-5714" for this suite. 
10/13/23 09:38:47.411 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:38:47.417 +Oct 13 09:38:47.417: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename subpath 10/13/23 09:38:47.418 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:47.432 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:47.435 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 10/13/23 09:38:47.437 +[It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +STEP: Creating pod pod-subpath-test-secret-r7jq 10/13/23 09:38:47.445 +STEP: Creating a pod to test atomic-volume-subpath 10/13/23 09:38:47.445 +Oct 13 09:38:47.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-r7jq" in namespace "subpath-7211" to be "Succeeded or Failed" +Oct 13 09:38:47.455: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.927138ms +Oct 13 09:38:49.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 2.008675604s +Oct 13 09:38:51.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 4.007568687s +Oct 13 09:38:53.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 6.008764616s +Oct 13 09:38:55.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 8.008226556s +Oct 13 09:38:57.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 10.007531751s +Oct 13 09:38:59.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 12.008367568s +Oct 13 09:39:01.462: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 14.009458649s +Oct 13 09:39:03.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 16.009207907s +Oct 13 09:39:05.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 18.00880008s +Oct 13 09:39:07.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 20.009038369s +Oct 13 09:39:09.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=false. Elapsed: 22.008377396s +Oct 13 09:39:11.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.009039854s +STEP: Saw pod success 10/13/23 09:39:11.461 +Oct 13 09:39:11.461: INFO: Pod "pod-subpath-test-secret-r7jq" satisfied condition "Succeeded or Failed" +Oct 13 09:39:11.468: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-r7jq container test-container-subpath-secret-r7jq: +STEP: delete the pod 10/13/23 09:39:11.495 +Oct 13 09:39:11.510: INFO: Waiting for pod pod-subpath-test-secret-r7jq to disappear +Oct 13 09:39:11.514: INFO: Pod pod-subpath-test-secret-r7jq no longer exists +STEP: Deleting pod pod-subpath-test-secret-r7jq 10/13/23 09:39:11.514 +Oct 13 09:39:11.514: INFO: Deleting pod "pod-subpath-test-secret-r7jq" in namespace "subpath-7211" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:11.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-7211" for this suite. 10/13/23 09:39:11.52 +------------------------------ +• [SLOW TEST] [24.110 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:38:47.417 + Oct 13 09:38:47.417: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename subpath 10/13/23 09:38:47.418 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:38:47.432 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:38:47.435 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 10/13/23 09:38:47.437 + [It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + STEP: Creating pod pod-subpath-test-secret-r7jq 10/13/23 09:38:47.445 + STEP: Creating a pod to test atomic-volume-subpath 10/13/23 09:38:47.445 + Oct 13 09:38:47.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-r7jq" in namespace "subpath-7211" to be "Succeeded or Failed" + Oct 13 09:38:47.455: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.927138ms + Oct 13 09:38:49.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 2.008675604s + Oct 13 09:38:51.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 4.007568687s + Oct 13 09:38:53.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 6.008764616s + Oct 13 09:38:55.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 8.008226556s + Oct 13 09:38:57.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 10.007531751s + Oct 13 09:38:59.460: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.008367568s + Oct 13 09:39:01.462: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 14.009458649s + Oct 13 09:39:03.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 16.009207907s + Oct 13 09:39:05.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 18.00880008s + Oct 13 09:39:07.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=true. Elapsed: 20.009038369s + Oct 13 09:39:09.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Running", Reason="", readiness=false. Elapsed: 22.008377396s + Oct 13 09:39:11.461: INFO: Pod "pod-subpath-test-secret-r7jq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009039854s + STEP: Saw pod success 10/13/23 09:39:11.461 + Oct 13 09:39:11.461: INFO: Pod "pod-subpath-test-secret-r7jq" satisfied condition "Succeeded or Failed" + Oct 13 09:39:11.468: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-r7jq container test-container-subpath-secret-r7jq: + STEP: delete the pod 10/13/23 09:39:11.495 + Oct 13 09:39:11.510: INFO: Waiting for pod pod-subpath-test-secret-r7jq to disappear + Oct 13 09:39:11.514: INFO: Pod pod-subpath-test-secret-r7jq no longer exists + STEP: Deleting pod pod-subpath-test-secret-r7jq 10/13/23 09:39:11.514 + Oct 13 09:39:11.514: INFO: Deleting pod "pod-subpath-test-secret-r7jq" in namespace "subpath-7211" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:11.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-7211" for this suite. 
10/13/23 09:39:11.52 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:11.529 +Oct 13 09:39:11.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:39:11.529 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:11.546 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:11.549 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +STEP: Creating configMap with name projected-configmap-test-volume-699422d4-f84f-4290-a1f2-940dd178d0ce 10/13/23 09:39:11.552 +STEP: Creating a pod to test consume configMaps 10/13/23 09:39:11.556 +Oct 13 09:39:11.565: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907" in namespace "projected-4205" to be "Succeeded or Failed" +Oct 13 09:39:11.568: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440764ms +Oct 13 09:39:13.574: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009311429s +Oct 13 09:39:15.574: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009664756s +STEP: Saw pod success 10/13/23 09:39:15.574 +Oct 13 09:39:15.575: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907" satisfied condition "Succeeded or Failed" +Oct 13 09:39:15.579: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 container agnhost-container: +STEP: delete the pod 10/13/23 09:39:15.588 +Oct 13 09:39:15.604: INFO: Waiting for pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 to disappear +Oct 13 09:39:15.607: INFO: Pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:15.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-4205" for this suite. 
10/13/23 09:39:15.611 +------------------------------ +• [4.089 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:11.529 + Oct 13 09:39:11.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:39:11.529 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:11.546 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:11.549 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 + STEP: Creating configMap with name projected-configmap-test-volume-699422d4-f84f-4290-a1f2-940dd178d0ce 10/13/23 09:39:11.552 + STEP: Creating a pod to test consume configMaps 10/13/23 09:39:11.556 + Oct 13 09:39:11.565: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907" in namespace "projected-4205" to be "Succeeded or Failed" + Oct 13 09:39:11.568: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440764ms + Oct 13 09:39:13.574: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009311429s + Oct 13 09:39:15.574: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009664756s + STEP: Saw pod success 10/13/23 09:39:15.574 + Oct 13 09:39:15.575: INFO: Pod "pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907" satisfied condition "Succeeded or Failed" + Oct 13 09:39:15.579: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 container agnhost-container: + STEP: delete the pod 10/13/23 09:39:15.588 + Oct 13 09:39:15.604: INFO: Waiting for pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 to disappear + Oct 13 09:39:15.607: INFO: Pod pod-projected-configmaps-9c6241ed-d90d-45dd-aa07-e7394cca4907 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:15.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-4205" for this suite. 
10/13/23 09:39:15.611 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:15.619 +Oct 13 09:39:15.619: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename configmap 10/13/23 09:39:15.619 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:15.635 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:15.637 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +STEP: Creating configMap that has name configmap-test-emptyKey-8fd20b6d-3e22-4b68-bf01-e5bef0abe593 10/13/23 09:39:15.639 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:15.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-5888" for this suite. 10/13/23 09:39:15.645 +------------------------------ +• [0.031 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:15.619 + Oct 13 09:39:15.619: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename configmap 10/13/23 09:39:15.619 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:15.635 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:15.637 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 + STEP: Creating configMap that has name configmap-test-emptyKey-8fd20b6d-3e22-4b68-bf01-e5bef0abe593 10/13/23 09:39:15.639 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:15.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-5888" for this suite. 
10/13/23 09:39:15.645 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:15.65 +Oct 13 09:39:15.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:39:15.651 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:15.666 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:15.668 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +STEP: Creating projection with secret that has name projected-secret-test-af0c7c2e-347b-439c-865b-b011a2b3540d 10/13/23 09:39:15.67 +STEP: Creating a pod to test consume secrets 10/13/23 09:39:15.675 +Oct 13 09:39:15.683: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48" in namespace "projected-8973" to be "Succeeded or Failed" +Oct 13 09:39:15.687: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755013ms +Oct 13 09:39:17.693: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009747234s +Oct 13 09:39:19.692: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008234328s +STEP: Saw pod success 10/13/23 09:39:19.692 +Oct 13 09:39:19.692: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48" satisfied condition "Succeeded or Failed" +Oct 13 09:39:19.695: INFO: Trying to get logs from node node2 pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 container projected-secret-volume-test: +STEP: delete the pod 10/13/23 09:39:19.701 +Oct 13 09:39:19.712: INFO: Waiting for pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 to disappear +Oct 13 09:39:19.714: INFO: Pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:19.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8973" for this suite. 
10/13/23 09:39:19.718 +------------------------------ +• [4.073 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:15.65 + Oct 13 09:39:15.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:39:15.651 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:15.666 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:15.668 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 + STEP: Creating projection with secret that has name projected-secret-test-af0c7c2e-347b-439c-865b-b011a2b3540d 10/13/23 09:39:15.67 + STEP: Creating a pod to test consume secrets 10/13/23 09:39:15.675 + Oct 13 09:39:15.683: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48" in namespace "projected-8973" to be "Succeeded or Failed" + Oct 13 09:39:15.687: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755013ms + Oct 13 09:39:17.693: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009747234s + Oct 13 09:39:19.692: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008234328s + STEP: Saw pod success 10/13/23 09:39:19.692 + Oct 13 09:39:19.692: INFO: Pod "pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48" satisfied condition "Succeeded or Failed" + Oct 13 09:39:19.695: INFO: Trying to get logs from node node2 pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 container projected-secret-volume-test: + STEP: delete the pod 10/13/23 09:39:19.701 + Oct 13 09:39:19.712: INFO: Waiting for pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 to disappear + Oct 13 09:39:19.714: INFO: Pod pod-projected-secrets-15e53760-bedd-494c-ba20-55eb06384a48 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:19.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8973" for this suite. 
10/13/23 09:39:19.718 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:19.724 +Oct 13 09:39:19.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:39:19.725 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:19.739 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:19.741 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:39:19.755 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:39:20.311 +STEP: Deploying the webhook pod 10/13/23 09:39:20.321 +STEP: Wait for the deployment to be ready 10/13/23 09:39:20.332 +Oct 13 09:39:20.340: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 10/13/23 09:39:22.354 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:39:22.37 +Oct 13 09:39:23.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +STEP: Creating a mutating webhook configuration 10/13/23 09:39:23.374 +STEP: Updating a mutating webhook configuration's rules to not include the create operation 10/13/23 09:39:23.394 +STEP: Creating a configMap that should not be mutated 10/13/23 09:39:23.399 +STEP: Patching a mutating webhook configuration's rules to include the create operation 10/13/23 09:39:23.408 +STEP: Creating a configMap that should be mutated 10/13/23 09:39:23.415 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:23.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-2248" for this suite. 10/13/23 09:39:23.478 +STEP: Destroying namespace "webhook-2248-markers" for this suite. 
10/13/23 09:39:23.484 +------------------------------ +• [3.769 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:19.724 + Oct 13 09:39:19.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:39:19.725 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:19.739 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:19.741 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:39:19.755 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:39:20.311 + STEP: Deploying the webhook pod 10/13/23 09:39:20.321 + STEP: Wait for the deployment to be ready 10/13/23 09:39:20.332 + Oct 13 09:39:20.340: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 10/13/23 09:39:22.354 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:39:22.37 + Oct 13 09:39:23.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 + STEP: Creating a mutating webhook configuration 10/13/23 09:39:23.374 + STEP: Updating a mutating webhook configuration's rules to not include the create operation 10/13/23 09:39:23.394 + STEP: Creating a configMap that should not be mutated 10/13/23 09:39:23.399 + STEP: Patching a mutating webhook configuration's rules to include the create operation 10/13/23 09:39:23.408 + STEP: Creating a configMap that should be mutated 10/13/23 09:39:23.415 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:23.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-2248" for this suite. 10/13/23 09:39:23.478 + STEP: Destroying namespace "webhook-2248-markers" for this suite. 
10/13/23 09:39:23.484 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:23.496 +Oct 13 09:39:23.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replication-controller 10/13/23 09:39:23.497 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:23.517 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:23.519 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +STEP: Creating replication controller my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9 10/13/23 09:39:23.522 +Oct 13 09:39:23.529: INFO: Pod name my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Found 0 pods out of 1 +Oct 13 09:39:28.535: INFO: Pod name my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Found 1 pods out of 1 +Oct 13 09:39:28.535: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9" are running +Oct 13 09:39:28.535: INFO: Waiting up to 5m0s for pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" in namespace "replication-controller-8520" to be "running" +Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.611769ms +Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" satisfied condition "running" +Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:23 +0000 UTC Reason: Message:}]) +Oct 13 09:39:28.539: INFO: Trying to dial the pod +Oct 13 09:39:33.551: INFO: Controller my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Got expected result from replica 1 [my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv]: "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:33.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-8520" for this suite. 10/13/23 09:39:33.556 +------------------------------ +• [SLOW TEST] [10.068 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:23.496 + Oct 13 09:39:23.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replication-controller 10/13/23 09:39:23.497 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:23.517 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:23.519 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + STEP: Creating replication controller my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9 10/13/23 09:39:23.522 + Oct 13 09:39:23.529: INFO: Pod name my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Found 0 pods out of 1 + Oct 13 09:39:28.535: INFO: Pod name my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Found 1 pods out of 1 + Oct 13 09:39:28.535: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9" are running + Oct 13 09:39:28.535: INFO: Waiting up to 5m0s for pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" in namespace "replication-controller-8520" to be "running" + Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv": 
Phase="Running", Reason="", readiness=true. Elapsed: 3.611769ms + Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" satisfied condition "running" + Oct 13 09:39:28.539: INFO: Pod "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-13 09:39:23 +0000 UTC Reason: Message:}]) + Oct 13 09:39:28.539: INFO: Trying to dial the pod + Oct 13 09:39:33.551: INFO: Controller my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9: Got expected result from replica 1 [my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv]: "my-hostname-basic-ef134722-e849-4552-812b-96deb29db8f9-t5mdv", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:33.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-8520" for this suite. 10/13/23 09:39:33.556 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:33.565 +Oct 13 09:39:33.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-runtime 10/13/23 09:39:33.566 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:33.587 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:33.59 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 10/13/23 09:39:33.601 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 10/13/23 09:39:50.695 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 10/13/23 09:39:50.698 +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 10/13/23 09:39:50.708 +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 10/13/23 09:39:50.708 +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 10/13/23 09:39:50.737 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 10/13/23 09:39:52.75 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 
10/13/23 09:39:54.763 +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 10/13/23 09:39:54.77 +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 10/13/23 09:39:54.77 +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 10/13/23 09:39:54.795 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 10/13/23 09:39:55.802 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 10/13/23 09:39:57.815 +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 10/13/23 09:39:57.826 +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 10/13/23 09:39:57.826 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Oct 13 09:39:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-3497" for this suite. 10/13/23 09:39:57.858 +------------------------------ +• [SLOW TEST] [24.301 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + when starting a container that exits + test/e2e/common/node/runtime.go:45 + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:33.565 + Oct 13 09:39:33.565: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-runtime 10/13/23 09:39:33.566 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:33.587 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:33.59 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 + STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 10/13/23 09:39:33.601 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 10/13/23 09:39:50.695 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 10/13/23 09:39:50.698 + STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 10/13/23 09:39:50.708 + STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 10/13/23 09:39:50.708 + STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 10/13/23 09:39:50.737 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 10/13/23 09:39:52.75 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 10/13/23 09:39:54.763 + STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 10/13/23 09:39:54.77 + STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 10/13/23 09:39:54.77 + STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 10/13/23 09:39:54.795 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 10/13/23 09:39:55.802 + STEP: 
Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 10/13/23 09:39:57.815 + STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 10/13/23 09:39:57.826 + STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 10/13/23 09:39:57.826 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Oct 13 09:39:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-3497" for this suite. 10/13/23 09:39:57.858 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:39:57.866 +Oct 13 09:39:57.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:39:57.867 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:57.882 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:57.884 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +STEP: Creating a pod to test emptydir 0777 on node default medium 10/13/23 09:39:57.887 +Oct 13 09:39:57.896: INFO: Waiting up to 5m0s for pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9" in namespace "emptydir-1722" to be "Succeeded or Failed" +Oct 13 09:39:57.899: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006697ms +Oct 13 09:39:59.903: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0076286s +Oct 13 09:40:01.905: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009299316s +STEP: Saw pod success 10/13/23 09:40:01.905 +Oct 13 09:40:01.906: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9" satisfied condition "Succeeded or Failed" +Oct 13 09:40:01.910: INFO: Trying to get logs from node node2 pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 container test-container: +STEP: delete the pod 10/13/23 09:40:01.917 +Oct 13 09:40:01.930: INFO: Waiting for pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 to disappear +Oct 13 09:40:01.933: INFO: Pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:40:01.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1722" for this suite. 
10/13/23 09:40:01.937 +------------------------------ +• [4.077 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:39:57.866 + Oct 13 09:39:57.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:39:57.867 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:39:57.882 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:39:57.884 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 + STEP: Creating a pod to test emptydir 0777 on node default medium 10/13/23 09:39:57.887 + Oct 13 09:39:57.896: INFO: Waiting up to 5m0s for pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9" in namespace "emptydir-1722" to be "Succeeded or Failed" + Oct 13 09:39:57.899: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006697ms + Oct 13 09:39:59.903: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0076286s + Oct 13 09:40:01.905: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009299316s + STEP: Saw pod success 10/13/23 09:40:01.905 + Oct 13 09:40:01.906: INFO: Pod "pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9" satisfied condition "Succeeded or Failed" + Oct 13 09:40:01.910: INFO: Trying to get logs from node node2 pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 container test-container: + STEP: delete the pod 10/13/23 09:40:01.917 + Oct 13 09:40:01.930: INFO: Waiting for pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 to disappear + Oct 13 09:40:01.933: INFO: Pod pod-31a8a1d7-c0cc-4eb7-9a2d-20d268f3f7e9 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:40:01.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1722" for this suite. 
10/13/23 09:40:01.937 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:40:01.946 +Oct 13 09:40:01.946: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:40:01.947 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:01.965 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:01.969 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +STEP: creating the pod 10/13/23 09:40:01.972 +STEP: waiting for pod running 10/13/23 09:40:01.981 +Oct 13 09:40:01.981: INFO: Waiting up to 2m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" to be "running" +Oct 13 09:40:01.986: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641034ms +Oct 13 09:40:03.991: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Running", Reason="", readiness=true. Elapsed: 2.009672094s +Oct 13 09:40:03.991: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" satisfied condition "running" +STEP: creating a file in subpath 10/13/23 09:40:03.991 +Oct 13 09:40:03.995: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2534 PodName:var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:40:03.995: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:40:03.996: INFO: ExecWithOptions: Clientset creation +Oct 13 09:40:03.996: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-2534/pods/var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: test for file in mounted path 10/13/23 09:40:04.061 +Oct 13 09:40:04.065: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2534 PodName:var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Oct 13 09:40:04.065: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +Oct 13 09:40:04.066: INFO: ExecWithOptions: Clientset creation +Oct 13 09:40:04.066: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-2534/pods/var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: updating the annotation value 10/13/23 09:40:04.141 +Oct 13 09:40:04.655: INFO: Successfully updated pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" +STEP: waiting for annotated pod running 10/13/23 09:40:04.656 +Oct 13 
09:40:04.656: INFO: Waiting up to 2m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" to be "running" +Oct 13 09:40:04.659: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Running", Reason="", readiness=true. Elapsed: 3.233974ms +Oct 13 09:40:04.659: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" satisfied condition "running" +STEP: deleting the pod gracefully 10/13/23 09:40:04.659 +Oct 13 09:40:04.659: INFO: Deleting pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" +Oct 13 09:40:04.666: INFO: Wait up to 5m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:40:38.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-2534" for this suite. 10/13/23 09:40:38.68 +------------------------------ +• [SLOW TEST] [36.742 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:40:01.946 + Oct 13 09:40:01.946: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:40:01.947 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:01.965 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:01.969 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 + STEP: creating the pod 10/13/23 09:40:01.972 + STEP: waiting for pod running 10/13/23 09:40:01.981 + Oct 13 09:40:01.981: INFO: Waiting up to 2m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" to be "running" + Oct 13 09:40:01.986: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641034ms + Oct 13 09:40:03.991: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009672094s + Oct 13 09:40:03.991: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" satisfied condition "running" + STEP: creating a file in subpath 10/13/23 09:40:03.991 + Oct 13 09:40:03.995: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2534 PodName:var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:40:03.995: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:40:03.996: INFO: ExecWithOptions: Clientset creation + Oct 13 09:40:03.996: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-2534/pods/var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: test for file in mounted path 10/13/23 09:40:04.061 + Oct 13 09:40:04.065: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2534 PodName:var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Oct 13 09:40:04.065: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + Oct 13 09:40:04.066: INFO: ExecWithOptions: Clientset creation + Oct 13 09:40:04.066: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-2534/pods/var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: updating the annotation value 10/13/23 09:40:04.141 + Oct 13 09:40:04.655: INFO: Successfully updated pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" + STEP: waiting for annotated pod running 10/13/23 09:40:04.656 + Oct 13 09:40:04.656: INFO: Waiting up to 2m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" to be "running" + Oct 13 09:40:04.659: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4": Phase="Running", Reason="", readiness=true. Elapsed: 3.233974ms + Oct 13 09:40:04.659: INFO: Pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" satisfied condition "running" + STEP: deleting the pod gracefully 10/13/23 09:40:04.659 + Oct 13 09:40:04.659: INFO: Deleting pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" in namespace "var-expansion-2534" + Oct 13 09:40:04.666: INFO: Wait up to 5m0s for pod "var-expansion-60bb8f29-0244-49b9-beb8-4082d20313c4" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:40:38.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-2534" for this suite. 
10/13/23 09:40:38.68 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:40:38.689 +Oct 13 09:40:38.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:40:38.689 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:38.709 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:38.712 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +Oct 13 09:40:38.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: creating the pod 10/13/23 09:40:38.715 +STEP: submitting the pod to kubernetes 10/13/23 09:40:38.715 +Oct 13 09:40:38.723: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b" in namespace "pods-2018" to be "running and ready" +Oct 13 09:40:38.727: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630371ms +Oct 13 09:40:38.727: INFO: The phase of Pod pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:40:40.733: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b": Phase="Running", Reason="", readiness=true. Elapsed: 2.010188747s +Oct 13 09:40:40.733: INFO: The phase of Pod pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b is Running (Ready = true) +Oct 13 09:40:40.733: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 09:40:40.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-2018" for this suite. 
10/13/23 09:40:40.759 +------------------------------ +• [2.077 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:40:38.689 + Oct 13 09:40:38.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:40:38.689 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:38.709 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:38.712 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 + Oct 13 09:40:38.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: creating the pod 10/13/23 09:40:38.715 + STEP: submitting the pod to kubernetes 10/13/23 09:40:38.715 + Oct 13 09:40:38.723: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b" in namespace "pods-2018" to be "running and ready" + Oct 13 09:40:38.727: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630371ms + Oct 13 09:40:38.727: INFO: The phase of Pod pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:40:40.733: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b": Phase="Running", Reason="", readiness=true. Elapsed: 2.010188747s + Oct 13 09:40:40.733: INFO: The phase of Pod pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b is Running (Ready = true) + Oct 13 09:40:40.733: INFO: Pod "pod-logs-websocket-24be6747-fc36-4984-87d3-349fff13781b" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 09:40:40.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-2018" for this suite. 
10/13/23 09:40:40.759 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:40:40.766 +Oct 13 09:40:40.766: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename container-probe 10/13/23 09:40:40.767 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:40.785 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:40.788 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 +STEP: Creating pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 in namespace container-probe-6197 10/13/23 09:40:40.791 +Oct 13 09:40:40.799: INFO: Waiting up to 5m0s for pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2" in namespace "container-probe-6197" to be "not pending" +Oct 13 09:40:40.802: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.942918ms +Oct 13 09:40:42.808: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009294265s +Oct 13 09:40:42.808: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2" satisfied condition "not pending" +Oct 13 09:40:42.808: INFO: Started pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 in namespace container-probe-6197 +STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:40:42.808 +Oct 13 09:40:42.812: INFO: Initial restart count of pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is 0 +Oct 13 09:41:02.871: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 1 (20.059425758s elapsed) +Oct 13 09:41:22.925: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 2 (40.112884464s elapsed) +Oct 13 09:41:42.983: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 3 (1m0.170770957s elapsed) +Oct 13 09:42:03.042: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 4 (1m20.230472209s elapsed) +Oct 13 09:43:11.256: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 5 (2m28.443887917s elapsed) +STEP: deleting the pod 10/13/23 09:43:11.256 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Oct 13 09:43:11.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-6197" for this suite. 
10/13/23 09:43:11.272 +------------------------------ +• [SLOW TEST] [150.512 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:40:40.766 + Oct 13 09:40:40.766: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename container-probe 10/13/23 09:40:40.767 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:40:40.785 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:40:40.788 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:199 + STEP: Creating pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 in namespace container-probe-6197 10/13/23 09:40:40.791 + Oct 13 09:40:40.799: INFO: Waiting up to 5m0s for pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2" in namespace "container-probe-6197" to be "not pending" + Oct 13 09:40:40.802: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.942918ms + Oct 13 09:40:42.808: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009294265s + Oct 13 09:40:42.808: INFO: Pod "liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2" satisfied condition "not pending" + Oct 13 09:40:42.808: INFO: Started pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 in namespace container-probe-6197 + STEP: checking the pod's current state and verifying that restartCount is present 10/13/23 09:40:42.808 + Oct 13 09:40:42.812: INFO: Initial restart count of pod liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is 0 + Oct 13 09:41:02.871: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 1 (20.059425758s elapsed) + Oct 13 09:41:22.925: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 2 (40.112884464s elapsed) + Oct 13 09:41:42.983: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 3 (1m0.170770957s elapsed) + Oct 13 09:42:03.042: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 4 (1m20.230472209s elapsed) + Oct 13 09:43:11.256: INFO: Restart count of pod container-probe-6197/liveness-c420f629-62cd-4b5d-a4d0-9d791d3a11b2 is now 5 (2m28.443887917s elapsed) + STEP: deleting the pod 10/13/23 09:43:11.256 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Oct 13 09:43:11.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-6197" for this suite. 
10/13/23 09:43:11.272 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:43:11.279 +Oct 13 09:43:11.279: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:43:11.28 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:11.296 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:11.299 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +Oct 13 09:43:11.314: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-2163 to be scheduled +Oct 13 09:43:11.317: INFO: 1 pods are not scheduled: [runtimeclass-2163/test-runtimeclass-runtimeclass-2163-preconfigured-handler-z859d(74cd6ed3-9461-4ec3-a844-c7f894486ff8)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Oct 13 09:43:13.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-2163" for this suite. 
10/13/23 09:43:13.333 +------------------------------ +• [2.065 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:43:11.279 + Oct 13 09:43:11.279: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename runtimeclass 10/13/23 09:43:11.28 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:11.296 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:11.299 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + Oct 13 09:43:11.314: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-2163 to be scheduled + Oct 13 09:43:11.317: INFO: 1 pods are not scheduled: [runtimeclass-2163/test-runtimeclass-runtimeclass-2163-preconfigured-handler-z859d(74cd6ed3-9461-4ec3-a844-c7f894486ff8)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Oct 13 09:43:13.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-2163" for this suite. 10/13/23 09:43:13.333 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:43:13.344 +Oct 13 09:43:13.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context 10/13/23 09:43:13.345 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:13.361 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:13.364 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 10/13/23 09:43:13.366 +Oct 13 09:43:13.375: INFO: Waiting up to 5m0s for pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282" in namespace "security-context-6146" to be "Succeeded or Failed" +Oct 13 09:43:13.379: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627853ms +Oct 13 09:43:15.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009578292s +Oct 13 09:43:17.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009488694s +STEP: Saw pod success 10/13/23 09:43:17.385 +Oct 13 09:43:17.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282" satisfied condition "Succeeded or Failed" +Oct 13 09:43:17.389: INFO: Trying to get logs from node node2 pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 container test-container: +STEP: delete the pod 10/13/23 09:43:17.4 +Oct 13 09:43:17.410: INFO: Waiting for pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 to disappear +Oct 13 09:43:17.413: INFO: Pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 09:43:17.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-6146" for this suite. 10/13/23 09:43:17.416 +------------------------------ +• [4.077 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:43:13.344 + Oct 13 09:43:13.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context 10/13/23 09:43:13.345 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:13.361 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:13.364 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 10/13/23 09:43:13.366 + Oct 13 09:43:13.375: INFO: Waiting up to 5m0s for pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282" in namespace "security-context-6146" to be "Succeeded or Failed" + Oct 13 09:43:13.379: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627853ms + Oct 13 09:43:15.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009578292s + Oct 13 09:43:17.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009488694s + STEP: Saw pod success 10/13/23 09:43:17.385 + Oct 13 09:43:17.385: INFO: Pod "security-context-bc6b9a2e-9890-499f-8222-5b72aa448282" satisfied condition "Succeeded or Failed" + Oct 13 09:43:17.389: INFO: Trying to get logs from node node2 pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 container test-container: + STEP: delete the pod 10/13/23 09:43:17.4 + Oct 13 09:43:17.410: INFO: Waiting for pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 to disappear + Oct 13 09:43:17.413: INFO: Pod security-context-bc6b9a2e-9890-499f-8222-5b72aa448282 no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 09:43:17.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-6146" for this suite. 10/13/23 09:43:17.416 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:43:17.423 +Oct 13 09:43:17.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename replicaset 10/13/23 09:43:17.424 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:17.439 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:17.441 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +STEP: Create a ReplicaSet 10/13/23 09:43:17.444 +STEP: Verify that the required pods have come up 10/13/23 09:43:17.45 +Oct 13 09:43:17.453: INFO: Pod name sample-pod: Found 0 pods out of 3 +Oct 13 09:43:22.459: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running 10/13/23 09:43:22.459 +Oct 13 09:43:22.463: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets 10/13/23 09:43:22.463 +STEP: DeleteCollection of the ReplicaSets 10/13/23 09:43:22.467 +STEP: After DeleteCollection verify that ReplicaSets have been deleted 10/13/23 09:43:22.477 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:43:22.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-4436" for this suite. 
10/13/23 09:43:22.485 +------------------------------ +• [SLOW TEST] [5.068 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:43:17.423 + Oct 13 09:43:17.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename replicaset 10/13/23 09:43:17.424 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:17.439 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:17.441 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + STEP: Create a ReplicaSet 10/13/23 09:43:17.444 + STEP: Verify that the required pods have come up 10/13/23 09:43:17.45 + Oct 13 09:43:17.453: INFO: Pod name sample-pod: Found 0 pods out of 3 + Oct 13 09:43:22.459: INFO: Pod name sample-pod: Found 3 pods out of 3 + STEP: ensuring each pod is running 10/13/23 09:43:22.459 + Oct 13 09:43:22.463: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} + STEP: Listing all ReplicaSets 10/13/23 09:43:22.463 + STEP: DeleteCollection of the ReplicaSets 10/13/23 09:43:22.467 + STEP: After DeleteCollection verify that ReplicaSets have been deleted 10/13/23 09:43:22.477 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:43:22.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-4436" for this suite. 
10/13/23 09:43:22.485 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:43:22.491 +Oct 13 09:43:22.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context-test 10/13/23 09:43:22.492 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:22.522 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:22.525 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +Oct 13 09:43:22.538: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c" in namespace "security-context-test-8194" to be "Succeeded or Failed" +Oct 13 09:43:22.543: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571427ms +Oct 13 09:43:24.546: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00754306s +Oct 13 09:43:26.548: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009416492s +Oct 13 09:43:26.548: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 09:43:26.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-8194" for this suite. 
10/13/23 09:43:26.552 +------------------------------ +• [4.069 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a container with runAsUser + test/e2e/common/node/security_context.go:309 + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:43:22.491 + Oct 13 09:43:22.491: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context-test 10/13/23 09:43:22.492 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:22.522 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:22.525 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 + Oct 13 09:43:22.538: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c" in namespace "security-context-test-8194" to be "Succeeded or Failed" + Oct 13 09:43:22.543: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571427ms + Oct 13 09:43:24.546: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00754306s + Oct 13 09:43:26.548: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009416492s + Oct 13 09:43:26.548: INFO: Pod "busybox-user-65534-8f521e2b-6783-4897-9743-4228609c356c" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 09:43:26.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-8194" for this suite. 
10/13/23 09:43:26.552 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:43:26.56 +Oct 13 09:43:26.560: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename gc 10/13/23 09:43:26.561 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:26.575 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:26.577 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +STEP: create the rc 10/13/23 09:43:26.583 +STEP: delete the rc 10/13/23 09:43:31.592 +STEP: wait for the rc to be deleted 10/13/23 09:43:31.599 +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 10/13/23 09:43:36.602 +STEP: Gathering metrics 10/13/23 09:44:06.627 +Oct 13 09:44:06.650: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" +Oct 13 09:44:06.654: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. Elapsed: 3.562094ms +Oct 13 09:44:06.654: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) +Oct 13 09:44:06.654: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" +Oct 13 09:44:06.724: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Oct 13 09:44:06.724: INFO: Deleting pod "simpletest.rc-26rsj" in namespace "gc-7187" +Oct 13 09:44:06.741: INFO: Deleting pod "simpletest.rc-2r24j" in namespace "gc-7187" +Oct 13 09:44:06.753: INFO: Deleting pod "simpletest.rc-44895" in namespace "gc-7187" +Oct 13 09:44:06.766: INFO: Deleting pod "simpletest.rc-46b64" in namespace "gc-7187" +Oct 13 09:44:06.776: INFO: Deleting pod "simpletest.rc-4gfqj" in namespace "gc-7187" +Oct 13 09:44:06.792: INFO: Deleting pod "simpletest.rc-4hkwm" in namespace "gc-7187" +Oct 13 09:44:06.801: INFO: Deleting pod "simpletest.rc-4hxq2" in namespace "gc-7187" +Oct 13 09:44:06.813: INFO: Deleting pod "simpletest.rc-4rhjk" in namespace "gc-7187" +Oct 13 09:44:06.828: INFO: Deleting pod "simpletest.rc-5cqnj" in namespace "gc-7187" +Oct 13 09:44:06.843: INFO: Deleting pod "simpletest.rc-5kxjf" in 
namespace "gc-7187" +Oct 13 09:44:06.856: INFO: Deleting pod "simpletest.rc-5xqkg" in namespace "gc-7187" +Oct 13 09:44:06.868: INFO: Deleting pod "simpletest.rc-65kvn" in namespace "gc-7187" +Oct 13 09:44:06.886: INFO: Deleting pod "simpletest.rc-67qrn" in namespace "gc-7187" +Oct 13 09:44:06.905: INFO: Deleting pod "simpletest.rc-67vh4" in namespace "gc-7187" +Oct 13 09:44:06.919: INFO: Deleting pod "simpletest.rc-69nvq" in namespace "gc-7187" +Oct 13 09:44:06.935: INFO: Deleting pod "simpletest.rc-69qn6" in namespace "gc-7187" +Oct 13 09:44:06.950: INFO: Deleting pod "simpletest.rc-6fbtn" in namespace "gc-7187" +Oct 13 09:44:06.963: INFO: Deleting pod "simpletest.rc-727d8" in namespace "gc-7187" +Oct 13 09:44:06.977: INFO: Deleting pod "simpletest.rc-7fkd5" in namespace "gc-7187" +Oct 13 09:44:06.990: INFO: Deleting pod "simpletest.rc-7jftx" in namespace "gc-7187" +Oct 13 09:44:07.015: INFO: Deleting pod "simpletest.rc-8hwvw" in namespace "gc-7187" +Oct 13 09:44:07.038: INFO: Deleting pod "simpletest.rc-8z22z" in namespace "gc-7187" +Oct 13 09:44:07.059: INFO: Deleting pod "simpletest.rc-92ctp" in namespace "gc-7187" +Oct 13 09:44:07.073: INFO: Deleting pod "simpletest.rc-9m24x" in namespace "gc-7187" +Oct 13 09:44:07.088: INFO: Deleting pod "simpletest.rc-9qq99" in namespace "gc-7187" +Oct 13 09:44:07.103: INFO: Deleting pod "simpletest.rc-bwz9l" in namespace "gc-7187" +Oct 13 09:44:07.120: INFO: Deleting pod "simpletest.rc-c54px" in namespace "gc-7187" +Oct 13 09:44:07.137: INFO: Deleting pod "simpletest.rc-c56js" in namespace "gc-7187" +Oct 13 09:44:07.157: INFO: Deleting pod "simpletest.rc-c9xmr" in namespace "gc-7187" +Oct 13 09:44:07.173: INFO: Deleting pod "simpletest.rc-cm5tz" in namespace "gc-7187" +Oct 13 09:44:07.187: INFO: Deleting pod "simpletest.rc-cqlmr" in namespace "gc-7187" +Oct 13 09:44:07.208: INFO: Deleting pod "simpletest.rc-ds9pt" in namespace "gc-7187" +Oct 13 09:44:07.221: INFO: Deleting pod "simpletest.rc-f2866" in namespace "gc-7187" +Oct 13 09:44:07.235: INFO: Deleting pod "simpletest.rc-fqkxm" in namespace "gc-7187" +Oct 13 09:44:07.248: INFO: Deleting pod "simpletest.rc-fzhkw" in namespace "gc-7187" +Oct 13 09:44:07.264: INFO: Deleting pod "simpletest.rc-g4hsz" in namespace "gc-7187" +Oct 13 09:44:07.281: INFO: Deleting pod "simpletest.rc-gb5r8" in namespace "gc-7187" +Oct 13 09:44:07.298: INFO: Deleting pod "simpletest.rc-gcqc2" in namespace "gc-7187" +Oct 13 09:44:07.320: INFO: Deleting pod "simpletest.rc-grjnz" in namespace "gc-7187" +Oct 13 09:44:07.336: INFO: Deleting pod "simpletest.rc-gs44n" in namespace "gc-7187" +Oct 13 09:44:07.350: INFO: Deleting pod "simpletest.rc-gzlw6" in namespace "gc-7187" +Oct 13 09:44:07.367: INFO: Deleting pod "simpletest.rc-hl7bz" in namespace "gc-7187" +Oct 13 09:44:07.385: INFO: Deleting pod "simpletest.rc-hm9t8" in namespace "gc-7187" +Oct 13 09:44:07.407: INFO: Deleting pod "simpletest.rc-hrfwc" in namespace "gc-7187" +Oct 13 09:44:07.424: INFO: Deleting pod "simpletest.rc-hvzpx" in namespace "gc-7187" +Oct 13 09:44:07.439: INFO: Deleting pod "simpletest.rc-j52hb" in namespace "gc-7187" +Oct 13 09:44:07.454: INFO: Deleting pod "simpletest.rc-j8bgb" in namespace "gc-7187" +Oct 13 09:44:07.476: INFO: Deleting pod "simpletest.rc-jcxn6" in namespace "gc-7187" +Oct 13 09:44:07.497: INFO: Deleting pod "simpletest.rc-jfltp" in namespace "gc-7187" +Oct 13 09:44:07.528: INFO: Deleting pod "simpletest.rc-kghwl" in namespace "gc-7187" +Oct 13 09:44:07.555: INFO: Deleting pod "simpletest.rc-kj8h7" in namespace "gc-7187" +Oct 13 
09:44:07.570: INFO: Deleting pod "simpletest.rc-kjqrm" in namespace "gc-7187" +Oct 13 09:44:07.583: INFO: Deleting pod "simpletest.rc-kkr2c" in namespace "gc-7187" +Oct 13 09:44:07.605: INFO: Deleting pod "simpletest.rc-kxtbs" in namespace "gc-7187" +Oct 13 09:44:07.621: INFO: Deleting pod "simpletest.rc-l4r24" in namespace "gc-7187" +Oct 13 09:44:07.637: INFO: Deleting pod "simpletest.rc-l9nq2" in namespace "gc-7187" +Oct 13 09:44:07.657: INFO: Deleting pod "simpletest.rc-lrtg9" in namespace "gc-7187" +Oct 13 09:44:07.675: INFO: Deleting pod "simpletest.rc-lzhgh" in namespace "gc-7187" +Oct 13 09:44:07.698: INFO: Deleting pod "simpletest.rc-m2z6v" in namespace "gc-7187" +Oct 13 09:44:07.732: INFO: Deleting pod "simpletest.rc-m86df" in namespace "gc-7187" +Oct 13 09:44:07.752: INFO: Deleting pod "simpletest.rc-mdvgq" in namespace "gc-7187" +Oct 13 09:44:07.772: INFO: Deleting pod "simpletest.rc-mwnr9" in namespace "gc-7187" +Oct 13 09:44:07.797: INFO: Deleting pod "simpletest.rc-n8ch5" in namespace "gc-7187" +Oct 13 09:44:07.813: INFO: Deleting pod "simpletest.rc-nfzdp" in namespace "gc-7187" +Oct 13 09:44:07.849: INFO: Deleting pod "simpletest.rc-nr7lc" in namespace "gc-7187" +Oct 13 09:44:07.867: INFO: Deleting pod "simpletest.rc-nrp5z" in namespace "gc-7187" +Oct 13 09:44:07.890: INFO: Deleting pod "simpletest.rc-nzsqz" in namespace "gc-7187" +Oct 13 09:44:07.907: INFO: Deleting pod "simpletest.rc-p22f4" in namespace "gc-7187" +Oct 13 09:44:07.922: INFO: Deleting pod "simpletest.rc-pb7c9" in namespace "gc-7187" +Oct 13 09:44:07.945: INFO: Deleting pod "simpletest.rc-pn622" in namespace "gc-7187" +Oct 13 09:44:07.967: INFO: Deleting pod "simpletest.rc-ptgqt" in namespace "gc-7187" +Oct 13 09:44:07.990: INFO: Deleting pod "simpletest.rc-px52w" in namespace "gc-7187" +Oct 13 09:44:08.020: INFO: Deleting pod "simpletest.rc-q7lhc" in namespace "gc-7187" +Oct 13 09:44:08.037: INFO: Deleting pod "simpletest.rc-q8bsr" in namespace "gc-7187" +Oct 13 09:44:08.056: INFO: Deleting pod "simpletest.rc-qkg2p" in namespace "gc-7187" +Oct 13 09:44:08.082: INFO: Deleting pod "simpletest.rc-qnz4z" in namespace "gc-7187" +Oct 13 09:44:08.120: INFO: Deleting pod "simpletest.rc-rgjpd" in namespace "gc-7187" +Oct 13 09:44:08.175: INFO: Deleting pod "simpletest.rc-rjxkq" in namespace "gc-7187" +Oct 13 09:44:08.221: INFO: Deleting pod "simpletest.rc-rlt97" in namespace "gc-7187" +Oct 13 09:44:08.270: INFO: Deleting pod "simpletest.rc-s8wtz" in namespace "gc-7187" +Oct 13 09:44:08.320: INFO: Deleting pod "simpletest.rc-scqj8" in namespace "gc-7187" +Oct 13 09:44:08.374: INFO: Deleting pod "simpletest.rc-sfggn" in namespace "gc-7187" +Oct 13 09:44:08.421: INFO: Deleting pod "simpletest.rc-shhpk" in namespace "gc-7187" +Oct 13 09:44:08.472: INFO: Deleting pod "simpletest.rc-szfdh" in namespace "gc-7187" +Oct 13 09:44:08.526: INFO: Deleting pod "simpletest.rc-t6js9" in namespace "gc-7187" +Oct 13 09:44:08.571: INFO: Deleting pod "simpletest.rc-tcl77" in namespace "gc-7187" +Oct 13 09:44:08.621: INFO: Deleting pod "simpletest.rc-ttmq5" in namespace "gc-7187" +Oct 13 09:44:08.668: INFO: Deleting pod "simpletest.rc-vr28r" in namespace "gc-7187" +Oct 13 09:44:08.724: INFO: Deleting pod "simpletest.rc-vvvml" in namespace "gc-7187" +Oct 13 09:44:08.765: INFO: Deleting pod "simpletest.rc-w6jd9" in namespace "gc-7187" +Oct 13 09:44:08.817: INFO: Deleting pod "simpletest.rc-w9w8c" in namespace "gc-7187" +Oct 13 09:44:08.870: INFO: Deleting pod "simpletest.rc-xfrlv" in namespace "gc-7187" +Oct 13 09:44:08.916: INFO: Deleting 
pod "simpletest.rc-xhbsf" in namespace "gc-7187" +Oct 13 09:44:08.965: INFO: Deleting pod "simpletest.rc-xhmfr" in namespace "gc-7187" +Oct 13 09:44:09.018: INFO: Deleting pod "simpletest.rc-xqm8r" in namespace "gc-7187" +Oct 13 09:44:09.063: INFO: Deleting pod "simpletest.rc-xqx5w" in namespace "gc-7187" +Oct 13 09:44:09.119: INFO: Deleting pod "simpletest.rc-xwkcc" in namespace "gc-7187" +Oct 13 09:44:09.167: INFO: Deleting pod "simpletest.rc-zct9s" in namespace "gc-7187" +Oct 13 09:44:09.218: INFO: Deleting pod "simpletest.rc-zmd9r" in namespace "gc-7187" +Oct 13 09:44:09.266: INFO: Deleting pod "simpletest.rc-zml9r" in namespace "gc-7187" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Oct 13 09:44:09.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-7187" for this suite. 10/13/23 09:44:09.357 +------------------------------ +• [SLOW TEST] [42.850 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:43:26.56 + Oct 13 09:43:26.560: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename gc 10/13/23 09:43:26.561 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:43:26.575 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:43:26.577 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + STEP: create the rc 10/13/23 09:43:26.583 + STEP: delete the rc 10/13/23 09:43:31.592 + STEP: wait for the rc to be deleted 10/13/23 09:43:31.599 + STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 10/13/23 09:43:36.602 + STEP: Gathering metrics 10/13/23 09:44:06.627 + Oct 13 09:44:06.650: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node3" in namespace "kube-system" to be "running and ready" + Oct 13 09:44:06.654: INFO: Pod "kube-controller-manager-node3": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.562094ms + Oct 13 09:44:06.654: INFO: The phase of Pod kube-controller-manager-node3 is Running (Ready = true) + Oct 13 09:44:06.654: INFO: Pod "kube-controller-manager-node3" satisfied condition "running and ready" + Oct 13 09:44:06.724: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Oct 13 09:44:06.724: INFO: Deleting pod "simpletest.rc-26rsj" in namespace "gc-7187" + Oct 13 09:44:06.741: INFO: Deleting pod "simpletest.rc-2r24j" in namespace "gc-7187" + Oct 13 09:44:06.753: INFO: Deleting pod "simpletest.rc-44895" in namespace "gc-7187" + Oct 13 09:44:06.766: INFO: Deleting pod "simpletest.rc-46b64" in namespace "gc-7187" + Oct 13 09:44:06.776: INFO: Deleting pod "simpletest.rc-4gfqj" in namespace "gc-7187" + Oct 13 09:44:06.792: INFO: Deleting pod "simpletest.rc-4hkwm" in namespace "gc-7187" + Oct 13 09:44:06.801: INFO: Deleting pod "simpletest.rc-4hxq2" in namespace "gc-7187" + Oct 13 09:44:06.813: INFO: Deleting pod "simpletest.rc-4rhjk" in namespace "gc-7187" + Oct 13 09:44:06.828: INFO: Deleting pod "simpletest.rc-5cqnj" in namespace "gc-7187" + Oct 13 09:44:06.843: INFO: Deleting pod "simpletest.rc-5kxjf" in namespace "gc-7187" + Oct 13 09:44:06.856: INFO: Deleting pod "simpletest.rc-5xqkg" in namespace "gc-7187" + Oct 13 09:44:06.868: INFO: Deleting pod "simpletest.rc-65kvn" in namespace "gc-7187" + Oct 13 09:44:06.886: INFO: Deleting pod "simpletest.rc-67qrn" in namespace "gc-7187" + Oct 13 09:44:06.905: INFO: Deleting pod "simpletest.rc-67vh4" in namespace "gc-7187" + Oct 13 09:44:06.919: INFO: Deleting pod "simpletest.rc-69nvq" in namespace "gc-7187" + Oct 13 09:44:06.935: INFO: Deleting pod "simpletest.rc-69qn6" in namespace "gc-7187" + Oct 13 09:44:06.950: INFO: Deleting pod "simpletest.rc-6fbtn" in namespace "gc-7187" + Oct 13 09:44:06.963: INFO: Deleting pod "simpletest.rc-727d8" in namespace "gc-7187" + Oct 13 09:44:06.977: INFO: Deleting pod "simpletest.rc-7fkd5" in namespace "gc-7187" + Oct 13 09:44:06.990: INFO: Deleting pod "simpletest.rc-7jftx" in namespace "gc-7187" + Oct 13 09:44:07.015: INFO: Deleting pod "simpletest.rc-8hwvw" in namespace "gc-7187" + Oct 13 09:44:07.038: INFO: Deleting pod "simpletest.rc-8z22z" in namespace "gc-7187" + Oct 13 09:44:07.059: INFO: Deleting pod "simpletest.rc-92ctp" in namespace "gc-7187" + Oct 13 09:44:07.073: INFO: Deleting pod "simpletest.rc-9m24x" in namespace "gc-7187" + Oct 13 09:44:07.088: INFO: Deleting pod "simpletest.rc-9qq99" in namespace "gc-7187" + Oct 13 09:44:07.103: INFO: Deleting pod "simpletest.rc-bwz9l" in namespace "gc-7187" + Oct 13 09:44:07.120: INFO: Deleting pod "simpletest.rc-c54px" in namespace "gc-7187" + Oct 13 
09:44:07.137: INFO: Deleting pod "simpletest.rc-c56js" in namespace "gc-7187" + Oct 13 09:44:07.157: INFO: Deleting pod "simpletest.rc-c9xmr" in namespace "gc-7187" + Oct 13 09:44:07.173: INFO: Deleting pod "simpletest.rc-cm5tz" in namespace "gc-7187" + Oct 13 09:44:07.187: INFO: Deleting pod "simpletest.rc-cqlmr" in namespace "gc-7187" + Oct 13 09:44:07.208: INFO: Deleting pod "simpletest.rc-ds9pt" in namespace "gc-7187" + Oct 13 09:44:07.221: INFO: Deleting pod "simpletest.rc-f2866" in namespace "gc-7187" + Oct 13 09:44:07.235: INFO: Deleting pod "simpletest.rc-fqkxm" in namespace "gc-7187" + Oct 13 09:44:07.248: INFO: Deleting pod "simpletest.rc-fzhkw" in namespace "gc-7187" + Oct 13 09:44:07.264: INFO: Deleting pod "simpletest.rc-g4hsz" in namespace "gc-7187" + Oct 13 09:44:07.281: INFO: Deleting pod "simpletest.rc-gb5r8" in namespace "gc-7187" + Oct 13 09:44:07.298: INFO: Deleting pod "simpletest.rc-gcqc2" in namespace "gc-7187" + Oct 13 09:44:07.320: INFO: Deleting pod "simpletest.rc-grjnz" in namespace "gc-7187" + Oct 13 09:44:07.336: INFO: Deleting pod "simpletest.rc-gs44n" in namespace "gc-7187" + Oct 13 09:44:07.350: INFO: Deleting pod "simpletest.rc-gzlw6" in namespace "gc-7187" + Oct 13 09:44:07.367: INFO: Deleting pod "simpletest.rc-hl7bz" in namespace "gc-7187" + Oct 13 09:44:07.385: INFO: Deleting pod "simpletest.rc-hm9t8" in namespace "gc-7187" + Oct 13 09:44:07.407: INFO: Deleting pod "simpletest.rc-hrfwc" in namespace "gc-7187" + Oct 13 09:44:07.424: INFO: Deleting pod "simpletest.rc-hvzpx" in namespace "gc-7187" + Oct 13 09:44:07.439: INFO: Deleting pod "simpletest.rc-j52hb" in namespace "gc-7187" + Oct 13 09:44:07.454: INFO: Deleting pod "simpletest.rc-j8bgb" in namespace "gc-7187" + Oct 13 09:44:07.476: INFO: Deleting pod "simpletest.rc-jcxn6" in namespace "gc-7187" + Oct 13 09:44:07.497: INFO: Deleting pod "simpletest.rc-jfltp" in namespace "gc-7187" + Oct 13 09:44:07.528: INFO: Deleting pod "simpletest.rc-kghwl" in namespace "gc-7187" + Oct 13 09:44:07.555: INFO: Deleting pod "simpletest.rc-kj8h7" in namespace "gc-7187" + Oct 13 09:44:07.570: INFO: Deleting pod "simpletest.rc-kjqrm" in namespace "gc-7187" + Oct 13 09:44:07.583: INFO: Deleting pod "simpletest.rc-kkr2c" in namespace "gc-7187" + Oct 13 09:44:07.605: INFO: Deleting pod "simpletest.rc-kxtbs" in namespace "gc-7187" + Oct 13 09:44:07.621: INFO: Deleting pod "simpletest.rc-l4r24" in namespace "gc-7187" + Oct 13 09:44:07.637: INFO: Deleting pod "simpletest.rc-l9nq2" in namespace "gc-7187" + Oct 13 09:44:07.657: INFO: Deleting pod "simpletest.rc-lrtg9" in namespace "gc-7187" + Oct 13 09:44:07.675: INFO: Deleting pod "simpletest.rc-lzhgh" in namespace "gc-7187" + Oct 13 09:44:07.698: INFO: Deleting pod "simpletest.rc-m2z6v" in namespace "gc-7187" + Oct 13 09:44:07.732: INFO: Deleting pod "simpletest.rc-m86df" in namespace "gc-7187" + Oct 13 09:44:07.752: INFO: Deleting pod "simpletest.rc-mdvgq" in namespace "gc-7187" + Oct 13 09:44:07.772: INFO: Deleting pod "simpletest.rc-mwnr9" in namespace "gc-7187" + Oct 13 09:44:07.797: INFO: Deleting pod "simpletest.rc-n8ch5" in namespace "gc-7187" + Oct 13 09:44:07.813: INFO: Deleting pod "simpletest.rc-nfzdp" in namespace "gc-7187" + Oct 13 09:44:07.849: INFO: Deleting pod "simpletest.rc-nr7lc" in namespace "gc-7187" + Oct 13 09:44:07.867: INFO: Deleting pod "simpletest.rc-nrp5z" in namespace "gc-7187" + Oct 13 09:44:07.890: INFO: Deleting pod "simpletest.rc-nzsqz" in namespace "gc-7187" + Oct 13 09:44:07.907: INFO: Deleting pod "simpletest.rc-p22f4" in namespace 
"gc-7187" + Oct 13 09:44:07.922: INFO: Deleting pod "simpletest.rc-pb7c9" in namespace "gc-7187" + Oct 13 09:44:07.945: INFO: Deleting pod "simpletest.rc-pn622" in namespace "gc-7187" + Oct 13 09:44:07.967: INFO: Deleting pod "simpletest.rc-ptgqt" in namespace "gc-7187" + Oct 13 09:44:07.990: INFO: Deleting pod "simpletest.rc-px52w" in namespace "gc-7187" + Oct 13 09:44:08.020: INFO: Deleting pod "simpletest.rc-q7lhc" in namespace "gc-7187" + Oct 13 09:44:08.037: INFO: Deleting pod "simpletest.rc-q8bsr" in namespace "gc-7187" + Oct 13 09:44:08.056: INFO: Deleting pod "simpletest.rc-qkg2p" in namespace "gc-7187" + Oct 13 09:44:08.082: INFO: Deleting pod "simpletest.rc-qnz4z" in namespace "gc-7187" + Oct 13 09:44:08.120: INFO: Deleting pod "simpletest.rc-rgjpd" in namespace "gc-7187" + Oct 13 09:44:08.175: INFO: Deleting pod "simpletest.rc-rjxkq" in namespace "gc-7187" + Oct 13 09:44:08.221: INFO: Deleting pod "simpletest.rc-rlt97" in namespace "gc-7187" + Oct 13 09:44:08.270: INFO: Deleting pod "simpletest.rc-s8wtz" in namespace "gc-7187" + Oct 13 09:44:08.320: INFO: Deleting pod "simpletest.rc-scqj8" in namespace "gc-7187" + Oct 13 09:44:08.374: INFO: Deleting pod "simpletest.rc-sfggn" in namespace "gc-7187" + Oct 13 09:44:08.421: INFO: Deleting pod "simpletest.rc-shhpk" in namespace "gc-7187" + Oct 13 09:44:08.472: INFO: Deleting pod "simpletest.rc-szfdh" in namespace "gc-7187" + Oct 13 09:44:08.526: INFO: Deleting pod "simpletest.rc-t6js9" in namespace "gc-7187" + Oct 13 09:44:08.571: INFO: Deleting pod "simpletest.rc-tcl77" in namespace "gc-7187" + Oct 13 09:44:08.621: INFO: Deleting pod "simpletest.rc-ttmq5" in namespace "gc-7187" + Oct 13 09:44:08.668: INFO: Deleting pod "simpletest.rc-vr28r" in namespace "gc-7187" + Oct 13 09:44:08.724: INFO: Deleting pod "simpletest.rc-vvvml" in namespace "gc-7187" + Oct 13 09:44:08.765: INFO: Deleting pod "simpletest.rc-w6jd9" in namespace "gc-7187" + Oct 13 09:44:08.817: INFO: Deleting pod "simpletest.rc-w9w8c" in namespace "gc-7187" + Oct 13 09:44:08.870: INFO: Deleting pod "simpletest.rc-xfrlv" in namespace "gc-7187" + Oct 13 09:44:08.916: INFO: Deleting pod "simpletest.rc-xhbsf" in namespace "gc-7187" + Oct 13 09:44:08.965: INFO: Deleting pod "simpletest.rc-xhmfr" in namespace "gc-7187" + Oct 13 09:44:09.018: INFO: Deleting pod "simpletest.rc-xqm8r" in namespace "gc-7187" + Oct 13 09:44:09.063: INFO: Deleting pod "simpletest.rc-xqx5w" in namespace "gc-7187" + Oct 13 09:44:09.119: INFO: Deleting pod "simpletest.rc-xwkcc" in namespace "gc-7187" + Oct 13 09:44:09.167: INFO: Deleting pod "simpletest.rc-zct9s" in namespace "gc-7187" + Oct 13 09:44:09.218: INFO: Deleting pod "simpletest.rc-zmd9r" in namespace "gc-7187" + Oct 13 09:44:09.266: INFO: Deleting pod "simpletest.rc-zml9r" in namespace "gc-7187" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Oct 13 09:44:09.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-7187" for this suite. 
10/13/23 09:44:09.357 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:44:09.412 +Oct 13 09:44:09.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption 10/13/23 09:44:09.412 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:09.428 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:09.431 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 +STEP: Waiting for the pdb to be processed 10/13/23 09:44:09.438 +STEP: Waiting for all pods to be running 10/13/23 09:44:11.467 +Oct 13 09:44:11.471: INFO: running pods: 0 < 3 +Oct 13 09:44:13.476: INFO: running pods: 0 < 3 +Oct 13 09:44:15.476: INFO: running pods: 2 < 3 +Oct 13 09:44:17.474: INFO: running pods: 2 < 3 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:44:19.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-6857" for this suite. 
10/13/23 09:44:19.481 +------------------------------ +• [SLOW TEST] [10.074 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:44:09.412 + Oct 13 09:44:09.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption 10/13/23 09:44:09.412 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:09.428 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:09.431 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 + STEP: Waiting for the pdb to be processed 10/13/23 09:44:09.438 + STEP: Waiting for all pods to be running 10/13/23 09:44:11.467 + Oct 13 09:44:11.471: INFO: running pods: 0 < 3 + Oct 13 09:44:13.476: INFO: running pods: 0 < 3 + Oct 13 09:44:15.476: INFO: running pods: 2 < 3 + Oct 13 09:44:17.474: INFO: running pods: 2 < 3 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:44:19.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-6857" for this suite. 
10/13/23 09:44:19.481 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +[BeforeEach] [sig-api-machinery] server version + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:44:19.486 +Oct 13 09:44:19.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename server-version 10/13/23 09:44:19.486 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:19.5 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:19.502 +[BeforeEach] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:31 +[It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +STEP: Request ServerVersion 10/13/23 09:44:19.504 +STEP: Confirm major version 10/13/23 09:44:19.505 +Oct 13 09:44:19.505: INFO: Major version: 1 +STEP: Confirm minor version 10/13/23 09:44:19.505 +Oct 13 09:44:19.505: INFO: cleanMinorVersion: 26 +Oct 13 09:44:19.505: INFO: Minor version: 26 +[AfterEach] [sig-api-machinery] server version + test/e2e/framework/node/init/init.go:32 +Oct 13 09:44:19.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] server version + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] server version + tear down framework | framework.go:193 +STEP: Destroying namespace "server-version-2674" for this suite. 10/13/23 09:44:19.508 +------------------------------ +• [0.027 seconds] +[sig-api-machinery] server version +test/e2e/apimachinery/framework.go:23 + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] server version + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:44:19.486 + Oct 13 09:44:19.486: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename server-version 10/13/23 09:44:19.486 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:19.5 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:19.502 + [BeforeEach] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:31 + [It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + STEP: Request ServerVersion 10/13/23 09:44:19.504 + STEP: Confirm major version 10/13/23 09:44:19.505 + Oct 13 09:44:19.505: INFO: Major version: 1 + STEP: Confirm minor version 10/13/23 09:44:19.505 + Oct 13 09:44:19.505: INFO: cleanMinorVersion: 26 + Oct 13 09:44:19.505: INFO: Minor version: 26 + [AfterEach] [sig-api-machinery] server version + test/e2e/framework/node/init/init.go:32 + Oct 13 09:44:19.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] server version + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] server version + tear down framework | framework.go:193 + STEP: Destroying namespace "server-version-2674" 
for this suite. 10/13/23 09:44:19.508 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 +[BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:44:19.513 +Oct 13 09:44:19.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename limitrange 10/13/23 09:44:19.514 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:19.528 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:19.53 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 +STEP: Creating a LimitRange 10/13/23 09:44:19.532 +STEP: Setting up watch 10/13/23 09:44:19.532 +STEP: Submitting a LimitRange 10/13/23 09:44:19.635 +STEP: Verifying LimitRange creation was observed 10/13/23 09:44:19.64 +STEP: Fetching the LimitRange to ensure it has proper values 10/13/23 09:44:19.64 +Oct 13 09:44:19.643: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 13 09:44:19.643: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements 10/13/23 09:44:19.643 +STEP: Ensuring Pod has resource requirements applied from LimitRange 10/13/23 09:44:19.648 +Oct 13 09:44:19.651: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Oct 13 09:44:19.651: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements 10/13/23 09:44:19.651 +STEP: Ensuring Pod has merged resource requirements applied from LimitRange 10/13/23 09:44:19.657 +Oct 13 09:44:19.660: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Oct 13 09:44:19.660: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources 10/13/23 09:44:19.66 +STEP: Failing to create a Pod with more than max resources 10/13/23 09:44:19.662 +STEP: Updating a LimitRange 10/13/23 09:44:19.664 +STEP: Verifying LimitRange updating is effective 10/13/23 09:44:19.67 +STEP: Creating a Pod with less than former min resources 10/13/23 09:44:21.673 +STEP: Failing to create a Pod with more than max resources 10/13/23 09:44:21.678 +STEP: Deleting a LimitRange 10/13/23 09:44:21.68 +STEP: Verifying the LimitRange was deleted 10/13/23 09:44:21.689 +Oct 13 09:44:26.692: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources 10/13/23 09:44:26.692 +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 +Oct 13 09:44:26.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 +STEP: Destroying namespace "limitrange-136" for this suite. 10/13/23 09:44:26.706 +------------------------------ +• [SLOW TEST] [7.202 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:44:19.513 + Oct 13 09:44:19.513: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename limitrange 10/13/23 09:44:19.514 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:19.528 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:19.53 + [BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 + [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + test/e2e/scheduling/limit_range.go:61 + STEP: Creating a LimitRange 10/13/23 09:44:19.532 + STEP: Setting up watch 10/13/23 09:44:19.532 + STEP: Submitting a LimitRange 10/13/23 09:44:19.635 + STEP: Verifying LimitRange creation was observed 10/13/23 09:44:19.64 + STEP: Fetching the LimitRange to ensure it has proper values 10/13/23 09:44:19.64 + Oct 13 09:44:19.643: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Oct 13 09:44:19.643: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with no resource requirements 10/13/23 09:44:19.643 + STEP: Ensuring Pod has resource requirements applied from LimitRange 10/13/23 09:44:19.648 + Oct 13 09:44:19.651: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Oct 13 09:44:19.651: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with partial resource requirements 10/13/23 09:44:19.651 + STEP: Ensuring Pod has merged resource requirements applied from LimitRange 10/13/23 09:44:19.657 + Oct 13 09:44:19.660: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] + Oct 13 09:44:19.660: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Failing to create a Pod with less than min resources 10/13/23 09:44:19.66 + STEP: Failing to create a Pod with more than max resources 10/13/23 09:44:19.662 + STEP: Updating a LimitRange 10/13/23 09:44:19.664 + STEP: Verifying LimitRange updating is effective 10/13/23 09:44:19.67 + STEP: Creating a Pod with less than former min resources 10/13/23 09:44:21.673 + STEP: Failing to create a Pod with more than max resources 10/13/23 09:44:21.678 + STEP: Deleting a LimitRange 10/13/23 09:44:21.68 + STEP: Verifying the LimitRange was deleted 10/13/23 09:44:21.689 + Oct 13 09:44:26.692: INFO: limitRange is already deleted + STEP: Creating a Pod with more than former max resources 10/13/23 09:44:26.692 + [AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 + Oct 13 09:44:26.702: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 + STEP: Destroying namespace "limitrange-136" for this suite. 10/13/23 09:44:26.706 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:44:26.716 +Oct 13 09:44:26.716: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename sched-preemption 10/13/23 09:44:26.717 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:26.73 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:26.732 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 +Oct 13 09:44:26.744: INFO: Waiting up to 1m0s for all nodes to be ready +Oct 13 09:45:26.777: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 +STEP: Create pods that use 4/5 of node resources. 10/13/23 09:45:26.782 +Oct 13 09:45:26.809: INFO: Created pod: pod0-0-sched-preemption-low-priority +Oct 13 09:45:26.820: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Oct 13 09:45:26.837: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Oct 13 09:45:26.843: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Oct 13 09:45:26.864: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Oct 13 09:45:26.871: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 10/13/23 09:45:26.871 +Oct 13 09:45:26.871: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:26.875: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098778ms +Oct 13 09:45:28.882: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.010814758s +Oct 13 09:45:28.882: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Oct 13 09:45:28.882: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.887: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 5.0337ms +Oct 13 09:45:28.887: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 09:45:28.887: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.891: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.525902ms +Oct 13 09:45:28.891: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 09:45:28.891: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.894: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.947474ms +Oct 13 09:45:28.894: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 09:45:28.894: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.896: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.589739ms +Oct 13 09:45:28.896: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Oct 13 09:45:28.896: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.899: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.625834ms +Oct 13 09:45:28.899: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a high priority pod that has same requirements as that of lower priority pod 10/13/23 09:45:28.899 +Oct 13 09:45:28.904: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-9009" to be "running" +Oct 13 09:45:28.907: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944328ms +Oct 13 09:45:30.911: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007411934s +Oct 13 09:45:32.913: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00945275s +Oct 13 09:45:34.913: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008974523s +Oct 13 09:45:34.913: INFO: Pod "preemptor-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:45:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-9009" for this suite. 
10/13/23 09:45:34.991 +------------------------------ +• [SLOW TEST] [68.281 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:44:26.716 + Oct 13 09:44:26.716: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename sched-preemption 10/13/23 09:44:26.717 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:44:26.73 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:44:26.732 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:97 + Oct 13 09:44:26.744: INFO: Waiting up to 1m0s for all nodes to be ready + Oct 13 09:45:26.777: INFO: Waiting for terminating namespaces to be deleted... + [It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:130 + STEP: Create pods that use 4/5 of node resources. 10/13/23 09:45:26.782 + Oct 13 09:45:26.809: INFO: Created pod: pod0-0-sched-preemption-low-priority + Oct 13 09:45:26.820: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Oct 13 09:45:26.837: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Oct 13 09:45:26.843: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Oct 13 09:45:26.864: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Oct 13 09:45:26.871: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 10/13/23 09:45:26.871 + Oct 13 09:45:26.871: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:26.875: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098778ms + Oct 13 09:45:28.882: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.010814758s + Oct 13 09:45:28.882: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Oct 13 09:45:28.882: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.887: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 5.0337ms + Oct 13 09:45:28.887: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 09:45:28.887: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.891: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.525902ms + Oct 13 09:45:28.891: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 09:45:28.891: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.894: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.947474ms + Oct 13 09:45:28.894: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 09:45:28.894: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.896: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.589739ms + Oct 13 09:45:28.896: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Oct 13 09:45:28.896: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.899: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.625834ms + Oct 13 09:45:28.899: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a high priority pod that has same requirements as that of lower priority pod 10/13/23 09:45:28.899 + Oct 13 09:45:28.904: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-9009" to be "running" + Oct 13 09:45:28.907: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944328ms + Oct 13 09:45:30.911: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007411934s + Oct 13 09:45:32.913: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00945275s + Oct 13 09:45:34.913: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008974523s + Oct 13 09:45:34.913: INFO: Pod "preemptor-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:45:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-9009" for this suite. 
10/13/23 09:45:34.991 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:45:35 +Oct 13 09:45:35.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 09:45:35.001 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:45:35.015 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:45:35.017 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-4946 10/13/23 09:45:35.019 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +STEP: Creating a new StatefulSet 10/13/23 09:45:35.024 +Oct 13 09:45:35.034: INFO: Found 0 stateful pods, waiting for 3 +Oct 13 09:45:45.038: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:45:45.038: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:45:45.038: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 10/13/23 09:45:45.046 +Oct 13 09:45:45.063: INFO: Updating stateful set ss2 +STEP: Creating a new revision 10/13/23 09:45:45.063 +STEP: Not applying an update when the partition is greater than the number of replicas 10/13/23 09:45:55.08 +STEP: Performing a canary update 10/13/23 09:45:55.08 +Oct 13 09:45:55.102: INFO: Updating stateful set ss2 +Oct 13 09:45:55.110: INFO: Waiting for Pod statefulset-4946/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +STEP: Restoring Pods to the correct revision when they are deleted 10/13/23 09:46:05.121 +Oct 13 09:46:05.162: INFO: Found 1 stateful pods, waiting for 3 +Oct 13 09:46:15.169: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:46:15.169: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:46:15.169: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update 10/13/23 09:46:15.177 +Oct 13 09:46:15.200: INFO: Updating stateful set ss2 +Oct 13 09:46:15.211: INFO: Waiting for Pod statefulset-4946/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +Oct 13 09:46:25.247: INFO: Updating stateful set ss2 +Oct 13 09:46:25.259: INFO: Waiting for StatefulSet statefulset-4946/ss2 to complete update +Oct 13 09:46:25.260: INFO: Waiting for Pod statefulset-4946/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 
09:46:35.276: INFO: Deleting all statefulset in ns statefulset-4946 +Oct 13 09:46:35.280: INFO: Scaling statefulset ss2 to 0 +Oct 13 09:46:45.304: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:46:45.308: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:46:45.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-4946" for this suite. 10/13/23 09:46:45.326 +------------------------------ +• [SLOW TEST] [70.333 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:45:35 + Oct 13 09:45:35.000: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 09:45:35.001 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:45:35.015 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:45:35.017 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-4946 10/13/23 09:45:35.019 + [It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + STEP: Creating a new StatefulSet 10/13/23 09:45:35.024 + Oct 13 09:45:35.034: INFO: Found 0 stateful pods, waiting for 3 + Oct 13 09:45:45.038: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:45:45.038: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:45:45.038: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 10/13/23 09:45:45.046 + Oct 13 09:45:45.063: INFO: Updating stateful set ss2 + STEP: Creating a new revision 10/13/23 09:45:45.063 + STEP: Not applying an update when the partition is greater than the number of replicas 10/13/23 09:45:55.08 + STEP: Performing a canary update 10/13/23 09:45:55.08 + Oct 13 09:45:55.102: INFO: Updating stateful set ss2 + Oct 13 09:45:55.110: INFO: Waiting for Pod statefulset-4946/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + STEP: Restoring Pods to the correct revision when they are deleted 10/13/23 09:46:05.121 + Oct 13 09:46:05.162: INFO: Found 1 stateful pods, waiting for 3 + Oct 13 09:46:15.169: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:46:15.169: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, 
currently Running - Ready=true + Oct 13 09:46:15.169: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Performing a phased rolling update 10/13/23 09:46:15.177 + Oct 13 09:46:15.200: INFO: Updating stateful set ss2 + Oct 13 09:46:15.211: INFO: Waiting for Pod statefulset-4946/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + Oct 13 09:46:25.247: INFO: Updating stateful set ss2 + Oct 13 09:46:25.259: INFO: Waiting for StatefulSet statefulset-4946/ss2 to complete update + Oct 13 09:46:25.260: INFO: Waiting for Pod statefulset-4946/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 09:46:35.276: INFO: Deleting all statefulset in ns statefulset-4946 + Oct 13 09:46:35.280: INFO: Scaling statefulset ss2 to 0 + Oct 13 09:46:45.304: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:46:45.308: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:46:45.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-4946" for this suite. 10/13/23 09:46:45.326 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:46:45.334 +Oct 13 09:46:45.334: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename pods 10/13/23 09:46:45.336 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:45.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:45.352 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 +STEP: creating pod 10/13/23 09:46:45.355 +Oct 13 09:46:45.364: INFO: Waiting up to 5m0s for pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7" in namespace "pods-6679" to be "running and ready" +Oct 13 09:46:45.367: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158593ms +Oct 13 09:46:45.367: INFO: The phase of Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:46:47.373: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008942609s +Oct 13 09:46:47.373: INFO: The phase of Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 is Running (Ready = true) +Oct 13 09:46:47.373: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7" satisfied condition "running and ready" +Oct 13 09:46:47.380: INFO: Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 has hostIP: 10.253.8.111 +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Oct 13 09:46:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-6679" for this suite. 10/13/23 09:46:47.385 +------------------------------ +• [2.058 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:46:45.334 + Oct 13 09:46:45.334: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename pods 10/13/23 09:46:45.336 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:45.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:45.352 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:204 + STEP: creating pod 10/13/23 09:46:45.355 + Oct 13 09:46:45.364: INFO: Waiting up to 5m0s for pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7" in namespace "pods-6679" to be "running and ready" + Oct 13 09:46:45.367: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158593ms + Oct 13 09:46:45.367: INFO: The phase of Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:46:47.373: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7": Phase="Running", Reason="", readiness=true. Elapsed: 2.008942609s + Oct 13 09:46:47.373: INFO: The phase of Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 is Running (Ready = true) + Oct 13 09:46:47.373: INFO: Pod "pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7" satisfied condition "running and ready" + Oct 13 09:46:47.380: INFO: Pod pod-hostip-0ceed9c2-ceab-4a35-9e0c-a2cb0a5439c7 has hostIP: 10.253.8.111 + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Oct 13 09:46:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-6679" for this suite. 
10/13/23 09:46:47.385 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:46:47.393 +Oct 13 09:46:47.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename deployment 10/13/23 09:46:47.394 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:47.412 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:47.414 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +Oct 13 09:46:47.417: INFO: Creating deployment "test-recreate-deployment" +Oct 13 09:46:47.422: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Oct 13 09:46:47.428: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Oct 13 09:46:49.438: INFO: Waiting deployment "test-recreate-deployment" to complete +Oct 13 09:46:49.442: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Oct 13 09:46:49.452: INFO: Updating deployment test-recreate-deployment +Oct 13 09:46:49.452: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Oct 13 09:46:49.527: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-7353 dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 41404 2 2023-10-13 09:46:47 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] 
{map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491c0e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-10-13 09:46:49 +0000 UTC,LastTransitionTime:2023-10-13 09:46:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-10-13 09:46:49 +0000 UTC,LastTransitionTime:2023-10-13 09:46:47 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Oct 13 09:46:49.530: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-7353 e52ca192-a2e1-4963-b828-6313875a69cd 41402 1 2023-10-13 09:46:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 0xc0039a1530 0xc0039a1531}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dfe7adfe-66fe-49b6-8c94-9e2eceb494b9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039a15c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 09:46:49.530: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Oct 13 09:46:49.531: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-7353 a52f8a8b-120f-4fb4-b7a0-08c902926c2b 41392 2 2023-10-13 09:46:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 0xc0039a1417 0xc0039a1418}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dfe7adfe-66fe-49b6-8c94-9e2eceb494b9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039a14c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Oct 13 09:46:49.534: INFO: Pod 
"test-recreate-deployment-cff6dc657-g78z8" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-g78z8 test-recreate-deployment-cff6dc657- deployment-7353 26ccd268-b789-41ac-816c-fb1218e809e7 41403 0 2023-10-13 09:46:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 e52ca192-a2e1-4963-b828-6313875a69cd 0xc0006b4790 0xc0006b4791}] [] [{kube-controller-manager Update v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e52ca192-a2e1-4963-b828-6313875a69cd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fr9vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnly
RootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:,StartTime:2023-10-13 09:46:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Oct 13 09:46:49.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-7353" for this suite. 
10/13/23 09:46:49.537 +------------------------------ +• [2.152 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:46:47.393 + Oct 13 09:46:47.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename deployment 10/13/23 09:46:47.394 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:47.412 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:47.414 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + Oct 13 09:46:47.417: INFO: Creating deployment "test-recreate-deployment" + Oct 13 09:46:47.422: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 + Oct 13 09:46:47.428: INFO: deployment "test-recreate-deployment" doesn't have the required revision set + Oct 13 09:46:49.438: INFO: Waiting deployment "test-recreate-deployment" to complete + Oct 13 09:46:49.442: INFO: Triggering a new rollout for deployment "test-recreate-deployment" + Oct 13 09:46:49.452: INFO: Updating deployment test-recreate-deployment + Oct 13 09:46:49.452: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Oct 13 09:46:49.527: INFO: Deployment "test-recreate-deployment": + &Deployment{ObjectMeta:{test-recreate-deployment deployment-7353 dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 41404 2 2023-10-13 09:46:47 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491c0e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-10-13 09:46:49 +0000 UTC,LastTransitionTime:2023-10-13 09:46:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-10-13 09:46:49 +0000 UTC,LastTransitionTime:2023-10-13 09:46:47 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + + Oct 13 09:46:49.530: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": + &ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-7353 e52ca192-a2e1-4963-b828-6313875a69cd 41402 1 2023-10-13 09:46:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 0xc0039a1530 0xc0039a1531}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dfe7adfe-66fe-49b6-8c94-9e2eceb494b9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039a15c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 09:46:49.530: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": + Oct 13 09:46:49.531: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-7353 a52f8a8b-120f-4fb4-b7a0-08c902926c2b 41392 2 2023-10-13 09:46:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment dfe7adfe-66fe-49b6-8c94-9e2eceb494b9 0xc0039a1417 0xc0039a1418}] [] [{kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dfe7adfe-66fe-49b6-8c94-9e2eceb494b9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039a14c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Oct 13 
09:46:49.534: INFO: Pod "test-recreate-deployment-cff6dc657-g78z8" is not available: + &Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-g78z8 test-recreate-deployment-cff6dc657- deployment-7353 26ccd268-b789-41ac-816c-fb1218e809e7 41403 0 2023-10-13 09:46:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 e52ca192-a2e1-4963-b828-6313875a69cd 0xc0006b4790 0xc0006b4791}] [] [{kube-controller-manager Update v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e52ca192-a2e1-4963-b828-6313875a69cd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-10-13 09:46:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fr9vl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,
RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-13 09:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.253.8.111,PodIP:,StartTime:2023-10-13 09:46:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Oct 13 09:46:49.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-7353" for this suite. 
10/13/23 09:46:49.537 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:46:49.545 +Oct 13 09:46:49.545: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 09:46:49.546 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:49.563 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:49.566 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4152 10/13/23 09:46:49.569 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 10/13/23 09:46:49.58 +STEP: creating service externalsvc in namespace services-4152 10/13/23 09:46:49.58 +STEP: creating replication controller externalsvc in namespace services-4152 10/13/23 09:46:49.594 +I1013 09:46:49.602457 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-4152, replica count: 2 +I1013 09:46:52.653874 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName 10/13/23 09:46:52.657 +Oct 13 09:46:52.670: INFO: Creating new exec pod +Oct 13 09:46:52.680: INFO: Waiting up to 5m0s for pod "execpodnft9r" in namespace "services-4152" to be "running" +Oct 13 09:46:52.684: INFO: Pod "execpodnft9r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805546ms +Oct 13 09:46:54.688: INFO: Pod "execpodnft9r": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00803702s +Oct 13 09:46:54.688: INFO: Pod "execpodnft9r" satisfied condition "running" +Oct 13 09:46:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-4152 exec execpodnft9r -- /bin/sh -x -c nslookup clusterip-service.services-4152.svc.cluster.local' +Oct 13 09:46:54.898: INFO: stderr: "+ nslookup clusterip-service.services-4152.svc.cluster.local\n" +Oct 13 09:46:54.898: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4152.svc.cluster.local\tcanonical name = externalsvc.services-4152.svc.cluster.local.\nName:\texternalsvc.services-4152.svc.cluster.local\nAddress: 10.98.187.2\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-4152, will wait for the garbage collector to delete the pods 10/13/23 09:46:54.898 +Oct 13 09:46:54.961: INFO: Deleting ReplicationController externalsvc took: 8.740866ms +Oct 13 09:46:55.061: INFO: Terminating ReplicationController externalsvc pods took: 100.448975ms +Oct 13 09:46:56.677: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 09:46:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-4152" for this suite. 10/13/23 09:46:56.69 +------------------------------ +• [SLOW TEST] [7.151 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:46:49.545 + Oct 13 09:46:49.545: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 09:46:49.546 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:49.563 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:49.566 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 + STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4152 10/13/23 09:46:49.569 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 10/13/23 09:46:49.58 + STEP: creating service externalsvc in namespace services-4152 10/13/23 09:46:49.58 + STEP: creating replication controller externalsvc in namespace services-4152 10/13/23 09:46:49.594 + I1013 09:46:49.602457 23 runners.go:193] Created replication controller with name: externalsvc, namespace: services-4152, replica count: 2 + I1013 09:46:52.653874 23 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the ClusterIP service to type=ExternalName 10/13/23 09:46:52.657 + Oct 13 09:46:52.670: INFO: Creating new exec pod + 
Oct 13 09:46:52.680: INFO: Waiting up to 5m0s for pod "execpodnft9r" in namespace "services-4152" to be "running" + Oct 13 09:46:52.684: INFO: Pod "execpodnft9r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805546ms + Oct 13 09:46:54.688: INFO: Pod "execpodnft9r": Phase="Running", Reason="", readiness=true. Elapsed: 2.00803702s + Oct 13 09:46:54.688: INFO: Pod "execpodnft9r" satisfied condition "running" + Oct 13 09:46:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-4152 exec execpodnft9r -- /bin/sh -x -c nslookup clusterip-service.services-4152.svc.cluster.local' + Oct 13 09:46:54.898: INFO: stderr: "+ nslookup clusterip-service.services-4152.svc.cluster.local\n" + Oct 13 09:46:54.898: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4152.svc.cluster.local\tcanonical name = externalsvc.services-4152.svc.cluster.local.\nName:\texternalsvc.services-4152.svc.cluster.local\nAddress: 10.98.187.2\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-4152, will wait for the garbage collector to delete the pods 10/13/23 09:46:54.898 + Oct 13 09:46:54.961: INFO: Deleting ReplicationController externalsvc took: 8.740866ms + Oct 13 09:46:55.061: INFO: Terminating ReplicationController externalsvc pods took: 100.448975ms + Oct 13 09:46:56.677: INFO: Cleaning up the ClusterIP to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 09:46:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-4152" for this suite. 10/13/23 09:46:56.69 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:46:56.696 +Oct 13 09:46:56.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 09:46:56.697 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:56.711 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:56.713 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +STEP: Creating secret with name secret-test-3177e6c7-d316-486b-bdc0-7ccbc241d1a7 10/13/23 09:46:56.715 +STEP: Creating a pod to test consume secrets 10/13/23 09:46:56.719 +Oct 13 09:46:56.726: INFO: Waiting up to 5m0s for pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b" in namespace "secrets-6700" to be "Succeeded or Failed" +Oct 13 09:46:56.729: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.671373ms +Oct 13 09:46:58.734: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098205s +Oct 13 09:47:00.733: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007203304s +STEP: Saw pod success 10/13/23 09:47:00.733 +Oct 13 09:47:00.733: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b" satisfied condition "Succeeded or Failed" +Oct 13 09:47:00.736: INFO: Trying to get logs from node node2 pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b container secret-volume-test: +STEP: delete the pod 10/13/23 09:47:00.748 +Oct 13 09:47:00.757: INFO: Waiting for pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b to disappear +Oct 13 09:47:00.760: INFO: Pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:00.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-6700" for this suite. 10/13/23 09:47:00.763 +------------------------------ +• [4.073 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:46:56.696 + Oct 13 09:46:56.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 09:46:56.697 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:46:56.711 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:46:56.713 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 + STEP: Creating secret with name secret-test-3177e6c7-d316-486b-bdc0-7ccbc241d1a7 10/13/23 09:46:56.715 + STEP: Creating a pod to test consume secrets 10/13/23 09:46:56.719 + Oct 13 09:46:56.726: INFO: Waiting up to 5m0s for pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b" in namespace "secrets-6700" to be "Succeeded or Failed" + Oct 13 09:46:56.729: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.671373ms + Oct 13 09:46:58.734: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098205s + Oct 13 09:47:00.733: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007203304s + STEP: Saw pod success 10/13/23 09:47:00.733 + Oct 13 09:47:00.733: INFO: Pod "pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b" satisfied condition "Succeeded or Failed" + Oct 13 09:47:00.736: INFO: Trying to get logs from node node2 pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b container secret-volume-test: + STEP: delete the pod 10/13/23 09:47:00.748 + Oct 13 09:47:00.757: INFO: Waiting for pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b to disappear + Oct 13 09:47:00.760: INFO: Pod pod-secrets-de2d4f1d-0a62-4b48-a7b3-11d51e43d33b no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:00.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-6700" for this suite. 10/13/23 09:47:00.763 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +[BeforeEach] [sig-api-machinery] Aggregator + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:00.769 +Oct 13 09:47:00.769: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename aggregator 10/13/23 09:47:00.771 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:00.787 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:00.789 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 +Oct 13 09:47:00.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +STEP: Registering the sample API server. 
10/13/23 09:47:00.792 +Oct 13 09:47:01.298: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created +Oct 13 09:47:03.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:05.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:07.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:09.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:11.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:13.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:15.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:17.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is 
progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:19.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:21.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:23.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Oct 13 09:47:25.491: INFO: Waited 125.661683ms for the sample-apiserver to be ready to handle requests. 
+STEP: Read Status for v1alpha1.wardle.example.com 10/13/23 09:47:25.545 +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 10/13/23 09:47:25.55 +STEP: List APIServices 10/13/23 09:47:25.561 +Oct 13 09:47:25.567: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:25.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + tear down framework | framework.go:193 +STEP: Destroying namespace "aggregator-7172" for this suite. 10/13/23 09:47:25.693 +------------------------------ +• [SLOW TEST] [24.932 seconds] +[sig-api-machinery] Aggregator +test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Aggregator + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:00.769 + Oct 13 09:47:00.769: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename aggregator 10/13/23 09:47:00.771 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:00.787 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:00.789 + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 + Oct 13 09:47:00.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + STEP: Registering the sample API server. 
10/13/23 09:47:00.792 + Oct 13 09:47:01.298: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created + Oct 13 09:47:03.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:05.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:07.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:09.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:11.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:13.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:15.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:17.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" 
is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:19.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:21.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:23.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), LastTransitionTime:time.Date(2023, time.October, 13, 9, 47, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + Oct 13 09:47:25.491: INFO: Waited 125.661683ms for the sample-apiserver to be ready to handle requests. 
+ STEP: Read Status for v1alpha1.wardle.example.com 10/13/23 09:47:25.545 + STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 10/13/23 09:47:25.55 + STEP: List APIServices 10/13/23 09:47:25.561 + Oct 13 09:47:25.567: INFO: Found v1alpha1.wardle.example.com in APIServiceList + [AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 + [AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:25.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Aggregator + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Aggregator + tear down framework | framework.go:193 + STEP: Destroying namespace "aggregator-7172" for this suite. 10/13/23 09:47:25.693 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:25.701 +Oct 13 09:47:25.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:47:25.702 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:25.724 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:25.728 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 +STEP: Creating configMap with name projected-configmap-test-volume-map-15e0d1b4-9d67-41de-8517-e5863436ea26 10/13/23 09:47:25.731 +STEP: Creating a pod to test consume configMaps 10/13/23 09:47:25.738 +Oct 13 09:47:25.747: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14" in namespace "projected-8245" to be "Succeeded or Failed" +Oct 13 09:47:25.750: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05964ms +Oct 13 09:47:27.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008349583s +Oct 13 09:47:29.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008256044s +STEP: Saw pod success 10/13/23 09:47:29.755 +Oct 13 09:47:29.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14" satisfied condition "Succeeded or Failed" +Oct 13 09:47:29.759: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 container agnhost-container: +STEP: delete the pod 10/13/23 09:47:29.765 +Oct 13 09:47:29.780: INFO: Waiting for pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 to disappear +Oct 13 09:47:29.784: INFO: Pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:29.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8245" for this suite. 10/13/23 09:47:29.787 +------------------------------ +• [4.091 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:25.701 + Oct 13 09:47:25.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:47:25.702 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:25.724 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:25.728 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:89 + STEP: Creating configMap with name projected-configmap-test-volume-map-15e0d1b4-9d67-41de-8517-e5863436ea26 10/13/23 09:47:25.731 + STEP: Creating a pod to test consume configMaps 10/13/23 09:47:25.738 + Oct 13 09:47:25.747: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14" in namespace "projected-8245" to be "Succeeded or Failed" + Oct 13 09:47:25.750: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05964ms + Oct 13 09:47:27.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008349583s + Oct 13 09:47:29.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008256044s + STEP: Saw pod success 10/13/23 09:47:29.755 + Oct 13 09:47:29.755: INFO: Pod "pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14" satisfied condition "Succeeded or Failed" + Oct 13 09:47:29.759: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 container agnhost-container: + STEP: delete the pod 10/13/23 09:47:29.765 + Oct 13 09:47:29.780: INFO: Waiting for pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 to disappear + Oct 13 09:47:29.784: INFO: Pod pod-projected-configmaps-ec721366-2d84-4457-88bc-dd3eb4c68f14 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:29.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8245" for this suite. 10/13/23 09:47:29.787 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:29.794 +Oct 13 09:47:29.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename disruption 10/13/23 09:47:29.795 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:29.809 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:29.811 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +STEP: Creating a pdb that targets all three pods in a test replica set 10/13/23 09:47:29.813 +STEP: Waiting for the pdb to be processed 10/13/23 09:47:29.817 +STEP: First trying to evict a pod which shouldn't be evictable 10/13/23 09:47:31.83 +STEP: Waiting for all pods to be running 10/13/23 09:47:31.83 +Oct 13 09:47:31.834: INFO: pods: 0 < 3 +STEP: locating a running pod 10/13/23 09:47:33.84 +STEP: Updating the pdb to allow a pod to be evicted 10/13/23 09:47:33.851 +STEP: Waiting for the pdb to be processed 10/13/23 09:47:33.86 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 10/13/23 09:47:35.867 +STEP: Waiting for all pods to be running 10/13/23 09:47:35.867 +STEP: Waiting for the pdb to observed all healthy pods 10/13/23 09:47:35.87 +STEP: Patching the pdb to disallow a pod to be evicted 10/13/23 09:47:35.897 +STEP: Waiting for the pdb to be processed 10/13/23 09:47:35.907 +STEP: Waiting for all pods to be running 10/13/23 09:47:37.918 +STEP: locating a running pod 10/13/23 09:47:37.922 +STEP: Deleting the pdb to allow a pod to be evicted 10/13/23 09:47:37.932 +STEP: Waiting for the pdb to be deleted 10/13/23 09:47:37.939 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 10/13/23 09:47:37.941 +STEP: Waiting for all pods to be 
running 10/13/23 09:47:37.941 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:37.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-9270" for this suite. 10/13/23 09:47:37.962 +------------------------------ +• [SLOW TEST] [8.178 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:29.794 + Oct 13 09:47:29.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename disruption 10/13/23 09:47:29.795 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:29.809 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:29.811 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 + STEP: Creating a pdb that targets all three pods in a test replica set 10/13/23 09:47:29.813 + STEP: Waiting for the pdb to be processed 10/13/23 09:47:29.817 + STEP: First trying to evict a pod which shouldn't be evictable 10/13/23 09:47:31.83 + STEP: Waiting for all pods to be running 10/13/23 09:47:31.83 + Oct 13 09:47:31.834: INFO: pods: 0 < 3 + STEP: locating a running pod 10/13/23 09:47:33.84 + STEP: Updating the pdb to allow a pod to be evicted 10/13/23 09:47:33.851 + STEP: Waiting for the pdb to be processed 10/13/23 09:47:33.86 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 10/13/23 09:47:35.867 + STEP: Waiting for all pods to be running 10/13/23 09:47:35.867 + STEP: Waiting for the pdb to observed all healthy pods 10/13/23 09:47:35.87 + STEP: Patching the pdb to disallow a pod to be evicted 10/13/23 09:47:35.897 + STEP: Waiting for the pdb to be processed 10/13/23 09:47:35.907 + STEP: Waiting for all pods to be running 10/13/23 09:47:37.918 + STEP: locating a running pod 10/13/23 09:47:37.922 + STEP: Deleting the pdb to allow a pod to be evicted 10/13/23 09:47:37.932 + STEP: Waiting for the pdb to be deleted 10/13/23 09:47:37.939 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 10/13/23 09:47:37.941 + STEP: Waiting for all pods to be running 10/13/23 09:47:37.941 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:37.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-9270" for this suite. 
10/13/23 09:47:37.962 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:37.974 +Oct 13 09:47:37.975: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename resourcequota 10/13/23 09:47:37.975 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:37.995 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:37.997 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 +STEP: Discovering how many secrets are in namespace by default 10/13/23 09:47:38 +STEP: Counting existing ResourceQuota 10/13/23 09:47:43.004 +STEP: Creating a ResourceQuota 10/13/23 09:47:48.01 +STEP: Ensuring resource quota status is calculated 10/13/23 09:47:48.019 +STEP: Creating a Secret 10/13/23 09:47:50.026 +STEP: Ensuring resource quota status captures secret creation 10/13/23 09:47:50.039 +STEP: Deleting a secret 10/13/23 09:47:52.045 +STEP: Ensuring resource quota status released usage 10/13/23 09:47:52.055 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:54.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-6039" for this suite. 10/13/23 09:47:54.066 +------------------------------ +• [SLOW TEST] [16.098 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:37.974 + Oct 13 09:47:37.975: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename resourcequota 10/13/23 09:47:37.975 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:37.995 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:37.997 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 + STEP: Discovering how many secrets are in namespace by default 10/13/23 09:47:38 + STEP: Counting existing ResourceQuota 10/13/23 09:47:43.004 + STEP: Creating a ResourceQuota 10/13/23 09:47:48.01 + STEP: Ensuring resource quota status is calculated 10/13/23 09:47:48.019 + STEP: Creating a Secret 10/13/23 09:47:50.026 + STEP: Ensuring resource quota status captures secret creation 10/13/23 09:47:50.039 + STEP: Deleting a secret 10/13/23 09:47:52.045 + STEP: Ensuring resource quota status released usage 10/13/23 09:47:52.055 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:54.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-6039" for this suite. 10/13/23 09:47:54.066 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:54.075 +Oct 13 09:47:54.075: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename projected 10/13/23 09:47:54.076 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:54.092 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:54.094 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-79995249-d413-45b3-9fa5-bbb1070c93bd 10/13/23 09:47:54.101 +STEP: Creating the pod 10/13/23 09:47:54.105 +Oct 13 09:47:54.113: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe" in namespace "projected-4108" to be "running and ready" +Oct 13 09:47:54.116: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915312ms +Oct 13 09:47:54.116: INFO: The phase of Pod pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:47:56.122: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009279059s +Oct 13 09:47:56.122: INFO: The phase of Pod pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe is Running (Ready = true) +Oct 13 09:47:56.122: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe" satisfied condition "running and ready" +STEP: Updating configmap projected-configmap-test-upd-79995249-d413-45b3-9fa5-bbb1070c93bd 10/13/23 09:47:56.133 +STEP: waiting to observe update in volume 10/13/23 09:47:56.138 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Oct 13 09:47:58.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-4108" for this suite. 10/13/23 09:47:58.161 +------------------------------ +• [4.095 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:54.075 + Oct 13 09:47:54.075: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename projected 10/13/23 09:47:54.076 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:54.092 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:54.094 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 + STEP: Creating projection with configMap that has name projected-configmap-test-upd-79995249-d413-45b3-9fa5-bbb1070c93bd 10/13/23 09:47:54.101 + STEP: Creating the pod 10/13/23 09:47:54.105 + Oct 13 09:47:54.113: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe" in namespace "projected-4108" to be "running and ready" + Oct 13 09:47:54.116: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915312ms + Oct 13 09:47:54.116: INFO: The phase of Pod pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:47:56.122: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009279059s + Oct 13 09:47:56.122: INFO: The phase of Pod pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe is Running (Ready = true) + Oct 13 09:47:56.122: INFO: Pod "pod-projected-configmaps-b2cf29d9-7ebb-48e0-aa6d-dc406897a2fe" satisfied condition "running and ready" + STEP: Updating configmap projected-configmap-test-upd-79995249-d413-45b3-9fa5-bbb1070c93bd 10/13/23 09:47:56.133 + STEP: waiting to observe update in volume 10/13/23 09:47:56.138 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Oct 13 09:47:58.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-4108" for this suite. 10/13/23 09:47:58.161 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:47:58.171 +Oct 13 09:47:58.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 09:47:58.172 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:58.19 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:58.193 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-3515 10/13/23 09:47:58.196 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +STEP: Initializing watcher for selector baz=blah,foo=bar 10/13/23 09:47:58.201 +STEP: Creating stateful set ss in namespace statefulset-3515 10/13/23 09:47:58.207 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3515 10/13/23 09:47:58.213 +Oct 13 09:47:58.216: INFO: Found 0 stateful pods, waiting for 1 +Oct 13 09:48:08.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 10/13/23 09:48:08.222 +Oct 13 09:48:08.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:48:08.408: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:48:08.408: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:48:08.408: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 09:48:08.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, 
currently Running - Ready=true +Oct 13 09:48:18.420: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 09:48:18.420: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:48:18.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999301s +Oct 13 09:48:19.448: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995072589s +Oct 13 09:48:20.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.9881038s +Oct 13 09:48:21.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982273124s +Oct 13 09:48:22.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977766623s +Oct 13 09:48:23.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97208792s +Oct 13 09:48:24.475: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.96627865s +Oct 13 09:48:25.481: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961508811s +Oct 13 09:48:26.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954724027s +Oct 13 09:48:27.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 949.017873ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3515 10/13/23 09:48:28.493 +Oct 13 09:48:28.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:48:28.675: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:48:28.675: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:48:28.675: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 09:48:28.680: INFO: Found 1 stateful pods, waiting for 3 +Oct 13 09:48:38.688: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:48:38.688: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Oct 13 09:48:38.688: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order 10/13/23 09:48:38.688 +STEP: Scale down will halt with unhealthy stateful pod 10/13/23 09:48:38.689 +Oct 13 09:48:38.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:48:38.863: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:48:38.863: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:48:38.863: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 09:48:38.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:48:39.061: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:48:39.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:48:39.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 09:48:39.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Oct 13 09:48:39.234: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Oct 13 09:48:39.234: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Oct 13 09:48:39.234: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Oct 13 09:48:39.234: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:48:39.239: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Oct 13 09:48:49.249: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 09:48:49.249: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 09:48:49.249: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Oct 13 09:48:49.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998899s +Oct 13 09:48:50.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996029432s +Oct 13 09:48:51.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989680994s +Oct 13 09:48:52.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983740704s +Oct 13 09:48:53.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979396192s +Oct 13 09:48:54.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973742239s +Oct 13 09:48:55.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969190975s +Oct 13 09:48:56.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963584967s +Oct 13 09:48:57.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957703055s +Oct 13 09:48:58.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.476399ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3515 10/13/23 09:48:59.313 +Oct 13 09:48:59.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:48:59.485: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:48:59.485: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:48:59.485: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 09:48:59.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:48:59.643: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:48:59.643: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:48:59.643: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 09:48:59.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 
--namespace=statefulset-3515 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Oct 13 09:48:59.808: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Oct 13 09:48:59.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Oct 13 09:48:59.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Oct 13 09:48:59.808: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order 10/13/23 09:49:09.828 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 09:49:09.828: INFO: Deleting all statefulset in ns statefulset-3515 +Oct 13 09:49:09.832: INFO: Scaling statefulset ss to 0 +Oct 13 09:49:09.842: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:49:09.845: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:09.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-3515" for this suite. 10/13/23 09:49:09.861 +------------------------------ +• [SLOW TEST] [71.695 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:47:58.171 + Oct 13 09:47:58.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 09:47:58.172 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:47:58.19 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:47:58.193 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-3515 10/13/23 09:47:58.196 + [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 + STEP: Initializing watcher for selector baz=blah,foo=bar 10/13/23 09:47:58.201 + STEP: Creating stateful set ss in namespace statefulset-3515 10/13/23 09:47:58.207 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3515 10/13/23 09:47:58.213 + Oct 13 09:47:58.216: INFO: Found 0 stateful pods, waiting for 1 + Oct 13 09:48:08.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 10/13/23 09:48:08.222 + Oct 13 09:48:08.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 
-- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:48:08.408: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:48:08.408: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:48:08.408: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 09:48:08.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Oct 13 09:48:18.420: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 09:48:18.420: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:48:18.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999301s + Oct 13 09:48:19.448: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995072589s + Oct 13 09:48:20.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.9881038s + Oct 13 09:48:21.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982273124s + Oct 13 09:48:22.464: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977766623s + Oct 13 09:48:23.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97208792s + Oct 13 09:48:24.475: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.96627865s + Oct 13 09:48:25.481: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961508811s + Oct 13 09:48:26.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954724027s + Oct 13 09:48:27.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 949.017873ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3515 10/13/23 09:48:28.493 + Oct 13 09:48:28.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:48:28.675: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:48:28.675: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:48:28.675: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 09:48:28.680: INFO: Found 1 stateful pods, waiting for 3 + Oct 13 09:48:38.688: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:48:38.688: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Oct 13 09:48:38.688: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Verifying that stateful set ss was scaled up in order 10/13/23 09:48:38.688 + STEP: Scale down will halt with unhealthy stateful pod 10/13/23 09:48:38.689 + Oct 13 09:48:38.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:48:38.863: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:48:38.863: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:48:38.863: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 09:48:38.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:48:39.061: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:48:39.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:48:39.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 09:48:39.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Oct 13 09:48:39.234: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Oct 13 09:48:39.234: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Oct 13 09:48:39.234: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Oct 13 09:48:39.234: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:48:39.239: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 + Oct 13 09:48:49.249: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 09:48:49.249: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 09:48:49.249: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Oct 13 09:48:49.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998899s + Oct 13 09:48:50.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996029432s + Oct 13 09:48:51.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989680994s + Oct 13 09:48:52.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983740704s + Oct 13 09:48:53.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979396192s + Oct 13 09:48:54.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973742239s + Oct 13 09:48:55.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969190975s + Oct 13 09:48:56.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963584967s + Oct 13 09:48:57.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957703055s + Oct 13 09:48:58.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.476399ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3515 10/13/23 09:48:59.313 + Oct 13 09:48:59.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:48:59.485: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:48:59.485: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:48:59.485: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 09:48:59.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 
--namespace=statefulset-3515 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:48:59.643: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:48:59.643: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:48:59.643: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 09:48:59.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=statefulset-3515 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Oct 13 09:48:59.808: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Oct 13 09:48:59.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Oct 13 09:48:59.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Oct 13 09:48:59.808: INFO: Scaling statefulset ss to 0 + STEP: Verifying that stateful set ss was scaled down in reverse order 10/13/23 09:49:09.828 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 09:49:09.828: INFO: Deleting all statefulset in ns statefulset-3515 + Oct 13 09:49:09.832: INFO: Scaling statefulset ss to 0 + Oct 13 09:49:09.842: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:49:09.845: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:09.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-3515" for this suite. 
10/13/23 09:49:09.861 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:09.867 +Oct 13 09:49:09.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename dns 10/13/23 09:49:09.868 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:09.885 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:09.887 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 10/13/23 09:49:09.889 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 10/13/23 09:49:09.889 +STEP: creating a pod to probe DNS 10/13/23 09:49:09.889 +STEP: submitting the pod to kubernetes 10/13/23 09:49:09.889 +Oct 13 09:49:09.897: INFO: Waiting up to 15m0s for pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8" in namespace "dns-5512" to be "running" +Oct 13 09:49:09.900: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146846ms +Oct 13 09:49:11.905: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00800993s +Oct 13 09:49:11.905: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8" satisfied condition "running" +STEP: retrieving the pod 10/13/23 09:49:11.905 +STEP: looking for the results for each expected name from probers 10/13/23 09:49:11.909 +Oct 13 09:49:11.923: INFO: DNS probes using dns-5512/dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8 succeeded + +STEP: deleting the pod 10/13/23 09:49:11.923 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:11.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-5512" for this suite. 
10/13/23 09:49:11.948 +------------------------------ +• [2.087 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:09.867 + Oct 13 09:49:09.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename dns 10/13/23 09:49:09.868 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:09.885 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:09.887 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 10/13/23 09:49:09.889 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 10/13/23 09:49:09.889 + STEP: creating a pod to probe DNS 10/13/23 09:49:09.889 + STEP: submitting the pod to kubernetes 10/13/23 09:49:09.889 + Oct 13 09:49:09.897: INFO: Waiting up to 15m0s for pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8" in namespace "dns-5512" to be "running" + Oct 13 09:49:09.900: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146846ms + Oct 13 09:49:11.905: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00800993s + Oct 13 09:49:11.905: INFO: Pod "dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8" satisfied condition "running" + STEP: retrieving the pod 10/13/23 09:49:11.905 + STEP: looking for the results for each expected name from probers 10/13/23 09:49:11.909 + Oct 13 09:49:11.923: INFO: DNS probes using dns-5512/dns-test-0522cad9-3123-4911-b1df-ddc49b84e5a8 succeeded + + STEP: deleting the pod 10/13/23 09:49:11.923 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:11.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-5512" for this suite. 
10/13/23 09:49:11.948 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:11.955 +Oct 13 09:49:11.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename kubelet-test 10/13/23 09:49:11.956 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:11.971 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:11.974 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +Oct 13 09:49:11.983: INFO: Waiting up to 5m0s for pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d" in namespace "kubelet-test-1985" to be "running and ready" +Oct 13 09:49:11.986: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939925ms +Oct 13 09:49:11.986: INFO: The phase of Pod busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d is Pending, waiting for it to be Running (with Ready = true) +Oct 13 09:49:13.992: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d": Phase="Running", Reason="", readiness=true. Elapsed: 2.009273118s +Oct 13 09:49:13.992: INFO: The phase of Pod busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d is Running (Ready = true) +Oct 13 09:49:13.992: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:14.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-1985" for this suite. 
10/13/23 09:49:14.007 +------------------------------ +• [2.058 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command in a pod + test/e2e/common/node/kubelet.go:44 + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:11.955 + Oct 13 09:49:11.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename kubelet-test 10/13/23 09:49:11.956 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:11.971 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:11.974 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + Oct 13 09:49:11.983: INFO: Waiting up to 5m0s for pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d" in namespace "kubelet-test-1985" to be "running and ready" + Oct 13 09:49:11.986: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939925ms + Oct 13 09:49:11.986: INFO: The phase of Pod busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d is Pending, waiting for it to be Running (with Ready = true) + Oct 13 09:49:13.992: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d": Phase="Running", Reason="", readiness=true. Elapsed: 2.009273118s + Oct 13 09:49:13.992: INFO: The phase of Pod busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d is Running (Ready = true) + Oct 13 09:49:13.992: INFO: Pod "busybox-scheduling-6dbec8e1-4a19-4e52-85fa-e6d277cfb16d" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:14.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-1985" for this suite. 
10/13/23 09:49:14.007 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:14.014 +Oct 13 09:49:14.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename var-expansion 10/13/23 09:49:14.015 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:14.031 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:14.034 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +STEP: Creating a pod to test substitution in container's args 10/13/23 09:49:14.036 +Oct 13 09:49:14.044: INFO: Waiting up to 5m0s for pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239" in namespace "var-expansion-9284" to be "Succeeded or Failed" +Oct 13 09:49:14.047: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389603ms +Oct 13 09:49:16.053: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009031643s +Oct 13 09:49:18.052: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007430541s +STEP: Saw pod success 10/13/23 09:49:18.052 +Oct 13 09:49:18.052: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239" satisfied condition "Succeeded or Failed" +Oct 13 09:49:18.054: INFO: Trying to get logs from node node2 pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 container dapi-container: +STEP: delete the pod 10/13/23 09:49:18.06 +Oct 13 09:49:18.070: INFO: Waiting for pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 to disappear +Oct 13 09:49:18.073: INFO: Pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:18.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-9284" for this suite. 
10/13/23 09:49:18.076 +------------------------------ +• [4.067 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:14.014 + Oct 13 09:49:14.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename var-expansion 10/13/23 09:49:14.015 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:14.031 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:14.034 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 + STEP: Creating a pod to test substitution in container's args 10/13/23 09:49:14.036 + Oct 13 09:49:14.044: INFO: Waiting up to 5m0s for pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239" in namespace "var-expansion-9284" to be "Succeeded or Failed" + Oct 13 09:49:14.047: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389603ms + Oct 13 09:49:16.053: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009031643s + Oct 13 09:49:18.052: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007430541s + STEP: Saw pod success 10/13/23 09:49:18.052 + Oct 13 09:49:18.052: INFO: Pod "var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239" satisfied condition "Succeeded or Failed" + Oct 13 09:49:18.054: INFO: Trying to get logs from node node2 pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 container dapi-container: + STEP: delete the pod 10/13/23 09:49:18.06 + Oct 13 09:49:18.070: INFO: Waiting for pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 to disappear + Oct 13 09:49:18.073: INFO: Pod var-expansion-a1a22c7f-85f4-4f12-974a-754e4cf7f239 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:18.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-9284" for this suite. 
10/13/23 09:49:18.076 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:18.082 +Oct 13 09:49:18.082: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename secrets 10/13/23 09:49:18.083 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.097 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:18.1 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:18.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9958" for this suite. 10/13/23 09:49:18.136 +------------------------------ +• [0.060 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:18.082 + Oct 13 09:49:18.082: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename secrets 10/13/23 09:49:18.083 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.097 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:18.1 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:18.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9958" for this suite. 
10/13/23 09:49:18.136 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:18.142 +Oct 13 09:49:18.142: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename namespaces 10/13/23 09:49:18.143 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.157 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:18.16 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +STEP: Creating a test namespace 10/13/23 09:49:18.162 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.175 +STEP: Creating a service in the namespace 10/13/23 09:49:18.177 +STEP: Deleting the namespace 10/13/23 09:49:18.19 +STEP: Waiting for the namespace to be removed. 10/13/23 09:49:18.198 +STEP: Recreating the namespace 10/13/23 09:49:24.202 +STEP: Verifying there is no service in the namespace 10/13/23 09:49:24.217 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:24.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-95" for this suite. 10/13/23 09:49:24.224 +STEP: Destroying namespace "nsdeletetest-2589" for this suite. 10/13/23 09:49:24.23 +Oct 13 09:49:24.233: INFO: Namespace nsdeletetest-2589 was already deleted +STEP: Destroying namespace "nsdeletetest-7846" for this suite. 
10/13/23 09:49:24.233 +------------------------------ +• [SLOW TEST] [6.096 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:18.142 + Oct 13 09:49:18.142: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename namespaces 10/13/23 09:49:18.143 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.157 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:18.16 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 + STEP: Creating a test namespace 10/13/23 09:49:18.162 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:18.175 + STEP: Creating a service in the namespace 10/13/23 09:49:18.177 + STEP: Deleting the namespace 10/13/23 09:49:18.19 + STEP: Waiting for the namespace to be removed. 10/13/23 09:49:18.198 + STEP: Recreating the namespace 10/13/23 09:49:24.202 + STEP: Verifying there is no service in the namespace 10/13/23 09:49:24.217 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:24.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-95" for this suite. 10/13/23 09:49:24.224 + STEP: Destroying namespace "nsdeletetest-2589" for this suite. 10/13/23 09:49:24.23 + Oct 13 09:49:24.233: INFO: Namespace nsdeletetest-2589 was already deleted + STEP: Destroying namespace "nsdeletetest-7846" for this suite. 10/13/23 09:49:24.233 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +[BeforeEach] version v1 + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:24.238 +Oct 13 09:49:24.238: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename proxy 10/13/23 09:49:24.239 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:24.254 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:24.257 +[BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 +[It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +Oct 13 09:49:24.262: INFO: Creating pod... +Oct 13 09:49:24.270: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6701" to be "running" +Oct 13 09:49:24.273: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.809993ms +Oct 13 09:49:26.276: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.006492374s +Oct 13 09:49:26.276: INFO: Pod "agnhost" satisfied condition "running" +Oct 13 09:49:26.276: INFO: Creating service... +Oct 13 09:49:26.287: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=DELETE +Oct 13 09:49:26.294: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 13 09:49:26.294: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=OPTIONS +Oct 13 09:49:26.299: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 13 09:49:26.299: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=PATCH +Oct 13 09:49:26.303: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 13 09:49:26.303: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=POST +Oct 13 09:49:26.306: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 13 09:49:26.306: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=PUT +Oct 13 09:49:26.311: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 13 09:49:26.311: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=DELETE +Oct 13 09:49:26.316: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Oct 13 09:49:26.316: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=OPTIONS +Oct 13 09:49:26.324: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Oct 13 09:49:26.324: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=PATCH +Oct 13 09:49:26.331: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Oct 13 09:49:26.331: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=POST +Oct 13 09:49:26.337: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Oct 13 09:49:26.337: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=PUT +Oct 13 09:49:26.342: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Oct 13 09:49:26.342: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=GET +Oct 13 09:49:26.345: INFO: http.Client request:GET StatusCode:301 +Oct 13 09:49:26.345: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=GET +Oct 13 09:49:26.349: INFO: http.Client request:GET StatusCode:301 +Oct 13 09:49:26.349: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=HEAD +Oct 13 09:49:26.351: INFO: http.Client request:HEAD StatusCode:301 +Oct 13 09:49:26.351: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=HEAD +Oct 13 09:49:26.355: INFO: http.Client request:HEAD 
StatusCode:301 +[AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:26.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 +[DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 +STEP: Destroying namespace "proxy-6701" for this suite. 10/13/23 09:49:26.359 +------------------------------ +• [2.128 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:24.238 + Oct 13 09:49:24.238: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename proxy 10/13/23 09:49:24.239 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:24.254 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:24.257 + [BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 + [It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 + Oct 13 09:49:24.262: INFO: Creating pod... + Oct 13 09:49:24.270: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6701" to be "running" + Oct 13 09:49:24.273: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809993ms + Oct 13 09:49:26.276: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.006492374s + Oct 13 09:49:26.276: INFO: Pod "agnhost" satisfied condition "running" + Oct 13 09:49:26.276: INFO: Creating service... 
+ Oct 13 09:49:26.287: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=DELETE + Oct 13 09:49:26.294: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Oct 13 09:49:26.294: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=OPTIONS + Oct 13 09:49:26.299: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Oct 13 09:49:26.299: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=PATCH + Oct 13 09:49:26.303: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Oct 13 09:49:26.303: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=POST + Oct 13 09:49:26.306: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Oct 13 09:49:26.306: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=PUT + Oct 13 09:49:26.311: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Oct 13 09:49:26.311: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=DELETE + Oct 13 09:49:26.316: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Oct 13 09:49:26.316: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=OPTIONS + Oct 13 09:49:26.324: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Oct 13 09:49:26.324: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=PATCH + Oct 13 09:49:26.331: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Oct 13 09:49:26.331: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=POST + Oct 13 09:49:26.337: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Oct 13 09:49:26.337: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=PUT + Oct 13 09:49:26.342: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Oct 13 09:49:26.342: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=GET + Oct 13 09:49:26.345: INFO: http.Client request:GET StatusCode:301 + Oct 13 09:49:26.345: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=GET + Oct 13 09:49:26.349: INFO: http.Client request:GET StatusCode:301 + Oct 13 09:49:26.349: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/pods/agnhost/proxy?method=HEAD + Oct 13 09:49:26.351: INFO: http.Client request:HEAD StatusCode:301 + Oct 13 09:49:26.351: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6701/services/e2e-proxy-test-service/proxy?method=HEAD + Oct 13 09:49:26.355: INFO: http.Client request:HEAD StatusCode:301 + [AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:26.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 + 
test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 + [DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 + STEP: Destroying namespace "proxy-6701" for this suite. 10/13/23 09:49:26.359 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:26.376 +Oct 13 09:49:26.376: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename statefulset 10/13/23 09:49:26.377 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:26.397 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:26.4 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-9753 10/13/23 09:49:26.403 +[It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +STEP: Creating statefulset ss in namespace statefulset-9753 10/13/23 09:49:26.408 +Oct 13 09:49:26.418: INFO: Found 0 stateful pods, waiting for 1 +Oct 13 09:49:36.424: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource 10/13/23 09:49:36.431 +STEP: updating a scale subresource 10/13/23 09:49:36.435 +STEP: verifying the statefulset Spec.Replicas was modified 10/13/23 09:49:36.44 +STEP: Patch a scale subresource 10/13/23 09:49:36.443 +STEP: verifying the statefulset Spec.Replicas was modified 10/13/23 09:49:36.45 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Oct 13 09:49:36.454: INFO: Deleting all statefulset in ns statefulset-9753 +Oct 13 09:49:36.458: INFO: Scaling statefulset ss to 0 +Oct 13 09:49:46.480: INFO: Waiting for statefulset status.replicas updated to 0 +Oct 13 09:49:46.485: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:46.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-9753" for this suite. 
10/13/23 09:49:46.504 +------------------------------ +• [SLOW TEST] [20.134 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:26.376 + Oct 13 09:49:26.376: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename statefulset 10/13/23 09:49:26.377 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:26.397 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:26.4 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-9753 10/13/23 09:49:26.403 + [It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 + STEP: Creating statefulset ss in namespace statefulset-9753 10/13/23 09:49:26.408 + Oct 13 09:49:26.418: INFO: Found 0 stateful pods, waiting for 1 + Oct 13 09:49:36.424: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: getting scale subresource 10/13/23 09:49:36.431 + STEP: updating a scale subresource 10/13/23 09:49:36.435 + STEP: verifying the statefulset Spec.Replicas was modified 10/13/23 09:49:36.44 + STEP: Patch a scale subresource 10/13/23 09:49:36.443 + STEP: verifying the statefulset Spec.Replicas was modified 10/13/23 09:49:36.45 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Oct 13 09:49:36.454: INFO: Deleting all statefulset in ns statefulset-9753 + Oct 13 09:49:36.458: INFO: Scaling statefulset ss to 0 + Oct 13 09:49:46.480: INFO: Waiting for statefulset status.replicas updated to 0 + Oct 13 09:49:46.485: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:46.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-9753" for this suite. 
10/13/23 09:49:46.504 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:46.512 +Oct 13 09:49:46.512: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename security-context 10/13/23 09:49:46.513 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:46.532 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:46.535 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 10/13/23 09:49:46.537 +Oct 13 09:49:46.550: INFO: Waiting up to 5m0s for pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c" in namespace "security-context-2558" to be "Succeeded or Failed" +Oct 13 09:49:46.554: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417172ms +Oct 13 09:49:48.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00972522s +Oct 13 09:49:50.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009173562s +STEP: Saw pod success 10/13/23 09:49:50.56 +Oct 13 09:49:50.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c" satisfied condition "Succeeded or Failed" +Oct 13 09:49:50.563: INFO: Trying to get logs from node node2 pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c container test-container: +STEP: delete the pod 10/13/23 09:49:50.57 +Oct 13 09:49:50.584: INFO: Waiting for pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c to disappear +Oct 13 09:49:50.589: INFO: Pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:50.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-2558" for this suite. 
10/13/23 09:49:50.593 +------------------------------ +• [4.088 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:46.512 + Oct 13 09:49:46.512: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename security-context 10/13/23 09:49:46.513 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:46.532 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:46.535 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 10/13/23 09:49:46.537 + Oct 13 09:49:46.550: INFO: Waiting up to 5m0s for pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c" in namespace "security-context-2558" to be "Succeeded or Failed" + Oct 13 09:49:46.554: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417172ms + Oct 13 09:49:48.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00972522s + Oct 13 09:49:50.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009173562s + STEP: Saw pod success 10/13/23 09:49:50.56 + Oct 13 09:49:50.560: INFO: Pod "security-context-ad5ba72e-5464-4f28-805d-08954d5a578c" satisfied condition "Succeeded or Failed" + Oct 13 09:49:50.563: INFO: Trying to get logs from node node2 pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c container test-container: + STEP: delete the pod 10/13/23 09:49:50.57 + Oct 13 09:49:50.584: INFO: Waiting for pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c to disappear + Oct 13 09:49:50.589: INFO: Pod security-context-ad5ba72e-5464-4f28-805d-08954d5a578c no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:50.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-2558" for this suite. 
10/13/23 09:49:50.593 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:50.599 +Oct 13 09:49:50.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename webhook 10/13/23 09:49:50.6 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:50.615 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:50.617 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 10/13/23 09:49:50.631 +STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:49:50.937 +STEP: Deploying the webhook pod 10/13/23 09:49:50.944 +STEP: Wait for the deployment to be ready 10/13/23 09:49:50.956 +Oct 13 09:49:50.964: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 10/13/23 09:49:52.976 +STEP: Verifying the service has paired with the endpoint 10/13/23 09:49:52.989 +Oct 13 09:49:53.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API 10/13/23 09:49:53.994 +STEP: create a pod that should be updated by the webhook 10/13/23 09:49:54.01 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:54.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9939" for this suite. 10/13/23 09:49:54.087 +STEP: Destroying namespace "webhook-9939-markers" for this suite. 
10/13/23 09:49:54.094 +------------------------------ +• [3.505 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:50.599 + Oct 13 09:49:50.599: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename webhook 10/13/23 09:49:50.6 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:50.615 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:50.617 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 10/13/23 09:49:50.631 + STEP: Create role binding to let webhook read extension-apiserver-authentication 10/13/23 09:49:50.937 + STEP: Deploying the webhook pod 10/13/23 09:49:50.944 + STEP: Wait for the deployment to be ready 10/13/23 09:49:50.956 + Oct 13 09:49:50.964: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 10/13/23 09:49:52.976 + STEP: Verifying the service has paired with the endpoint 10/13/23 09:49:52.989 + Oct 13 09:49:53.990: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 + STEP: Registering the mutating pod webhook via the AdmissionRegistration API 10/13/23 09:49:53.994 + STEP: create a pod that should be updated by the webhook 10/13/23 09:49:54.01 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:54.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9939" for this suite. 10/13/23 09:49:54.087 + STEP: Destroying namespace "webhook-9939-markers" for this suite. 
10/13/23 09:49:54.094 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:54.105 +Oct 13 09:49:54.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename emptydir 10/13/23 09:49:54.106 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:54.123 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:54.126 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +STEP: Creating a pod to test emptydir volume type on node default medium 10/13/23 09:49:54.129 +Oct 13 09:49:54.138: INFO: Waiting up to 5m0s for pod "pod-602858b8-df54-402a-ae79-00b1a800be99" in namespace "emptydir-1981" to be "Succeeded or Failed" +Oct 13 09:49:54.142: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604745ms +Oct 13 09:49:56.146: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008181278s +Oct 13 09:49:58.148: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009771478s +STEP: Saw pod success 10/13/23 09:49:58.148 +Oct 13 09:49:58.148: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99" satisfied condition "Succeeded or Failed" +Oct 13 09:49:58.151: INFO: Trying to get logs from node node2 pod pod-602858b8-df54-402a-ae79-00b1a800be99 container test-container: +STEP: delete the pod 10/13/23 09:49:58.159 +Oct 13 09:49:58.174: INFO: Waiting for pod pod-602858b8-df54-402a-ae79-00b1a800be99 to disappear +Oct 13 09:49:58.177: INFO: Pod pod-602858b8-df54-402a-ae79-00b1a800be99 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Oct 13 09:49:58.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1981" for this suite. 
10/13/23 09:49:58.184 +------------------------------ +• [4.085 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:54.105 + Oct 13 09:49:54.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename emptydir 10/13/23 09:49:54.106 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:54.123 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:54.126 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 + STEP: Creating a pod to test emptydir volume type on node default medium 10/13/23 09:49:54.129 + Oct 13 09:49:54.138: INFO: Waiting up to 5m0s for pod "pod-602858b8-df54-402a-ae79-00b1a800be99" in namespace "emptydir-1981" to be "Succeeded or Failed" + Oct 13 09:49:54.142: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604745ms + Oct 13 09:49:56.146: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008181278s + Oct 13 09:49:58.148: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009771478s + STEP: Saw pod success 10/13/23 09:49:58.148 + Oct 13 09:49:58.148: INFO: Pod "pod-602858b8-df54-402a-ae79-00b1a800be99" satisfied condition "Succeeded or Failed" + Oct 13 09:49:58.151: INFO: Trying to get logs from node node2 pod pod-602858b8-df54-402a-ae79-00b1a800be99 container test-container: + STEP: delete the pod 10/13/23 09:49:58.159 + Oct 13 09:49:58.174: INFO: Waiting for pod pod-602858b8-df54-402a-ae79-00b1a800be99 to disappear + Oct 13 09:49:58.177: INFO: Pod pod-602858b8-df54-402a-ae79-00b1a800be99 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Oct 13 09:49:58.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1981" for this suite. 
10/13/23 09:49:58.184 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 10/13/23 09:49:58.19 +Oct 13 09:49:58.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 +STEP: Building a namespace api object, basename services 10/13/23 09:49:58.191 +STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:58.205 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:58.207 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-1436 10/13/23 09:49:58.209 +STEP: changing the ExternalName service to type=NodePort 10/13/23 09:49:58.214 +STEP: creating replication controller externalname-service in namespace services-1436 10/13/23 09:49:58.238 +I1013 09:49:58.245710 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-1436, replica count: 2 +I1013 09:50:01.297546 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Oct 13 09:50:01.297: INFO: Creating new exec pod +Oct 13 09:50:01.306: INFO: Waiting up to 5m0s for pod "execpodmqhcb" in namespace "services-1436" to be "running" +Oct 13 09:50:01.310: INFO: Pod "execpodmqhcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80708ms +Oct 13 09:50:03.313: INFO: Pod "execpodmqhcb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00732975s +Oct 13 09:50:03.313: INFO: Pod "execpodmqhcb" satisfied condition "running" +Oct 13 09:50:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Oct 13 09:50:04.464: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Oct 13 09:50:04.464: INFO: stdout: "" +Oct 13 09:50:04.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.107.175.85 80' +Oct 13 09:50:04.591: INFO: stderr: "+ nc -v -z -w 2 10.107.175.85 80\nConnection to 10.107.175.85 80 port [tcp/http] succeeded!\n" +Oct 13 09:50:04.591: INFO: stdout: "" +Oct 13 09:50:04.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 30578' +Oct 13 09:50:04.726: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 30578\nConnection to 10.253.8.111 30578 port [tcp/*] succeeded!\n" +Oct 13 09:50:04.726: INFO: stdout: "" +Oct 13 09:50:04.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.253.8.112 30578' +Oct 13 09:50:04.849: INFO: stderr: "+ nc -v -z -w 2 10.253.8.112 30578\nConnection to 10.253.8.112 30578 port [tcp/*] succeeded!\n" +Oct 13 09:50:04.849: INFO: stdout: "" +Oct 13 09:50:04.849: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Oct 13 09:50:04.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-1436" for this suite. 
10/13/23 09:50:04.882 +------------------------------ +• [SLOW TEST] [6.699 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 10/13/23 09:49:58.19 + Oct 13 09:49:58.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1565935798 + STEP: Building a namespace api object, basename services 10/13/23 09:49:58.191 + STEP: Waiting for a default service account to be provisioned in namespace 10/13/23 09:49:58.205 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/13/23 09:49:58.207 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-1436 10/13/23 09:49:58.209 + STEP: changing the ExternalName service to type=NodePort 10/13/23 09:49:58.214 + STEP: creating replication controller externalname-service in namespace services-1436 10/13/23 09:49:58.238 + I1013 09:49:58.245710 23 runners.go:193] Created replication controller with name: externalname-service, namespace: services-1436, replica count: 2 + I1013 09:50:01.297546 23 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Oct 13 09:50:01.297: INFO: Creating new exec pod + Oct 13 09:50:01.306: INFO: Waiting up to 5m0s for pod "execpodmqhcb" in namespace "services-1436" to be "running" + Oct 13 09:50:01.310: INFO: Pod "execpodmqhcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80708ms + Oct 13 09:50:03.313: INFO: Pod "execpodmqhcb": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00732975s + Oct 13 09:50:03.313: INFO: Pod "execpodmqhcb" satisfied condition "running" + Oct 13 09:50:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Oct 13 09:50:04.464: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Oct 13 09:50:04.464: INFO: stdout: "" + Oct 13 09:50:04.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.107.175.85 80' + Oct 13 09:50:04.591: INFO: stderr: "+ nc -v -z -w 2 10.107.175.85 80\nConnection to 10.107.175.85 80 port [tcp/http] succeeded!\n" + Oct 13 09:50:04.591: INFO: stdout: "" + Oct 13 09:50:04.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.253.8.111 30578' + Oct 13 09:50:04.726: INFO: stderr: "+ nc -v -z -w 2 10.253.8.111 30578\nConnection to 10.253.8.111 30578 port [tcp/*] succeeded!\n" + Oct 13 09:50:04.726: INFO: stdout: "" + Oct 13 09:50:04.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1565935798 --namespace=services-1436 exec execpodmqhcb -- /bin/sh -x -c nc -v -z -w 2 10.253.8.112 30578' + Oct 13 09:50:04.849: INFO: stderr: "+ nc -v -z -w 2 10.253.8.112 30578\nConnection to 10.253.8.112 30578 port [tcp/*] succeeded!\n" + Oct 13 09:50:04.849: INFO: stdout: "" + Oct 13 09:50:04.849: INFO: Cleaning up the ExternalName to NodePort test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Oct 13 09:50:04.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-1436" for this suite. 
10/13/23 09:50:04.882 + << End Captured GinkgoWriter Output +------------------------------ +[SynchronizedAfterSuite] +test/e2e/e2e.go:88 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 +Oct 13 09:50:04.890: INFO: Running AfterSuite actions on node 1 +Oct 13 09:50:04.890: INFO: Skipping dumping logs from cluster +------------------------------ +[SynchronizedAfterSuite] PASSED [0.000 seconds] +[SynchronizedAfterSuite] +test/e2e/e2e.go:88 + + Begin Captured GinkgoWriter Output >> + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 + Oct 13 09:50:04.890: INFO: Running AfterSuite actions on node 1 + Oct 13 09:50:04.890: INFO: Skipping dumping logs from cluster + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:153 +[ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:153 +------------------------------ +[ReportAfterSuite] PASSED [0.000 seconds] +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:153 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:153 + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:529 +[ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:529 +------------------------------ +[ReportAfterSuite] PASSED [0.121 seconds] +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:529 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:529 + << End Captured GinkgoWriter Output +------------------------------ + +Ran 368 of 7069 Specs in 5793.341 seconds +SUCCESS! 
-- 368 Passed | 0 Failed | 0 Pending | 6701 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h36m34.198961726s
+Test Suite Passed
+You're using deprecated Ginkgo functionality:
+=============================================
+  --noColor is deprecated, use --no-color instead
+  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
+
+To silence deprecations that can be silenced set the following environment variable:
+  ACK_GINKGO_DEPRECATIONS=2.4.0
+
diff --git a/v1.26/ceake/junit_01.xml b/v1.26/ceake/junit_01.xml new file mode 100644 index 0000000000..100c5a0856 --- /dev/null +++ b/v1.26/ceake/junit_01.xml @@ -0,0 +1,20499 @@
[junit_01.xml: 20,499 added lines of JUnit XML test results for this conformance run; the XML markup was not preserved in this extraction]