This repository was archived by the owner on Sep 4, 2021. It is now read-only.

support for Kubernetes 1.6 in vagrant #863

@jbw976

Description


I tried bumping the Kubernetes version to 1.6.1 in my fork with this commit: jbw976@f839738

However, the cluster doesn't seem to come up successfully when running vagrant up from multi-node/vagrant. Is 1.6.1 expected to work simply by bumping the version number, or is more work needed to support 1.6?
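For reference, the change in that commit is essentially a one-line bump of the hyperkube version used by the install scripts. Roughly what it looks like (this assumes the K8S_VER variable in the controller/worker install scripts is where the hyperkube tag is set, and the old value shown is approximate):

# multi-node/generic/controller-install.sh and worker-install.sh
# before:
export K8S_VER=v1.5.4_coreos.0
# after:
export K8S_VER=v1.6.1_coreos.0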

It looks like the API server container keeps failing/exiting:

core@c1 ~ $ docker ps -a
CONTAINER ID        IMAGE                                                                                              COMMAND                  CREATED              STATUS                          PORTS               NAMES
33ee03df2ca8        quay.io/coreos/hyperkube@sha256:1c8b4487be52a6df7668135d88b4c375aeeda4d934e34dbf5a8191c96161a8f5   "/hyperkube apiserver"   About a minute ago   Exited (2) About a minute ago                       k8s_kube-apiserver_kube-apiserver-172.17.4.101_kube-system_63ca746f1897c616e533e8a22bc52f25_11
78c8e6698fdb        quay.io/coreos/hyperkube@sha256:1c8b4487be52a6df7668135d88b4c375aeeda4d934e34dbf5a8191c96161a8f5   "/hyperkube proxy --m"   22 minutes ago       Up 22 minutes                                       k8s_kube-proxy_kube-proxy-172.17.4.101_kube-system_3adc2e5909a25a7591be4e34d03a979a_0
236a7318e1e1        quay.io/coreos/hyperkube@sha256:1c8b4487be52a6df7668135d88b4c375aeeda4d934e34dbf5a8191c96161a8f5   "/hyperkube scheduler"   22 minutes ago       Up 22 minutes                                       k8s_kube-scheduler_kube-scheduler-172.17.4.101_kube-system_00f8fdc56c1d255064005c48f70be4ef_0
88168fcf90cc        quay.io/coreos/hyperkube@sha256:1c8b4487be52a6df7668135d88b4c375aeeda4d934e34dbf5a8191c96161a8f5   "/hyperkube controlle"   22 minutes ago       Up 22 minutes                                       k8s_kube-controller-manager_kube-controller-manager-172.17.4.101_kube-system_3904d793c0237421892d0b11d8787f7d_0
ee5e7cd9c687        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 23 minutes ago       Up 23 minutes                                       k8s_POD_kube-controller-manager-172.17.4.101_kube-system_3904d793c0237421892d0b11d8787f7d_0
5099f6e7db56        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 23 minutes ago       Up 23 minutes                                       k8s_POD_kube-proxy-172.17.4.101_kube-system_3adc2e5909a25a7591be4e34d03a979a_0
7b861b49e90d        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 23 minutes ago       Up 23 minutes                                       k8s_POD_kube-apiserver-172.17.4.101_kube-system_63ca746f1897c616e533e8a22bc52f25_0
157ea2d35035        gcr.io/google_containers/pause-amd64:3.0                                                           "/pause"                 23 minutes ago       Up 23 minutes                                       k8s_POD_kube-scheduler-172.17.4.101_kube-system_00f8fdc56c1d255064005c48f70be4ef_0

And here are the API server container logs in their entirety:

core@c1 ~ $ docker logs 33ee03df2ca8
[restful] 2017/04/11 00:28:07 log.go:30: [restful/swagger] listing is available at https://172.17.4.101:443/swaggerapi/
[restful] 2017/04/11 00:28:07 log.go:30: [restful/swagger] https://172.17.4.101:443/swaggerui/ is mapped to folder /swagger-ui/
E0411 00:28:07.959405       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
E0411 00:28:07.983872       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:443/api/v1/namespaces?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
E0411 00:28:07.988497       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:443/api/v1/limitranges?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
E0411 00:28:07.988710       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:443/api/v1/serviceaccounts?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
E0411 00:28:07.989015       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
E0411 00:28:07.989238       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:443/api/v1/resourcequotas?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused
I0411 00:28:08.075165       1 serve.go:79] Serving securely on 0.0.0.0:443
I0411 00:28:08.075310       1 serve.go:94] Serving insecurely on 127.0.0.1:8080
E0411 00:28:08.190502       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:08.235467       1 client_ca_hook.go:58] rpc error: code = 13 desc = transport is closing
E0411 00:28:13.032638       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:14.564430       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
I0411 00:28:18.414582       1 trace.go:61] Trace "Create /api/v1/namespaces/kube-system/pods" (started 2017-04-11 00:28:08.403696382 +0000 UTC):
[24.604µs] [24.604µs] About to convert to expected version
[94.481µs] [69.877µs] Conversion done
"Create /api/v1/namespaces/kube-system/pods" [10.010853094s] [10.010758613s] END
E0411 00:28:20.106746       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:21.792029       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:27.231371       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:28.868291       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:34.492583       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
E0411 00:28:36.038974       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing
I0411 00:28:36.615233       1 trace.go:61] Trace "Create /api/v1/namespaces/kube-system/pods" (started 2017-04-11 00:28:26.573277232 +0000 UTC):
[30.753µs] [30.753µs] About to convert to expected version
[80.065µs] [49.312µs] Conversion done
"Create /api/v1/namespaces/kube-system/pods" [10.041929421s] [10.041849356s] END
