"The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?" #11
-
Hello! I'm trying to get it up and running on Raspberry Pis: 3 masters and 4 workers. Error message after launching the playbook:
What can I do? Where do I have to search? What could the error be? Thanks for any help you can give me.
Replies: 21 comments 19 replies
-
Did you get the latest? Also, are you sure you are using the right interface name in your variables? Can you show them here?
-
THX for your tutorials and help. Yes, I got the latest, and the interface is eth0:

```yaml
k3s_version: v1.23.4+k3s1

# interface which will be used for flannel
flannel_iface: "eth0"

# apiserver_endpoint is virtual ip-address which will be configured on each master
apiserver_endpoint: "192.168.0.190"

# k3s_token is required so masters can talk together securely
# this token should be alpha numeric only
k3s_token: "++++++++++"

extra_server_args: "--no-deploy servicelb --no-deploy traefik --write-kubeconfig-mode 644 --kube-apiserver-arg default-not-ready-toleration-seconds=30 --kube-apiserver-arg default-unreachable-toleration-seconds=30 --kube-controller-arg node-monitor-period=20s --kube-controller-arg node-monitor-grace-period=20s --kubelet-arg node-status-update-frequency=5s"
# change these to your liking, the only required one is --no-deploy servicelb
#extra_server_args: "--no-deploy servicelb --no-deploy traefik"

# image tag for kube-vip
kube_vip_tag_version: "v0.4.2"

# image tag for metal lb
metal_lb_speaker_tag_version: "v0.12.1"

# metallb ip range for load balancer
metal_lb_ip_range: "192.168.0.180-192.168.0.189"
```
-
I don't see anything odd. I would try removing all server args except the required one, resetting, and trying again.
-
Expand your hard disks... on all nodes... Probably should make a note, Tim. You do say this in the video!
-
I too ran into this problem. I have double-checked that the
-
Before doing this I actually switched my Ubuntu template to the prior video that Tim did and made sure that both the username/password and SSH keys are consistent across all VMs.
-
I am encountering the same issue when I try to run this on Raspberry Pis.
-
Do all machines have the same time zones, same SSH keys, and are able to communicate with each other? Are you using passwordless sudo? If not, you might have to pass in additional flags like
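For reference, a sketch of what that invocation can look like when sudo on the nodes needs a password. The playbook and inventory paths here are assumptions based on this repo's usual layout; `-K` is the standard `ansible-playbook` shorthand for `--ask-become-pass`:

```shell
# Prompt for the sudo (become) password instead of relying on passwordless sudo.
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini -K
```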
-
I am encountering the same issue on a setup provisioned with Vagrant. Here is the stack trace in verbose mode:
-
How big should the hard disk be? ATM it is 86% free.
Yes, yes, yes, yes. I have tried it with only the necessary args and ran into the same issue again.
-
Can you please paste your
-
Ok, so I was able to solve my issue. I have also done a reset and verified that the old token was causing the issue. After some digging in the logs I found this error line, so I was a bit unlucky with the token I had set:
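Since the vars file says the token should be alphanumeric only, a token made of `+` characters can break the cluster join. One way to generate a safe token (a sketch; the 32-character length is an arbitrary choice, not a k3s requirement):

```shell
# Build a token from alphanumeric characters only, so it is safe to use
# as k3s_token in the vars file.
k3s_token="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)"
echo "$k3s_token"
```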
-
Oh, I also traversed all nodes -- master and worker -- and ran
-
Not surprisingly, trying to check on the nodes in the cluster failed, as the service is not running:
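For anyone chasing the same symptom, checks like these show whether the service is actually up on a node. The unit name is an assumption here -- `k3s` on server/master nodes; the agent unit may be named differently depending on the install:

```shell
# On a server/master node, check the service state:
sudo systemctl status k3s --no-pager
# And look through recent service logs for token or join errors:
sudo journalctl -u k3s --no-pager -n 50
```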
-
Finally, note that in this current state, I can reach the active node directly at
-
Now I find this very odd: even though I set the configuration as you did in your video, I am not able to run kubectl commands without sudo. Even more concerning, each of the masters is aware of only itself rather than the cluster at large.
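On the sudo point, a common cause is that the admin kubeconfig k3s writes at `/etc/rancher/k3s/k3s.yaml` is only readable by root (the `--write-kubeconfig-mode 644` server arg is meant to relax that). A hedged sketch of the usual workaround, assuming the default k3s paths:

```shell
# Copy the admin kubeconfig somewhere your own user can read it.
mkdir -p ~/.kube
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
chmod 600 ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes
```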
-
Regarding my issue: running on Raspberry Pi OS Lite 64-bit.
-
This was fixed with a new reset task that cleans up the VIP.