I have a question. When joining a master node, the following messages appear: error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [ERROR Port-10250]: Port 10250 is in use. What state should these nodes be in before they are added?
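Both preflight errors usually mean the node still has state left over from a previous `kubeadm init`/`join` attempt. A hedged sketch of the common cleanup, assuming you are fine discarding this node's old state before re-joining:

```shell
# Sketch, not a definitive procedure: wipe leftover kubeadm state
# so the preflight checks pass on the next join.
sudo kubeadm reset -f            # tears down the previous kubelet/control-plane state
sudo ss -tlnp | grep 10250       # confirm nothing is still holding the kubelet port
ls /etc/kubernetes/kubelet.conf  # should now report "No such file or directory"
# then re-run your kubeadm join command
```

`kubeadm reset` removes `/etc/kubernetes` content and stops the kubelet's static pods; if port 10250 is still in use afterwards, restarting the kubelet service (or the node) typically releases it.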
Thanks for this video. I got an error when I access the API server: `curl: (7) Failed to connect to xx.xx.xx.xx port 6443: Connection refused`. My firewall is off. On the other hand, the kubelet status on master1 shows "master1 not found". Any suggestions on this?
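"Connection refused" on 6443 generally means the kube-apiserver never came up on that master, rather than a network problem. A hedged troubleshooting sketch, assuming a systemd host with the kubeadm defaults:

```shell
# Sketch: check whether anything is actually listening on the API port
sudo ss -tlnp | grep 6443

# Check the kubelet, which is responsible for starting the static
# control-plane pods (kube-apiserver, etcd, ...)
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -50
```

A "node not found" message in the kubelet logs often points to the hostname not resolving; verifying that `master1` has an entry in `/etc/hosts` (or DNS) on every node is worth a try.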
Excellent video!! I have one question though. I bootstrapped my cluster a couple of months ago with a single control plane and some workers, with no load balancer in front. Now I want to make it HA by adding 2 more masters, and I have already created an LB (haproxy). What should I change on my master01? (This master was not initialized with an LB IP as you did in your video.) Many thanks in advance!
If we don't pass the --experimental-control-plane and --certificate-key flags, will the node join the cluster as a worker (application node) rather than a master? I got great information about the HA setup using kubeadm. I am thinking similar steps apply for joining worker nodes (not masters) through an nginx node as the LB.
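For the nginx-as-LB idea, the API server is plain TCP on 6443, so nginx's `stream` module can pass it through without terminating TLS. A minimal sketch with hypothetical master IPs (not from the video):

```nginx
# Sketch of an nginx.conf fragment: TCP pass-through load balancing
# of the Kubernetes API. IPs below are placeholders.
stream {
    upstream kube_apiserver {
        server 192.168.1.11:6443;   # master1
        server 192.168.1.12:6443;   # master2
        server 192.168.1.13:6443;   # master3
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
```

Workers then join using the LB address as the endpoint in `kubeadm join`, the same as masters but without the control-plane flags.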
Thank you very much for this tutorial. Will this setup make an HTTP/HTTPS endpoint highly available? I mean, will it load-balance HTTP/HTTPS traffic among the masters? Thank you.
When I try to execute `kubeadm init --config=/etc/kubernetes/kubeadm/kubeadm-config.yaml --experimental-upload-certs`, I get an "unknown flag: --experimental-upload-certs" error. Next I tried `kubeadm init --config /etc/kubernetes/kubeadm/kubeadm-config.yaml --upload-certs`, but the initialization got stuck with "error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster". Can anybody help, please?
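The flag rename is expected: `--experimental-upload-certs` became `--upload-certs` in kubeadm 1.15, so the second form is correct. The `wait-control-plane` failure is often caused by the `controlPlaneEndpoint` not being reachable while init waits for the API server. A hedged sketch of the relevant config (the LB address below is a placeholder, not from the original post):

```yaml
# Sketch of /etc/kubernetes/kubeadm/kubeadm-config.yaml; the
# endpoint must resolve and forward TCP 6443 before you run init.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "lb.example.local:6443"
```

If the load balancer is not yet forwarding 6443 to this first master, kubeadm cannot reach its own API server through the endpoint and hangs at `wait-control-plane`; the kubelet logs (`journalctl -u kubelet`) usually show the underlying error.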
I am facing the same issue while following the steps in this video. The NGINX load balancer shows the following error: failed (113: No route to host) while connecting to upstream, client: 192.168.184.136, server: 0.0.0.0:6443, upstream: "192.168.184.137:6443"
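"No route to host" from an upstream that is clearly on the same subnet usually points at a host firewall on the master rejecting the connection rather than a real routing problem. A hedged sketch, assuming a firewalld-based distribution (e.g. CentOS):

```shell
# Sketch: open the API server port on the master (192.168.184.137)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload

# Then verify reachability from the LB host (TLS errors are fine here;
# we only care that the TCP connection is accepted)
curl -k https://192.168.184.137:6443/
```

On Ubuntu-style hosts the equivalent would be `ufw allow 6443/tcp`, or disabling the firewall temporarily to confirm the diagnosis.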
Really nice. Can I use this in a production environment? If any master node has a problem, we can use the same method of creating certs and rejoin it, right?
Yes, but I have done a demo setup (2 masters and 1 worker) and followed the same instructions as you mentioned in the blog. I tried some testing by bringing the masters down, but after bringing down one master, the other one could not connect to the API server. I was unable to get node details.
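With a stacked-etcd kubeadm cluster, this outcome is expected: etcd needs a majority of members, and one out of two is not a majority, so losing either master takes the whole API down. Three masters is the practical minimum for HA. A hedged sketch for confirming this from the surviving master, assuming `etcdctl` is available and the kubeadm default certificate paths:

```shell
# Sketch: query etcd health on the surviving master. With 1 of 2
# members alive, this should report a quorum/health failure.
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```

Repeating the test with three masters (so 2 of 3 remain after one failure) should leave `kubectl` working.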
Hi sir, I want to ask why I could not use kubectl to see the pods after I shut down one of the master nodes. For HA I have three master nodes; I shut one of them down, so I still have two masters. Shouldn't the remaining masters move the pods to available nodes? Many thanks.