We are trying to build a k8s cluster environment that spans 2 data centers, and we have only 7 bare-metal (BM) servers. We chose to place 2 master and 2 worker nodes in data center 1, leaving 1 master and 2 worker nodes for data center 2. In a sunny-day scenario this works.
But in case of a disaster at data center 1, the single master in data center 2 becomes unresponsive and does not start any pods on the data center 2 workers, so our application service becomes unavailable. Is there a solution to bring this master node back into service other than bringing the other two masters back online?
We set up the cluster using the kubeadm stacked control plane approach (etcd co-located with the control plane on each master):
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
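Note that --upload-certs prints a certificate key which the control-plane joins below consume, and kubeadm expires the uploaded certs after about two hours; if the key has lapsed, it can be re-issued with:

sudo kubeadm init phase upload-certs --upload-certs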
We then added two more masters to the cluster using the command below (hash and certificate key elided):
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>
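After both joins, membership can be sanity-checked from any master (plain kubectl; kubeadm defaults assumed):

kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide | grep etcd    # expect one etcd pod per master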
When the 2 masters in data center 1 are down, the API server on the third master stops listening and kubectl stops working as well. We believe this is because the stacked etcd loses quorum: with only 1 of 3 members left, etcd cannot form the majority of 2 it needs, so the API server can no longer read or write cluster state.
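To confirm the quorum-loss theory, the surviving etcd member can be queried directly. A sketch, assuming etcdctl is available on the host and using the kubeadm default certificate paths and local endpoint:

sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --write-out=table
# errors or a missing leader in the output indicate lost quorum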
We tried restarting the surviving master, but its API server did not start listening again and we were unable to perform any actions on it. Is there any way to bring this master back into service?
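Would forcing the surviving etcd member to start as a new single-member cluster be a safe recovery? A rough sketch (untested; --force-new-cluster is etcd's documented disaster-recovery flag, and the paths are kubeadm defaults):

# on the surviving master: stop the etcd static pod by moving its manifest away
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml
# edit /tmp/etcd.yaml and add --force-new-cluster to the etcd arguments,
# so etcd drops the dead peers and restarts as a one-member cluster
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
# once the API server answers again, remove --force-new-cluster and later
# kubeadm join replacement masters to get back to 3 etcd members

Or is there a safer procedure?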