This page explains two different approaches to setting up a highly available Kubernetes cluster using kubeadm:
- With stacked control plane nodes. This approach requires less infrastructure. The etcd members and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications and environment. Options for Highly Available topology outlines the advantages and disadvantages of each.
If you encounter issues with setting up the HA cluster, please report these in the kubeadm issue tracker.
See also the upgrade documentation.
The prerequisites depend on which topology you have selected for your cluster's control plane:
For the stacked etcd topology you need:
- Three or more machines that meet kubeadm's minimum requirements for the control plane nodes. Having an odd number of control plane nodes can help with leader selection in the case of machine or zone failure.
- Three or more machines that meet kubeadm's minimum requirements for the workers.
- Full network connectivity between all machines in the cluster (public or private network).
- Superuser privileges on all machines. You can use a different tool; this guide uses sudo in the examples.
- SSH access from one device to all nodes in the system.
- kubeadm and kubelet already installed on all machines.

See Stacked etcd topology for context.
For the external etcd topology you need everything listed above, and you also need:
- Three additional machines that will become etcd cluster members. Having an odd number of members in the etcd cluster is a requirement for achieving optimal etcd cluster health. These machines also need kubeadm and kubelet installed.

See External etcd topology for context.
Each host should have access to read and fetch images from the Kubernetes container image registry, registry.k8s.io
. If you want to deploy a highly-available cluster where the hosts do not have access to pull images, this is possible. You must ensure by some other means that the correct container images are already available on the relevant hosts.
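If your hosts cannot pull from registry.k8s.io directly, a rough sketch of pre-staging the images is shown below; the exact import mechanism depends on your container runtime and registry tooling.

```bash
# List the images kubeadm needs for the release you plan to install.
kubeadm config images list

# On hosts that can reach registry.k8s.io, you can pre-pull the images.
sudo kubeadm config images pull

# On hosts without registry access, export the images elsewhere and import them
# with your container runtime's tooling, for example:
#   ctr -n k8s.io images import kube-images.tar
```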
To manage Kubernetes once your cluster is set up, you should install kubectl on your PC. It is also useful to install the kubectl
tool on each control plane node, as this can be helpful for troubleshooting.
Create a kube-apiserver load balancer with a name that resolves to DNS.
In a cloud environment you should place your control plane nodes behind a TCP forwarding load balancer. This load balancer distributes traffic to all healthy control plane nodes in its target list. The health check for an apiserver is a TCP check on the port the kube-apiserver listens on (default value :6443
).
It is not recommended to use an IP address directly in a cloud environment.
The load balancer must be able to communicate with all control plane nodes on the apiserver port. It must also allow incoming traffic on its listening port.
Make sure the address of the load balancer always matches the address of kubeadm's ControlPlaneEndpoint
.
Read the Options for Software Load Balancing guide for more details.
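As an illustration only, a minimal HAProxy configuration for such a TCP forwarding load balancer might look like the sketch below. It assumes HAProxy runs on a dedicated host, listens on port 6443, and forwards to three control plane nodes at placeholder addresses; it is not a production configuration.

```bash
# A minimal sketch, not a production configuration. Addresses are placeholders.
cat <<'EOF' | sudo tee /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 10s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:6443
    default_backend control-plane-nodes

backend control-plane-nodes
    balance roundrobin
    server cp1 10.0.0.4:6443 check
    server cp2 10.0.0.5:6443 check
    server cp3 10.0.0.6:6443 check
EOF

sudo systemctl restart haproxy
```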
Add the first control plane node to the load balancer, and test the connection:
nc -zv -w 2 <LOAD_BALANCER_IP> <PORT>
A connection refused error is expected because the API server is not yet running. A timeout, however, means the load balancer cannot communicate with the control plane node. If a timeout occurs, reconfigure the load balancer to communicate with the control plane node.
Add the remaining control plane nodes to the load balancer target group.
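Optionally, you can also test each control plane node directly, which helps distinguish a load balancer misconfiguration from a node-level firewall problem. The addresses below are placeholders.

```bash
# Placeholder addresses; substitute your control plane node IPs and apiserver port.
for host in 10.0.0.4 10.0.0.5 10.0.0.6; do
  nc -zv -w 2 "${host}" 6443
done
```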
Initialize the control plane:
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
You can use the --kubernetes-version
flag to set the Kubernetes version to use. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
The --control-plane-endpoint
flag should be set to the address or DNS and port of the load balancer.
The --upload-certs
flag is used to upload the certificates that should be shared across all the control-plane instances to the cluster. If, instead, you prefer to copy certificates across control plane nodes manually or using automation tools, remove this flag and refer to the Manual certificate distribution section below.
The kubeadm init flags --config and --certificate-key cannot be mixed, therefore if you want to use the kubeadm configuration you must add the certificateKey field in the appropriate config locations (under InitConfiguration and JoinConfiguration: controlPlane).

Some CNI network plugins require additional configuration, for example specifying the Pod IP CIDR, while others do not. See the CNI network documentation. To add a Pod CIDR, pass the flag --pod-network-cidr, or if you are using a kubeadm configuration file, set the podSubnet field under the networking object of ClusterConfiguration.

The output looks similar to:
...
You can now join any number of control-plane node by running the following command on each as a root:

  kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
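The init output also explains how to set up kubectl access for a regular user. With the default file layout, that amounts to roughly the following sketch, run on the first control plane node:

```bash
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```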
When --upload-certs
is used with kubeadm init
, the certificates of the primary control plane are encrypted and uploaded in the kubeadm-certs
Secret.
To re-upload the certificates and generate a new decryption key, use the following command on a control plane node that is already joined to the cluster:
sudo kubeadm init phase upload-certs --upload-certs
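The command prints the new certificate key. If the bootstrap token from the original output has also expired, you can combine the new key with a freshly generated join command; a sketch:

```bash
# Prints a fresh "kubeadm join ..." command with a new bootstrap token.
sudo kubeadm token create --print-join-command

# To join an additional control plane node, append to that command:
#   --control-plane --certificate-key <key printed by the upload-certs command above>
```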
You can also specify a custom --certificate-key
during init
that can later be used by join
. To generate such a key you can use the following command:
kubeadm certs certificate-key
The certificate key is a hex encoded string that is an AES key of size 32 bytes.
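For example, you could generate the key up front and pass the same value to init and, later, to join. This is a sketch; the control plane endpoint is a placeholder.

```bash
# Generate a 32-byte hex-encoded certificate key up front.
CERT_KEY="$(kubeadm certs certificate-key)"

# Use it during init so the uploaded certificates are encrypted with a key you control.
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
  --upload-certs --certificate-key "${CERT_KEY}"

# Later, pass the same key with --certificate-key when joining additional control plane nodes.
```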
The kubeadm-certs Secret and the decryption key expire after two hours.

Apply the CNI plugin of your choice: Follow these instructions to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm configuration file (if applicable).
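Many CNI providers are installed by applying a manifest with kubectl; as an illustration only (the URL below is a placeholder, not a real manifest):

```bash
# Placeholder URL; use the manifest from your chosen CNI provider's documentation,
# and make sure its Pod CIDR matches the one given to kubeadm.
kubectl apply -f https://example.com/your-cni-provider/manifest.yaml
```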
Type the following and watch the pods of the control plane components get started:
kubectl get pod -n kube-system -w
For each additional control plane node you should:
Execute the join command that was previously given to you by the kubeadm init
output on the first node. It should look something like this:
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
The --control-plane flag tells kubeadm join to create a new control plane.

The --certificate-key ... option causes the control plane certificates to be downloaded from the kubeadm-certs Secret in the cluster and decrypted using the given key.

You can join multiple control-plane nodes in parallel.
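After the remaining control plane nodes have joined, you can confirm that they are all present and Ready:

```bash
# Each control plane node should appear with the control-plane role.
kubectl get nodes -o wide
```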
Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd, with the exception that you should set up etcd first, and you should pass the etcd information in the kubeadm config file.
Follow these instructions to set up the etcd cluster.
Set up SSH as described here.
Copy the following files from any etcd node in the cluster to the first control plane node:
export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
Replace the value of CONTROL_PLANE with the user@host of the first control-plane node.

Create a file called kubeadm-config.yaml with the following contents:
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # change this (see below)
etcd:
  external:
    endpoints:
      - https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately
      - https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately
      - https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
The difference between stacked etcd and external etcd here is that the external etcd setup requires a configuration file with the etcd endpoints under the external object for etcd. In the case of the stacked etcd topology, this is managed automatically.

Replace the following variables in the config template with the appropriate values for your cluster:
LOAD_BALANCER_DNS
LOAD_BALANCER_PORT
ETCD_0_IP
ETCD_1_IP
ETCD_2_IP
The following steps are similar to the stacked etcd setup:
Run sudo kubeadm init --config kubeadm-config.yaml --upload-certs
on this node.
Write the output join commands that are returned to a text file for later use.
Apply the CNI plugin of your choice.
The steps are the same as for the stacked etcd setup:
- Make sure the first control plane node is fully initialized.
- Join each control plane node with the join command you saved to a text file earlier. It's recommended to join the control plane nodes one at a time.
- Don't forget that the decryption key from --certificate-key expires after two hours, by default.

Worker nodes can be joined to the cluster with the command you stored previously as the output from the kubeadm init command:
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
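If the bootstrap token from that output has expired (tokens are valid for 24 hours by default), you can generate a fresh worker join command on a control plane node:

```bash
# Prints a complete "kubeadm join ..." command with a new token and the discovery hash.
sudo kubeadm token create --print-join-command
```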
If you choose not to use kubeadm init with the --upload-certs flag, you will have to manually copy the certificates from the primary control plane node to the joining control plane nodes.
There are many ways to do this. The following example uses ssh
and scp
:
SSH is required if you want to control all nodes from a single machine.
Enable ssh-agent on your main device that has access to all other nodes in the system:
eval $(ssh-agent)
Add your SSH identity to the session:
ssh-add ~/.ssh/path_to_private_key
SSH between nodes to check that the connection is working correctly.
When you SSH to any node, add the -A
flag. This flag allows the node that you have logged into via SSH to access the SSH agent on your PC. Consider alternative methods if you do not fully trust the security of your user session on the node.
ssh -A 10.0.0.7
When using sudo on any node, make sure to preserve the environment so SSH forwarding works:
sudo -E -s
After configuring SSH on all the nodes you should run the following script on the first control plane node after running kubeadm init
. This script will copy the certificates from the first control plane node to the other control plane nodes:
In the following example, replace CONTROL_PLANE_IPS
with the IP addresses of the other control plane nodes.
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Skip the next line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
Then on each joining control plane node you have to run the following script before running kubeadm join
. This script will move the previously copied certificates from the home directory to /etc/kubernetes/pki
:
USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Skip the next line if you are using external etcd
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key