Backup content
- Resource configuration
- ETCD Cluster
- Persistent Volumes
ETCD will be the focus of this post.
Backup
Configure data_dir in the etcd.service file to set up the data directory that etcd writes to. Then use etcd's built-in snapshot feature to back it up.
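As a sketch of where this setting lives, the data directory is typically passed to etcd via the --data-dir flag in its systemd unit (the binary path and unit location here are illustrative, not taken from this post):

```ini
# /etc/systemd/system/etcd.service (illustrative excerpt)
[Service]
ExecStart=/usr/local/bin/etcd \
  --name master \
  --data-dir /var/lib/etcd
```

After editing a unit file, systemd must reload its configuration (systemctl daemon-reload) before the change takes effect.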
etcdctl is a command line client for etcd.
master $ export ETCDCTL_API=3
master $ etcdctl version   # check version
master $ etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save /opt/snapshot-pre-boot.db
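To confirm the snapshot was written correctly, etcdctl can inspect the saved file. Since this reads a local file rather than contacting the cluster, no endpoints or certs are needed (the file path matches the save command above):

```shell
# print the snapshot's hash, revision, total keys, and size
ETCDCTL_API=3 etcdctl snapshot status /opt/snapshot-pre-boot.db --write-out=table
```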
If the new etcd service runs on the same server as the previous one, we don't need to provide the certs and endpoints.
Restore
If etcd runs as a service, we need to stop kube-apiserver before restoring. If etcd runs as a static pod, this step isn't necessary.
# optional
service kube-apiserver stop
Restore the snapshot.
master $ etcdctl snapshot restore /opt/snapshot-pre-boot.db \
    --data-dir=/var/lib/etcd-from-backup \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    --name=master \
    --initial-cluster=master=https://127.0.0.1:2380 \
    --initial-cluster-token=etcd-cluster-1 \
    --initial-advertise-peer-urls=https://127.0.0.1:2380
The above shows all the options for the etcd restore command. If we are restoring the snapshot to a different directory on the same server where we took the backup, the only required option is --data-dir.
Edit the /etc/kubernetes/manifests/etcd.yaml file to configure etcd.
Change the etcd-data volume's host path from /var/lib/etcd to /var/lib/etcd-from-backup.
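A sketch of the relevant part of etcd.yaml after the change, assuming a standard kubeadm-generated manifest (only the hostPath of the etcd-data volume is edited; the rest of the file is omitted):

```yaml
volumes:
  - hostPath:
      path: /var/lib/etcd-from-backup
      type: DirectoryOrCreate
    name: etcd-data
```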
When this file is updated, the etcd pod is automatically re-created, as it is a static pod placed under the /etc/kubernetes/manifests directory.
Restart etcd, and start kube-apiserver if necessary.
# optional
systemctl daemon-reload
systemctl restart etcd
systemctl start kube-apiserver
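As a final sanity check (not part of the original steps above), we can confirm that the services are running and that the restored state is visible through the API:

```shell
# confirm the services came back up
systemctl status etcd kube-apiserver

# confirm the restored objects are visible via the API server
kubectl get pods --all-namespaces
```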