Life is too bug

Getting Started with Etcd

2019.06.15

etcd is a distributed key-value store written in Go by CoreOS on top of the Raft consensus algorithm. As the sole data store for Kubernetes, etcd offers high-performance reads, writes, and watches, and it also works well for service registration, configuration centers, and similar use cases. Official docs: https://github.com/etcd-io/etcd/blob/master/Documentation

Getting Started

First Look at a Single Node

etcd listens on two ports: 2379 for client requests (gRPC) and 2380 for peer communication (HTTP). The latest API version is v3; the older v2 API has known performance issues, so go straight to the v3 API. etcd can be configured either through environment variables or through command-line flags. It ships with reasonable defaults, but the bind addresses all default to 127.0.0.1.
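As a quick sketch of the two configuration styles (10.0.0.1 is a placeholder address, not from the setup below): every flag maps to an ETCD_-prefixed environment variable, upper-cased with dashes replaced by underscores.

# Flag form
etcd --listen-client-urls=http://0.0.0.0:2379 --advertise-client-urls=http://10.0.0.1:2379
# Equivalent environment-variable form
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379 ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:2379 etcd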

docker run -d --name etcd k8s.gcr.io/etcd:3.3.10 etcd
# etcdmain
2019-06-25 06:26:30.323960 I | etcdmain: etcd Version: 3.3.10
2019-06-25 06:26:30.324127 I | etcdmain: Git SHA: 27fc7e2
2019-06-25 06:26:30.324138 I | etcdmain: Go Version: go1.10.4
2019-06-25 06:26:30.324153 I | etcdmain: Go OS/Arch: linux/amd64
2019-06-25 06:26:30.324162 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-06-25 06:26:30.324177 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd

# embed
2019-06-25 06:26:40.336726 I | embed: listening for peers on http://localhost:2380
2019-06-25 06:26:50.345223 I | embed: listening for client requests on localhost:2379
2019-06-25 06:26:50.479247 I | embed: ready to serve client requests
2019-06-25 06:26:50.491112 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

# etcdserver
2019-06-25 06:26:50.358147 I | etcdserver: name = default
2019-06-25 06:26:50.358327 I | etcdserver: data dir = default.etcd
2019-06-25 06:26:50.358749 I | etcdserver: member dir = default.etcd/member
2019-06-25 06:26:50.358781 I | etcdserver: heartbeat = 100ms
2019-06-25 06:26:50.358803 I | etcdserver: election = 1000ms
2019-06-25 06:26:50.358824 I | etcdserver: snapshot count = 100000
2019-06-25 06:26:50.359052 I | etcdserver: advertise client URLs = http://localhost:2379
2019-06-25 06:26:50.359121 I | etcdserver: initial advertise peer URLs = http://localhost:2380
2019-06-25 06:26:50.359170 I | etcdserver: initial cluster = default=http://localhost:2380
2019-06-25 06:26:50.370417 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2019-06-25 06:26:50.408222 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2019-06-25 06:26:50.412210 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2019-06-25 06:26:50.401063 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2019-06-25 06:26:50.478249 I | etcdserver: setting up the initial cluster version to 3.3
2019-06-25 06:26:50.478939 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2019-06-25 06:26:50.484943 N | etcdserver/membership: set the initial cluster version to 3.3
2019-06-25 06:26:50.490960 I | etcdserver/api: enabled capabilities for version 3.3

# raft
2019-06-25 06:26:50.370872 I | raft: 8e9e05c52164694d became follower at term 0
2019-06-25 06:26:50.370946 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-06-25 06:26:50.370969 I | raft: 8e9e05c52164694d became follower at term 1

2019-06-25 06:26:50.396502 W | auth: simple token is not cryptographically signed

# raft
2019-06-25 06:26:50.473495 I | raft: 8e9e05c52164694d is starting a new election at term 1
2019-06-25 06:26:50.473735 I | raft: 8e9e05c52164694d became candidate at term 2
2019-06-25 06:26:50.474179 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
2019-06-25 06:26:50.475485 I | raft: 8e9e05c52164694d became leader at term 2
2019-06-25 06:26:50.475667 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2

Common Operations

# List members
etcdctl member list -w table
+------------------+---------+---------+-----------------------+-----------------------+
|        ID        | STATUS  |  NAME   |      PEER ADDRS       |     CLIENT ADDRS      |
+------------------+---------+---------+-----------------------+-----------------------+
| 8e9e05c52164694d | started | default | http://localhost:2380 | http://localhost:2379 |
+------------------+---------+---------+-----------------------+-----------------------+


# Check the status of a single endpoint (the table below is endpoint status output)
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key --cert=/etc/kubernetes/pki/etcd/peer.crt \
  endpoint status -w table
+----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+---------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | 8e9e05c52164694d |  3.3.10 |   20 kB |      true |         2 |          4 |
+----------------+------------------+---------+---------+-----------+-----------+------------+  
  
curl -k --cacert /etc/kubernetes/pki/etcd/server.crt  \
  --key /etc/kubernetes/pki/etcd/peer.key --cert /etc/kubernetes/pki/etcd/peer.crt \
  -L https://127.0.0.1:2379/health && echo 
  
# Delete a resource
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key --cert=/etc/kubernetes/pki/etcd/peer.crt \
  del /registry/namespaces/cattle-system
# --prefix lists everything under a key prefix, e.g. the namespaces in the cluster:
ETCDCTL_API=3 etcdctl get /registry/namespaces --prefix -w=json|python -m json.tool
# Inspect a value; Kubernetes serializes objects with protobuf, hence the hexdump
etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  get --prefix /registry/pods/kube-system/kube-proxy-kz4x8 | hexdump -C


# In a new terminal, you can watch keys for changes
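# For example (the prefix here is illustrative; any key works):
ETCDCTL_API=3 etcdctl watch /registry/namespaces --prefix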

Cluster

There are essentially two ways to bootstrap a cluster, distinguished by the initial-cluster-state flag. With new, all nodes must be started at the same time; existing has no such requirement: each node started later is configured with the information of the members already running. The same mechanism is also how you scale a cluster out.
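For contrast with the member-by-member walkthrough below, a static bootstrap with state new hands every node the complete member list up front; a minimal sketch reusing the $IP variables defined just below:

# Static bootstrap: all nodes start together, each with the full --initial-cluster
etcd --name infra1 \
 --initial-advertise-peer-urls=http://$IP1:2380 \
 --listen-peer-urls=http://$IP1:2380 \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP1:2379 \
 --advertise-client-urls=http://$IP1:2379 \
 --initial-cluster=infra1=http://$IP1:2380,infra2=http://$IP2:2380,infra3=http://$IP3:2380 \
 --initial-cluster-state=new
# repeat on infra2/infra3 with their own --name and URLs, same --initial-cluster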

docker run -d --name etcd1  k8s.gcr.io/etcd:3.3.10 sleep 1d
docker run -d --name etcd2  k8s.gcr.io/etcd:3.3.10 sleep 1d
docker run -d --name etcd3  k8s.gcr.io/etcd:3.3.10 sleep 1d
docker run -d --name etcd4  k8s.gcr.io/etcd:3.3.10 sleep 1d
docker run -d --name etcd5  k8s.gcr.io/etcd:3.3.10 sleep 1d

IP1=$(docker inspect etcd1 --format '{{ .NetworkSettings.IPAddress }}')
IP2=$(docker inspect etcd2 --format '{{ .NetworkSettings.IPAddress }}')
IP3=$(docker inspect etcd3 --format '{{ .NetworkSettings.IPAddress }}')
IP4=$(docker inspect etcd4 --format '{{ .NetworkSettings.IPAddress }}')
IP5=$(docker inspect etcd5 --format '{{ .NetworkSettings.IPAddress }}')
echo $IP1 $IP2 $IP3 $IP4 $IP5

########################################################################
# Start the first node
docker exec etcd1 etcd --name infra1 --advertise-client-urls=http://$IP1:2379  \
 --initial-advertise-peer-urls=http://$IP1:2380   \
 --initial-cluster=infra1=http://$IP1:2380  \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP1:2379  \
 --listen-peer-urls=http://$IP1:2380 


docker exec -e ETCDCTL_API=3 etcd1 etcdctl member add infra2 --peer-urls=http://$IP2:2380
# This prints the startup parameters for the second node
Member e6ccd69690165566 added to cluster b55d10073feab447
ETCD_NAME="infra2"
ETCD_INITIAL_CLUSTER="infra1=http://172.17.0.3:2380,infra2=http://172.17.0.4:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.17.0.4:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
########################################################################
# Start the second node
docker exec etcd2 etcd --name infra2  --advertise-client-urls=http://$IP2:2379  \
 --initial-advertise-peer-urls=http://$IP2:2380  \
 --initial-cluster=infra1=http://$IP1:2380,infra2=http://$IP2:2380  \
 --initial-cluster-state=existing  \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP2:2379  \
 --listen-peer-urls=http://$IP2:2380 


docker exec -e ETCDCTL_API=3 etcd1 etcdctl member add infra3 --peer-urls=http://$IP3:2380
ETCD_NAME="infra3"
ETCD_INITIAL_CLUSTER="infra1=http://172.17.0.3:2380,infra3=http://172.17.0.5:2380,infra2=http://172.17.0.4:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.17.0.5:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

docker exec etcd3 etcd --name infra3 --advertise-client-urls=http://$IP3:2379  \
 --initial-advertise-peer-urls=http://$IP3:2380  \
 --initial-cluster=infra1=http://$IP1:2380,infra2=http://$IP2:2380,infra3=http://$IP3:2380  \
 --initial-cluster-state=existing  \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP3:2379  \
 --listen-peer-urls=http://$IP3:2380 

docker exec -e ETCDCTL_API=3 etcd1 etcdctl member add infra4 --peer-urls=http://$IP4:2380

docker exec etcd4 etcd --name infra4 --advertise-client-urls=http://$IP4:2379  \
 --initial-advertise-peer-urls=http://$IP4:2380  \
 --initial-cluster=infra1=http://$IP1:2380,infra2=http://$IP2:2380,infra3=http://$IP3:2380,infra4=http://$IP4:2380  \
 --initial-cluster-state=existing  \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP4:2379  \
 --listen-peer-urls=http://$IP4:2380 

docker exec -e ETCDCTL_API=3 etcd1 etcdctl member add infra5 --peer-urls=http://$IP5:2380

docker exec etcd5 etcd --name infra5 --advertise-client-urls=http://$IP5:2379  \
 --initial-advertise-peer-urls=http://$IP5:2380  \
 --initial-cluster=infra1=http://$IP1:2380,infra2=http://$IP2:2380,infra3=http://$IP3:2380,infra4=http://$IP4:2380,infra5=http://$IP5:2380  \
 --initial-cluster-state=existing  \
 --listen-client-urls=http://127.0.0.1:2379,http://$IP5:2379  \
 --listen-peer-urls=http://$IP5:2380 

etcdctl member list -w table
+------------------+---------+--------+------------------------+------------------------+
|        ID        | STATUS  |  NAME  |       PEER ADDRS       |      CLIENT ADDRS      |
+------------------+---------+--------+------------------------+------------------------+
| 660aa483274d103a | started | infra1 | http://172.17.0.3:2380 | http://172.17.0.3:2379 |
| 77da1d05697d1725 | started | infra3 | http://172.17.0.5:2380 | http://172.17.0.5:2379 |
| e6ccd69690165566 | started | infra2 | http://172.17.0.4:2380 | http://172.17.0.4:2379 |
+------------------+---------+--------+------------------------+------------------------+

for ep in $(etcdctl --endpoints=http://$IP1:2379 member list |awk '{print $5}' );do etcdctl --endpoints=$ep -w table endpoint status;done
Failed to get the status of endpoint  (context deadline exceeded)
+------------------------+------------------+---------+---------+-----------+-----------+------------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------+------------------+---------+---------+-----------+-----------+------------+
| http://172.17.0.3:2380 | 1d0417edf0a96f72 |  3.3.10 |   20 kB |     false |        15 |          9 |
+------------------------+------------------+---------+---------+-----------+-----------+------------+
Failed to get the status of endpoint  (context deadline exceeded)
+------------------------+------------------+---------+---------+-----------+-----------+------------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------+------------------+---------+---------+-----------+-----------+------------+
| http://172.17.0.2:2380 | 69015be41c714f32 |  3.3.10 |   20 kB |      true |        15 |          9 |
+------------------------+------------------+---------+---------+-----------+-----------+------------+
Failed to get the status of endpoint  (context deadline exceeded)
+------------------------+------------------+---------+---------+-----------+-----------+------------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------+------------------+---------+---------+-----------+-----------+------------+
| http://172.17.0.4:2380 | f1134c30b97be195 |  3.3.10 |   20 kB |     false |        15 |          9 |
+------------------------+------------------+---------+---------+-----------+-----------+------------+

Backup and Restore

Both are done with etcdctl subcommands. Note that before restoring you must stop the etcd service and clear the corresponding data directory.

# Back up the data
ETCDCTL_API=3 etcdctl snapshot save etcd.db
# Snapshot status
etcdctl snapshot status etcd.db  -w table
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 77e8a851 |        2 |          5 |      20 kB |
+----------+----------+------------+------------+

etcdctl --endpoints $ENDPOINT snapshot restore snapshot.db --data-dir /var/lib/etcd
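Restoring rewrites the member's data directory from the snapshot. A sketch of the full cycle for the single-node case (the paths and service name are assumptions; adjust to your deployment):

# Stop etcd and clear the old data directory first
systemctl stop etcd
rm -rf /var/lib/etcd
ETCDCTL_API=3 etcdctl snapshot restore etcd.db \
  --name default \
  --initial-cluster default=http://localhost:2380 \
  --initial-advertise-peer-urls http://localhost:2380 \
  --data-dir /var/lib/etcd
# then start etcd again with the same --data-dir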

Going Further

Cluster Leader Failure

During an election, the etcd cluster cannot process any writes; client write requests are queued until a new leader emerges. Write requests sent to the old leader during the election may be lost if they contain uncommitted data, because the new leader is entitled to overwrite all of the old leader's uncommitted entries. From the user's perspective, some write requests simply time out while the election is in progress. No committed data is ever lost.
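This is easy to observe in the Docker cluster above (a rough demo; it assumes etcd2 currently holds leadership, so check the IS LEADER column first):

docker stop etcd2
# From a surviving member: a new leader appears and RAFT TERM has increased
docker exec -e ETCDCTL_API=3 etcd1 etcdctl -w table endpoint status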

Data Persistence

First, inspect the data with the CLI:

etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  get --prefix /registry/pods/kube-system/kube-proxy-kz4x8 | hexdump -C

etcd creates two subdirectories under its default working directory: snap and wal. Their roles are as follows.

  • snap: stores snapshot data. etcd creates snapshots to keep WAL files from piling up, and snap holds the snapshotted state.

  • wal: stores the write-ahead log, whose main job is to record the complete history of data changes. In etcd, every modification must be written to the WAL before it is committed. Storing data this way gives etcd two important capabilities: fast crash recovery and data rollback/redo.

Fast crash recovery: if your data is corrupted, replaying the modifications recorded in the WAL quickly restores the state from the original data to the point just before the corruption.

Rollback (undo) / redo: since every modification is recorded in the WAL, rolling back or redoing simply means replaying the logged operations backwards or forwards.
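Both directories can be inspected on the single node started earlier; default.etcd is the default data dir from the startup log, and in v3 the snap directory also holds the boltdb backend file (snap/db):

docker exec etcd ls -R default.etcd/member
# member/snap -> snapshot files and the db backend
# member/wal  -> write-ahead log segments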

Given that the WAL already records every change in real time, why are snapshots needed at all? Because as usage grows, the data kept in the WAL balloons. To keep the disk from filling up, etcd takes a snapshot every 100,000 entries by default (see snapshot count = 100000 in the startup log above), after which the older WAL files can be deleted. The number of historical operations queryable through the API defaults to 1,000.
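The threshold is tunable via the --snapshot-count flag if the default does not fit your write volume:

etcd --snapshot-count=10000   # snapshot every 10,000 commits instead of 100,000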

On first startup, etcd stores its startup configuration in the data directory given by the data-dir flag. This configuration includes the local member ID, the cluster ID, and the initial cluster membership. Avoid restarting etcd from a stale data directory: a member started with out-of-date data diverges from the rest of the cluster (for example, re-requesting this information from the leader after a restart). So, to maximize cluster safety, whenever there is any chance that data has been corrupted or lost, remove that member from the cluster and then add a new member with an empty data directory.
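With etcdctl the remove-then-re-add cycle looks as follows (a sketch; the member ID is taken from the member list table above and stands in for whichever member went bad):

ETCDCTL_API=3 etcdctl member remove 77da1d05697d1725
ETCDCTL_API=3 etcdctl member add infra3 --peer-urls=http://$IP3:2380
# wipe that node's data directory before restarting etcd on it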

Compacting Historical Revisions

etcd keeps a history of revisions for every key, so this history needs periodic compaction to avoid performance degradation and, eventually, storage exhaustion. Compaction discards all information about a key prior to a given revision; the reclaimed space becomes available for subsequent writes.

Key history can be compacted automatically by etcd's time-windowed retention policy, or manually via etcdctl. The startup flag --auto-compaction-retention enables automatic compaction of key history, specified in hours. Examples follow.

Keep one hour of history:

etcd --auto-compaction-retention=1 

Compacting manually with etcdctl:

etcdctl compact 3 
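The revision argument is normally the cluster's current revision, which endpoint status reports; a sketch of compacting everything older than now:

# Grab the current revision, then compact up to it
rev=$(ETCDCTL_API=3 etcdctl endpoint status -w json | egrep -o '"revision":[0-9]+' | cut -d: -f2)
ETCDCTL_API=3 etcdctl compact $rev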

Defragmentation

After compacting history, the backend database is left with internal fragmentation. Compacting old revisions punches "holes" in the backend database, and the fragmented space this leaves behind is available for etcd to reuse but is not released to the host filesystem, so it still occupies the node's storage. Defragmentation is the process of handing that space back.

The defrag subcommand of etcdctl cleans up an etcd member's storage fragmentation:

etcdctl defrag 
Finished defragmenting etcd member[127.0.0.1:2379] 
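Note that defrag works member by member; run it against every endpoint in the cluster, not just the default local one (the addresses reuse the example cluster's variables):

etcdctl defrag --endpoints=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379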

Incremental Sync / Off-site Backup

Snapshots only give you full backups. What about incremental ones? After some digging I found this command, which copies values point-to-point and also supports an initial full copy.

etcdctl make-mirror http://IP:2379 
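make-mirror keeps running: it first copies the existing keys and then streams subsequent updates via watch. A sketch mirroring a single prefix to a destination cluster (both addresses are placeholders):

ETCDCTL_API=3 etcdctl --endpoints=http://SRC_IP:2379 \
  make-mirror --prefix=/registry http://DEST_IP:2379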

Ref

A great resource:

https://wiki.shileizcc.com/confluence/display/etcd/Etcd
