What is kubeadm? In short, it runs the Kubernetes components as Docker containers (everything except the kubelet).

Preparation

Install Docker

First, switch Docker's cgroup driver to systemd. Docker reads /etc/docker/daemon.json at startup and applies its contents to the daemon configuration.

cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
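After writing the file, restart Docker and confirm the driver took effect (a quick check; the grep pattern assumes the docker info output format shown further below):

systemctl restart docker
docker info 2>/dev/null | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd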

The discussion around cgroup drivers is here: https://github.com/coreos/bugs/issues/1435
To install a stable older version, use the following commands:

yum install -y docker
systemctl enable docker && systemctl start docker

To install a newer version, use the following commands:

sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
https://download.daocloud.io/docker/linux/centos/docker-ce.repo
sudo yum install -y -q --setopt=obsoletes=0 docker-ce-17.03.2.ce* docker-ce-selinux-17.03.2.ce*
systemctl enable docker && systemctl start docker

If nothing went wrong, running docker info should produce output like this.

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 36
Server Version: 17.03.2-ce
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: systemd    # the value we changed above
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.796 GiB
Name: kubeadm
ID: WGBW:RHKQ:BVNO:UNSU:N2DU:UATD:HEAV:3SCP:PHNM:3CFU:TQF5:SYVA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false

SELinux and Firewall

Although you installed docker-selinux in the previous step, it is still best to disable SELinux to avoid unnecessary trouble.

systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux
# Replace the built-in firewalld with iptables
yum install -y iptables-services && systemctl start iptables.service && systemctl enable iptables.service
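To verify, getenforce should now report Permissive (and Disabled after a reboot):

getenforce
iptables -L -n | head -n 5    # iptables is active in place of firewalld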

Install kubeadm

Download kubeadm and its dependency packages from the repo.

Prepare the repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
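A quick sanity check that the repo resolves (this needs access to packages.cloud.google.com, which may require a proxy on some networks):

yum repolist enabled | grep -i kubernetes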

Install the RPM packages

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
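Note that the kubelet will crash-loop at this point: it has no cluster configuration until kubeadm init runs, which is expected. You can confirm the installed versions first:

kubeadm version
kubelet --version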

In this step you can see that kubeadm depends on kubelet, kubectl, and kubernetes-cni (the Kubernetes network plugin):

kubeadm.x86_64 1.9.0-0
  Requires: kubectl >= 1.6.0
    Provider: kubectl.x86_64 1.9.0-0
  Requires: kubelet >= 1.6.0
    Provider: kubelet.x86_64 1.9.0-0
  Requires: kubernetes-cni
    Provider: kubernetes-cni.x86_64 0.6.0-0

Adjust the net.bridge settings

This step is mandatory for flannel and similar network plugins:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
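If the two keys are missing, the br_netfilter kernel module may not be loaded yet; a hedged check (the module name is standard on CentOS 7):

lsmod | grep br_netfilter || modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables    # both should print 1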

Start the installation

kubeadm supports the following commands; for an installation, init is all you need.

alpha Experimental sub-commands not yet fully functional.
completion Output shell completion code for the specified shell (bash or zsh)
config Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster.
init Run this in order to set up the Kubernetes master
join Run this on any machine you wish to join an existing cluster
reset Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'.
token Manage bootstrap tokens.
upgrade Upgrade your cluster smoothly to a newer version with this command.
version Print the version of kubeadm

init supports the following flags; it also accepts a YAML config file, which is how high-availability setups are driven (a config-file sketch follows the flag list).

kubeadm init [flags]
Flags:
--apiserver-advertise-address string The IP address the API Server will advertise it's listening on. 0.0.0.0 means the default network interface's address.
--apiserver-bind-port int32 Port for the API Server to bind to (default 6443)
--apiserver-cert-extra-sans stringSlice Optional extra altnames to use for the API Server serving cert. Can be both IP addresses and dns names.
--cert-dir string The path where to save and store the certificates (default "/etc/kubernetes/pki")
--config string Path to kubeadm config file (WARNING: Usage of a configuration file is experimental)
--dry-run Don't apply any changes; just output what would be done
--feature-gates string A set of key=value pairs that describe feature gates for various features. Options are:
SelfHosting=true|false (ALPHA - default=false)
StoreCertsInSecrets=true|false (ALPHA - default=false)
--kubernetes-version string Choose a specific Kubernetes version for the control plane (default "stable-1.8")
--node-name string Specify the node name
--pod-network-cidr string Specify range of IP addresses for the pod network; if set, the control plane will automatically allocate CIDRs for every node
--service-cidr string Use alternative range of IP address for service VIPs (default "10.96.0.0/12")
--service-dns-domain string Use alternative domain for services, e.g. "myorg.internal" (default "cluster.local")
--skip-preflight-checks Skip preflight checks normally run before modifying the system
--skip-token-print Skip printing of the default bootstrap token generated by 'kubeadm init'
--token string The token to use for establishing bidirectional trust between nodes and masters.
--token-ttl duration The duration before the bootstrap token is automatically deleted. 0 means 'never expires'. (default 24h0m0s)
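A minimal config-file equivalent of the flags used below, sketched against the v1alpha1 MasterConfiguration schema this kubeadm generation uses (field names assumed from that API version):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.8.5
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-config.yaml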

Disable swap (since 1.8 the kubelet refuses to start with swap enabled by default):
swapoff -a
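To keep swap off across reboots, also comment out the swap entry in /etc/fstab (a sketch; double-check the matched line before rebooting):

sed -i.bak '/ swap / s/^/#/' /etc/fstab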
Initialize kubeadm and specify the pod network CIDR:
kubeadm init --pod-network-cidr=10.244.0.0/16
Output:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kubeadm" could not be reached
[preflight] WARNING: hostname "kubeadm" lookup kubeadm on 192.168.1.29:53: no such host
[preflight] WARNING: Connection to "https://10.8.1.53:6443" uses proxy "http://10.8.0.231:1087". If that is not intended, adjust your proxy settings
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.8.1.53]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 32.044984 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubeadm as master by adding a label and a taint
[markmaster] Master kubeadm tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 19dfb5.debdfcebeed37a89
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 19dfb5.debdfcebeed37a89 10.8.1.53:6443 --discovery-token-ca-cert-hash sha256:ecfac0f0dd5ccca1dd5a9505afc42ed05496f2d4404b196b21162e433f503a5b
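If you lose this token later, you can manage tokens from the master, and the CA cert hash can be recomputed with openssl (a sketch assuming openssl is installed):

kubeadm token list
kubeadm token create    # mints a fresh token (24h TTL by default)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'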

You can see kubeadm's configuration files under /etc/kubernetes/, as shown below:

/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

Taking the most important component, the apiserver, as an example, let's look at its manifest:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-allowed-names=front-proxy-client
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --requestheader-username-headers=X-Remote-User
    - --advertise-address=10.8.1.53
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --enable-bootstrap-token-auth=true
    - --insecure-port=0
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --secure-port=6443
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --allow-privileged=true
    - --requestheader-group-headers=X-Remote-Group
    - --service-cluster-ip-range=10.96.0.0/12
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    env:
    - name: http_proxy
      value: http://10.8.0.231:1087
    - name: https_proxy
      value: http://10.8.0.231:1087
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.5
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
status: {}
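Because these are static pods, the kubelet watches /etc/kubernetes/manifests and recreates a component whenever its manifest changes, so a flag can be adjusted with a plain file edit (a sketch):

vi /etc/kubernetes/manifests/kube-apiserver.yaml    # change a flag and save
docker ps | grep kube-apiserver                     # the kubelet recreates the container automatically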

Copy the kubectl config file and check the cluster status.
kube-dns stays Pending because no network plugin has been installed yet.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-kubeadm 1/1 Running 0 12m 10.8.1.53 kubeadm
kube-system kube-apiserver-kubeadm 1/1 Running 0 12m 10.8.1.53 kubeadm
kube-system kube-controller-manager-kubeadm 1/1 Running 0 12m 10.8.1.53 kubeadm
kube-system kube-dns-545bc4bfd4-rnggp 0/3 Pending 0 13m <none> <none>
kube-system kube-proxy-cm8s9 1/1 Running 0 13m 10.8.1.53 kubeadm
kube-system kube-scheduler-kubeadm 1/1 Running 0 12m 10.8.1.53 kubeadm

Allow the master to schedule pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
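To confirm the taint is gone (the node name kubeadm is from this walkthrough; substitute your own):

kubectl describe node kubeadm | grep -i taints    # expect: Taints: <none>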

Install the flannel network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
Check the cluster again:

kubectl get pod --all-namespaces -o wide
kube-system etcd-kubeadm 1/1 Running 0 47m 10.8.1.53 kubeadm
kube-system kube-apiserver-kubeadm 1/1 Running 0 48m 10.8.1.53 kubeadm
kube-system kube-controller-manager-kubeadm 1/1 Running 0 48m 10.8.1.53 kubeadm
kube-system kube-dns-545bc4bfd4-8h2pr 3/3 Running 0 48m 10.244.0.120 kubeadm
kube-system kube-flannel-ds-plgdg 1/1 Running 0 14m 10.8.1.53 kubeadm
kube-system kube-proxy-zttwt 1/1 Running 0 48m 10.8.1.53 kubeadm
kube-system kube-scheduler-kubeadm 1/1 Running 0 47m 10.8.1.53 kubeadm

Test DNS

kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes.default
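The nslookup runs inside the curl pod's shell; it should resolve to the ClusterIP of the kubernetes service (10.96.0.1, the first address of the default 10.96.0.0/12 service CIDR). Indicative output:

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local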

Install the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Run nginx with 2 replicas:
kubectl run nginx --image=nginx --port=80 --replicas 2

Expose the nginx service:
kubectl expose deployment nginx --type=NodePort
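To find the NodePort that was assigned and verify the service (the port is allocated from 30000-32767; the value below is hypothetical):

kubectl get svc nginx
curl -I http://10.8.1.53:30080    # substitute the node IP and the port kubectl printed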

Access the dashboard

Because RBAC is enabled, some setup is needed before you can access the dashboard, though of course you can also go through a proxy.

# Proxy
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8080
docker ps --format "table {{.ID}} \t {{.Image}} \t {{.Status}}"
cat <<EOF > rbac
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
EOF
kubectl apply -f rbac
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
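Then extract the bearer token from that secret and paste it into the dashboard login screen (a sketch; the secret name suffix is generated, so the pipeline looks it up first):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') | grep '^token'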

EFK logging addon

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-service.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml
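Then apply what was downloaded (note the fluentd DaemonSet manifests are not in the list above, so this sketch covers only Elasticsearch and Kibana):

kubectl apply -f es-statefulset.yaml -f es-service.yaml -f kibana-deployment.yaml -f kibana-service.yaml
kubectl -n kube-system get pods | grep -E 'elasticsearch|kibana'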

Reset
kubeadm reset
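kubeadm reset does not undo everything; if you plan to re-run init, it may also help to flush iptables and remove leftover CNI state (a sketch; exact paths depend on the network plugin):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /var/lib/cni/ /etc/cni/net.d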

References
https://mritd.me/2017/07/21/set-up-kubernetes-ha-cluster-by-binary/
https://blog.frognew.com/2017/09/kubeadm-install-kubernetes-1.8.html