This walkthrough assumes you already have a Kubernetes cluster built from the binaries. The "DEBUG mode" referred to here simply means redirecting every component's log output into files.
We then kubectl run a pod and trace its startup process by reading those logs.
(The full log is at the end; IP information has been removed.)
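A minimal sketch of the setup (flag values and file names are illustrative; how you launch the components depends on your own systemd units or scripts):

# raise each component's verbosity and send its log to a file
kube-apiserver --v=6 ... >> api.log 2>&1
kube-controller-manager --v=4 ... >> con.log 2>&1
kube-scheduler --v=4 ... >> sch.log 2>&1
kubelet --v=4 ... >> let.log 2>&1

# watch all four logs together
tail -f api.log con.log sch.log let.log

# the pod traced below
kubectl run curl --image=vsxen/k8s -- sleep 1d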

First, kubectl POSTs a Deployment object to the API Server:
POST /apis/extensions/v1beta1/namespaces/default/deployments:
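The same request can be observed from the client side: at --v=8 kubectl prints the round-trip URLs and request bodies it sends.

kubectl run curl --image=vsxen/k8s --v=8 -- sleep 1d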

The deployment controller in kube-controller-manager then creates the corresponding ReplicaSet:
POST /apis/extensions/v1beta1/namespaces/default/replicasets:

The ReplicaSet controller then notices that the new ReplicaSet is short of replicas:
Too few replicas for ReplicaSet default/curl-6b7d8c6f49, need 1, creating 1

and creates one, while the deployment controller records the scaling event:
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"curl", UID:"9829f149-9246-11e8-8157-00163f00bfd3", APIVersion:"extensions", ResourceVersion:"76635", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set curl-6b7d8c6f49 to 1
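This event is also visible afterwards from the command line:

kubectl describe deployment curl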

The deployment controller then updates the Deployment's status through its selfLink path:
PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status

Next, the LimitRanges of the namespace are queried (by admission control inside the API Server):
GET /api/v1/namespaces/default/limitranges
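You can list the namespace's LimitRanges yourself:

kubectl get limitranges -n default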

and the pod-related events are recorded as Events in the namespace:
POST /api/v1/namespaces/default/events
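These events can be listed in creation order:

kubectl get events -n default --sort-by=.metadata.creationTimestamp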

At this point the scheduler gets to work and assigns a node to the pod:
About to try and schedule pod curl-6b7d8c6f49-bc9p8
Attempting to schedule pod: default/curl-6b7d8c6f49-bc9p8

After finding a suitable node, it sends the pod-to-node binding back to the API Server:
Attempting to bind curl-6b7d8c6f49-bc9p8 to MASTER2

Meanwhile, the ReplicaSet controller POSTs the new Pod object to the API Server:
POST /api/v1/namespaces/default/pods

and the Deployment's status is updated again.

The controller-manager then sees that the replica count is satisfied and records the successful creation:
Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"curl-6b7d8c6f49", UID:"982b85b8-9246-11e8-8157-00163f00bfd3", APIVersion:"extensions", ResourceVersion:"76636", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: curl-6b7d8c6f49-bc9p8

After receiving the scheduler's decision, the API Server binds the pod to the node:
POST /api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/binding
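What the scheduler POSTs here is a v1 Binding object. An equivalent request issued by hand would look roughly like this (authentication flags omitted; the body is the standard Binding shape):

curl -XPOST -H 'Content-Type: application/json' \
  https://MASTER2:6443/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/binding \
  -d '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"curl-6b7d8c6f49-bc9p8"},"target":{"apiVersion":"v1","kind":"Node","name":"MASTER2"}}'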

and the corresponding statuses are updated:
PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status
PUT /apis/extensions/v1beta1/namespaces/default/replicasets/curl-6b7d8c6f49/status

The kubelet on the target node receives the bound pod and starts preparing to create its containers:
Setting pods for source api
Receiving a new pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"

SyncLoop (ADD, "api"): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"

Generating status for "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"

pod waiting > 0, pending

Status Manager: adding pod: "982da442-9246-11e8-8157-00163f00bfd3", with status: ('\x01', {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST }
{Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST ContainersNotReady containers with unready status: [curl]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST }]
MASTER2 2018-07-28 17:14:13 +0800 CST [] [{curl {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 vsxen/k8s }] BestEffort}) to podStatusChannel
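The same conditions (Initialized, Ready, PodScheduled) can be read back from the API at any time:

kubectl get pod curl-6b7d8c6f49-bc9p8 -o yaml    # see .status.conditions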

Updated QoS cgroup configuration

Factory "docker" was unable to handle container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"

Error trying to work out if we can handle /kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3:
/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3 not handled by systemd handler

Factory "systemd" was unable to handle container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"

Using factory "raw" for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"

Added container: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3" (aliases: [], namespace: "")
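These "Factory" lines come from cAdvisor choosing a handler for the pod-level cgroup. With the cgroupfs driver in use here, that cgroup can be inspected directly on the node:

ls /sys/fs/cgroup/cpu/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/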

In the meantime the scheduler also receives the confirmation and finishes the scheduling cycle:
Finished binding for pod 982da442-9246-11e8-8157-00163f00bfd3. Can be expired.

Until startup completes, the API Server keeps receiving status updates; rather than repeating those, the rest of this section only follows the kubelet's behavior.

Setting pods for source api
SyncLoop (RECONCILE, "api"): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
PUT https://MASTER2:6443/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/status 200 OK in 5 milliseconds
Status for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" updated successfully:
(1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason: Message:}
{Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason:
ContainersNotReady Message:containers with unready status: [curl]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC
LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason: Message:}] Message: Reason: HostIP:MASTER2 PodIP: StartTime:2018-07-28 17:14:13 +0800 CST
InitContainerStatuses:[] ContainerStatuses:[{Name:curl State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil}
LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:vsxen/k8s ImageID: ContainerID:}] QOSClass:BestEffort})
operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tgfb5"
(UniqueName: "kubernetes.io/secret/982da442-9246-11e8-8157-00163f00bfd3-default-token-tgfb5") pod "curl-6b7d8c6f49-bc9p8" (UID: "982da442-9246-11e8-8157-00163f00bfd3")
using default mounter/exec for kubernetes.io/secret
operationExecutor.MountVolume started for volume "default-token-tgfb5"
(UniqueName: "kubernetes.io/secret/982da442-9246-11e8-8157-00163f00bfd3-default-token-tgfb5") pod "curl-6b7d8c6f49-bc9p8" (UID: "982da442-9246-11e8-8157-00163f00bfd3")
Setting up volume default-token-tgfb5 for pod 982da442-9246-11e8-8157-00163f00bfd3 at
/var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5
using default mounter/exec for kubernetes.io/empty-dir
Determining mount medium of /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5
Statfs_t of /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5:
{Type:61267 Bsize:4096 Blocks:25769800 Bfree:23615883 Bavail:22301094 Files:6553600 Ffree:6495982 Fsid:{X__val:[386919882 -1187374761]}
Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
pod 982da442-9246-11e8-8157-00163f00bfd3: mounting tmpfs for volume wrapped_default-token-tgfb5
Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for
/var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5 --scope
-- mount -t tmpfs tmpfs /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5])
Factory "docker" was unable to handle container "/system.slice/run-20734.scope"
Error trying to work out if we can handle /system.slice/run-20734.scope: /system.slice/run-20734.scope not handled by systemd handler
Factory "systemd" was unable to handle container "/system.slice/run-20734.scope"
Using factory "raw" for container "/system.slice/run-20734.scope"
Added container: "/system.slice/run-20734.scope" (aliases: [], namespace: "")
Added event &{/system.slice/run-20734.scope 2018-07-28 17:14:13.567561614 +0800 CST containerCreation {<nil>}}
Start housekeeping for container "/system.slice/run-20734.scope"
POST https://MASTER2:6443/api/v1/namespaces/default/events 201 Created in 3 milliseconds
Looking for [api], have seen map[api:{}]
SyncLoop (housekeeping)
All volumes are attached and mounted for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
Syncing Pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)": &Pod
{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:curl-6b7d8c6f49-bc9p8,GenerateName:curl-6b7d8c6f49-,
Namespace:default,SelfLink:/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8,UID:982da442-9246-11e8-8157-00163f00bfd3,
ResourceVersion:76640,Generation:0,CreationTimestamp:2018-07-28 17:14:13 +0800 CST,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,
Labels:map[string]string{pod-template-hash: 2638472905,run: curl,},Annotations:map[string]string{kubernetes.io/config.seen:
2018-07-28T17:14:13.336400231+08:00,kubernetes.io/config.source: api,},OwnerReferences:[{extensions/v1beta1 ReplicaSet curl-6b7d8c6f49
982b85b8-9246-11e8-8157-00163f00bfd3 0xc420e74340 0xc420e74341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec
{Volumes:[{default-token-tgfb5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tgfb5,Items:[],DefaultMode:*420,Optional:nil,}
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{curl vsxen/k8s [] [sleep 1d] [] [] [] {map[] map[]}
[{default-token-tgfb5 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] nil nil nil /dev/termination-log File Always nil false false false}],
RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},
ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:MASTER2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext
:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],
PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC
2018-07-28 17:14:13 +0800 CST }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],},}
No sandbox for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" can be found. Need to start a new one
computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
SyncPod received new pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)", will create a sandbox for it
Stopping PodSandbox for "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)", will start new one
Creating sandbox for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
Entry version: {key:version obj:0xc420baa500} has expired
Setting cgroup parent to: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
Using factory "docker" for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"
20122 docker_sandbox.go:658] Will attempt to re-write config file
/var/lib/docker/containers/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713/resolv.conf with:
[nameserver 10.68.0.2 search default.svc.cluster.local. svc.cluster.local. cluster.local. options ndots:5]
Calling network plugin cni to set up pod "curl-6b7d8c6f49-bc9p8_default"
Added container: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"
(aliases: [k8s_POD_curl-6b7d8c6f49-bc9p8_default_982da442-9246-11e8-8157-00163f00bfd3_0 9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713], namespace: "docker")
Got netns path /proc/20757/ns/net
Using podns path default
About to add CNI network cni-loopback (type=loopback)
Added event &{/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713
2018-07-28 17:14:13.720561614 +0800 CST containerCreation {<nil>}}
Start housekeeping for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"
Got netns path /proc/20757/ns/net
Using podns path default
About to add CNI network mynet (type=bridge)
Created PodSandbox "9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713" for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
Determined the ip "172.20.0.18" for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" after sandbox changed
Creating container &Container{Name:curl,Image:vsxen/k8s,Command:[],Args:[sleep 1d],WorkingDir:,Ports:[],Env:[],
Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},
VolumeMounts:[{default-token-tgfb5 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],
TerminationMessagePolicy:File,VolumeDevices:[],} in pod curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)
kuberuntime_image.go:46] Pulling image "docker.io/vsxen/k8s:latest" without credentials
Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"curl-6b7d8c6f49-bc9p8", UID:"982da442-9246-11e8-8157-00163f00bfd3",
APIVersion:"v1", ResourceVersion:"76640", FieldPath:"spec.containers{curl}"}): type: 'Normal' reason: 'Pulling' pulling image "vsxen/k8s"
POST https://MASTER2:6443/api/v1/namespaces/default/events 201 Created in 4 milliseconds
GenericPLEG: Relisting
GenericPLEG: 982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713: non-existent -> running
getSandboxIDByPodUID got sandbox IDs ["9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"] for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
PLEG: Write status for curl-6b7d8c6f49-bc9p8/default: &container.PodStatus{ID:"982da442-9246-11e8-8157-00163f00bfd3",
Name:"curl-6b7d8c6f49-bc9p8", Namespace:"default", IP:"172.20.0.18",
ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4213d7ea0)}} (err: <nil>)
SyncLoop (PLEG): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)",
event: &pleg.PodLifecycleEvent{ID:"982da442-9246-11e8-8157-00163f00bfd3", Type:"ContainerStarted",
Data:"9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"}
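At this point the sandbox (pause) container is running on the node; since this cluster uses the Docker runtime, it can be seen with docker directly:

docker ps --format '{{.ID}} {{.Names}}' | grep curl-6b7d8c6f49-bc9p8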

Full log

==> api.log <==
I0728 17:14:08.796901 19316 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings?resourceVersion=75870&timeoutSeconds=579&watch=true: (9m39.000525375s) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/shared-informers] 127.0.0.1:54852]
I0728 17:14:08.797765 19316 get.go:238] Starting watch for /apis/rbac.authorization.k8s.io/v1/clusterrolebindings, rv=75870 labels= fields= timeout=8m45s
I0728 17:14:08.877765 19316 wrap.go:42] GET /apis/batch/v1/jobs: (1.318479ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/cronjob-controller] 127.0.0.1:55132]
I0728 17:14:08.879739 19316 wrap.go:42] GET /apis/batch/v1beta1/cronjobs: (1.31371ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/cronjob-controller] 127.0.0.1:55132]
==> let.log <==
I0728 17:14:09.149371 20122 generic.go:183] GenericPLEG: Relisting
I0728 17:14:09.623603 20122 config.go:99] Looking for [api], have seen map[api:{}]
I0728 17:14:09.623686 20122 kubelet.go:1939] SyncLoop (housekeeping)
==> api.log <==
I0728 17:14:09.940738 19316 wrap.go:42] GET /api/v1/nodes: (1.810572ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/pod-garbage-collector] 127.0.0.1:55132]
==> let.log <==
I0728 17:14:10.152109 20122 generic.go:183] GenericPLEG: Relisting
==> api.log <==
I0728 17:14:10.189372 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.619466ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:10.191863 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.781229ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:10.749206 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.657134ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:10.749421 19408 round_trippers.go:436] GET http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
==> api.log <==
I0728 17:14:10.751914 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.083001ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:10.752064 19408 round_trippers.go:436] PUT http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
I0728 17:14:10.752254 19408 leaderelection.go:199] successfully renewed lease kube-system/kube-scheduler
==> let.log <==
I0728 17:14:11.154278 20122 generic.go:183] GenericPLEG: Relisting
I0728 17:14:11.623720 20122 config.go:99] Looking for [api], have seen map[api:{}]
I0728 17:14:11.623827 20122 kubelet.go:1939] SyncLoop (housekeeping)
I0728 17:14:12.156840 20122 generic.go:183] GenericPLEG: Relisting
==> api.log <==
I0728 17:14:12.194631 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.779756ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:12.197153 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.787833ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:12.754741 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.818411ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:12.754931 19408 round_trippers.go:436] GET http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
==> api.log <==
I0728 17:14:12.757202 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.787304ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:12.757361 19408 round_trippers.go:436] PUT http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
I0728 17:14:12.757486 19408 leaderelection.go:199] successfully renewed lease kube-system/kube-scheduler
==> let.log <==
I0728 17:14:12.818552 20122 kubelet.go:2118] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
I0728 17:14:13.158931 20122 generic.go:183] GenericPLEG: Relisting
==> api.log <==
I0728 17:14:13.292807 19316 wrap.go:42] GET /apis/extensions/v1beta1: (441.366µs) 200 [[kubectl/v1.9.8 (linux/amd64) kubernetes/c138b85] 127.0.0.1:55146]
I0728 17:14:13.307219 19316 wrap.go:42] GET /api/v1/namespaces/default/resourcequotas: (1.476427ms) 200 [[kube-apiserver/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55042]
I0728 17:14:13.309370 19316 wrap.go:42] POST /apis/extensions/v1beta1/namespaces/default/deployments: (4.698546ms) 201 [[kubectl/v1.9.8 (linux/amd64) kubernetes/c138b85] 127.0.0.1:55146]
I0728 17:14:13.317227 19316 wrap.go:42] POST /apis/extensions/v1beta1/namespaces/default/replicasets: (2.0307ms) 201 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55132]
==> con.log <==
I0728 17:14:13.317870 19386 replica_set.go:478] Too few replicas for ReplicaSet default/curl-6b7d8c6f49, need 1, creating 1
I0728 17:14:13.317922 19386 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"curl", UID:"9829f149-9246-11e8-8157-00163f00bfd3", APIVersion:"extensions", ResourceVersion:"76635", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set curl-6b7d8c6f49 to 1
==> let.log <==
I0728 17:14:13.321489 20122 helpers.go:125] Unable to get network stats from pid 18249: couldn't read network stats: failure opening /proc/18249/net/dev: open /proc/18249/net/dev: no such file or directory
==> api.log <==
I0728 17:14:13.328823 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status: (10.443073ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55132]
I0728 17:14:13.328910 19316 wrap.go:42] GET /api/v1/namespaces/default/limitranges: (9.844849ms) 200 [[kube-apiserver/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55042]
I0728 17:14:13.330890 19316 wrap.go:42] POST /api/v1/namespaces/default/events: (12.08914ms) 201 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55148]
==> sch.log <==
I0728 17:14:13.331948 19408 factory.go:1022] About to try and schedule pod curl-6b7d8c6f49-bc9p8
I0728 17:14:13.331991 19408 scheduler.go:438] Attempting to schedule pod: default/curl-6b7d8c6f49-bc9p8
I0728 17:14:13.332199 19408 factory.go:1242] Attempting to bind curl-6b7d8c6f49-bc9p8 to MASTER2
==> api.log <==
I0728 17:14:13.332135 19316 wrap.go:42] POST /api/v1/namespaces/default/pods: (13.789127ms) 201 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/replicaset-controller] 127.0.0.1:55149]
I0728 17:14:13.332161 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status: (2.315122ms) 409 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55132]
==> sch.log <==
==> con.log <==
I0728 17:14:13.332385 19386 deployment_controller.go:485] Error syncing deployment default/curl: Operation cannot be fulfilled on deployments.extensions "curl": the object has been modified; please apply your changes to the latest version and try again
I0728 17:14:13.333126 19386 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"curl-6b7d8c6f49", UID:"982b85b8-9246-11e8-8157-00163f00bfd3", APIVersion:"extensions", ResourceVersion:"76636", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: curl-6b7d8c6f49-bc9p8
==> api.log <==
I0728 17:14:13.335844 19316 wrap.go:42] POST /api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/binding: (2.286462ms) 201 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/scheduler] 127.0.0.1:54988]
I0728 17:14:13.339862 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status: (6.407533ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55132]
I0728 17:14:13.340081 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/replicasets/curl-6b7d8c6f49/status: (6.030625ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/replicaset-controller] 127.0.0.1:55148]
==> let.log <==
I0728 17:14:13.336334 20122 config.go:297] Setting pods for source api
I0728 17:14:13.336390 20122 config.go:405] Receiving a new pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.336435 20122 kubelet.go:1852] SyncLoop (ADD, "api"): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.336621 20122 kubelet_pods.go:1370] Generating status for "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.336666 20122 kubelet_pods.go:1335] pod waiting > 0, pending
I0728 17:14:13.336737 20122 status_manager.go:367] Status Manager: adding pod: "982da442-9246-11e8-8157-00163f00bfd3", with status: ('\x01', {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST ContainersNotReady containers with unready status: [curl]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST }] MASTER2 2018-07-28 17:14:13 +0800 CST [] [{curl {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 vsxen/k8s }] BestEffort}) to podStatusChannel
I0728 17:14:13.336945 20122 qos_container_manager_linux.go:317] [ContainerManager]: Updated QoS cgroup configuration
I0728 17:14:13.338281 20122 factory.go:116] Factory "docker" was unable to handle container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
I0728 17:14:13.341451 20122 factory.go:105] Error trying to work out if we can handle /kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3: /kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3 not handled by systemd handler
I0728 17:14:13.341470 20122 factory.go:116] Factory "systemd" was unable to handle container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
I0728 17:14:13.341501 20122 factory.go:112] Using factory "raw" for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
I0728 17:14:13.341751 20122 manager.go:970] Added container: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3" (aliases: [], namespace: "")
==> sch.log <==
I0728 17:14:13.337248 19408 round_trippers.go:436] POST http://127.0.0.1:8080/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/binding 201 Created in 4 milliseconds
I0728 17:14:13.337351 19408 cache.go:153] Finished binding for pod 982da442-9246-11e8-8157-00163f00bfd3. Can be expired.
==> api.log <==
==> let.log <==
I0728 17:14:13.338337 20122 status_manager.go:146] Status Manager: syncing pod: "982da442-9246-11e8-8157-00163f00bfd3", with status: (1, {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST ContainersNotReady containers with unready status: [curl]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST }] MASTER2 2018-07-28 17:14:13 +0800 CST [] [{curl {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 vsxen/k8s }] BestEffort}) from podStatusChannel
I0728 17:14:13.341941 20122 handler.go:325] Added event &{/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3 2018-07-28 17:14:13.336561614 +0800 CST containerCreation {<nil>}}
I0728 17:14:13.341995 20122 container.go:448] Start housekeeping for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
I0728 17:14:13.344074 20122 volume_manager.go:343] Waiting for volumes to attach and mount for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
==> api.log <==
I0728 17:14:13.347311 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/replicasets/curl-6b7d8c6f49/status: (3.757557ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/replicaset-controller] 127.0.0.1:55148]
I0728 17:14:13.347559 19316 wrap.go:42] GET /api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8: (4.72522ms) 200 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
I0728 17:14:13.347994 19316 wrap.go:42] POST /api/v1/namespaces/default/events: (14.442518ms) 201 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/replicaset-controller] 127.0.0.1:55152]
==> let.log <==
I0728 17:14:13.348399 20122 round_trippers.go:436] GET https://MASTER2:6443/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8 200 OK in 6 milliseconds
==> api.log <==
I0728 17:14:13.351046 19316 wrap.go:42] POST /api/v1/namespaces/default/events: (12.106266ms) 201 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:13.351276 19408 round_trippers.go:436] POST http://127.0.0.1:8080/api/v1/namespaces/default/events 201 Created in 13 milliseconds
==> api.log <==
I0728 17:14:13.352543 19316 wrap.go:42] PUT /api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/status: (2.774097ms) 200 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
I0728 17:14:13.352919 19316 wrap.go:42] PUT /apis/extensions/v1beta1/namespaces/default/deployments/curl/status: (3.29022ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/deployment-controller] 127.0.0.1:55148]
==> let.log <==
I0728 17:14:13.353578 20122 config.go:297] Setting pods for source api
I0728 17:14:13.353738 20122 kubelet.go:1865] SyncLoop (RECONCILE, "api"): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.353793 20122 round_trippers.go:436] PUT https://MASTER2:6443/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8/status 200 OK in 5 milliseconds
I0728 17:14:13.353891 20122 status_manager.go:479] Status for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" updated successfully: (1, {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason:ContainersNotReady Message:containers with unready status: [curl]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-07-28 17:14:13 +0800 CST Reason: Message:}] Message: Reason: HostIP:MASTER2 PodIP: StartTime:2018-07-28 17:14:13 +0800 CST InitContainerStatuses:[] ContainerStatuses:[{Name:curl State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:vsxen/k8s ImageID: ContainerID:}] QOSClass:BestEffort})
I0728 17:14:13.464027 20122 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tgfb5" (UniqueName: "kubernetes.io/secret/982da442-9246-11e8-8157-00163f00bfd3-default-token-tgfb5") pod "curl-6b7d8c6f49-bc9p8" (UID: "982da442-9246-11e8-8157-00163f00bfd3")
I0728 17:14:13.564327 20122 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
I0728 17:14:13.564490 20122 reconciler.go:262] operationExecutor.MountVolume started for volume "default-token-tgfb5" (UniqueName: "kubernetes.io/secret/982da442-9246-11e8-8157-00163f00bfd3-default-token-tgfb5") pod "curl-6b7d8c6f49-bc9p8" (UID: "982da442-9246-11e8-8157-00163f00bfd3")
I0728 17:14:13.564589 20122 secret.go:186] Setting up volume default-token-tgfb5 for pod 982da442-9246-11e8-8157-00163f00bfd3 at /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5
I0728 17:14:13.564649 20122 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0728 17:14:13.564664 20122 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0728 17:14:13.564861 20122 empty_dir_linux.go:41] Determining mount medium of /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5
I0728 17:14:13.564911 20122 empty_dir_linux.go:51] Statfs_t of /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5: {Type:61267 Bsize:4096 Blocks:25769800 Bfree:23615883 Bavail:22301094 Files:6553600 Ffree:6495982 Fsid:{X__val:[386919882 -1187374761]} Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
I0728 17:14:13.564985 20122 empty_dir.go:264] pod 982da442-9246-11e8-8157-00163f00bfd3: mounting tmpfs for volume wrapped_default-token-tgfb5
I0728 17:14:13.565028 20122 mount_linux.go:142] Mounting cmd (systemd-run) with arguments ([--description=Kubernetes transient mount for /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5 --scope -- mount -t tmpfs tmpfs /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5])
I0728 17:14:13.568585 20122 factory.go:116] Factory "docker" was unable to handle container "/system.slice/run-20734.scope"
I0728 17:14:13.568725 20122 factory.go:105] Error trying to work out if we can handle /system.slice/run-20734.scope: /system.slice/run-20734.scope not handled by systemd handler
I0728 17:14:13.568741 20122 factory.go:116] Factory "systemd" was unable to handle container "/system.slice/run-20734.scope"
I0728 17:14:13.568756 20122 factory.go:112] Using factory "raw" for container "/system.slice/run-20734.scope"
I0728 17:14:13.568990 20122 manager.go:970] Added container: "/system.slice/run-20734.scope" (aliases: [], namespace: "")
I0728 17:14:13.569155 20122 handler.go:325] Added event &{/system.slice/run-20734.scope 2018-07-28 17:14:13.567561614 +0800 CST containerCreation {<nil>}}
I0728 17:14:13.569211 20122 container.go:448] Start housekeeping for container "/system.slice/run-20734.scope"
==> api.log <==
I0728 17:14:13.574247 19316 wrap.go:42] GET /api/v1/namespaces/default/secrets/default-token-tgfb5: (2.237624ms) 200 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
==> let.log <==
I0728 17:14:13.574526 20122 round_trippers.go:436] GET https://MASTER2:6443/api/v1/namespaces/default/secrets/default-token-tgfb5 200 OK in 2 milliseconds
I0728 17:14:13.574628 20122 secret.go:216] Received secret default/default-token-tgfb5 containing (3) pieces of data, 2199 total bytes
I0728 17:14:13.574743 20122 atomic_writer.go:176] pod default/curl-6b7d8c6f49-bc9p8 volume default-token-tgfb5: performed write of new data to ts data directory: /var/lib/kubelet/pods/982da442-9246-11e8-8157-00163f00bfd3/volumes/kubernetes.io~secret/default-token-tgfb5/..2018_07_28_09_14_13.778766417
I0728 17:14:13.574844 20122 operation_generator.go:523] MountVolume.SetUp succeeded for volume "default-token-tgfb5" (UniqueName: "kubernetes.io/secret/982da442-9246-11e8-8157-00163f00bfd3-default-token-tgfb5") pod "curl-6b7d8c6f49-bc9p8" (UID: "982da442-9246-11e8-8157-00163f00bfd3")
I0728 17:14:13.574940 20122 server.go:231] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"curl-6b7d8c6f49-bc9p8", UID:"982da442-9246-11e8-8157-00163f00bfd3", APIVersion:"v1", ResourceVersion:"76640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulMountVolume' MountVolume.SetUp succeeded for volume "default-token-tgfb5"
==> api.log <==
I0728 17:14:13.578136 19316 wrap.go:42] POST /api/v1/namespaces/default/events: (2.893354ms) 201 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
==> let.log <==
I0728 17:14:13.578466 20122 round_trippers.go:436] POST https://MASTER2:6443/api/v1/namespaces/default/events 201 Created in 3 milliseconds
I0728 17:14:13.623538 20122 config.go:99] Looking for [api], have seen map[api:{}]
I0728 17:14:13.623630 20122 kubelet.go:1939] SyncLoop (housekeeping)
I0728 17:14:13.644308 20122 volume_manager.go:372] All volumes are attached and mounted for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.644368 20122 kuberuntime_manager.go:442] Syncing Pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:curl-6b7d8c6f49-bc9p8,GenerateName:curl-6b7d8c6f49-,Namespace:default,SelfLink:/api/v1/namespaces/default/pods/curl-6b7d8c6f49-bc9p8,UID:982da442-9246-11e8-8157-00163f00bfd3,ResourceVersion:76640,Generation:0,CreationTimestamp:2018-07-28 17:14:13 +0800 CST,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{pod-template-hash: 2638472905,run: curl,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-07-28T17:14:13.336400231+08:00,kubernetes.io/config.source: api,},OwnerReferences:[{extensions/v1beta1 ReplicaSet curl-6b7d8c6f49 982b85b8-9246-11e8-8157-00163f00bfd3 0xc420e74340 0xc420e74341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tgfb5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tgfb5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{curl vsxen/k8s [] [sleep 1d] [] [] [] {map[] map[]} [{default-token-tgfb5 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:MASTER2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-28 17:14:13 +0800 CST }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],},}
I0728 17:14:13.644645 20122 kuberuntime_manager.go:385] No sandbox for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" can be found. Need to start a new one
I0728 17:14:13.644675 20122 kuberuntime_manager.go:571] computePodActions got {KillPod:true CreateSandbox:true SandboxID: Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.644752 20122 kuberuntime_manager.go:580] SyncPod received new pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)", will create a sandbox for it
I0728 17:14:13.644784 20122 kuberuntime_manager.go:589] Stopping PodSandbox for "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)", will start new one
I0728 17:14:13.644818 20122 kuberuntime_manager.go:641] Creating sandbox for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.646734 20122 expiration_cache.go:98] Entry version: {key:version obj:0xc420baa500} has expired
I0728 17:14:13.647300 20122 docker_service.go:441] Setting cgroup parent to: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3"
I0728 17:14:13.838147 20122 factory.go:112] Using factory "docker" for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"
I0728 17:14:13.838160 20122 docker_sandbox.go:658] Will attempt to re-write config file /var/lib/docker/containers/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713/resolv.conf with:
[nameserver 10.68.0.2 search default.svc.cluster.local. svc.cluster.local. cluster.local. options ndots:5]
I0728 17:14:13.838383 20122 plugins.go:412] Calling network plugin cni to set up pod "curl-6b7d8c6f49-bc9p8_default"
I0728 17:14:13.839138 20122 manager.go:970] Added container: "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713" (aliases: [k8s_POD_curl-6b7d8c6f49-bc9p8_default_982da442-9246-11e8-8157-00163f00bfd3_0 9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713], namespace: "docker")
I0728 17:14:13.839171 20122 cni.go:284] Got netns path /proc/20757/ns/net
I0728 17:14:13.839190 20122 cni.go:285] Using podns path default
I0728 17:14:13.839274 20122 cni.go:256] About to add CNI network cni-loopback (type=loopback)
I0728 17:14:13.839290 20122 handler.go:325] Added event &{/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713 2018-07-28 17:14:13.720561614 +0800 CST containerCreation {<nil>}}
I0728 17:14:13.839336 20122 container.go:448] Start housekeeping for container "/kubepods/besteffort/pod982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"
I0728 17:14:13.842146 20122 cni.go:284] Got netns path /proc/20757/ns/net
I0728 17:14:13.842166 20122 cni.go:285] Using podns path default
I0728 17:14:13.842228 20122 cni.go:256] About to add CNI network mynet (type=bridge)
I0728 17:14:13.880451 20122 kuberuntime_manager.go:655] Created PodSandbox "9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713" for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:13.884828 20122 kuberuntime_manager.go:674] Determined the ip "172.20.0.18" for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)" after sandbox changed
I0728 17:14:13.884933 20122 kuberuntime_manager.go:725] Creating container &Container{Name:curl,Image:vsxen/k8s,Command:[],Args:[sleep 1d],WorkingDir:,Ports:[],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},
VolumeMounts:[{default-token-tgfb5 true /var/run/secrets/kubernetes.io/serviceaccount <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)
I0728 17:14:13.886153 20122 kuberuntime_image.go:46] Pulling image "docker.io/vsxen/k8s:latest" without credentials
I0728 17:14:13.886231 20122 server.go:231] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"curl-6b7d8c6f49-bc9p8", UID:"982da442-9246-11e8-8157-00163f00bfd3", APIVersion:"v1", ResourceVersion:"76640", FieldPath:"spec.containers{curl}"}): type: 'Normal' reason: 'Pulling' pulling image "vsxen/k8s"
==> api.log <==
I0728 17:14:13.890693 19316 wrap.go:42] POST /api/v1/namespaces/default/events: (4.173414ms) 201 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
==> let.log <==
I0728 17:14:13.890963 20122 round_trippers.go:436] POST https://MASTER2:6443/api/v1/namespaces/default/events 201 Created in 4 milliseconds
I0728 17:14:14.161348 20122 generic.go:183] GenericPLEG: Relisting
I0728 17:14:14.164103 20122 generic.go:147] GenericPLEG: 982da442-9246-11e8-8157-00163f00bfd3/9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713: non-existent -> running
I0728 17:14:14.165579 20122 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"] for pod "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)"
I0728 17:14:14.170862 20122 generic.go:380] PLEG: Write status for curl-6b7d8c6f49-bc9p8/default: &container.PodStatus{ID:"982da442-9246-11e8-8157-00163f00bfd3", Name:"curl-6b7d8c6f49-bc9p8", Namespace:"default", IP:"172.20.0.18",
ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc4213d7ea0)}} (err: <nil>)
I0728 17:14:14.171020 20122 kubelet.go:1897] SyncLoop (PLEG): "curl-6b7d8c6f49-bc9p8_default(982da442-9246-11e8-8157-00163f00bfd3)", event: &pleg.PodLifecycleEvent{ID:"982da442-9246-11e8-8157-00163f00bfd3", Type:"ContainerStarted", Data:"9f74cceca180071a31d44289e7b5085f298849c9dd78523349f8d7a8c7037713"}
==> api.log <==
I0728 17:14:14.199785 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.683232ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55148]
I0728 17:14:14.202186 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.792173ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55148]
I0728 17:14:14.760048 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.79092ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:14.760267 19408 round_trippers.go:436] GET http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
==> api.log <==
I0728 17:14:14.763205 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.530929ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:14.763375 19408 round_trippers.go:436] PUT http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
I0728 17:14:14.763514 19408 leaderelection.go:199] successfully renewed lease kube-system/kube-scheduler
==> let.log <==
I0728 17:14:15.171194 20122 generic.go:183] GenericPLEG: Relisting
E0728 17:14:15.493364 20122 fs.go:418] Stat fs failed. Error: no such file or directory
I0728 17:14:15.623554 20122 config.go:99] Looking for [api], have seen map[api:{}]
I0728 17:14:15.623641 20122 kubelet.go:1939] SyncLoop (housekeeping)
==> api.log <==
I0728 17:14:15.815589 19316 wrap.go:42] GET /apis/extensions/v1beta1/deployments?resourceVersion=75870&timeoutSeconds=586&watch=true: (9m46.000519573s) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/shared-informers] 127.0.0.1:54868]
I0728 17:14:15.816517 19316 get.go:238] Starting watch for /apis/extensions/v1beta1/deployments, rv=76646 labels= fields= timeout=5m45s
I0728 17:14:15.900812 19316 wrap.go:42] GET /api/v1/nodes/MASTER2?resourceVersion=0: (914.138µs) 200 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
==> let.log <==
I0728 17:14:15.901267 20122 round_trippers.go:436] GET https://MASTER2:6443/api/v1/nodes/MASTER2?resourceVersion=0 200 OK in 1 milliseconds
==> api.log <==
I0728 17:14:15.907919 19316 wrap.go:42] PATCH /api/v1/nodes/MASTER2/status: (2.953384ms) 200 [[kubelet/v1.9.8 (linux/amd64) kubernetes/c138b85] MASTER2:55336]
==> let.log <==
I0728 17:14:15.908111 20122 round_trippers.go:436] PATCH https://MASTER2:6443/api/v1/nodes/MASTER2/status 200 OK in 3 milliseconds
I0728 17:14:16.174459 20122 generic.go:183] GenericPLEG: Relisting
==> api.log <==
I0728 17:14:16.205212 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.844072ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:16.208335 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (2.469144ms) 200 [[kube-controller-manager/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:55132]
I0728 17:14:16.766583 19316 wrap.go:42] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.153181ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]
==> sch.log <==
I0728 17:14:16.766828 19408 round_trippers.go:436] GET http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler 200 OK in 2 milliseconds
==> api.log <==
I0728 17:14:16.769450 19316 wrap.go:42] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.08502ms) 200 [[kube-scheduler/v1.9.8 (linux/amd64) kubernetes/c138b85/leader-election] 127.0.0.1:54988]