Table of Contents

1. Introduction
2. Reproducing the Scenario
3. Solution

1. Introduction
The problem: a Pod deployed on the master node cannot be scheduled and therefore never starts.

Problem description:
Warning FailedScheduling 40s (x28 over 28m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
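The message names the offending taint. A taint has the form key=value:effect; here the key is node-role.kubernetes.io/master with an empty value, and on kubeadm masters the effect is NoSchedule. The general kubectl syntax for adding and removing a taint (generic placeholders, not commands from this article):

```shell
# Add a taint: key=value:effect (the value may be empty).
kubectl taint nodes <node-name> key1=value1:NoSchedule

# Remove the same taint: repeat the key (and optionally the effect)
# with a trailing minus sign.
kubectl taint nodes <node-name> key1=value1:NoSchedule-
```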
2. Reproducing the Scenario

In the deployment environment, create a Pod on the k8s master node.
Command: kubectl run <pod-name> --image=<image>
Example:
[root@VM-4-8-centos kubernetes]# kubectl run my-nginx --image=nginx
pod/my-nginx created
Check the Pod
Since no namespace was specified when the Pod was created, it lands in the default namespace.
Command: kubectl get pod
my-nginx stays stuck in the Pending state.
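For reference, listing the Pod might look like the following (the output shown is illustrative; exact ages and columns depend on your cluster):

```shell
# List Pods in the default namespace with node placement shown.
# A Pod the scheduler cannot place stays Pending with no node assigned.
kubectl get pod -o wide
# Illustrative output:
# NAME       READY   STATUS    RESTARTS   AGE   IP       NODE
# my-nginx   0/1     Pending   0          28m   <none>   <none>
```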
Check the Pod's description
Command: kubectl describe pod <pod-name>
Cause: in a kubeadm cluster, Pods are not scheduled onto the master node for safety reasons. By default the master carries a taint and does not take part in workloads.
Solution: manually remove the master's taint.
Check the taint information
Command: kubectl get no -o yaml | grep taint -A 5
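Besides grepping the node YAML, two other ways to inspect taints (my own suggestions, not from the original walkthrough; vm-4-8-centos is the node name seen in this article's output):

```shell
# Print each node's name followed by its taints, via a jsonpath query.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Or read the Taints line of a single node's description.
kubectl describe node vm-4-8-centos | grep -i taints
```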
3. Solution

Remove the master node's taint with: kubectl taint nodes --all node-role.kubernetes.io/master-
The result:
[root@VM-4-8-centos kubernetes]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/vm-4-8-centos untainted
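Note that removing the taint opens the control-plane node to all workloads. An alternative worth knowing (a sketch of my own, not part of the original fix) is to keep the taint and give only this Pod a matching toleration:

```shell
# Recreate the Pod with a toleration for the master taint,
# instead of removing the taint cluster-wide.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
EOF
```

With the toleration in place, the scheduler may place this one Pod on the master while other Pods remain excluded.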
Check the Pod status again; it is now Running.
Check the Pod's description
Note the Events section:
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  // Pending events from before the taint was removed
  Warning  FailedScheduling  69s (x36 over 36m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  // Events after the taint was removed
  Normal   Pulling           44s                 kubelet            Pulling image "nginx"
  Normal   Pulled            39s                 kubelet            Successfully pulled image "nginx" in 5.61441699s
  Normal   Created           39s                 kubelet            Created container my-nginx
  Normal   Started           38s                 kubelet            Started container my-nginx
Re-enable the master node's taint
Command: kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule
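After re-adding the taint, you can confirm it took effect (the node name k8s matches the command above; substitute your node's actual name, e.g. vm-4-8-centos from the earlier output):

```shell
# Confirm the NoSchedule taint is back on the node.
kubectl describe node k8s | grep -i taints
```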
If you found "[Kubernetes Series] Pod Stuck in Pending When Deployed on the Master Node" helpful, please like, bookmark, and leave your thoughts!