Kubeadm

Reference: Installing kubeadm | Kubernetes documentation (Simplified Chinese)

Renewing Certificates

Check the certificates

root@test-edge-auto01:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 25, 2025 02:49 UTC   <invalid>       ca                      no     
apiserver                  Mar 25, 2025 02:49 UTC   <invalid>       ca                      no     
apiserver-etcd-client      Mar 25, 2025 02:49 UTC   <invalid>       etcd-ca                 no     
apiserver-kubelet-client   Mar 25, 2025 02:49 UTC   <invalid>       ca                      no     
controller-manager.conf    Mar 25, 2025 02:49 UTC   <invalid>       ca                      no     
etcd-healthcheck-client    Mar 25, 2025 02:49 UTC   <invalid>       etcd-ca                 no     
etcd-peer                  Mar 25, 2025 02:49 UTC   <invalid>       etcd-ca                 no     
etcd-server                Mar 25, 2025 02:49 UTC   <invalid>       etcd-ca                 no     
front-proxy-client         Mar 25, 2025 02:49 UTC   <invalid>       front-proxy-ca          no     
scheduler.conf             Mar 25, 2025 02:49 UTC   <invalid>       ca                      no     

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 12, 2034 10:11 UTC   9y              no     
etcd-ca                 Oct 12, 2034 10:11 UTC   9y              no     
front-proxy-ca          Oct 12, 2034 10:11 UTC   9y              no     
root@test-edge-auto01:~#

The certificates have expired.

Back up the cluster certificates and configuration

cp -a ~/.kube/ ~/.kube_bak
cp -a /etc/kubernetes/ /etc/kubernetes_bak
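
A quick sanity check (a minimal sketch; the paths match the backup commands above) to confirm the backups exist:

# Both backup copies should be present and populated
ls -ld ~/.kube_bak /etc/kubernetes_bak
ls /etc/kubernetes_bak/pki | head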

Renew the certificates

root@test-edge-auto01:~# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
root@test-edge-auto01:~#

Check the certificates again; the expiration dates have been updated.

root@test-edge-auto01:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Nov 25, 2025 02:49 UTC   364d            ca                      no     
apiserver                  Nov 25, 2025 02:49 UTC   364d            ca                      no     
apiserver-etcd-client      Nov 25, 2025 02:49 UTC   364d            etcd-ca                 no     
apiserver-kubelet-client   Nov 25, 2025 02:49 UTC   364d            ca                      no     
controller-manager.conf    Nov 25, 2025 02:49 UTC   364d            ca                      no     
etcd-healthcheck-client    Nov 25, 2025 02:49 UTC   364d            etcd-ca                 no     
etcd-peer                  Nov 25, 2025 02:49 UTC   364d            etcd-ca                 no     
etcd-server                Nov 25, 2025 02:49 UTC   364d            etcd-ca                 no     
front-proxy-client         Nov 25, 2025 02:49 UTC   364d            front-proxy-ca          no     
scheduler.conf             Nov 25, 2025 02:49 UTC   364d            ca                      no     

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 12, 2034 10:11 UTC   9y              no     
etcd-ca                 Oct 12, 2034 10:11 UTC   9y              no     
front-proxy-ca          Oct 12, 2034 10:11 UTC   9y              no     
root@test-edge-auto01:~#

Restart the affected components

After running this command you need to restart the control plane Pods. This is required because dynamic certificate reload is currently not supported by all components. Static Pods are managed by the local kubelet rather than by the API server, so kubectl cannot be used to delete or restart them. To restart a static Pod, temporarily move its manifest file out of /etc/kubernetes/manifests/ and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). The kubelet will terminate the Pod if it is no longer in the manifest directory. After another fileCheckFrequency period you can move the file back; the kubelet will recreate the Pod, and the certificate renewal for the component is complete.
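
As a side note, here is a minimal sketch (assuming the kubeadm default kubelet config path /var/lib/kubelet/config.yaml) to see whether fileCheckFrequency has been changed from its 20-second default:

# If the field is absent, the kubelet falls back to its default of 20s
grep -i fileCheckFrequency /var/lib/kubelet/config.yaml || echo "fileCheckFrequency not set (default 20s)"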

# First check the control-plane container status with docker ps or crictl ps, to confirm the later commands behave as expected
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd'
crictl ps |grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'

# Temporarily move the manifest files out of /etc/kubernetes/manifests/ and wait 20 seconds
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/
sleep 20

# Check that the control-plane Pods have been terminated
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd'
# The output should be empty

# Move the manifest files back and wait a moment
mv /etc/kubernetes/*.yaml /etc/kubernetes/manifests/

# Check that the control-plane Pods have started again
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd'
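
Once the containers are back up, a minimal sketch (assuming the API server listens on the default port 6443 on this node) to confirm it is already serving the renewed certificate:

# Print the notAfter date of the certificate the API server presents on port 6443
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate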

Update the client config

cp -a /etc/kubernetes/admin.conf ~/.kube/config

Check that kubectl works:
kubectl get pod -A

Troubleshooting 1

After the machine was rebooted, kubelet did not start automatically and starting it manually failed. The logs showed that the bootstrap-kubelet.conf file could not be found:

57481 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"

The main purpose of bootstrap-kubelet.conf is to bootstrap the kubelet's communication with the control plane node so that it can obtain a permanent kubeconfig file and credentials.

In other words, bootstrap-kubelet.conf is essentially a bootstrap token, which narrows the problem down to certificates. (A quick check of the kubelet's current client certificate is sketched after the workflow below.)

How the bootstrap kubeconfig works:
1. On startup, the kubelet reads bootstrap-kubelet.conf and authenticates to the API server with the bootstrap token it contains.
2. The kubelet requests to join the cluster and tries to register with the control plane.
3. The API server validates the bootstrap token and issues the node its formal credentials.
4. The kubelet receives its permanent kubeconfig file (/etc/kubernetes/kubelet.conf) and switches to it for all further communication.
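
As mentioned above, a minimal sketch (assuming the kubeadm default path /var/lib/kubelet/pki/) to check whether the kubelet's current client certificate has simply expired:

# Print the validity window of the kubelet's rotated client certificate
openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem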

First check the cluster certificate status with kubeadm certs check-expiration and make sure the admin.conf certificate has not expired.
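
Alternatively, a minimal sketch that decodes the client certificate embedded in admin.conf and prints its expiry directly:

# Extract the base64-encoded client cert from admin.conf and show its notAfter date
grep client-certificate-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate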

Replace kubelet.conf with admin.conf, then restart the kubelet service.

# Back up kubelet.conf, copy admin.conf over it, then restart the kubelet service
cp /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
cp -a /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf
systemctl daemon-reload && systemctl restart kubelet
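
After the restart, a short sketch to confirm the kubelet is running and the node re-registers:

# kubelet should report active; the node should return to Ready shortly
systemctl is-active kubelet
kubectl get nodes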

If this method does not work, try the approach in Troubleshooting 2.

Troubleshooting 2

Reference: https://www.cnblogs.com/Tempted/p/18663326

The error is as follows:

failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory

Solution:

1. Back up and regenerate the certificates

[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# mkdir backup
[root@k8s-master pki]# mv apiserver.crt apiserver-etcd-client.key apiserver-kubelet-client.crt front-proxy-ca.crt front-proxy-client.crt front-proxy-client.key front-proxy-ca.key apiserver-kubelet-client.key apiserver.key apiserver-etcd-client.crt backup/


[root@k8s-master pki]# kubeadm init phase certs all
I0215 00:18:05.381433   30175 version.go:254] remote version is much newer: v1.32.2; falling back to: stable-1.20
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Using the existing "sa" key

2. Back up and regenerate the kubeconfig files

[root@k8s-master pki]# cd /etc/kubernetes
[root@k8s-master kubernetes]# mkdir backup
[root@k8s-master kubernetes]# mv admin.conf controller-manager.conf kubelet.conf scheduler.conf backup/


[root@k8s-master kubernetes]# kubeadm init phase kubeconfig all
I0215 00:21:55.802025   31981 version.go:254] remote version is much newer: v1.32.2; falling back to: stable-1.20
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

3. Configure the kubectl config file

[root@k8s-master kubernetes]# cp -i /etc/kubernetes/admin.conf ~/.kube/config

4. Delete the kubelet client certificate symlink

[root@k8s-master kubernetes]# cd /var/lib/kubelet/pki/
[root@k8s-master pki]# rm -rf kubelet-client-current.pem

5. Restart kubelet

[root@k8s-master pki]# systemctl daemon-reload  
[root@k8s-master pki]# systemctl restart kubelet.service  
[root@k8s-master pki]# systemctl status kubelet.service
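
Finally, a short sketch to verify the recovery: if client certificate rotation is enabled (the kubeadm default), the kubelet should write a fresh kubelet-client-current.pem, and the node should return to Ready:

# A new kubelet-client-*.pem should appear after the restart
ls -l /var/lib/kubelet/pki/
kubectl get nodes
# If the kubelet still fails, inspect its recent logs
journalctl -u kubelet -n 50 --no-pager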