Updating the Kubernetes root certificate ca.crt
It is best to issue the cluster's ca.crt with a long validity period. Unlike the apiserver certificate, which can simply be re-signed with kubeadm, rotating the root CA means restarting every pod that uses a service account, possibly several times, so the whole operation is very heavy.
Before renewing anything, make sure you understand what each certificate does: kubernetes.io/docs/setup/...
Root CAs
Path | Default CN | Description |
---|---|---|
ca.crt,key | kubernetes-ca | Kubernetes general CA |
etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the front-end proxy |
Certificates signed by the root CAs
Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
---|---|---|---|---|
kube-etcd | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
kube-etcd-healthcheck-client | etcd-ca | | client | |
kube-apiserver-etcd-client | etcd-ca | | client | |
kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, [1] |
kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
front-proxy-client | kubernetes-front-proxy-ca | | client | |
Service account key pair
Private key path | Public key path | Command | Argument |
---|---|---|---|
sa.key | | kube-controller-manager | --service-account-private-key-file |
 | sa.pub | kube-apiserver | --service-account-key-file |
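Since sa.key/sa.pub are reused during rotation (existing service account token signatures must stay valid), it is worth confirming that the public key really is derived from the private key. A minimal sketch; the helper function name is ours, not a kubeadm command:

```shell
# Check that an RSA public key file matches a private key file by
# comparing the public key derived from the private key with the
# stored public key (helper name is ours, not a kubeadm command).
sa_pair_matches() {
    local key="$1" pub="$2"
    diff <(openssl rsa -in "$key" -pubout 2>/dev/null) "$pub" >/dev/null \
        && echo "match" || echo "MISMATCH"
}

# Example (paths as used by kubeadm):
# sa_pair_matches /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub
```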
User account configuration files
Filename | Command | Credential name | Default CN | O (in Subject) |
---|---|---|---|---|
admin.conf | kubectl | default-admin | kubernetes-admin | system:masters |
kubelet.conf | kubelet | default-auth | system:node:`<nodeName>` (see note) | system:nodes |
controller-manager.conf | kube-controller-manager | default-controller-manager | system:kube-controller-manager | |
scheduler.conf | kube-scheduler | default-scheduler | system:kube-scheduler | |
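The CN/O pairs in this table can be checked directly against a kubeconfig by decoding its embedded client certificate. A small sketch; the helper name is ours, and it assumes the certificate is inline as client-certificate-data rather than referenced by file path:

```shell
# Print the Subject (CN/O) of the client certificate embedded in a
# kubeconfig file (helper name is ours). Assumes the certificate is
# inline as client-certificate-data, not referenced by path.
kubeconfig_subject() {
    grep 'client-certificate-data:' "$1" | head -n1 | awk '{print $2}' \
        | base64 -d | openssl x509 -noout -subject
}

# Example:
# kubeconfig_subject /etc/kubernetes/admin.conf
```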
Inspecting the certificates
While inspecting the certificates we found that the cluster root certificate /etc/kubernetes/xxx/pki/ca.crt was about to expire:
```shell
$ for tls in `find /etc/kubernetes/xxx/pki -maxdepth 2 -name "*.crt"`; do echo $tls; openssl x509 -in $tls -text | grep Not; done
/etc/kubernetes/xxx/pki/apiserver-kubelet-client.crt
Not Before: May 14 10:32:22 2020 GMT
Not After : Jan 21 10:32:22 2034 GMT
/etc/kubernetes/xxx/pki/ca.crt
Not Before: Apr 27 09:09:00 2019 GMT
Not After : Apr 25 09:09:00 2024 GMT
/etc/kubernetes/xxx/pki/front-proxy-ca.crt
Not Before: Apr 27 09:57:00 2019 GMT
Not After : Apr 25 09:57:00 2024 GMT
/etc/kubernetes/xxx/pki/front-proxy-client.crt
Not Before: May 14 10:32:47 2020 GMT
Not After : Jan 21 10:32:47 2034 GMT
/etc/kubernetes/xxx/pki/apiserver.crt
Not Before: May 20 02:51:01 2020 GMT
Not After : Jan 27 02:51:01 2034 GMT
```
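The Not Before/Not After pairs above can be turned into a pass/fail check with `openssl x509 -checkend`, which exits non-zero when a certificate expires within the given number of seconds. A sketch; the helper name is ours:

```shell
# Warn about any certificate under a PKI directory that expires within
# the given number of days (helper name is ours; default 90 days).
warn_expiring() {
    local dir="$1" days="${2:-90}"
    find "$dir" -maxdepth 2 -name '*.crt' | while read -r crt; do
        if ! openssl x509 -in "$crt" -noout -checkend $(( days * 86400 )) >/dev/null; then
            echo "EXPIRING within ${days}d: $crt"
        fi
    done
}

# Example:
# warn_expiring /etc/kubernetes/xxx/pki 180
```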
On newer versions you can inspect the certificates directly:
```shell
$ kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 15, 2024 03:41 UTC   93d             ca                      no
apiserver                  Feb 15, 2024 03:41 UTC   93d             ca                      no
apiserver-etcd-client      Feb 15, 2024 03:41 UTC   93d             etcd-ca                 no
apiserver-kubelet-client   Feb 15, 2024 03:41 UTC   93d             ca                      no
controller-manager.conf    Feb 15, 2024 03:41 UTC   93d             ca                      no
etcd-healthcheck-client    Feb 15, 2024 03:41 UTC   93d             etcd-ca                 no
etcd-peer                  Feb 15, 2024 03:41 UTC   93d             etcd-ca                 no
etcd-server                Feb 15, 2024 03:41 UTC   93d             etcd-ca                 no
front-proxy-client         Feb 15, 2024 03:41 UTC   93d             front-proxy-ca          no
scheduler.conf             Feb 15, 2024 03:41 UTC   93d             ca                      no
CERTIFICATE AUTHORITY      EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                         Feb 12, 2033 03:41 UTC   9y              no
etcd-ca                    Feb 12, 2033 03:41 UTC   9y              no
front-proxy-ca             Feb 12, 2033 03:41 UTC   9y              no
```
Because it is ca.crt itself that is expiring, we cannot simply run `kubeadm certs renew all` to regenerate the certificates. Instead we follow: kubernetes.io/docs/tasks/...
Re-issuing ca.crt and front-proxy-ca.crt
- ca.crt, the cluster root certificate, signs: `pki/apiserver.crt`, `pki/apiserver-kubelet-client.crt`
- front-proxy-ca.crt, the CA for the extension apiserver path, used to validate the client CN; signs: `front-proxy-client.crt`
- etcd/ca.crt, the etcd root certificate, signs: `pki/etcd/server.crt`, `pki/etcd/peer.crt`, `pki/etcd/healthcheck-client.crt`, `pki/apiserver-etcd-client.crt`
Estimated risks:
- pods that mount service account Secrets must be restarted
- the kubeconfigs must be re-signed
0. Backup
The first and most important step is to back up the original certificates:
```shell
$ sudo cp -r /etc/kubernetes/ctrl-k8s/ /etc/kubernetes/ctrl-k8s-2023
```
1. Create the new certificates
Distribute the new CA certificates and private keys (for example ca.crt, ca.key, front-proxy-ca.crt and front-proxy-ca.key) to all control-plane nodes, under their Kubernetes certificates directory.
We regenerate ca.crt, front-proxy-ca.crt and the component certificates; see gencert.sh in the appendix.
```shell
$ mkdir -p /etc/kubernetes/sre-jda-prd-new/pki
$ mkdir -p /etc/kubernetes/sre-jda-prd-new/pki/etcd
# copy openssl.cnf
$ cp /etc/kubernetes/sre-jda-prd/pki/openssl.cnf /etc/kubernetes/sre-jda-prd-new/pki/openssl.cnf
$ cp /etc/kubernetes/sre-jda-prd/pki/etcd/openssl.cnf /etc/kubernetes/sre-jda-prd-new/pki/etcd/openssl.cnf
# copy the existing keys
$ cp /etc/kubernetes/sre-jda-prd/pki/sa.pub /etc/kubernetes/sre-jda-prd-new/pki/sa.pub
$ cp /etc/kubernetes/sre-jda-prd/pki/*.key /etc/kubernetes/sre-jda-prd-new/pki/
$ cp /etc/kubernetes/sre-jda-prd/pki/etcd/*.key /etc/kubernetes/sre-jda-prd-new/pki/etcd/
$ bash gencert.sh sre-jda-prd-new
# copy the certificates to the other masters
$ cd /etc/kubernetes/
$ tar -cvf sre-jda-prd-new.tar sre-jda-prd-new/
$ scp sre-jda-prd-new.tar 172.17.0.9:~/
$ scp sre-jda-prd-new.tar 172.17.0.10:~/
```
Sign admin.conf, controller-manager.conf and scheduler.conf, once against the new CA and once against the new+old bundle:
```shell
# sign against the new CA
$ bash genkubeconfig.sh sre-jda.k8s.cn-east-p1.internal 6443 sre-jda-prd-new
$ ls /etc/kubernetes/sre-jda-prd-new/
admin.conf  controller-manager.conf  pki  scheduler.conf
# sign against new+old
$ cp -r /etc/kubernetes/sre-jda-prd-new/ /etc/kubernetes/sre-jda-prd-newold
# append the old CA /etc/kubernetes/sre-jda-prd/pki/ca.crt
$ vi /etc/kubernetes/sre-jda-prd-newold/pki/ca.crt
$ bash genkubeconfig.sh sre-jda.k8s.cn-east-p1.internal 6443 sre-jda-prd-newold
```
2. Update kube-controller-manager with both the old and the new CA
Update kube-controller-manager's --root-ca-file flag so it points at a bundle containing both the old and the new CA, then restart kube-controller-manager.
Find kube-controller-manager's certificate directory:
```shell
$ cat /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml
  - hostPath:
      path: /etc/kubernetes/sre-jda-prd/pki
      type: DirectoryOrCreate
```
A ca.crt bundle is just several BEGIN/END certificate blocks concatenated. Append the newly generated ca.crt to each control-plane node's ca.crt:
```shell
$ cat /etc/kubernetes/sre-jda-prd-new/pki/ca.crt >> /etc/kubernetes/sre-jda-prd/pki/ca.crt
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
```
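To confirm the append worked, you can list every certificate in the bundle: `openssl crl2pkcs7` accepts a multi-certificate PEM file, and `pkcs7 -print_certs` then prints each subject and issuer. A sketch; the wrapper function name is ours:

```shell
# List subject and issuer of every certificate in a CA bundle, so you
# can confirm both the old and the new CA are present.
bundle_certs() {
    openssl crl2pkcs7 -nocrl -certfile "$1" | openssl pkcs7 -print_certs -noout
}

# Example:
# bundle_certs /etc/kubernetes/sre-jda-prd/pki/ca.crt
```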
Copy the new ca.crt/ca.key into place and edit /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml:
```shell
$ cp /etc/kubernetes/sre-jda-prd-new/pki/ca.crt /etc/kubernetes/sre-jda-prd/pki/newca.crt
$ cp /etc/kubernetes/sre-jda-prd-new/pki/ca.key /etc/kubernetes/sre-jda-prd/pki/newca.key
$ vi /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml
- --cluster-signing-cert-file=/etc/kubernetes/pki/newca.crt  # must be the new CA
- --cluster-signing-key-file=/etc/kubernetes/pki/newca.key   # must be the new CA
- --root-ca-file=/etc/kubernetes/pki/ca.crt                  # old and new CA bundle
```
Restart kube-controller-manager on each master:
```shell
# find the kube-controller-manager leader
$ kubectl get ep -n kube-system kube-controller-manager -oyaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"sre-jda-prd-master2_d5202c74-13c1-11ed-9081-fa163e91aaa9","leaseDurationSeconds":15,"acquireTime":"2023-10-23T13:53:19Z","renewTime":"2023-11-14T08:58:13Z","leaderTransitions":54}'
# restart kube-controller-manager, either by moving the manifest out and back:
# $ mv /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml .
# $ mv sre-jda-prd-kube-controller-manager.yaml /etc/kubernetes/manifests/
# or by restarting the container:
$ docker ps | grep kube-controller-manager
f53fb27201b1 90fd4a237264 "kube-controller-man..." 28 hours ago Up 28 hours k8s_kube-controller-manager_sre-jda-prd-kube-controller-manager-sre-jda-prd-master1_kube-system_d06de620dbc9009e168d91a186c8b865_1
deb11bc71277 harbor.cloud.netease.com/qzprod-k8s/pause-amd64:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_sre-jda-prd-kube-controller-manager-sre-jda-prd-master1_kube-system_d06de620dbc9009e168d91a186c8b865_1
$ docker restart f53fb27201b1
f53fb27201b1
# check the kube-controller-manager pods
$ kubectl get po -n kube-system -o wide | grep kube-controller-manager
```
3. Wait for the service account Secrets to be refreshed
Wait for the controller manager to rewrite the ca.crt in the service account Secrets so that it contains both the old and the new CA certificates; the Secret of every service account gets updated.
Newly started pods will trust both the old and the new CA. Already-running pods do not automatically pick up the new CA: only when a pod restarts (and remounts its Secret) does it see the updated ca.crt and start trusting the new CA. Pods started before the rotation keep trusting only the old certificate for as long as they run.
```shell
$ kubectl get secret default-token-4qbz5 -oyaml
apiVersion: v1
data:
  ca.crt: xxx
# verify the update
$ echo 'xxx' | base64 -d
```
4. (Skipped) Restart all pods in the cluster that use a serviceAccount
Restart all pods that use the in-cluster configuration (for example kube-proxy, CoreDNS, etc.) so that they pick up the updated certificate authority data from the Secret tied to their ServiceAccount.
We do this in step 9 instead.
```shell
$ kubectl get ds -n kube-system -oyaml |egrep "^    name:|      serviceAccount:" |grep -B1 serviceAccount
name: ingress-nginx
serviceAccount: nginx-ingress-serviceaccount
name: kube-proxy
serviceAccount: kube-proxy
name: localstorage-provisioner
serviceAccount: localstorage-provisioner
name: node-exporter
serviceAccount: node-exporter
$ kubectl get deploy -n kube-system -oyaml |egrep "^    name:|      serviceAccount:" |grep -B1 serviceAccount
name: cephfs-provisioner
serviceAccount: cephfs-provisioner
name: cloud-controller-manager
serviceAccount: cloud-controller-manager
name: coredns
serviceAccount: coredns
name: custom-metrics-apiserver
serviceAccount: custom-metrics-apiserver
name: kube-state-metrics
serviceAccount: kube-state-metrics
name: localstorage-webhook
serviceAccount: localstorage-webhook
name: metrics-server
serviceAccount: metrics-server
name: nfs-client-provisioner
serviceAccount: nfs-client-provisioner
name: rbd-provisioner
serviceAccount: rbd-provisioner
```
5. Update kube-apiserver with both the old and the new CA
Append both the old and the new CA to the files referenced by kube-apiserver's --client-ca-file and --kubelet-certificate-authority flags.
Check kube-apiserver's certificate flags:
```shell
$ cat /etc/kubernetes/manifests/sre-jda-prd-kube-apiserver.yaml
- --client-ca-file=/etc/kubernetes/pki/ca.crt  # already the old+new bundle since step 2
# - --kubelet-certificate-authority  # CA for verifying the kubelet serving cert; not configured in this cluster
# only --client-ca-file is updated here; the rest are handled in step 9
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --service-account-key-file=/etc/kubernetes/pki/sa.pub  # no update needed
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
```
Restart each apiserver:
```shell
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-apiserver.yaml .
$ mv sre-jda-prd-kube-apiserver.yaml /etc/kubernetes/manifests/
$ kubectl get po -n kube-system |grep kube-apiserver
sre-jda-prd-kube-apiserver-sre-jda-prd-master1   1/1   Running   1   6m31s
sre-jda-prd-kube-apiserver-sre-jda-prd-master2   1/1   Running   1   23s
sre-jda-prd-kube-apiserver-sre-jda-prd-master3   1/1   Running   1   72s
```
6. Update kube-scheduler with both the old and the new CA
Append both the old and the new CA to the file referenced by kube-scheduler's --client-ca-file flag.
Check kube-scheduler's certificate flags:
```shell
$ cat /etc/kubernetes/manifests/sre-jda-prd-kube-scheduler.yaml
# this cluster has no --client-ca-file configured; if yours does, point it at the old+new CA bundle as well
```
7. Update user certificates with both the old and the new CA (paused here so that users can switch to the new certificates)
Update the user account certificates by replacing the contents of client-certificate-data and client-key-data. Also update the certificate-authority-data field in the kubeconfig files so that it contains the Base64-encoded old and new certificate authority data.
Testing showed that at this point the cluster accepts requests to kube-apiserver made with the old certificate, the new certificate, or the new+old bundle. From here on users can operate the cluster with the new certificates.
```shell
# the official docs say to update the kubeconfigs with both the old and the new certificate data
$ ls /etc/kubernetes/sre-jda-prd/*.conf
/etc/kubernetes/sre-jda-prd/admin.conf  /etc/kubernetes/sre-jda-prd/controller-manager.conf  /etc/kubernetes/sre-jda-prd/scheduler.conf
$ cp /etc/kubernetes/sre-jda-prd-newold/admin.conf /etc/kubernetes/sre-jda-prd/admin.conf
$ cp /etc/kubernetes/sre-jda-prd-newold/controller-manager.conf /etc/kubernetes/sre-jda-prd/controller-manager.conf
$ cp /etc/kubernetes/sre-jda-prd-newold/scheduler.conf /etc/kubernetes/sre-jda-prd/scheduler.conf
```
User certificates
Suppose we want to create a user named tom:
1. First create a private key for the user: `umask 077; openssl genrsa -out tom.key 2048`
2. Use this key to create a CSR (certificate signing request) whose subject carries the user information (CN is the user name, O the group): `openssl req -new -key tom.key -out tom.csr -subj "/CN=tom/O=UESRADMIN"`. The /O component may appear more than once, i.e. a user can belong to several groups. CN maps to a User in a binding, O maps to a Group.
3. Locate the cluster's (API server's) CA files. Their location depends on how the cluster was installed, but is usually `/etc/kubernetes/pki/`, containing the CA certificate (ca.crt) and the CA private key (ca.key).
4. Use the cluster CA and the CSR to issue the user's certificate. -CA and -CAkey point at the cluster CA files, and -days sets the validity, here 365 days: `openssl x509 -req -in tom.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out tom.crt -days 365`
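The four steps above, end to end, look like this. This is a self-contained sketch run against a throwaway demo CA in a temp directory; against a real cluster you would point -CA/-CAkey at the files under /etc/kubernetes/pki/ as shown above:

```shell
set -e
cd "$(mktemp -d)"

# demo CA standing in for the cluster's ca.crt/ca.key
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 1 -out ca.crt

# 1. private key for the user
umask 077; openssl genrsa -out tom.key 2048
# 2. CSR carrying CN (user name) and O (group) in the subject
openssl req -new -key tom.key -out tom.csr -subj "/CN=tom/O=UESRADMIN"
# 3+4. sign the CSR with the CA
openssl x509 -req -in tom.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tom.crt -days 365

# confirm the subject carries the expected user and group
openssl x509 -in tom.crt -noout -subject
```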
8. (Skipped) Update the cloud controller certificates
9. Restart all related pods
- Restart all other aggregated API servers and webhook handlers so that they trust the new CA certificate.
- On every node, update the file referenced by clientCAFile in the kubelet configuration and the certificate-authority-data in kubelet.conf, then restart the kubelet so it uses both the old and the new CA certificates.
- Restart the API servers with certificates signed by the new CA (apiserver.crt, apiserver-kubelet-client.crt and front-proxy-client.crt). You can keep the existing private keys or switch to new ones; if you change a key, put the updated key into the Kubernetes certificates directory as well.
9.1. Restart aggregated apiservers and webhooks
9.2. Restart the kubelet on every node with both the old and the new CA
```shell
$ vi /var/lib/kubelet/config.yaml
clientCAFile: /etc/kubernetes/pki/ca.crt  # old and new CA bundle
rotateCertificates: true                  # make sure certificate rotation is enabled
$ cp /etc/kubernetes/sre-jda-prd/pki/ca.crt /etc/kubernetes/pki/ca.crt
$ kubectl get secret default-token-4qbz5 -oyaml
apiVersion: v1
data:
  ca.crt: xxx  # copy this into certificate-authority-data
$ vi /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx      # update this
    certificate-authority-data-old: yyy  # the old value can be kept here
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem  # refreshed automatically once rotation is enabled
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
$ ls -l /var/lib/kubelet/pki/kubelet-client-current.pem
lrwxrwxrwx 1 root root 59 Aug  5  2020 /var/lib/kubelet/pki/kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2020-08-05-02-25-32.pem
# then restart the kubelet
$ systemctl restart kubelet.service
```
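After the kubelet restarts, the rotated client certificate should eventually be re-issued by the new CA. You can confirm this by printing the issuer of the current client certificate. A sketch; the helper name is ours:

```shell
# Print the issuer of a certificate, e.g. to confirm the kubelet's
# rotated client cert was signed by the new CA (helper name is ours).
cert_issuer() {
    openssl x509 -in "$1" -noout -issuer
}

# Example:
# cert_issuer /var/lib/kubelet/pki/kubelet-client-current.pem
```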
9.3. Update the apiserver certificates to ones signed by the new CA and restart
Copy the new certificates and restart the apiserver (pods in the cluster trust both the old and the new CA, so clients inside pods only experience a brief connection drop):
```shell
$ cp /etc/kubernetes/sre-jda-prd-new/pki/front-proxy-ca.crt /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/front-proxy-ca.key /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/front-proxy-client.crt /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/front-proxy-client.key /etc/kubernetes/sre-jda-prd/pki
# apiserver certificates
$ cp /etc/kubernetes/sre-jda-prd-new/pki/apiserver-kubelet-client.crt /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/apiserver-kubelet-client.key /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/apiserver.crt /etc/kubernetes/sre-jda-prd/pki
$ cp /etc/kubernetes/sre-jda-prd-new/pki/apiserver.key /etc/kubernetes/sre-jda-prd/pki
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-apiserver.yaml .
$ mv sre-jda-prd-kube-apiserver.yaml /etc/kubernetes/manifests/
```
Note that some apiservices need their CA data updated by hand:
```shell
$ kubectl get apiservices |grep -v Local
NAME                            SERVICE                                AVAILABLE   AGE
v1beta1.custom.metrics.k8s.io   kube-system/custom-metrics-apiserver   True        3y
v1beta1.metrics.k8s.io          kube-system/metrics-server             True        3y
# check whether the apiservice's secrets contain CA material
$ kubectl get deploy/metrics-server -n kube-system -oyaml
      - name: kubeconfig
        secret:
          defaultMode: 256
          secretName: metrics-kubeconfig       # kubernetes-admin.conf
      - name: ssl-dir
        secret:
          defaultMode: 256
          secretName: metrics-server-secrets   # front-proxy-ca.crt
$ cat /etc/kubernetes/sre-jda-prd/pki/front-proxy-ca.crt |base64 -w0
$ kubectl edit secret -n kube-system metrics-server-secrets
apiVersion: v1
data:
  ca: xxx  # update this
$ cat /etc/kubernetes/sre-jda-prd-new/admin.conf |base64 -w0
$ kubectl edit secret -n kube-system metrics-server-secrets
# then restart the metrics-server pod
```
Restart kube-scheduler so that it uses and trusts the new CA certificate.
Make sure there are no TLS errors in the logs.
9.4. Restart all pods; StatefulSets that use a serviceAccount must be restarted as well
```shell
for namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do
    for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
        kubectl patch deployment -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
    done
    for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
        kubectl patch daemonset -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
    done
    # statefulsets must be rolled too, per the note above
    for name in $(kubectl get statefulset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
        kubectl patch statefulset -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
    done
done
```
10. Update cm/cluster-info
If your cluster uses bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace so that it contains the new CA certificate.
```shell
base64_encoded_ca="$(base64 -w0 /etc/kubernetes/sre-jda-prd/pki/ca.crt)"
kubectl get cm/cluster-info --namespace kube-public -o yaml | \
    /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \
    kubectl apply -f -
```
11. Verify
Check the control-plane component logs for TLS- and x509-related errors.
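One way to do the log check mechanically is to grep captured component logs for common certificate failure signatures. A sketch; the function name is ours, and on this cluster you would feed it the output of `docker logs` or `kubectl logs` for each control-plane container:

```shell
# Scan a log file (or stdin, when called with no argument) for common
# certificate failure signatures.
scan_tls_errors() {
    grep -E 'x509|certificate has expired|certificate signed by unknown authority|tls: bad certificate' "$@" \
        || echo "no TLS/x509 errors found"
}

# Example:
# docker logs f53fb27201b1 2>&1 | scan_tls_errors
```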
12. Remove the old certificates
12.1. Update /etc/kubernetes/sre-jda-prd/pki/ca.crt
Drop the old certificate from the bundle, then restart the controller manager as in step 2 so it rewrites the service account tokens:
```shell
$ cp /etc/kubernetes/sre-jda-prd-new/pki/ca.crt /etc/kubernetes/sre-jda-prd/pki/ca.crt
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml .
$ mv sre-jda-prd-kube-controller-manager.yaml /etc/kubernetes/manifests/
```
12.2. Remove the old CA data from the kubeconfig files and from the files referenced by the --client-ca-file and --root-ca-file flags, then restart the control-plane components
```shell
$ cp /etc/kubernetes/sre-jda-prd-new/pki/ca.crt /etc/kubernetes/sre-jda-prd/pki/ca.crt
# these should be replaced as well
$ cp /etc/kubernetes/sre-jda-prd-new/admin.conf /etc/kubernetes/sre-jda-prd/admin.conf
$ cp /etc/kubernetes/sre-jda-prd-new/controller-manager.conf /etc/kubernetes/sre-jda-prd/controller-manager.conf
$ cp /etc/kubernetes/sre-jda-prd-new/scheduler.conf /etc/kubernetes/sre-jda-prd/scheduler.conf
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-controller-manager.yaml .
$ mv sre-jda-prd-kube-controller-manager.yaml /etc/kubernetes/manifests/
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-scheduler.yaml .
$ mv sre-jda-prd-kube-scheduler.yaml /etc/kubernetes/manifests/
$ mv /etc/kubernetes/manifests/sre-jda-prd-kube-apiserver.yaml .
$ mv sre-jda-prd-kube-apiserver.yaml /etc/kubernetes/manifests/
```
12.3. On every node, remove the old CA data from the file referenced by the clientCAFile flag and from the kubelet kubeconfig file, then restart the kubelet.
```shell
$ vi /etc/kubernetes/pki/ca.crt
# remove the old certificate
$ vi /etc/kubernetes/kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx  # change to the new value
$ systemctl restart kubelet.service
```
Appendix: gencert.sh
```shell
#!/bin/bash
set -e
cluster=${1}
days=5000
cd /etc/kubernetes/${cluster}/pki
# ca
[ -f ca.key ] || openssl genrsa -out ca.key 2048
[ -f ca.crt ] || openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days ${days} -out ca.crt
# apiserver
[ -f apiserver.key ] || openssl genrsa -out apiserver.key 2048
[ -f apiserver.csr ] || openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr -config openssl.cnf
[ -f apiserver.crt ] || openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt -days ${days} -extensions v3_req -extfile openssl.cnf
# apiserver-kubelet-client
cat > openssl-client.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
[ -f apiserver-kubelet-client.key ] || openssl genrsa -out apiserver-kubelet-client.key 2048
[ -f apiserver-kubelet-client.csr ] || openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
[ -f apiserver-kubelet-client.crt ] || openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver-kubelet-client.crt -days ${days} -extensions v3_req -extfile openssl-client.cnf
# front-proxy-ca
[ -f front-proxy-ca.key ] || openssl genrsa -out front-proxy-ca.key 2048
[ -f front-proxy-ca.crt ] || openssl req -x509 -new -nodes -key front-proxy-ca.key -subj "/CN=kubernetes-front-proxy-ca" -days ${days} -out front-proxy-ca.crt
# front-proxy-client
[ -f front-proxy-client.key ] || openssl genrsa -out front-proxy-client.key 2048
[ -f front-proxy-client.csr ] || openssl req -new -key front-proxy-client.key -out front-proxy-client.csr -subj "/CN=front-proxy-client"
[ -f front-proxy-client.crt ] || openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out front-proxy-client.crt -days ${days} -extensions v3_req -extfile openssl-client.cnf
# sa
[ -f sa.key ] || openssl genrsa -out sa.key 2048
[ -f sa.pub ] || openssl rsa -in sa.key -pubout -out sa.pub
# etcd
cd etcd
# etcd ca
[ -f ca.key ] || openssl genrsa -out ca.key 2048
[ -f ca.crt ] || openssl req -x509 -new -nodes -key ca.key -subj "/CN=etcd-ca" -days ${days} -out ca.crt
# etcd server
[ -f server.key ] || openssl genrsa -out server.key 2048
[ -f server.csr ] || openssl req -new -key server.key -subj "/CN=kube-etcd" -out server.csr -config openssl.cnf
[ -f server.crt ] || openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days ${days} -extensions v3_req -extfile openssl.cnf
# etcd peer
[ -f peer.key ] || openssl genrsa -out peer.key 2048
[ -f peer.csr ] || openssl req -new -key peer.key -subj "/CN=kube-etcd-peer" -out peer.csr -config openssl.cnf
[ -f peer.crt ] || openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out peer.crt -days ${days} -extensions v3_req -extfile openssl.cnf
# etcd healthcheck-client
[ -f healthcheck-client.key ] || openssl genrsa -out healthcheck-client.key 2048
[ -f healthcheck-client.csr ] || openssl req -new -key healthcheck-client.key -out healthcheck-client.csr -subj "/O=system:masters/CN=kube-etcd-healthcheck-client"
[ -f healthcheck-client.crt ] || openssl x509 -req -in healthcheck-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out healthcheck-client.crt -days ${days} -extensions v3_req -extfile ../openssl-client.cnf
# kube-apiserver-etcd-client
cd ..
[ -f apiserver-etcd-client.key ] || openssl genrsa -out apiserver-etcd-client.key 2048
[ -f apiserver-etcd-client.csr ] || openssl req -new -key apiserver-etcd-client.key -out apiserver-etcd-client.csr -subj "/O=system:masters/CN=kube-apiserver-etcd-client"
[ -f apiserver-etcd-client.crt ] || openssl x509 -req -in apiserver-etcd-client.csr -CA etcd/ca.crt -CAkey etcd/ca.key -CAcreateserial -out apiserver-etcd-client.crt -days ${days} -extensions v3_req -extfile openssl-client.cnf
# clean csr files
rm -f /etc/kubernetes/${cluster}/pki/*.csr
rm -f /etc/kubernetes/${cluster}/pki/etcd/*.csr
echo "done"
```