Deploying Kubernetes 1.29.2 from Binaries on Ubuntu

Environment Notes

This setup is not highly available: a single master plus one node.

  • Freshly installed Ubuntu 22.04
  • Kubernetes 1.29.2
  • When following this document, batch-replace the IP 192.168.44.141 with your own; everything else can be copied and pasted as-is.
  • A fixed DNS server must be configured manually (I use 8.8.8.8 here), otherwise CoreDNS will not start; see the sketch below this list.
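
One way to set it (a minimal sketch of mine, not a step from the original; it assumes a static /etc/resolv.conf is acceptable in place of the systemd-resolved stub at 127.0.0.53):

rm -f /etc/resolv.conf
cat > /etc/resolv.conf << "eof"
nameserver 8.8.8.8
eof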

1 Base Environment Preparation

1.1 Switch the apt mirror

cat > /etc/apt/sources.list << "eof"
deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
eof
apt-get update
apt-get -y install lrzsz net-tools gcc

apt install ipvsadm ipset sysstat conntrack -y

1.2 Disable swap

swapoff -a

# Disable swap permanently by commenting out the swap entry in /etc/fstab
vim /etc/fstab
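# A non-interactive alternative (my addition, not in the original): comment out every swap entry
sed -ri 's/.*swap.*/#&/' /etc/fstab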

1.3 Other configuration items

# Disable the firewall and allow forwarded traffic
iptables -P FORWARD ACCEPT
/etc/init.d/ufw stop
ufw disable
# Pass bridged IPv4 traffic to the iptables chains
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
# Load the IPVS kernel modules
mkdir -p  /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Adjust Kubernetes networking sysctls
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward         = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply sysctl params without reboot
sysctl --system

1.4 Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

2 CA Certificate Preparation

Note: all of the following commands are executed in /data/k8s-work.

2.1 Install the cfssl tools

mkdir -p /data/k8s-work     # central location for the deployment files
cd /data/k8s-work
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

2.2 Generate the CA files

cat > ca-csr.json <<"EOF"
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "Jcrose",
      "OU": "CN"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
EOF
2.2.1 Create the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
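
To sanity-check the new CA (an optional verification of mine, not part of the original steps), inspect its subject and validity period:

openssl x509 -in ca.pem -noout -subject -dates
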
2.2.2 Configure the CA signing policy
cfssl print-defaults config > ca-config.json

cat > ca-config.json <<"EOF"
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}
EOF

2.3 Run a single-node etcd

2.3.1 Generate the etcd certificates

Create etcd-csr.json

cat > etcd-csr.json <<"EOF"
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.44.136"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Sichuan",
    "L": "Chengdu",
    "O": "Jcrose",
    "OU": "CN"
  }]
}
EOF

Generate the etcd certificate and key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
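
Optionally (an added check, not from the original), confirm that the expected addresses made it into the certificate's subject alternative names:

openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
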
2.3.2 Deploy etcd

Download the binaries

wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
tar -xvf etcd-v3.5.2-linux-amd64.tar.gz
cp -p etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/
mkdir /etc/etcd

mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
cd /data/k8s-work
cp ca*.pem /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl

Create the configuration file

cat > /etc/etcd/etcd.conf << "eof"
#[Member]
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://127.0.0.1:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379"
eof

Create the systemd unit

cat > /etc/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --advertise-client-urls=https://127.0.0.1:2379 \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd

Check the service status

/usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://127.0.0.1:2379 endpoint health



root@jcrose:/data/k8s-work# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://127.0.0.1:2379 endpoint health
+------------------------+--------+------------+-------+
|        ENDPOINT        | HEALTH |    TOOK    | ERROR |
+------------------------+--------+------------+-------+
| https://127.0.0.1:2379 |   true | 4.006819ms |       |
+------------------------+--------+------------+-------+

3 Prepare the Kubernetes Components

cd /data/k8s-work
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
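
The command above assumes the server tarball is already in /data/k8s-work. If it is not, it can be fetched first; the URL below follows the upstream release layout and is my assumption rather than a step from the original:

wget https://dl.k8s.io/v1.29.2/kubernetes-server-linux-amd64.tar.gz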

3.1 kube-apiserver

3.1.1 Certificate files
mkdir -p /etc/kubernetes/        
mkdir -p /etc/kubernetes/ssl     
mkdir -p /var/log/kubernetes 

cd /data/k8s-work

cat > kube-apiserver-csr.json << "EOF"
{
"CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.44.141",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "Jcrose",
      "OU": "CN"
    }
  ]
}
EOF



cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Generate the bootstrap token file
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

3.1.2 Copy the certificate files
cp ca.pem ca-key.pem kube-apiserver.pem kube-apiserver-key.pem token.csv /etc/kubernetes/ssl/
3.1.3 Enable API aggregation
cat > kube-metrics-csr.json << "eof"
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "system:masters",
      "OU": "CN"
    }
  ]
}
eof

cfssl gencert   -profile=kubernetes   -ca=/etc/kubernetes/ssl/ca.pem   -ca-key=/etc/kubernetes/ssl/ca-key.pem   kube-metrics-csr.json | cfssljson -bare kube-metrics
cp kube-metrics*.pem /etc/kubernetes/ssl/

cat > kube-metrics-csr.json << "eof"
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "system:masters",
      "OU": "CN"
    }
  ]
}
eof

cfssl gencert   -profile=kubernetes   -ca=/etc/kubernetes/ssl/ca.pem   -ca-key=/etc/kubernetes/ssl/ca-key.pem   kube-metrics-csr.json | cfssljson -bare kube-metrics
cp kube-metrics*.pem /etc/kubernetes/ssl/
3.1.4 Create the kube-apiserver service configuration file
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--anonymous-auth=true \
  --v=2 \
  --allow-privileged=true \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --advertise-address=192.168.44.141 \
  --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \
  --service-node-port-range=80-32767 \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --proxy-client-cert-file=/etc/kubernetes/ssl/kube-metrics.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/kube-metrics-key.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/ssl/token.csv \
  --requestheader-allowed-names=aggregator  \
  --requestheader-group-headers=X-Remote-Group  \
  --requestheader-extra-headers-prefix=X-Remote-Extra-  \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true"
EOF  

3.1.5 Create the kube-apiserver systemd unit
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
3.1.6 Start kube-apiserver
cp ca*.pem /etc/kubernetes/ssl/
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/

systemctl daemon-reload
systemctl enable --now kube-apiserver

systemctl status kube-apiserver

# Test
curl --insecure https://192.168.44.141:6443/

3.2 kubectl

3.2.1 Certificate files
cat > admin-csr.json << "EOF"
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}
EOF



cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /etc/kubernetes/ssl/
3.2.2 Create the kubeconfig and role binding
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.44.141:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config

mkdir ~/.kube
cp kube.config ~/.kube/config

# Prepare the kubectl configuration file and bind the role


kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
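
kubectl should now be able to reach the API server; a quick smoke test (an added check, not in the original):

kubectl cluster-info
kubectl get ns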

3.3 kube-controller-manager

3.3.1 Certificate files
cat > kube-controller-manager-csr.json << "EOF"
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.44.141"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Sichuan",
        "L": "Chengdu",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
EOF
3.3.2 Generate the certificate and kubeconfig
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.44.141:6443 --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
3.3.3 Create the kube-controller-manager configuration file
cat > kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \
     --bind-address=127.0.0.1 \
     --root-ca-file=/etc/kubernetes/ssl/ca.pem \
     --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
     --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
     --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
     --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
     --leader-elect=true \
     --use-service-account-credentials=true \
     --node-monitor-grace-period=40s \
     --node-monitor-period=5s \
     --controllers=*,bootstrapsigner,tokencleaner \
     --allocate-node-cidrs=true \
     --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \
     --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \
     --node-cidr-mask-size-ipv4=24 \
     --node-cidr-mask-size-ipv6=120" 
EOF
3.3.4 Create the systemd unit
cat > kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
3.3.5 Copy the configuration files
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
3.3.6 Start the service
systemctl daemon-reload 
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

3.4 kube-scheduler

3.4.1 Certificate files
cat > kube-scheduler-csr.json << "EOF"
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.44.141"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Sichuan",
        "L": "Chengdu",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}
EOF



cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.44.141:6443 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
3.4.2 Create the service configuration file
cat > kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS="--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig --leader-elect=true --v=2"
EOF
3.4.3 Create the systemd unit
cat > kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
3.4.4 Copy the configuration files
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/
3.4.5 Start the service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
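
With the controller manager and scheduler both running, their health can also be checked through the API server (the componentstatuses API is deprecated but still served as of 1.29; this check is an addition of mine):

kubectl get componentstatuses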

3.5 Containerd

3.5.1 Download containerd and related packages
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz

# Create the directories needed by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 
# Extract the CNI plugin binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# Extract the containerd bundle to /
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Create the containerd systemd unit
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
3.5.2 Configure the kernel modules required by containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
 
# Load the modules
systemctl restart systemd-modules-load.service
3.5.3 Create the containerd configuration file
# Generate the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Adjust the containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d

mkdir /etc/containerd/certs.d/docker.io -pv

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerpull.cn"]
  capabilities = ["pull", "resolve"]
EOF
3.5.4 Enable the service at boot
systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd
3.5.5 Point the crictl client at the runtime socket
tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
# Generate the crictl configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart  containerd
crictl info
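
As an extra check that the registry mirror configuration works (an addition of mine), pull a small test image through containerd:

crictl pull docker.io/library/busybox:latest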

3.6 Kubelet

The kubelet is the node agent through which the kube-apiserver interacts with each node.

3.6.1 Create the configuration files

Create kubelet-bootstrap.kubeconfig

BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/ssl/token.csv)

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/ssl/ca.pem     \
--embed-certs=true     --server=https://192.168.44.141:6443     \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=kubelet-bootstrap.kubeconfig


cp kubelet-bootstrap.kubeconfig /etc/kubernetes
3.6.2 Create the kubelet.json file
cat > kubelet.json << "EOF"
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.44.141",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",                    
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF

# Sync the files to the cluster nodes
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
3.6.3 Create the kubelet systemd unit
cat > kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
    --kubeconfig=/root/.kube/config \
    --config=/etc/kubernetes/kubelet.json \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \
    --node-labels=node.kubernetes.io/node=
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
3.6.4 Copy the configuration files
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/
3.6.5 Start the service
mkdir -p /var/lib/kubelet
systemctl daemon-reload
systemctl enable --now kubelet

systemctl status kubelet
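
Depending on how bootstrap RBAC is set up in your environment, the kubelet's certificate signing request may stay Pending until approved manually (this check is an addition of mine; the CSR name is a placeholder):

kubectl get csr
kubectl certificate approve <csr-name>    # only if a request is shown as Pending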

3.7 kube-proxy

3.7.1 Create the kube-proxy certificate request file
cat > kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "Jcrose",
      "OU": "CN"
    }
  ]
}
EOF
3.7.2 Generate the certificate and kubeconfig
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.44.141:6443 --kubeconfig=kube-proxy.kubeconfig


kubectl config set-credentials kube-proxy  \
  --client-certificate=kube-proxy.pem     \
  --client-key=kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes  --kubeconfig=kube-proxy.kubeconfig
3.7.3 Create the configuration file
cat > kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00:2222::/112
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
3.7.4 Create the systemd unit
cat >  kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
	--config=/etc/kubernetes/kube-proxy.yaml \
	--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
3.7.5 Copy the configuration files
cp kube-proxy*.pem /etc/kubernetes/ssl/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/
mkdir -p /var/lib/kube-proxy
3.7.6 Start the service
systemctl daemon-reload
systemctl enable --now kube-proxy

systemctl status kube-proxy
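
Because kube-proxy runs in IPVS mode here, the virtual servers it programs can be listed as a quick sanity check (an added step, not in the original):

ipvsadm -Ln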

4 Install In-Cluster Services

4.1 Install kubens and tab completion

Tab completion

apt install bash-completion
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc

Install jq and yq

wget https://mirror.ghproxy.com/https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq &&    chmod +x /usr/bin/yq
wget https://mirror.ghproxy.com/https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -O /usr/bin/jq && chmod +x /usr/bin/jq

kubens

wget https://mirror.ghproxy.com/https://github.com/ahmetb/kubectx/releases/download/v0.9.5/kubens
chmod +x kubens && mv kubens /usr/local/sbin

4.2 Deploy the Calico network plugin

wget  https://mirror.ghproxy.com/https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
# Keep the pod CIDR consistent with the --cluster-cidr used by kube-controller-manager and kube-proxy
sed -i 's#192.168.0.0/16#172.16.0.0/12#' calico.yaml
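
The original stops after editing the manifest; applying it is the implied next step (the commands below are my reading of that step):

kubectl apply -f calico.yaml
kubectl -n kube-system get pods -w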

5 Join the Node to the Cluster

Operations on the master node

Add the node to /etc/hosts

192.168.44.142 k8s-master02

Configure passwordless SSH login

ssh-keygen		# just press Enter at every prompt
ssh-copy-id root@192.168.44.142

5.1 Node operations: Containerd

First repeat the steps from Part 1 (base environment preparation) on the node.

5.1.1 Download containerd and related packages
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz

# Create the directories needed by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 
# Extract the CNI plugin binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# Extract the containerd bundle to /
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Create the containerd systemd unit
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
5.1.2 Configure the kernel modules required by containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
 
# Load the modules
systemctl restart systemd-modules-load.service
5.1.3 Create the containerd configuration file
# Generate the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Adjust the containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d

mkdir /etc/containerd/certs.d/docker.io -pv

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerpull.cn"]
  capabilities = ["pull", "resolve"]
EOF
5.1.4 Enable the service at boot
systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd
5.1.5 Point the crictl client at the runtime socket
tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/
# Generate the crictl configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart  containerd
crictl info

5.2 Master node operations: Kubelet

Run the following on k8s-master01.

Note: the address field in kubelet.json must be changed to the IP address of the target host.

for i in  k8s-master02 ;do scp kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done

for i in  k8s-master02 ;do scp ca.pem $i:/etc/kubernetes/ssl/;done

for i in  k8s-master02 ;do scp kubelet.service $i:/usr/lib/systemd/system/;done

mkdir -p /var/lib/kubelet
mkdir -p /var/log/kubernetes

systemctl daemon-reload
systemctl enable --now kubelet

systemctl status kubelet

5.3 Kube-proxy

for i in k8s-master02 ;do scp kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
for i in k8s-master02 ;do scp  kube-proxy.service $i:/usr/lib/systemd/system/;done

mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable --now kube-proxy

systemctl status kube-proxy

6 Verify the Cluster Node Status

root@master:~# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-slave   Ready    <none>   21h   v1.29.2
master      Ready    <none>   25h   v1.29.2	