Setting Up a Kubernetes Cluster on Ubuntu 22.04 for Development

Installation

Basic Docker configuration

bash
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://uy35zvn6.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
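
Optionally, verify that Docker picked up the systemd cgroup driver (it must match the kubelet's cgroup driver later):

bash
docker info -f '{{.CgroupDriver}}'
# should print: systemd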

Kernel network settings for k8s

bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# setting this here is better than modifying /etc/sysctl.conf
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
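
The modules-load.d file only takes effect at boot, so load the module now and confirm the settings are live:

bash
sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# both values should report 1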

Certificates and HTTPS transport tools

bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
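
apt-key is deprecated on Ubuntu 22.04 and prints a warning. If you want to avoid it, here is a sketch of the keyring-based alternative (the repository line added below would then need a signed-by option):

bash
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg
# the repo line then becomes:
# deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main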

Add the k8s apt repository (Aliyun mirror)

bash
sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

Install kubeadm, kubectl, and kubelet:

bash
sudo apt-get update
sudo apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00 
sudo apt-mark hold kubelet kubeadm kubectl
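
To confirm the pinned versions and the hold:

bash
kubeadm version -o short   # v1.28.2
kubelet --version          # Kubernetes v1.28.2
apt-mark showhold          # kubeadm, kubectl, kubelet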

Switch the default CRI to containerd

Reference: github.com/cncamp/101/...

Create the containerd config directory and generate the default config:

bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Edit /etc/containerd/config.toml:

toml
k8s.gcr.io/pause:3.x
# change to >>>
registry.aliyuncs.com/google_containers/pause:3.9

SystemdCgroup = false
# change to >>>
SystemdCgroup = true
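
If you prefer to script the two edits, here is a sed sketch that assumes the default layout generated above:

bash
sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml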

Apply the new configuration: disable Docker outright and enable containerd:

bash
sudo systemctl daemon-reload
sudo systemctl disable docker
sudo systemctl stop docker
sudo systemctl enable containerd
sudo systemctl restart containerd
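
To check that containerd is responding, and to point crictl (installed with the kubeadm packages as cri-tools) at the containerd socket:

bash
sudo ctr version
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
sudo crictl info | head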

Initialize the cluster with kubeadm; be sure to change the advertise address to your own machine's IP:

bash
sudo kubeadm init \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.28.2 \
 --pod-network-cidr=192.168.0.0/16 \
 --apiserver-advertise-address=192.168.79.129
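
If downloads are slow, you can pre-pull the control-plane images before running init:

bash
sudo kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2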

Save this output somewhere; it will print a message like the following:

text
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.79.129:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
	--discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

So, as the current user, set up $HOME/.kube/config as instructed above, then list all the pods:

bash
kubectl get po --all-namespaces

If the pods appear, the control plane is up. Note that coredns stays Pending until a network plugin is installed, which is expected at this point:

text
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-5mlvq                   0/1     Pending   0          26m
kube-system   coredns-66f779496c-hgpr5                   0/1     Pending   0          26m
kube-system   etcd-matebook-x-pro                        1/1     Running   0          26m
kube-system   kube-apiserver-matebook-x-pro              1/1     Running   0          26m
kube-system   kube-controller-manager-matebook-x-pro     1/1     Running   0          26m
kube-system   kube-proxy-6jclh                           1/1     Running   0          26m
kube-system   kube-scheduler-matebook-x-pro              1/1     Running   0          26m

Remove the master node's taints

Run the following to remove the master's taints; otherwise regular pods cannot be scheduled onto the master node, which is the only node in this setup.

bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
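
To confirm the taints are gone:

bash
kubectl describe nodes | grep -i taints
# should show: Taints: <none>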

Install the Calico CNI plugin

See the official docs: docs.tigera.io/calico/late...

bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
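
The operator pulls several images, so this takes a while; you can watch the pods come up with:

bash
watch kubectl get pods -n calico-system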

Once everything is Running, kubectl get po --all-namespaces should show something like:

text
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5ffbc745df-95rvm          1/1     Running   0          3m22s
calico-apiserver   calico-apiserver-5ffbc745df-f4g88          1/1     Running   0          3m22s
calico-system      calico-kube-controllers-757d6c6d9d-9tnsk   1/1     Running   0          4m47s
calico-system      calico-node-cp4b9                          1/1     Running   0          4m47s
calico-system      calico-typha-648459974b-zqr7z              1/1     Running   0          4m47s
calico-system      csi-node-driver-f6qcp                      2/2     Running   0          4m47s
kube-system        coredns-66f779496c-pp6dc                   1/1     Running   0          12m
kube-system        coredns-66f779496c-rdc4m                   1/1     Running   0          12m
kube-system        etcd-matebook-x-pro                        1/1     Running   4          12m
kube-system        kube-apiserver-matebook-x-pro              1/1     Running   4          12m
kube-system        kube-controller-manager-matebook-x-pro     1/1     Running   4          12m
kube-system        kube-proxy-whwl6                           1/1     Running   0          12m
kube-system        kube-scheduler-matebook-x-pro              1/1     Running   4          12m
tigera-operator    tigera-operator-7f8cd97876-vgvvl           1/1     Running   0          4m57s

Check component status with kubectl get cs:

text
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok
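
As the warning says, ComponentStatus is deprecated; the modern equivalent is to query the API server's health endpoints directly:

bash
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez'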

Try deploying an nginx instance

Create a deployment.yaml:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Then run kubectl create -f deployment.yaml. Note that the image pull happens invisibly in the background; if pulling is slow, the pod can look stuck for quite a while. In that case, pull the image ahead of time with the command below so you can see the progress:

bash
# pull into the k8s.io namespace so the kubelet (which uses CRI) can see the image
sudo ctr -n k8s.io image pull docker.io/library/nginx:latest

On success, kubectl get po shows:

text
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c79c4bf97-8kjf6   1/1     Running   0          11s
nginx-deployment-7c79c4bf97-tl9mt   1/1     Running   0          11s

Then add a service.yaml to expose nginx through a NodePort; Kubernetes maps the service's port 80 to a node port in the 30000-32767 range:

yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
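
Without an explicit nodePort, Kubernetes assigns a random port from the 30000-32767 range (31153 in the output below). To get a stable port, pin it in the spec; 30080 here is just an example value:

yaml
  ports:
    - protocol: TCP
      port: 80
      nodePort: 30080   # example; must fall within 30000-32767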

Apply it with kubectl create -f service.yaml, then check the results (k is a shell alias for kubectl):

sh
alfred@matebook-x-pro:~/kube/kubernetes$ k get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        48m
my-nginx     NodePort    10.108.220.245   <none>        80:31153/TCP   4s
alfred@matebook-x-pro:~/kube/kubernetes$ k get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           7m31s
alfred@matebook-x-pro:~/kube/kubernetes$ k get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c79c4bf97-8kjf6   1/1     Running   0          7m34s
nginx-deployment-7c79c4bf97-tl9mt   1/1     Running   0          7m34s

Then open http://127.0.0.1:31153 (the node port) or http://10.108.220.245 (the service's cluster IP, reachable from the node itself); either way, the nginx welcome page should load.
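
A quick check from the shell:

bash
curl -sI http://127.0.0.1:31153 | head -n 1
# HTTP/1.1 200 OK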
