I. Preparing the k8s 1.26 environment
```bash
k-master 192.168.50.100
k-node1  192.168.50.101
k-node2  192.168.50.102
```
Overview:
- Installing docker-ce also pulls in containerd by default.
- containerd's config.toml needs a few changes.
- The crictl command only becomes available after the Kubernetes packages are installed.
- Install the Calico network plugin (v3.25): download its two manifest files, and change the CIDR in the custom-resources file to the pod network CIDR.
- Cluster initialization on the master looks like this:
```bash
kubeadm init --apiserver-advertise-address=192.168.50.100 --kubernetes-version=v1.26.0 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
```
1. Preliminary work (on all nodes)
1.1 Disable the firewall and SELinux
```bash
systemctl disable firewalld.service --now
setenforce 0                # takes effect immediately, until reboot
vim /etc/selinux/config     # set SELINUX=disabled to make it permanent
```
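The edit to /etc/selinux/config can be scripted instead of done in vim; a minimal sketch, assuming the stock `SELINUX=enforcing` line:

```shell
# Persistently disable SELinux by rewriting the SELINUX= line, then confirm.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```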
1.2 Turn off the swap partition
```bash
swapoff -a                  # immediate, but does not persist across reboots
```
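`swapoff -a` is lost on reboot; the swap entry in /etc/fstab also has to be commented out for the change to stick. A sketch (the pattern assumes an uncommented fstab line containing the word swap):

```shell
# Comment out every active swap entry so swap stays off after a reboot.
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
```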
1.3 Enable the required Linux kernel parameters
```bash
vim /etc/sysctl.d/k8s.conf
# contents of the file:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# apply the settings:
sysctl -p /etc/sysctl.d/k8s.conf
```
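The two bridge-nf-call sysctls only exist while the br_netfilter kernel module is loaded (and containerd also needs overlay), so the modules should be loaded now and registered to load at boot:

```shell
# Load the modules now and make them load on every boot.
cat > /etc/modules-load.d/k8s.conf <<'EOF'
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```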
2. Main work (on all nodes)
2.1 Configure the Docker and Kubernetes package repos
```bash
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
Save the following as /etc/yum.repos.d/kubernetes.repo:
```bash
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```
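Writing the repo stanza with a heredoc keeps the step copy-pasteable:

```shell
# Create the Kubernetes repo file in one shot (same contents as above).
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```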
2.2 Install docker-ce
```bash
yum -y install docker-ce
systemctl enable docker --now
```
2.3 Modify the config.toml file
```bash
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
# key changes inside the file:
SystemdCgroup = true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# configure the image-pull mirrors:
# docker.io and registry.k8s.io are hard to reach from mainland China, so mirrors are needed
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]  # pull registry.k8s.io images from the endpoint below
    endpoint = ["https://registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]        # same idea for docker.io images
    endpoint = ["https://your-own-mirror"]
systemctl restart containerd
systemctl enable containerd
```
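The two single-line edits can also be made with sed instead of vim; a sketch, assuming the defaults emitted by `containerd config default` (`SystemdCgroup = false` and a registry.k8s.io pause image):

```shell
# Flip the cgroup driver to systemd and swap in the Aliyun pause image.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
```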
2.4 Install kubelet, kubeadm, kubectl
```bash
yum -y install kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
# enable kubelet (it will restart in a loop until kubeadm init/join runs; that is expected)
systemctl enable kubelet --now
```
2.5 Point crictl at the container runtime
```bash
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
```
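The two `crictl config` calls simply write /etc/crictl.yaml, so the file can equally be created directly:

```shell
# Equivalent to the two crictl config commands above.
cat > /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
EOF
```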
3. Steps to run only on the master
3.1 Initialize Kubernetes
```bash
kubeadm init --apiserver-advertise-address=192.168.50.100 \
  --kubernetes-version=v1.26.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers
```
- --image-repository: registry to pull the control-plane images (etcd, kube-apiserver, and so on) from
- --apiserver-advertise-address: the master's address
- --pod-network-cidr: the pod network CIDR
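After a successful init, kubeadm prints instructions for giving kubectl access to the new cluster; they boil down to copying the admin kubeconfig:

```shell
# Install the admin kubeconfig so kubectl on the master can reach the cluster.
mkdir -p "$HOME/.kube"
cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```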
3.2 Join the worker nodes
- Run the join command on node1 and node2:
```bash
[root@k-node1 ~]# kubeadm join 192.168.50.100:6443 --token q6tybk.47n9q7zymfpxeufi --discovery-token-ca-cert-hash sha256:d949c3809ba2f36425000119f9e7c7e29f3715aebd568b91eb8af309a86de09a
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k-node2 containerd]# kubeadm join 192.168.50.100:6443 --token q6tybk.47n9q7zymfpxeufi --discovery-token-ca-cert-hash sha256:d949c3809ba2f36425000119f9e7c7e29f3715aebd568b91eb8af309a86de09a
```
- Because no network plugin is installed yet, pods on different nodes cannot communicate, so the nodes stay in NotReady state.
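If the join command is lost or its token has expired (the default token lifetime is 24 hours), a fresh one can be printed on the master with `kubeadm token create --print-join-command`. The `sha256:` value is just a hash of the cluster CA's public key and can also be recomputed by hand, using the openssl pipeline from the kubeadm docs:

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA cert.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```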
3.3 Install the Calico network plugin
- Download the Calico YAML manifests; the version installed here is v3.25.2
- Quickstart guide: https://archive-os-3-25.netlify.app/calico/3.25/getting-started/kubernetes/quickstart
- Two YAML files are needed
```bash
# apply the operator manifest as-is
kubectl create -f tigera-operator.yaml
# first change the IP range in custom-resources.yaml to the pod network CIDR, then apply it
kubectl create -f custom-resources.yaml
```
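The only change custom-resources.yaml needs is its `cidr:` field, which in the upstream v3.25 manifest defaults to 192.168.0.0/16; pointing it at the pod CIDR chosen at init time can be scripted (a sketch, assuming that default):

```shell
# Switch the Calico IP pool to the cluster's pod network CIDR.
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep 'cidr:' custom-resources.yaml
```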
```bash
[root@k-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-5948d966b5-c5x9j 1/1 Running 0 70m
calico-apiserver calico-apiserver-5948d966b5-q29qv 1/1 Running 0 70m
calico-system calico-kube-controllers-84dd694985-znfdz 1/1 Running 0 72m
calico-system calico-node-lf6f5 1/1 Running 0 72m
calico-system calico-node-rtbfq 1/1 Running 0 72m
calico-system calico-node-tcz85 1/1 Running 0 72m
calico-system calico-typha-665f4cfb48-4pzz5 1/1 Running 0 72m
calico-system calico-typha-665f4cfb48-q8jnw 1/1 Running 0 72m
calico-system csi-node-driver-b9fps 2/2 Running 0 72m
calico-system csi-node-driver-d4mr9 2/2 Running 0 72m
calico-system csi-node-driver-qzcwr 2/2 Running 0 72m
default centos8-demo 1/1 Running 0 40m
kube-system coredns-5bbd96d687-rsnp6 1/1 Running 0 95m
kube-system coredns-5bbd96d687-svq2d 1/1 Running 0 95m
kube-system etcd-k-master 1/1 Running 0 95m
kube-system kube-apiserver-k-master 1/1 Running 0 95m
kube-system kube-controller-manager-k-master 1/1 Running 0 95m
kube-system kube-proxy-fgct4 1/1 Running 0 93m
kube-system kube-proxy-lfsvb 1/1 Running 0 95m
kube-system kube-proxy-mk56p 1/1 Running 0 94m
kube-system kube-scheduler-k-master 1/1 Running 0 95m
tigera-operator tigera-operator-66654c8696-gxkmg 1/1 Running 0 75m
```
3.4 Check node status and create a pod to test the network
```bash
[root@k-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k-master Ready control-plane 69m v1.26.0
k-node1 Ready <none> 68m v1.26.0
k-node2 Ready <none> 67m v1.26.0
[root@k-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
centos8-demo 1/1 Running 0 14m
[root@k-master ~]# kubectl exec -ti centos8-demo -- /bin/bash
[root@centos8-demo /]# ping qq.com
PING qq.com (113.108.81.189) 56(84) bytes of data.
64 bytes from 113.108.81.189 (113.108.81.189): icmp_seq=1 ttl=127 time=44.10 ms
```
3.5 Enable kubectl auto-completion
```bash
# append to the shell profile (requires the bash-completion package), then open a new shell
echo 'source <(kubectl completion bash)' >> /root/.bashrc
```
II. Kubernetes components
1. api-server
- Receives external requests
- The single entry point to the cluster; all communication goes through it
2. The kubectl client
- Command-line tool
- The commands it runs are sent as requests to the api-server
3. The scheduler component
- Decides which node each pod runs on; handles resource scheduling
4. The etcd component
- Key-value database
- Stores data in a distributed fashion
- Holds all of the cluster's state
5. The controller-manager component
- Runs the various controllers
6. The kubelet component
- Runs on every node
- Responsible for creating pods and related operations
- Periodically collects pod status and reports it back to the api-server
- Performs health checks, and so on
7. The kube-proxy component
- Networking component
- Implements pod-to-pod communication and communication between pods and the outside world
- Implements Services (svc), which is what lets external traffic reach pods
III. Kubernetes resources
1. pod
- A pod can contain one container or several containers
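A minimal sketch of the multi-container case (pod name, container names, and images here are illustrative, not from the original notes); the two containers share one network namespace, so they can talk to each other over localhost:

```shell
# Define a pod with two containers and submit it to the cluster.
cat > multi-container-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-demo
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl apply -f multi-container-pod.yaml
```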