Deploying a K8s Cluster

Contents

I. Environment Setup

1. Prepare the environment

2. Install the master node

3. Install the node components on k8s-master

4. Install and configure the k8s-node1 node

5. Install the k8s-node2 node

6. Configure the flannel network on all nodes

7. Configure Docker to load a firewall rule that allows forwarding

II. Common k8s Resource Management

1. Create a pod

2. Pod management


I. Environment Setup

1. Prepare the environment

1) Host overview; CentOS 7.4 or 7.6 is recommended

| Hostname   | IP address    | Role         | Components                                                                   |
|------------|---------------|--------------|------------------------------------------------------------------------------|
| k8s-master | 192.168.2.116 | Master, Node | etcd, apiserver, controller-manager, scheduler, kube-proxy, docker, registry |
| k8s-node1  | 192.168.2.117 | Node         | kubelet, kube-proxy, docker                                                  |
| k8s-node2  | 192.168.2.118 | Node         | kubelet, kube-proxy, docker                                                  |

2) Set the master's hostname and configure the hosts file

    [root@centos01 ~]# hostnamectl set-hostname k8s-master

    [root@centos01 ~]# bash

    [root@k8s-master ~]# vim /etc/hosts

    192.168.2.116 k8s-master

    192.168.2.117 k8s-node1

    192.168.2.118 k8s-node2

3) Set node1's hostname and copy the hosts file

    [root@centos02 ~]# hostnamectl set-hostname k8s-node1

    [root@centos02 ~]# bash

    [root@k8s-node1 ~]# scp 192.168.2.116:/etc/hosts /etc/

4) Set node2's hostname and copy the hosts file
[root@centos03 ~]# hostnamectl set-hostname k8s-node2

[root@centos03 ~]# bash

[root@k8s-node2 ~]# scp 192.168.2.116:/etc/hosts /etc/
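All three hosts must carry the same name mappings. A minimal sketch that stages the block in /tmp for review before merging it into /etc/hosts (IPs taken from the table above):

```shell
# Stage the cluster name mappings in /tmp, review them, then append to /etc/hosts.
cat <<'EOF' > /tmp/hosts.k8s
192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2
EOF
grep -c 'k8s-' /tmp/hosts.k8s    # 3
```

Distributing one file with scp, as above, avoids per-host editing drift.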

2. Install the master node

1) Install and configure etcd
[root@k8s-master ~]# yum -y install etcd

[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak

[root@k8s-master ~]# vim /etc/etcd/etcd.conf

6 ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
21 ETCD_ADVERTISE_CLIENT_URLS=http://192.168.2.116:2379

[root@k8s-master ~]# systemctl start etcd

[root@k8s-master ~]#  systemctl enable etcd

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
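Before moving on, it is worth confirming that etcd answers on the advertised client URL. A verification sketch (run on the master; the endpoint comes from the config above, and `etcdctl` here is the v2 client shipped with the etcd package):

```shell
# Query etcd's health endpoint and the overall cluster health.
curl -s http://192.168.2.116:2379/health        # expect {"health":"true"}
etcdctl --endpoints http://192.168.2.116:2379 cluster-health
```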

2) Install the k8s-master packages
[root@k8s-master ~]# yum -y install kubernetes-master.x86_64

3) Configure the apiserver
[root@k8s-master ~]# vim /etc/kubernetes/apiserver

8 KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"		// listen address
12 KUBE_API_PORT="--port=8080"							// listen port
16 KUBELET_PORT="--kubelet-port=10250"					// kubelet port
19 KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.116:2379"	// etcd endpoint
24 KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

4) Configure the controller-manager and scheduler
[root@k8s-master ~]# vim /etc/kubernetes/config

22 KUBE_MASTER="--master=http://192.168.2.116:8080"

5) Start the k8s services
[root@k8s-master ~]#  systemctl start kube-apiserver.service

[root@k8s-master ~]# systemctl start kube-controller-manager.service

[root@k8s-master ~]# systemctl start kube-scheduler.service

[root@k8s-master ~]# systemctl enable kube-apiserver.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

[root@k8s-master ~]# systemctl enable kube-controller-manager.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

[root@k8s-master ~]# systemctl enable kube-scheduler.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

6) Verify that all components are healthy
[root@k8s-master ~]# kubectl get componentstatus 

NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  

3. Install the node components on k8s-master

1) Install the node packages
[root@k8s-master ~]# yum -y install kubernetes-node.x86_64

2) Configure the kubelet
[root@k8s-master ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.116"						// listen address
11 KUBELET_HOSTNAME="--hostname-override=k8s-master"				// node name
14 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"		// apiserver endpoint

3) Start the kubelet (starting it also starts the docker service)
[root@k8s-master ~]# systemctl start kubelet
[root@k8s-master ~]# systemctl enable kubelet

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

4) Start kube-proxy
[root@k8s-master ~]# systemctl start kube-proxy
[root@k8s-master ~]# systemctl enable kube-proxy

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

5) Check the node
[root@k8s-master ~]# kubectl get nodes

NAME         STATUS    AGE
k8s-master   Ready     51s

4. Install and configure the k8s-node1 node

1) Install the node packages
[root@k8s-node1 ~]# yum install kubernetes-node.x86_64

2) Point node1 at k8s-master
[root@k8s-node1 ~]# vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://192.168.2.116:8080"

3) Configure the kubelet
[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.117"
11 KUBELET_HOSTNAME="--hostname-override=k8s-node1"
15 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"

4) Start the services
[root@k8s-node1 yum.repos.d]# systemctl start kubelet
[root@k8s-node1 yum.repos.d]# systemctl start kube-proxy
[root@k8s-node1 yum.repos.d]# systemctl enable kubelet

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@k8s-node1 yum.repos.d]# systemctl enable kube-proxy

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

5) Check node status from the master
[root@k8s-master ~]# kubectl get nodes

NAME         STATUS    AGE
k8s-master   Ready     27m
k8s-node1    Ready     12m

5. Install the k8s-node2 node

1) Install the node packages
[root@k8s-node2 ~]# yum install kubernetes-node.x86_64

2) Point node2 at k8s-master
[root@k8s-node2 ~]# vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://192.168.2.116:8080"

3) Configure the kubelet
[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.118"
11 KUBELET_HOSTNAME="--hostname-override=k8s-node2"
15 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"

4) Start the services
[root@k8s-node2 yum.repos.d]# systemctl start kubelet
[root@k8s-node2 yum.repos.d]# systemctl start kube-proxy
[root@k8s-node2 yum.repos.d]# systemctl enable kube-proxy

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node2 yum.repos.d]# systemctl enable kubelet

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

5) Check node status from the master
[root@k8s-master ~]# kubectl get nodes

NAME         STATUS    AGE
k8s-master   Ready     31m
k8s-node1    Ready     16m
k8s-node2    Ready     33s

6. Configure the flannel network on all nodes

1) Install flannel on k8s-master
[root@k8s-master ~]# yum install flannel -y
[root@k8s-master ~]# vim /etc/sysconfig/flanneld
4 FLANNEL_ETCD_ENDPOINTS=http://192.168.2.116:2379

[root@k8s-master ~]# etcdctl set /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'	// define the overlay network range
{ "Network": "172.16.0.0/16" }

[root@k8s-master ~]# systemctl start flanneld
[root@k8s-master ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@k8s-master ~]# ifconfig

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.35.0  netmask 255.255.0.0  destination 172.16.35.0
        inet6 fe80::db5b:30ce:83c4:c67b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-master ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.16.35.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:02:36:e4:15  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
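Each host is leased a /24 out of the 172.16.0.0/16 configured above, which is why flannel0 shows 172.16.35.0 and docker0 gets 172.16.35.1 here. A quick offline sanity check that an observed subnet really falls inside the configured pool (any host with Python 3.7+):

```shell
# Verify that the per-host /24 lease is contained in the flannel /16 pool.
python3 - <<'EOF'
import ipaddress
pool = ipaddress.ip_network('172.16.0.0/16')
leased = ipaddress.ip_network('172.16.35.0/24')
print(leased.subnet_of(pool))  # True
EOF
```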

2) Configure flannel on node1
[root@k8s-node1 ~]# yum install flannel -y
[root@k8s-node1 ~]# vim /etc/sysconfig/flanneld

4 FLANNEL_ETCD_ENDPOINTS=http://192.168.2.116:2379

[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld

Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

3) Configure flannel on node2
[root@k8s-node2 ~]# yum install flannel -y

[root@k8s-node2 ~]# vim /etc/sysconfig/flanneld

4 FLANNEL_ETCD_ENDPOINTS=http://192.168.2.116:2379

[root@k8s-node2 ~]# systemctl start flanneld
[root@k8s-node2 ~]# systemctl enable flanneld

Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@k8s-node2 ~]# systemctl restart docker
[root@k8s-node2 ~]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.


[root@k8s-master ~]# kubectl get nodes

NAME         STATUS    AGE
k8s-master   Ready     47m
k8s-node1    Ready     32m
k8s-node2    Ready     16m

4) Test cross-host container communication
[root@k8s-master ~]# docker run -it busybox            // pull the image and start a container
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ... 
latest: Pulling from docker.io/library/busybox
3f4d90098f5b: Pull complete 
Digest: sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
Status: Downloaded newer image for docker.io/busybox:latest
/ # 
/ # ping 172.16.71.1                    // ping the docker0 gateway on another host
PING 172.16.71.1 (172.16.71.1): 56 data bytes
64 bytes from 172.16.71.1: seq=0 ttl=61 time=1.730 ms
64 bytes from 172.16.71.1: seq=1 ttl=61 time=0.443 ms
64 bytes from 172.16.71.1: seq=2 ttl=61 time=0.867 ms
^C
--- 172.16.71.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.443/1.013/1.730 ms
/ # ping 172.16.10.1                    // ping the docker0 gateway on another host
PING 172.16.10.1 (172.16.10.1): 56 data bytes
64 bytes from 172.16.10.1: seq=0 ttl=61 time=1.424 ms
64 bytes from 172.16.10.1: seq=1 ttl=61 time=0.485 ms
^C
--- 172.16.10.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.485/0.954/1.424 ms

7. Configure Docker to load a firewall rule that allows forwarding

1) Configure k8s-master
[root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT        # add this line

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

2) Configure k8s-node1
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service

18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT        # add this line

[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker

3) Configure k8s-node2
[root@k8s-node2 ~]# vim /usr/lib/systemd/system/docker.service

18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT            # add this line

[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart docker
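Editing the vendor unit file works, but the change is lost whenever the docker package is upgraded. An alternative sketch using a standard systemd drop-in (staged in /tmp here for review; the real destination is /etc/systemd/system/docker.service.d/):

```shell
# Stage a drop-in that re-opens the FORWARD chain after docker starts.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/10-forward.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
cat /tmp/docker.service.d/10-forward.conf
# After copying into /etc/systemd/system/docker.service.d/:
#   systemctl daemon-reload && systemctl restart docker
```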

II. Common k8s Resource Management

1. Create a pod

1) Create the YAML file
[root@k8s-master ~]# mkdir k8s
[root@k8s-master ~]# vim ./k8s/nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80

2) Create the container
Option 1: install via yum

[root@k8s-master ~]# yum install *rhsm*

Option 2 (this is the one that worked for me); run:

[root@k8s-master ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm

[root@k8s-master ~]# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

The two commands above generate the /etc/rhsm/ca/redhat-uep.pem file.
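A sketch to confirm the certificate is actually in place on a node before retrying the pull (the path comes from the rpm2cpio step above):

```shell
# The pod-infrastructure pull fails without this CA file; verify it exists
# and is non-empty on every node that will run pods.
for f in /etc/rhsm/ca/redhat-uep.pem; do
    [ -s "$f" ] && echo "$f: OK" || echo "$f: MISSING"
done
```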

[root@k8s-master ~]#docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@k8s-master ~]# kubectl create -f ./k8s/nginx.yaml

3) Check the creation/run status of all pods
[root@k8s-master ~]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          5m

4) View a specific pod
[root@k8s-master ~]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          6m

5) View detailed pod information
[root@k8s-master ~]# kubectl describe pod nginx
Name:		nginx
Namespace:	default
Node:		k8s-node2/192.168.2.118
Start Time:	Fri, 11 Aug 2023 15:34:10 +0800
Labels:		app=web
Status:		Pending
IP:		
Controllers:	<none>
Containers:
  nginx:
    Container ID:		
    Image:			nginx:1.13
    Image ID:			
    Port:			80/TCP
    State:			Waiting
      Reason:			ContainerCreating
    Ready:			False
    Restart Count:		0
    Volume Mounts:		<none>
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
No volumes.
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type	Reason		Message
  ---------	--------	-----	----			-------------	----	------		-------
  7m		7m		1	{default-scheduler }			Normal	Scheduled	Successfully assigned nginx to k8s-node2
  7m		1m		6	{kubelet k8s-node2}			Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  6m	9s	25	{kubelet k8s-node2}		Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

6) Verify the running pod
[root@k8s-master ~]#  kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          8m        <none>    k8s-node2

2. Pod management

1) Delete the pod
[root@k8s-master ~]# kubectl delete  pod nginx
pod "nginx" deleted

2) Confirm the deleted pod can no longer be found
[root@k8s-master ~]# kubectl get pod nginx -o wide
Error from server (NotFound): pods "nginx" not found

3) Recreate the pod
[root@k8s-master ~]# kubectl create -f ./k8s/nginx.yaml 
pod "nginx" created

4) The recreated pod is scheduled to k8s-node1; the image download is slow, so it is still in ContainerCreating
[root@k8s-master ~]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          8s        <none>    k8s-node1
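Since the pod is stuck only because the image pull is slow, pre-pulling the images on each node sidesteps the wait on the next create. A sketch (node names assumed from the table at the top; requires root SSH between hosts):

```shell
# Pre-pull the application image and the pause image on every node so that
# pod startup does not block on the network.
for n in k8s-node1 k8s-node2; do
    ssh root@"$n" 'docker pull nginx:1.13; docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest'
done
```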