Installing Kubernetes Offline on UOS (ARM Architecture)

Table of Contents

OS Information

Preparing the Installation Files

Host Preparation

Host Configuration

Configure hosts (all nodes)

Disable the firewall, SELinux, swap, and dnsmasq (all nodes)

System parameter settings (all nodes)

Enable IPVS (all nodes)

Install Docker (all nodes)

Remove old versions

Install Docker

Installation

Add a systemd service

Configure the cgroup driver

Kubernetes preparation and installation

Install kubeadm, kubelet, and kubectl (all nodes)

Prepare the images (all nodes)

Install the master (master node)

Install the Kubernetes node (node nodes)

Install the Calico network plugin (on the master node)


OS Information

```bash
uname -a  # all system information
uname -s  # kernel name
uname -r  # kernel release
uname -m  # machine hardware name
```

```bash
cat /etc/os-release  # OS release details
```
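
Since every package used below is an aarch64 build, it is worth confirming the architecture before going offline. A minimal sanity check (my addition; `aarch64` is what ARM64 Linux kernels report):

```bash
# Refuse to continue on anything other than ARM64,
# since all offline packages used below are aarch64 builds.
if [ "$(uname -m)" != "aarch64" ]; then
    echo "Expected aarch64, got $(uname -m)" >&2
    exit 1
fi
```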

Preparing the Installation Files

Host Preparation

Host Configuration

172.171.16.88 meng

Note: this walkthrough only verifies the offline installation flow, so a single node is used. For multiple nodes the steps are the same; simply join each additional node to the cluster.

After uploading all the required files, disconnect the network and work from the VM console, so the entire procedure is guaranteed to run offline.

Configure hosts (all nodes)

Edit the /etc/hosts file:

```bash
cat >> /etc/hosts << EOF
172.171.16.88 meng
EOF
```
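
To confirm the entry resolves (a quick check on top of the original steps; getent is part of glibc):

```bash
getent hosts meng   # should print: 172.171.16.88  meng
```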

Disable the firewall, SELinux, swap, and dnsmasq (all nodes)

Disable the firewall:

```bash
systemctl stop firewalld
systemctl disable firewalld
```

Disable SELinux:

```bash
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # permanent (anchoring on SELINUX= avoids mangling comment lines)
setenforce 0  # temporary, takes effect immediately
```

Disable swap (Kubernetes requires swap to be off; by default the kubelet will not start with swap enabled):

```bash
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent: comment out swap entries in fstab
swapoff -a  # temporary, takes effect immediately
```
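
Both changes can be verified on the spot (a check I added; not part of the original flow):

```bash
getenforce              # expect Permissive now, Disabled after a reboot
free -h | grep -i swap  # the swap line should read all zeros
```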

Disable dnsmasq (otherwise Docker containers may be unable to resolve domain names):

```bash
systemctl stop dnsmasq
systemctl disable dnsmasq
```

System parameter settings (all nodes)

Create a configuration file that sets the bridge parameters:

```bash
mkdir -p /etc/sysctl.d   # usually already exists; -p makes this a no-op if so
vim /etc/sysctl.d/kubernetes.conf
```

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
```

Apply the file:

```bash
sysctl -p /etc/sysctl.d/kubernetes.conf
```

If it fails with:

```
[root@crawler-k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
```

load the bridge netfilter module:

```bash
modprobe br_netfilter
```

and run the command again:

```bash
sysctl -p /etc/sysctl.d/kubernetes.conf
```
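
A module loaded with modprobe does not survive a reboot. To make br_netfilter persistent (my addition, using the standard systemd-modules-load mechanism):

```bash
# Have systemd-modules-load load br_netfilter at every boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
```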

Enable IPVS (all nodes)

Kubernetes services support two proxy modes: one based on iptables and one based on IPVS. IPVS performs noticeably better, but using it requires the IPVS kernel modules to be loaded manually.

Write the modules to load into a script file:

```bash
vim /etc/sysconfig/modules/ipvs.modules
```

```bash
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
```

Make the script executable:

```bash
chmod +x /etc/sysconfig/modules/ipvs.modules
```

Run it:

```bash
/bin/bash /etc/sysconfig/modules/ipvs.modules
```

Note: if the script fails, you may need to replace modprobe -- nf_conntrack_ipv4 with modprobe -- nf_conntrack; on newer kernels (4.19+) the IPv4 conntrack module was merged into nf_conntrack.
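
To confirm the modules are actually loaded (a quick check added here):

```bash
lsmod | grep -e ip_vs -e nf_conntrack
```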

Install Docker (all nodes)

Remove old versions

Note: it is best to use Docker 20.10 as in this article. For me, the installation failed on Docker 25; after switching back to 20.10 it worked right away.

```bash
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
```

Install Docker

The Docker package is in the docker directory of the prepared files; upload it to the server.

Installation:

```bash
tar xf docker-20.10.9.tgz

mv docker/* /usr/bin/
```
Add a systemd service

Edit Docker's systemd unit file:

```bash
vim /usr/lib/systemd/system/docker.service
```

```
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```

Start Docker and enable it at boot (daemon-reload first, so systemd picks up the newly created unit file):

```bash
systemctl daemon-reload
systemctl start docker && systemctl enable docker
```
Configure the cgroup driver

Configure Docker to use the systemd cgroup driver, so that Docker and the kubelet agree on cgroup management:

```bash
vim /etc/docker/daemon.json
```

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
Restart Docker so the new cgroup driver takes effect:

```bash
systemctl daemon-reload

systemctl restart docker
```
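
To confirm the driver change took effect (a check added here):

```bash
docker info 2>/dev/null | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd
```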

Kubernetes preparation and installation

Install kubeadm, kubelet, and kubectl (all nodes)

The packages are in the k8s-rpm directory of the prepared files; upload them to /home on the server.

What each tool does:

  • kubeadm: the command that bootstraps the cluster
  • kubelet: runs on every machine in the cluster and manages the lifecycle of pods and containers
  • kubectl: the command-line tool for managing the cluster
  • the remaining packages are dependencies of these three tools

Install them:

```bash
cd /home/k8s-rpm

rpm -ivh *.rpm

# if the install is blocked by dependency errors:
rpm -ivh *.rpm --force --nodeps
```

Alternatively, install the packages one by one; when a missing dependency is reported, install that dependency first.

Start kubelet and enable it at boot (it will restart in a loop until kubeadm init provides its configuration; that is normal):

```bash
systemctl start kubelet && systemctl enable kubelet
```
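
A quick way to confirm the three tools are on the PATH (added here; the version shown should match the v1.23.7 packages):

```bash
kubeadm version -o short    # e.g. v1.23.7
kubelet --version
kubectl version --client
```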

Prepare the images (all nodes)

The image archives are in the k8s-images directory of the prepared files; upload them to /home on the server.

Load the images:

```bash
cd /home/k8s-images/

find /home/k8s-images -name "*.tar" -exec docker load -i {} \;
```

Once all images are loaded, proceed to the cluster setup.
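
To spot-check the local image store (my addition; the repository prefix matches the --image-repository flag used below):

```bash
docker images | grep registry.aliyuncs.com/google_containers
```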

Install the master (master node)

```bash
kubeadm init --apiserver-advertise-address=172.171.16.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.7 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
```

The log output:

```
[init] Using Kubernetes version: v1.23.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [crawler-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.171.16.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.507186 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i4dp7i.7t1j8ezmgwkj1gio
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
	--discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f
```

On the master, run the commands suggested at the end of the output:

```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
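
The control plane should now answer API requests. The node stays NotReady until the network plugin is installed, which is expected at this point (sample output assumed from the hostname in the log above):

```bash
kubectl get nodes
# NAME                 STATUS     ROLES                  AGE   VERSION
# crawler-k8s-master   NotReady   control-plane,master   1m    v1.23.7
```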

Install the Kubernetes node (node nodes)

Join the node to the cluster:

```bash
kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
	--discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f
```

The log then shows:

```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
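
The bootstrap token printed by kubeadm init is only valid for 24 hours. If a node is joined later and the token is rejected, a fresh join command can be generated on the master (standard kubeadm functionality):

```bash
kubeadm token create --print-join-command
```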

Install the Calico network plugin (on the master node)

The manifest is at k8s/calico.yaml in the prepared files; upload it to /home on the server. (It was originally downloaded from https://docs.projectcalico.org/manifests/calico.yaml.)

Strip the docker.io/ registry prefix from the image references in the file so they match the locally loaded images:

```bash
grep image calico.yaml    # inspect the image references first

sed -i 's#docker.io/##g' calico.yaml
```
Then apply it:

```bash
kubectl apply -f calico.yaml
```
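
The Calico pods take a minute or two to come up; once they are Running the node should flip to Ready (a verification step added here; k8s-app=calico-node is the label used in the stock manifest):

```bash
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes   # STATUS should now show Ready
```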

Deployment complete.
