陌殇
CSDN: https://blog.csdn.net/weixin_58410911?spm=1000.2115.3001.5343
Bilibili: https://space.bilibili.com/1735941541?spm_id_from=333.788.0.0
Deploying the Rancher Container Management Platform with Helm
Case Analysis
1. Node planning
The node plan is shown in Table 1.
Table 1  Node planning
IP | Hostname | Node
---|---|---
192.168.100.3 | k8s-master | k8s-master
192.168.100.4 | k8s-node1 | k8s-node1
192.168.100.5 | k8s-node2 | k8s-node2
2. Prerequisites
The Kubernetes environment is already installed. Upload the provided package Rancher.tar.gz to the /root directory on the master node and extract it.
3. About Rancher
Rancher is an open-source, enterprise-grade Kubernetes management and orchestration platform from SUSE (formerly Rancher Labs). It simplifies deploying, managing, and operating Kubernetes across multi-cloud, hybrid-cloud, and edge environments, providing a unified control plane for efficient cross-cluster governance, and is a core tool for managing Kubernetes clusters at scale in the cloud-native stack. Its design goal is to lower the barrier to Kubernetes, so organizations can containerize applications quickly without needing in-house K8s experts, accelerating digital transformation.
1. Core positioning
- Kubernetes management hub: Rancher does not replace Kubernetes; it acts as a management layer on top of it, providing a unified control plane across environments and clusters to address the complexity, resource sprawl, and operational cost of multi-cluster management.
- Multi-cloud/hybrid-cloud support: deploys and manages Kubernetes clusters on public clouds (AWS, Azure, Alibaba Cloud), private clouds (VMware, OpenStack), on-premises data centers, and edge nodes, achieving "manage once, run anywhere".
2. Core features
(1) Full cluster lifecycle management
- Fast creation and deployment: create Kubernetes clusters with one click via UI, CLI, or API; compatible with the major managed distributions (EKS, GKE, AKS) as well as custom clusters (e.g. lightweight K3s-based edge clusters).
- Upgrades and scaling: automatic cluster version detection and rolling-upgrade strategies; node scale-out/in and dynamic plugin installation (e.g. storage and network plugins).
- Monitoring and recovery: real-time cluster health monitoring (nodes, Pods, resource utilization) with automatic detection and repair of faults in components such as networking and storage.
(2) Unified multi-cluster management
- Centralized dashboard: manage tens to tens of thousands of clusters from a single UI, with resource grouping (by environment, team, or project) and tiered permissions (RBAC, LDAP/AD integration).
- Cross-cluster scheduling: uniformly configure network policies, storage classes, and security policies; cross-cluster service discovery, load balancing, and traffic management.
- Log and metric aggregation: integrates Prometheus, ELK, and similar tools to centrally collect cluster logs and performance metrics, with custom alerts and reports.
(3) End-to-end application management
- Graphical deployment tools: deploy applications from Helm charts, Docker images, or custom YAML; built-in CI/CD integration (Jenkins, GitLab CI).
- Microservice governance: service mesh (Istio integration), API gateway, canary releases, and blue-green deployments to streamline application delivery.
- Edge computing support: lightweight nodes (K3s) fit low-compute devices, with offline operation, resumable transfers, and remote debugging for edge nodes.
(4) Security and compliance
- Identity and access management: integrates OAuth 2.0, LDAP, and SAML 2.0; supports role-based access control (RBAC) and audit logs.
- Image and network security: works with Harbor, Trivy, and similar tools for image vulnerability scanning and signature verification; supports network policies (e.g. Calico) and TLS-encrypted transport.
- Compliance: meets requirements such as GDPR and PCI-DSS, with audit logs, operation traceability, and configuration snapshots.
(5) Ecosystem compatibility
- Cloud-native integrations: works seamlessly with Docker and the Kubernetes tool ecosystem (Rancher Desktop, kubectl) and supports extensions such as Istio and Knative.
- Hybrid deployment: compatible with virtual machines, bare metal, and container clouds; traditional and cloud-native applications can coexist.
3. Key advantages
- Simplicity: a visual interface hides Kubernetes' underlying complexity, lowering the team's learning curve and improving operational efficiency.
- Cost savings: unified management of multi-cluster resources avoids duplicated infrastructure; automated scaling and cost monitoring optimize resource utilization.
- Flexible extension: a modular architecture supports pluggable components (storage, network plugins) and scales from small teams to very large clusters.
- Open source with enterprise support: the community edition is fully free and open source; the enterprise edition (Rancher Enterprise) adds advanced features (enhanced security, technical support, air-gapped installation).
4. Typical use cases
- Enterprise containerization: suits large-scale microservice adoption; simplifies K8s cluster operations and accelerates moving applications to the cloud.
- Multi-cloud/hybrid-cloud management: organizations (e.g. finance, retail, manufacturing) that need to manage K8s clusters uniformly across multiple clouds or their own data centers.
- Edge computing and IoT: deploys lightweight clusters on edge nodes (factory equipment, smart terminals) with cloud-edge coordination.
- DevOps and CI/CD: integrates with CI/CD pipelines and provides standardized application delivery templates, shortening the path from development to production.
4. Architecture
Rancher Server consists of the authentication proxy, the Rancher API server, the cluster controller, etcd nodes, and the cluster agents. All of these components except the cluster agents run inside Rancher Server.
The figure below shows how a user manages both Rancher-launched Kubernetes clusters (RKE clusters) and hosted Kubernetes clusters (e.g. EKS) through Rancher Server. Taking a user-issued command as an example, it flows as follows:
- First, the user issues the command through the Rancher UI (the Rancher console) or the Rancher CLI; calling the Rancher API directly has the same effect.
- After the user passes Rancher's authentication proxy, the command is forwarded to Rancher Server.
- At the same time, Rancher Server performs disaster-recovery backups, saving its data to the etcd nodes.
- Rancher Server then passes the command to the cluster controller, which relays it to the agent in the downstream cluster; the agent finally applies the command in the target cluster.
If Rancher Server goes down, there is a fallback: you can manage clusters through the authorized cluster endpoint.
For performance and security, it is recommended to use two separate Kubernetes clusters: one for Rancher Server and one for workloads. After deploying Rancher Server, you can create or import clusters and then run your workloads on them.
Managing Kubernetes clusters through the Rancher authentication proxy

Case Implementation
1. Prepare the base environment
(1) Import the packages
shell
[root@k8s-master ~]# tar xf Rancher.tar.gz
[root@k8s-master ~]# cd Rancher/
[root@k8s-master Rancher]# tar xf helm-v3.17.3-linux-amd64.tar.gz
[root@k8s-master Rancher]# cp linux-amd64/helm /usr/local/bin/
[root@k8s-master Rancher]# for i in `ls *.tar`; do docker load -i $i; done
[root@k8s-master Rancher]# for i in `ls *.tar`; do scp $i 192.168.100.4:/root; done
[root@k8s-master Rancher]# for i in `ls *.tar`; do scp $i 192.168.100.5:/root; done
[root@k8s-master Rancher]# helm version
version.BuildInfo{Version:"v3.17.3", GitCommit:"e4da49785aa6e6ee2b86efd5dd9e43400318262b", GitTreeState:"clean", GoVersion:"go1.23.7"}
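The scp loops above only copy the .tar archives to the two worker nodes; the images still need to be imported there with docker load. A minimal sketch that just prints the commands to run on each node (the directory and file names below are stand-ins for the real /root/*.tar files):

```shell
# Stand-in for /root on a worker node, with two hypothetical image archives.
imgdir=$(mktemp -d)
touch "$imgdir/rancher-v2.9.3.tar" "$imgdir/fleet-v0.10.4.tar"

# Print the docker load command for every archive in the directory.
cmds=$(for f in "$imgdir"/*.tar; do echo "docker load -i $f"; done)
echo "$cmds"
```

On the real nodes, drop the echo and run `for f in /root/*.tar; do docker load -i "$f"; done` directly.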
Configure helm command completion
shell
[root@k8s-master ~]# yum install -y bash-com*
[root@k8s-master ~]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
Check the cluster status:
shell
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
2. Configure Ingress service discovery
(1) Import the images
shell
[root@k8s-master Rancher]# docker load -i ingress-nginx-controller-v1.8.0.tar
[root@k8s-master Rancher]# docker load -i ingress-nginx-kube-webhook-certgen-v20230407.tar
(2) Label the node
shell
[root@k8s-master ~]# kubectl label nodes k8s-master node-role=ingress
node/k8s-master labeled
(3) Deploy Ingress
shell
[root@k8s-master ~]# kubectl apply -f deploy-v1.8.0-Bare-metal-clusters-Edit-latest.yaml
Check the Ingress resources:
shell
[root@k8s-master ~]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-f76sf 0/1 Completed 0 2m28s
ingress-nginx-admission-patch-vkh7c 0/1 Completed 0 2m28s
ingress-nginx-controller-vmd4h 1/1 Running 0 2m28s
3. Deploy Rancher
(1) Add the Rancher repository
shell
[root@k8s-master ~]# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
[root@k8s-master ~]# helm repo update
[root@k8s-master ~]# helm fetch rancher-stable/rancher --version=2.9.3
(2) Issue and maintain the certificates with a self-signed-certificate script
shell
[root@k8s-master ~]# vim create_self-signed-cert.sh
#!/bin/bash -e
help ()
{
echo ' ================================================================ '
echo ' --ssl-domain: primary domain for the SSL certificate; defaults to www.rancher.local if unset; can be omitted when the server is accessed by IP;'
echo ' --ssl-trusted-ip: an SSL certificate normally only trusts requests to its domains; to access the server by IP, add the IPs as certificate extensions, comma-separated;'
echo ' --ssl-trusted-domain: to allow access from additional domains, add them as extended domains (SSL_TRUSTED_DOMAIN), comma-separated;'
echo ' --ssl-size: SSL key size in bits, default 2048;'
echo ' --ssl-date: SSL certificate validity, default 10 years;'
echo ' --ca-date: CA validity, default 10 years;'
echo ' --ssl-cn: country code (2 letters), default CN;'
echo ' Usage example:'
echo ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
echo ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
echo ' ================================================================'
}
case "$1" in
-h|--help) help; exit;;
esac
if [[ $1 == '' ]];then
help;
exit;
fi
CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
case "$key" in
--ssl-domain) SSL_DOMAIN=$value ;;
--ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
--ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
--ssl-size) SSL_SIZE=$value ;;
--ssl-date) SSL_DATE=$value ;;
--ca-date) CA_DATE=$value ;;
--ssl-cn) CN=$value ;;
esac
done
# CA settings
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=cattle-ca
# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}
# Country code (2 letters), default CN
CN=${CN:-CN}
SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt
echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m | Generating SSL Cert | \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"
if [[ -e ./${CA_KEY} ]]; then
echo -e "\033[32m ====> 1. Existing CA key found; backing up "${CA_KEY}" as "${CA_KEY}"-bak and recreating \033[0m"
mv ${CA_KEY} "${CA_KEY}"-bak
openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
echo -e "\033[32m ====> 1. Generating new CA key ${CA_KEY} \033[0m"
openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi
if [[ -e ./${CA_CERT} ]]; then
echo -e "\033[32m ====> 2. Existing CA cert found; backing up "${CA_CERT}" as "${CA_CERT}"-bak and recreating \033[0m"
mv ${CA_CERT} "${CA_CERT}"-bak
openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
echo -e "\033[32m ====> 2. Generating new CA cert ${CA_CERT} \033[0m"
openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi
echo -e "\033[32m ====> 3. Generating openssl config ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM
if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
IFS=","
dns=(${SSL_TRUSTED_DOMAIN})
dns+=(${SSL_DOMAIN})
for i in "${!dns[@]}"; do
echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
done
if [[ -n ${SSL_TRUSTED_IP} ]]; then
ip=(${SSL_TRUSTED_IP})
for i in "${!ip[@]}"; do
echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
done
fi
fi
echo -e "\033[32m ====> 4. Generating server SSL key ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}
echo -e "\033[32m ====> 5. Generating server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}
echo -e "\033[32m ====> 6. Generating server SSL cert ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
-CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
-days ${SSL_DATE} -extensions v3_req \
-extfile ${SSL_CONFIG}
echo -e "\033[32m ====> 7. Certificates generated \033[0m"
echo
echo -e "\033[32m ====> 8. Printing the results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/ /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/ /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/ /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/ /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/ /'
echo
echo -e "\033[32m ====> 9. Appending the CA cert to the server cert \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/ /'
echo
echo -e "\033[32m ====> 10. Renaming the server certificate \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt
Run the script
shell
[root@k8s-master ~]# bash create_self-signed-cert.sh --ssl-domain=gxl.rancher.com --ssl-trusted-ip=192.168.100.3,192.168.100.4,192.168.100.5 --ssl-size=2048 --ssl-date=3650
Inspect the generated files
shell
[root@k8s-master ~]# ll
total 48
-rw-r--r-- 1 root root 1131 Apr 13 12:49 cacerts.pem
-rw-r--r-- 1 root root 17 Apr 13 12:49 cacerts.srl
-rw-r--r-- 1 root root 1679 Apr 13 12:49 cakey.pem
-rw-r--r-- 1 root root 1675 Apr 13 12:49 cakey.pem-bak
-rw-r--r-- 1 root root 5327 Apr 13 12:48 create_self-signed-cert.sh
-rw-r--r-- 1 root root 364 Apr 13 12:49 openssl.cnf
-rw-r--r-- 1 root root 2290 Apr 13 12:49 gxl.rancher.com.crt
-rw-r--r-- 1 root root 1066 Apr 13 12:49 gxl.rancher.com.csr
-rw-r--r-- 1 root root 1679 Apr 13 12:49 gxl.rancher.com.key
-rw-r--r-- 1 root root 2290 Apr 13 12:49 tls.crt
-rw-r--r-- 1 root root 1679 Apr 13 12:49 tls.key
- gxl.rancher.com: a custom test domain;
- --ssl-domain: primary domain for the SSL certificate; defaults to www.rancher.local if unset; can be omitted when the server is accessed by IP;
- --ssl-trusted-ip: an SSL certificate normally only trusts requests to its domains; to access the server by IP, add the IPs as certificate extensions, comma-separated;
- --ssl-trusted-domain: to allow access from additional domains, add them as extended domains (SSL_TRUSTED_DOMAIN), comma-separated;
- --ssl-size: SSL key size in bits, default 2048;
- --ssl-cn: country code (2 letters), default CN;
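To confirm that the extra IPs really ended up in the certificate's SAN list, you can build a small certificate the same way the script does and inspect it with openssl. A sketch using throwaway temp files (in practice run the check against the real gxl.rancher.com.crt):

```shell
# Build a minimal cert with the same SAN mechanism as the script.
workdir=$(mktemp -d); cd "$workdir"
cat > openssl.cnf <<'EOM'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
subjectAltName = @alt_names
[alt_names]
DNS.1 = gxl.rancher.com
IP.1 = 192.168.100.3
EOM
openssl genrsa -out tls.key 2048 2>/dev/null
openssl req -x509 -sha256 -new -nodes -key tls.key -days 3650 \
  -out tls.crt -subj "/C=CN/CN=gxl.rancher.com" \
  -extensions v3_req -config openssl.cnf

# Print the SAN line; it should list both the DNS name and the IP.
san=$(openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name" | tail -1)
echo "$san"
```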
(4) Push the Rancher images to the Harbor registry
Configure Docker to trust the registry
shell
[root@k8s-master ~]# sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << 'EOF'
{
"insecure-registries": ["gxl.harbor.domain","192.168.100.4"],
"registry-mirrors": [
"https://b9pmyelo.mirror.aliyuncs.com",
"https://proxy.1panel.live",
"https://docker.1panel.top",
"https://docker.m.daocloud.io",
"https://docker.1ms.run",
"https://docker.ketches.cn",
"https://docker.credclouds.com",
"https://k8s.credclouds.com",
"https://quay.credclouds.com",
"https://gcr.credclouds.com",
"https://k8s-gcr.credclouds.com",
"https://ghcr.credclouds.com",
"https://do.nark.eu.org",
"https://docker.nju.edu.cn",
"https://docker.mirrors.sjtug.sjtu.edu.cn",
"https://docker.rainbond.cc"
],
"exec-opts": [
"native.cgroupdriver=systemd"
],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true,
"log-level": "debug"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
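dockerd will refuse to start if daemon.json is malformed, so it is worth validating the file before restarting Docker. A sketch using python3's bundled json.tool against a trimmed temp copy (validate the real /etc/docker/daemon.json in practice):

```shell
# Temp copy standing in for /etc/docker/daemon.json.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "insecure-registries": ["gxl.harbor.domain", "192.168.100.4"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on a parse error, so this gates the restart.
if python3 -m json.tool "$f" > /dev/null; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID - fix it before systemctl restart docker"
fi
```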
Log in to the Harbor registry
shell
[root@k8s-master rancher]# docker login -u admin -p Harbor12345 192.168.100.4
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Tag the images
shell
docker tag rancher/fleet-agent:v0.10.4 192.168.100.4/library/rancher/fleet-agent:v0.10.4
docker tag rancher/fleet:v0.10.4 192.168.100.4/library/rancher/fleet:v0.10.4
docker tag rancher/mirrored-cluster-api-controller:v1.7.3 192.168.100.4/library/rancher/mirrored-cluster-api-controller:v1.7.3
docker tag rancher/rancher-agent:v2.9.3 192.168.100.4/library/rancher/rancher-agent:v2.9.3
docker tag rancher/kubectl:v1.29.2 192.168.100.4/library/rancher/kubectl:v1.29.2
docker tag rancher/shell:v0.2.2 192.168.100.4/library/rancher/shell:v0.2.2
docker tag rancher/rancher:v2.9.3 192.168.100.4/library/rancher/rancher:v2.9.3
docker tag rancher/rancher-webhook:v0.5.3 192.168.100.4/library/rancher/rancher-webhook:v0.5.3
Push the images
shell
docker push 192.168.100.4/library/rancher/fleet-agent:v0.10.4
docker push 192.168.100.4/library/rancher/fleet:v0.10.4
docker push 192.168.100.4/library/rancher/mirrored-cluster-api-controller:v1.7.3
docker push 192.168.100.4/library/rancher/rancher-agent:v2.9.3
docker push 192.168.100.4/library/rancher/kubectl:v1.29.2
docker push 192.168.100.4/library/rancher/shell:v0.2.2
docker push 192.168.100.4/library/rancher/rancher:v2.9.3
docker push 192.168.100.4/library/rancher/rancher-webhook:v0.5.3
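The tag and push command pairs can also be generated from a single image list, which keeps the two hand-maintained blocks from drifting apart. A sketch that only prints the commands (review the output, then pipe it to sh):

```shell
# One deduplicated list drives both docker tag and docker push.
registry=192.168.100.4/library
images="rancher/rancher:v2.9.3
rancher/rancher-agent:v2.9.3
rancher/rancher-webhook:v0.5.3
rancher/fleet:v0.10.4
rancher/fleet-agent:v0.10.4
rancher/kubectl:v1.29.2
rancher/shell:v0.2.2
rancher/mirrored-cluster-api-controller:v1.7.3"

cmds=$(echo "$images" | while read -r img; do
  echo "docker tag $img $registry/$img"
  echo "docker push $registry/$img"
done)
echo "$cmds"
```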
(5) Deploy Rancher with Helm
Create the namespace
shell
[root@k8s-master ~]# kubectl create ns cattle-system
Private CA root certificate (so Rancher Server/Agent trust it)
shell
[root@k8s-master ~]# kubectl create secret generic tls-ca \
--from-file=cacerts.pem=./cacerts.pem -n cattle-system
Additional trusted CAs (when using additionalTrustedCAs)
shell
[root@k8s-master ~]# cp cacerts.pem ca-additional.pem
[root@k8s-master ~]# kubectl create secret generic tls-ca-additional \
--from-file=ca-additional.pem -n cattle-system
Ingress TLS certificate (used by ingress-nginx)
shell
[root@k8s-master ~]# kubectl create secret tls tls-rancher-ingress \
--cert=tls.crt --key=tls.key -n cattle-system
Image pull credentials for the private registry
shell
[root@k8s-master ~]# kubectl create secret docker-registry regcred \
--docker-server=192.168.100.4 \
--docker-username=admin \
--docker-password=Harbor12345 \
[email protected] \
--namespace=cattle-system
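The docker-registry secret created above is just a base64-encoded .dockerconfigjson whose auth field is base64 of user:password. A sketch of what it contains, using the credentials from this setup:

```shell
# The "auth" field is base64("user:password").
auth=$(printf '%s' "admin:Harbor12345" | base64)
echo "$auth"

# The decoded .dockerconfigjson stored in the secret looks like this.
printf '{"auths":{"192.168.100.4":{"auth":"%s"}}}\n' "$auth"
```

You can confirm what an existing secret holds with `kubectl get secret regcred -n cattle-system -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d`.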
Deploy Rancher (method 1)
shell
[root@k8s-master ~]# helm install rancher ./rancher-2.9.3.tgz \
--namespace cattle-system \
--set hostname=gxl.rancher.com \
--set ingress.ingressClassName=nginx \
--set systemDefaultRegistry=192.168.100.4/library \
--set rancherImage=192.168.100.4/library/rancher/rancher \
--set fleetImage=192.168.100.4/library/rancher/fleet \
--set fleetAgentImage=192.168.100.4/library/rancher/fleet-agent \
--set gitjobImage=192.168.100.4/library/rancher/gitjob \
--set shellImage=192.168.100.4/library/rancher/shell \
--set webhookImage=192.168.100.4/library/rancher/rancher-webhook \
--set ingress.tls.source=secret \
--set privateCA=true \
--set useBundledSystemChart=true \
--set imagePullSecrets[0].name=regcred
Deploy Rancher (method 2)
shell
[root@k8s-master ~]# helm install rancher ./rancher-2.9.3.tgz \
--namespace cattle-system \
--set hostname=gxl.rancher.com \
--set ingress.tls.source=secret \
--set privateCA=true \
--set ingress.ingressClassName=nginx \
--set systemDefaultRegistry=192.168.100.4/library \
--set imagePullSecrets[0].name=regcred \
--set useBundledSystemChart=true
Deploy Rancher using a separate rancher project in the registry (method 3)
shell
[root@k8s-master ~]# helm install rancher ./rancher-2.9.3.tgz \
--namespace cattle-system \
--set hostname=gxl.rancher.com \
--set ingress.tls.source=secret \
--set privateCA=true \
--set ingress.ingressClassName=nginx \
--set systemDefaultRegistry=192.168.100.4 \
--set imagePullSecrets[0].name=regcred \
--set useBundledSystemChart=true
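The long --set lists in the three methods can instead live in a values file passed with -f; a sketch equivalent to method 2 (the filename is hypothetical):

```shell
# Write the method-2 flags as a Helm values file.
cat > rancher-values.yaml <<'EOF'
hostname: gxl.rancher.com
ingress:
  tls:
    source: secret
  ingressClassName: nginx
privateCA: true
systemDefaultRegistry: 192.168.100.4/library
useBundledSystemChart: true
imagePullSecrets:
  - name: regcred
EOF
echo "wrote rancher-values.yaml"
# Then install with:
#   helm install rancher ./rancher-2.9.3.tgz -n cattle-system -f rancher-values.yaml
```

A values file is easier to diff and reuse across upgrades than a shell history full of --set flags.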
Check the Rancher Pods
shell
[root@k8s-master rancher]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-87b48ffb-72k57 1/1 Running 1 (21d ago) 21d
calico-apiserver calico-apiserver-87b48ffb-ghspq 1/1 Running 1 (21d ago) 21d
calico-system calico-kube-controllers-685f7c9b88-p8vtm 1/1 Running 1 (21d ago) 21d
calico-system calico-node-748p9 1/1 Running 1 (21d ago) 21d
calico-system calico-node-k59ml 1/1 Running 1 (21d ago) 21d
calico-system calico-node-xt2vg 1/1 Running 1 (29h ago) 21d
calico-system calico-typha-5f8776bcbb-gn4vq 1/1 Running 1 (21d ago) 21d
calico-system calico-typha-5f8776bcbb-wf9mh 1/1 Running 2 (29h ago) 21d
calico-system csi-node-driver-flknn 2/2 Running 2 (21d ago) 21d
calico-system csi-node-driver-nkmhp 2/2 Running 2 (21d ago) 21d
calico-system csi-node-driver-zgvxz 2/2 Running 2 (29h ago) 21d
cattle-fleet-local-system fleet-agent-0 2/2 Running 0 28h
cattle-fleet-system gitjob-7954df9b87-fpfrl 1/1 Running 0 28h
cattle-provisioning-capi-system capi-controller-manager-5c7fcc56fd-28llx 1/1 Running 0 28h
cattle-system rancher-7fb7f7b549-cknvx 1/1 Running 0 27h
cattle-system rancher-7fb7f7b549-n6rf9 1/1 Running 0 28h
cattle-system rancher-7fb7f7b549-zghjs 1/1 Running 0 28h
cattle-system rancher-webhook-6d99657f98-pbjll 1/1 Running 0 28h
default nginx-7854ff8877-2ps5z 1/1 Running 1 (21d ago) 21d
fleet-default rke2-machineconfig-cleanup-cronjob-29083685-4z5hk 0/1 Completed 0 5h42m
ingress-nginx ingress-nginx-admission-create-f76sf 0/1 Completed 0 28h
ingress-nginx ingress-nginx-admission-patch-vkh7c 0/1 Completed 3 28h
ingress-nginx ingress-nginx-controller-vmd4h 1/1 Running 0 28h
kube-system coredns-6554b8b87f-9fm4k 1/1 Running 1 (21d ago) 21d
kube-system coredns-6554b8b87f-wwq2n 1/1 Running 1 (21d ago) 21d
kube-system etcd-k8s-master 1/1 Running 1 (29h ago) 21d
kube-system kube-apiserver-k8s-master 1/1 Running 1 (29h ago) 21d
kube-system kube-controller-manager-k8s-master 1/1 Running 1 (29h ago) 21d
kube-system kube-proxy-2qxs4 1/1 Running 1 (29h ago) 21d
kube-system kube-proxy-5k6l2 1/1 Running 1 (29h ago) 21d
kube-system kube-proxy-trk9k 1/1 Running 1 (21d ago) 21d
kube-system kube-scheduler-k8s-master 1/1 Running 1 (29h ago) 21d
tigera-operator tigera-operator-8547bd6cc6-9sqnp 1/1 Running 2 (29h ago) 21d
Access mode (NodePort)
shell
[root@master1 rancher]# kubectl get svc -n cattle-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rancher ClusterIP 10.98.1.68 <none> 80/TCP,443/TCP 6m6s
rancher-webhook ClusterIP 10.102.75.165 <none> 443/TCP 3m27s
webhook-service ClusterIP 10.96.37.189 <none> 443/TCP 3m27s
Change the Service type to NodePort with custom ports
shell
[root@master1 rancher]# kubectl edit svc -n cattle-system rancher -oyaml
apiVersion: v1
kind: Service
metadata:
annotations:
field.cattle.io/publicEndpoints: '[{"port":30010,"protocol":"TCP","serviceName":"cattle-system:rancher","allNodes":true},{"port":30011,"protocol":"TCP","serviceName":"cattle-system:rancher","allNodes":true}]'
meta.helm.sh/release-name: rancher
meta.helm.sh/release-namespace: cattle-system
creationTimestamp: "2024-07-12T00:47:13Z"
labels:
app: rancher
app.kubernetes.io/managed-by: Helm
chart: rancher-2.7.5
heritage: Helm
release: rancher
name: rancher
namespace: cattle-system
resourceVersion: "718874"
uid: 921f9744-51e2-47be-a815-45857e24760e
spec:
clusterIP: 10.98.1.68
clusterIPs:
- 10.98.1.68
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
nodePort: 30010 # set your own port
port: 80
protocol: TCP
targetPort: 80
- name: https-internal
nodePort: 30011 # set your own port
port: 443
protocol: TCP
targetPort: 444
selector:
app: rancher
sessionAffinity: None
type: NodePort # changed from ClusterIP to NodePort
status:
loadBalancer: {}
Check the Service to confirm the change took effect
shell
[root@master1 rancher]# kubectl get svc -n cattle-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rancher NodePort 10.98.1.68 <none> 80:30010/TCP,443:30011/TCP 10m
rancher-webhook ClusterIP 10.102.75.165 <none> 443/TCP 7m49s
webhook-service ClusterIP 10.96.37.189 <none> 443/TCP 7m49s
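For scripting against the NodePort service, the HTTPS port can be extracted from the kubectl output. A sketch that parses the sample line above (in practice substitute the output of `kubectl get svc rancher -n cattle-system --no-headers`):

```shell
# Stand-in for one line of `kubectl get svc rancher -n cattle-system --no-headers`.
line="rancher   NodePort   10.98.1.68   <none>   80:30010/TCP,443:30011/TCP   10m"

# Field 5 is the PORT(S) column; pick the 443 mapping and strip ":"/"/TCP".
https_nodeport=$(echo "$line" | awk '{print $5}' | tr ',' '\n' \
  | grep '^443:' | cut -d: -f2 | cut -d/ -f1)
echo "https://gxl.rancher.com:$https_nodeport"
```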
4. Access from a browser
(1) Configure the Linux hosts file
shell
[root@k8s-master rancher]# vim /etc/hosts
192.168.100.3 k8s-master gxl.rancher.com
192.168.100.4 k8s-node1
192.168.100.5 k8s-node2
(2) Configure the Windows hosts file
Edit C:\Windows\System32\drivers\etc\hosts
shell
### add the following
192.168.100.3 gxl.rancher.com

Get the Rancher bootstrap password, either from the helm install output or via kubectl
shell
NAME: rancher
LAST DEPLOYED: Sun Apr 13 14:56:46 2025
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.
NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.
Check out our docs at https://rancher.com/docs/
If you provided your own bootstrap password during installation, browse to https://gxl.rancher.com to get started.
If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:
```
echo https://gxl.rancher.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
```
To get just the bootstrap password on its own, run:
```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```
Retrieve the password and log in from the browser
shell
[root@k8s-master rancher]# kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
r9lbnplkc7pp8bjh6fprbfwxz2n5c9k2wbp6b5lhlm44xvzrjrtr6c
Set a new Rancher password of at least 12 characters

View the Rancher cluster dashboard

View the local cluster
