Local k8s cluster setup
Prerequisites:
A single Windows machine is enough.
My Windows host has 32 GB of RAM, 6 cores, 2 threads per core.
Everything is done from a terminal over SSH.
I drive the Windows host from a Mac, and the Windows host connects to the VMs.
Remember to adjust the VM network adapters so the IPs stay fixed later on.
Overview:
Local k8s cluster setup (1 master, 2 nodes)
Using k8s
DevOps to build services automatically (GitLab, Harbor, Jenkins, CI/CD)
Outline:
Part 1: Environment setup
1. Environment setup
2. Adjust the network adapter
3. Docker setup
4. Install kubelet, kubeadm, kubectl
5. Initialize the master
6. Join the k8s nodes
7. Deploy the CNI network plugin
8. Test the k8s cluster
Part 2: Using k8s
1. Deploy nginx
2. Deploy fluentd log collection
3. View container resource usage
4. Install helm and ingress
5. Bind nginx to a ConfigMap
6. Install NFS and create a StorageClass
7. Install a Redis cluster with helm
8. Install Prometheus monitoring
9. ELK service log collection and search
10. Visual dashboard (KubeSphere)
Part 3: Building services with DevOps
1. Install GitLab
2. Install the Harbor image registry
3. Jenkins
4. Build services with Jenkins CI/CD
Environment setup:
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# After disabling swap, be sure to reboot the VM!
# Set the hostname according to your plan
hostnamectl set-hostname <hostname>
# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.113.120 k8s-master
192.168.113.121 k8s-node1
192.168.113.122 k8s-node2
EOF
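For reference, setting the hostnames from the plan above looks like this; run the matching command on each VM (adjust the names if your plan differs):
hostnamectl set-hostname k8s-master   # on the master VM
hostnamectl set-hostname k8s-node1    # on node 1
hostnamectl set-hostname k8s-node2    # on node 2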
Change the network adapter config and pin a static IP (optional, but it keeps the IP stable)
vi /etc/sysconfig/network-scripts/ifcfg-ens33
Edit it to look like the following:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="7ae1e162-35b2-45d4-b642-65b8452b6bfe" // use any unique UUID
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.190.12" // your own IP address; it must sit in your subnet
PREFIX="24"
GATEWAY="192.168.190.2" // gateway
DNS1="192.168.190.2" // your gateway, used as DNS
IPV6_PRIVACY="no"
# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
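If the two bridge sysctls do not show up, the br_netfilter module is probably not loaded yet; a minimal check (plus a swap sanity check) looks like this:
# Load br_netfilter if needed, then confirm both bridge sysctls report 1
lsmod | grep br_netfilter || modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Swap should report 0 after swapoff and the fstab edit
free -m | grep -i swap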
Step 1: Fix the CentOS repos
On July 1, 2024, CentOS 7 reached end of life and the CentOS team moved its repositories to the archive at vault.centos.org. Without updating the repository URLs, packages can no longer be updated or verified, which causes these errors.
Run this command to download and execute the fix script:
curl -fsSL https://autoinstall.plesk.com/PSA_18.0.62/examiners/repository_check.sh | bash -s -- update >/dev/null
Step 2: Mount the CentOS ISO
Attach the CentOS 7 ISO as a CD drive in the VM and make sure it is connected,
so that packages that cannot be fetched online can still be installed.
mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
vi /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///mnt/cdrom/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-CentOS-7
Step 3: Install tools
https://github.com/wjw1758548031/resource (resource repository)
conntrack-tools has to be installed separately: download the RPM and install it locally; it is needed for the kubelet/kubectl installation.
sudo yum localinstall conntrack-tools-1.4.4-7.el7.x86_64.rpm
Step 4: Configure a proxy for Docker
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.5:7890"
Environment="HTTPS_PROXY=http://192.168.1.5:7890"
# list your own node IPs here
Environment="NO_PROXY=localhost,127.0.0.1,192.168.190.10,192.168.190.11,192.168.190.12"
sudo systemctl daemon-reload
sudo systemctl restart docker
// show the configured proxy
systemctl show --property=Environment docker
export https_proxy=http://192.168.1.5:7890 http_proxy=http://192.168.1.5:7890 all_proxy=socks5://192.168.1.5:7890
Use the IP and port of your own proxy. If you can already pull images, you can skip this step entirely.
Step 5: Install Docker
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Note: if the following error appears
Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
Could not fetch/save url https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to file /etc/yum.repos.d/docker-ce.repo: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
# edit /etc/yum.conf and add sslverify=0 under [main]
vi /etc/yum.conf
# configuration as follows ----------------
[main]
sslverify=0
# -----------------------------
# Step 3: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: enable and start the Docker service
sudo systemctl enable docker
sudo service docker start
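Before moving on it is worth confirming the daemon is actually up and can pull images (the test image here is arbitrary):
# The daemon should answer, and the test pull verifies registry (or proxy) access
docker version
docker pull hello-world && docker rmi hello-world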
Step 6: Add the Aliyun Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Step 7: Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
systemctl start kubelet
# Configure Docker's cgroup driver: edit /etc/docker/daemon.json and add the following
// keep the same cgroup driver (systemd) as k8s
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# restart docker
systemctl daemon-reload
systemctl restart docker
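After the restart, the cgroup driver reported by Docker should now be systemd, matching the kubelet; a quick check:
# Should print: Cgroup Driver: systemd
docker info | grep -i 'cgroup driver'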
-------- Everything above must be run on every VM --------
Step 8: Initialize the master (run on the master VM)
kubeadm init \
--apiserver-advertise-address=192.168.190.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
// if it prints "initialized successfully", it worked; copy and save the output
// --service-cidr and --pod-network-cidr can be any subnets, as long as they do not clash with existing networks.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
Step 9: Join the k8s nodes (run on each node)
kubeadm join 192.168.113.120:6443 --token w34ha2.66if2c8nwmeat9o7 --discovery-token-ca-cert-hash sha256:20e2227554f8883811c01edd850f0cf2f396589d32b57b9984de3353a7389477
Get the token with:
kubeadm token list
Get the hash with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
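If the token has expired or the original join command was lost, a complete command can also be regenerated on the master instead of assembling it by hand:
# Prints a fresh "kubeadm join ..." command with a new token and the CA hash
kubeadm token create --print-join-command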
Step 10: Deploy the CNI network plugin (run on the master)
wget https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml
Edit calico.yaml as follows:
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16" # must match the --pod-network-cidr used for kubeadm init
sed -i 's#docker.io/##g' calico.yaml
grep image calico.yaml # lists which images this yaml pulls
Run on every node:
Pull all of those images with docker pull.
To pull through a proxy: HTTP_PROXY=http://192.168.1.6:7890 HTTPS_PROXY=http://192.168.1.6:7890 docker pull docker.io/calico/cni:v3.25.0
If they still will not pull, configure Docker registry mirrors:
vi /etc/docker/daemon.json
{"registry-mirrors": [
"https://docker.m.daocloud.io",
"https://dockerproxy.com",
"https://docker.mirrors.ustc.edu.cn",
"https://docker.nju.edu.cn"
]}
Deploy:
kubectl apply -f calico.yaml
Check that the nodes and pods are healthy; if any have network problems or have not come up:
kubectl get nodes
kubectl get pod -n kube-system
use these commands to troubleshoot and fix them:
kubectl describe pod calico-node-68dr4 -n kube-system
kubectl logs -l k8s-app=calico-node -n kube-system
Troubleshooting note: sometimes the node's services are not running, calico cannot reach the node, and connections to port 10250 on the node IP fail.
This is usually because the kubelet on that node is not running; check its logs to see why.
Most of the time Docker is not running or a config file is wrong.
journalctl -u kubelet # view the kubelet logs
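A minimal recovery sequence for that case, assuming Docker or the kubelet simply is not running on the affected node:
# Run on the broken node: make sure docker is up, then restart the kubelet
systemctl start docker && systemctl enable docker
systemctl restart kubelet
systemctl status kubelet --no-pager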
Test the k8s cluster
# create a deployment
kubectl create deployment nginx --image=nginx
# expose a port
kubectl expose deployment nginx --port=80 --type=NodePort
# check the pod and service info
kubectl get pod,svc
Using the NodePort shown in the service info, open it in a browser.
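The same check works from the shell; the NodePort is whatever kubectl assigned, and the node IP below is just one of this cluster's nodes:
# Look up the assigned NodePort and hit it through a node IP
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.190.11:${NODE_PORT}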
// once verified, it can be deleted
kubectl delete services nginx
kubectl delete deploy nginx
Make kubectl usable on all the nodes too (run on each node):
// copy the file from the master onto the node
scp root@192.168.190.10:/etc/kubernetes/admin.conf /etc/kubernetes
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
-------- k8s setup is complete --------
-------- The following covers working inside k8s --------
Step 1: Deploy nginx
// deploy nginx
kubectl create deploy nginx-deploy --image=nginx
// dump the deploy config
kubectl get deploy nginx-deploy -o yaml
Copy all of the output (drop everything under status:).
vim nginx-deploy.yaml
Paste it in; after removing the unneeded settings it looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx-deploy
name: nginx-deploy
namespace: default
spec:
replicas: 1
# number of old revisions kept after a rolling update, so you can roll back to a previous version
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-deploy
strategy: # update strategy
rollingUpdate: # rolling update settings
# how many pods, as a percentage or a count, may exceed the desired replica count during a rolling update
maxSurge: 25%
# up to 25% of the pods may be unavailable during the update
maxUnavailable: 25%
type: RollingUpdate # the strategy type is rolling update
template:
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources:
limits:
cpu: 200m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
restartPolicy: Always
# if the container does not exit cleanly, it is given 30 seconds before being killed
terminationGracePeriodSeconds: 30
// create it
kubectl create -f nginx-deploy.yaml
vi nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx-svc
spec:
selector:
app: nginx-deploy
ports:
- name: http # name of the service port
port: 80 # the service's own port
targetPort: 80 # the target port on the pod
type: NodePort
// create it
kubectl create -f nginx-svc.yaml
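A quick sanity check that the Service actually selected the Deployment's pods:
# The ENDPOINTS column should not be empty
kubectl get svc nginx-svc
kubectl get endpoints nginx-svc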
Step 2: Deploy fluentd log collection
Create the file fluentd.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
spec:
selector:
matchLabels:
app: logging
template:
metadata:
labels:
app: logging
id: fluentd
name: fluentd
spec:
containers:
- name: fluentd-es
image: agilestacks/fluentd-elasticsearch:v1.3.0
env:
- name: FLUENTD_ARGS
value: -qq
volumeMounts:
# paths inside the container; the node's directories are mounted here
- name: containers
mountPath: /var/lib/docker/containers
- name: varlog
mountPath: /var/log
volumes:
# host paths on the node to mount
- hostPath:
path: /var/lib/docker/containers
name: containers
- hostPath:
path: /var/log
name: varlog
Run: kubectl create -f fluentd.yaml
Check that the pods started with kubectl get pod.
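Since this is a DaemonSet, there should be one fluentd pod per node; this can be confirmed with:
# -o wide shows which node each pod landed on
kubectl get ds fluentd
kubectl get pod -l app=logging -o wide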
Step 3: View container resource usage
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
vi metrics-server-components.yaml
Find the line - --metric-resolution=15s and add - --kubelet-insecure-tls right below it.
kubectl apply -f metrics-server-components.yaml
kubectl get pods --all-namespaces | grep metrics
// shows the CPU and memory used by each pod
kubectl top pods
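Node-level usage is available the same way; note that metrics can take a minute or two to appear after installing metrics-server:
# Per-node CPU/memory usage as reported by metrics-server
kubectl top nodes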
Step 4: Install helm and ingress
wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/helm
// as long as the command responds, it works
helm
# add the repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# search and download
helm search repo ingress-nginx
helm pull ingress-nginx/ingress-nginx --version 4.11.1
tar -xf ingress-nginx-4.11.1.tgz
cd ingress-nginx
vi values.yaml
Modify the following:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
Change kind: Deployment to kind: DaemonSet
nodeSelector:
ingress: "true" # add a selector: only deploy on nodes labeled ingress=true
Set admissionWebhooks.enabled to false.
In the service section, change type from LoadBalancer to ClusterIP; LoadBalancer only makes sense on a cloud platform.
# create a dedicated namespace for ingress
kubectl create ns ingress-nginx
# label the nodes that should run ingress
kubectl label node k8s-node1 ingress=true
# deploy the ingress controller
helm install ingress-nginx -n ingress-nginx .
# check that it started
kubectl get pod -n ingress-nginx
vi wolfcode-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress # the resource type is Ingress
metadata:
name: wolfcode-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules: # ingress rules; multiple rules can be configured
- host: k8s.wolfcode.cn # host; wildcards (*) are allowed
http:
paths: # equivalent to nginx location blocks; multiple entries allowed
- pathType: Prefix # path match type. ImplementationSpecific: matching is delegated to the IngressClass. Exact: the URL must match the path exactly, case-sensitively. Prefix: prefix matching split on /
backend:
service:
name: nginx-svc # which service to proxy to
port:
number: 80 # the service port
path: /api # equivalent to an nginx location prefix match
// create the ingress
kubectl create -f wolfcode-nginx-ingress.yaml
kubectl get ingress
Add a hosts entry on the Windows host:
192.168.190.11 k8s.wolfcode.cn
Then open the configured path in a browser, e.g. k8s.wolfcode.cn/api, and you will reach nginx.
// you may need to turn your proxy/VPN off to reach it
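The mapping can also be tested without touching the hosts file by sending the Host header straight to the ingress node (assuming node1, 192.168.190.11, is the node labeled ingress=true):
# -H sets the virtual host the ingress rule matches on; /api is the configured path
curl -I -H "Host: k8s.wolfcode.cn" http://192.168.190.11/api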
Step 5: Bind nginx to a ConfigMap
// exec into one of the nginx pods
kubectl exec -it nginx-deploy-6b4db948c6-9sw6k -- sh
// copy the output somewhere
cat /etc/nginx/nginx.conf
// exit
exit
// edit nginx.conf locally and paste the content in
vi nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
// create the configmap
kubectl create configmap nginx-conf-cm --from-file=./nginx.conf
// check the configmap was created
kubectl describe cm nginx-conf-cm
// edit the earlier nginx-deploy.yaml
Replace the whole template.spec with:
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources:
limits:
cpu: 200m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/nginx/nginx.conf
name: nginx-conf
subPath: nginx.conf
volumes:
- name: nginx-conf # mind the indentation here
configMap:
name: nginx-conf-cm # mind the indentation here
defaultMode: 420
items:
- key: nginx.conf
path: nginx.conf
// reapply; if that fails, delete it and create it again
kubectl apply -f nginx-deploy.yaml
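To confirm the ConfigMap really ended up inside the new pod (the pod name will differ on your cluster):
# List the pods, then print the start of the mounted nginx.conf from one of them
kubectl get pod -l app=nginx-deploy
kubectl exec -it <nginx-deploy-pod-name> -- head -n 5 /etc/nginx/nginx.conf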
Step 6: Install NFS and create a StorageClass
# install nfs
yum install nfs-utils -y
# start nfs
systemctl start nfs-server
# enable on boot
systemctl enable nfs-server
# check the supported nfs versions
cat /proc/fs/nfsd/versions
# create the shared directories
mkdir -p /data/nfs
cd /data/nfs
mkdir rw
mkdir ro
# configure the exports
vim /etc/exports
/data/nfs/rw 192.168.190.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/nfs/ro 192.168.190.0/24(ro,sync,no_subtree_check,no_root_squash)
# reload
exportfs -f
systemctl reload nfs-server
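showmount (part of nfs-utils) can confirm the exports are visible; the IP is the NFS server, i.e. the master in this setup:
# Should list /data/nfs/rw and /data/nfs/ro for the 192.168.190.0/24 network
showmount -e 192.168.190.10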
vi nfs-provisioner-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: kube-system
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: kube-system
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: kube-system
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
kubectl apply -f nfs-provisioner-rbac.yaml
vi nfs-provisioner-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
namespace: kube-system
labels:
app: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: fuseim.pri/ifs
- name: NFS_SERVER
value: 192.168.190.10
- name: NFS_PATH
value: /data/nfs/rw
volumes:
- name: nfs-client-root
nfs:
server: 192.168.190.10
path: /data/nfs/rw
kubectl apply -f nfs-provisioner-deployment.yaml
vi nfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
namespace: kube-system
provisioner: fuseim.pri/ifs # external provisioner; must match the provisioner's name
parameters:
archiveOnDelete: "false" # whether to archive: false deletes the data under oldPath, true archives it by renaming the path
reclaimPolicy: Retain # reclaim policy; the default is Delete, it can be set to Retain
volumeBindingMode: Immediate # the default Immediate binds the PVC as soon as it is created; only azuredisk and AWS elasticblockstore support other values
kubectl apply -f nfs-storage-class.yaml
// check it was created
kubectl get sc
// check the provisioner pod is running
kubectl get pod -n kube-system
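To verify dynamic provisioning end to end, a throwaway PVC can be created against the new StorageClass (the PVC name here is arbitrary):
# Create a small test claim; it should reach Bound and a folder should appear under /data/nfs/rw
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-nfs-pvc
# Clean up once it shows Bound
kubectl delete pvc test-nfs-pvc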
Step 7: Install a Redis cluster with helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis
# pull the chart locally first
helm pull bitnami/redis --version 17.4.3
# extract it, then adjust the parameters in values.yaml
tar -xvf redis-17.4.3.tgz
cd redis/
vi values.yaml
# set storageClass to managed-nfs-storage
# set the redis password
// create the namespace
kubectl create namespace redis
cd ../
// install
helm install redis ./redis -n redis
// check it started
kubectl get pod -n redis
kubectl get pvc -n redis
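A quick connectivity test, assuming the chart's default names were kept (release redis, secret redis with key redis-password, service redis-master):
# Read the password from the chart-generated secret and ping the master from a temporary pod
export REDIS_PASSWORD=$(kubectl get secret -n redis redis -o jsonpath='{.data.redis-password}' | base64 -d)
kubectl run redis-client --rm -it -n redis --image=redis -- \
  redis-cli -h redis-master.redis.svc.cluster.local -a "$REDIS_PASSWORD" ping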
Step 8: Install Prometheus monitoring
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.10.0.tar.gz
tar -zxvf v0.10.0.tar.gz
cd kube-prometheus-0.10.0/
kubectl create -f manifests/setup
# once the CRDs from setup are ready, apply the remaining manifests as well
kubectl create -f manifests/
// check that all resources are healthy
kubectl get all -n monitoring
// check the services
kubectl get svc -n monitoring
# create prometheus-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: monitoring
name: prometheus-ingress
spec:
ingressClassName: nginx
rules:
- host: grafana.wolfcode.cn # domain for Grafana
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana
port:
number: 3000
- host: prometheus.wolfcode.cn # domain for Prometheus
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: prometheus-k8s
port:
number: 9090
- host: alertmanager.wolfcode.cn # domain for Alertmanager
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: alertmanager-main
port:
number: 9093
# create the ingress
kubectl apply -f prometheus-ingress.yaml
// add hosts entries on the Windows host
192.168.190.11 grafana.wolfcode.cn
192.168.190.11 prometheus.wolfcode.cn
192.168.190.11 alertmanager.wolfcode.cn
// those domains now open the corresponding monitoring UIs
grafana.wolfcode.cn
// use Grafana to view the various k8s resource dashboards
// the default username/password is admin/admin; then set up the dashboards
Step 9: ELK service log collection and search
kubectl label node k8s-node1 es=data
vi es.yaml
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-logging
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: kube-logging
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: kube-logging
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
name: elasticsearch-logging
namespace: kube-logging
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch-logging
apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet # pods are created by a StatefulSet
metadata:
name: elasticsearch-logging # pod name; pods created by a StatefulSet are numbered and ordered
namespace: kube-logging # namespace
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
srv: srv-elasticsearch
spec:
serviceName: elasticsearch-logging # ties to the svc so each pod in the StatefulSet gets a stable DNS address (es-cluster-[0,1,2].elasticsearch.elk.svc.cluster.local)
replicas: 1 # replica count; single node
selector:
matchLabels:
k8s-app: elasticsearch-logging # must match the labels on the pod template
template:
metadata:
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: elasticsearch-logging
containers:
- image: docker.io/library/elasticsearch:7.9.3
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 500Mi
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: elasticsearch-logging
mountPath: /usr/share/elasticsearch/data/ # mount point
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "discovery.type" #定义单节点类型
value: "single-node"
- name: ES_JAVA_OPTS #设置Java的内存参数,可以适当进行加大调整
value: "-Xms512m -Xmx2g"
volumes:
- name: elasticsearch-logging
hostPath:
path: /data/es/
nodeSelector: # add a nodeSelector if the data should land on a specific node
es: data
tolerations:
- effect: NoSchedule
operator: Exists
# Elasticsearch requires vm.max_map_count to be at least 262144.
# If your OS already sets up this number to a higher value, feel free
# to remove this init container.
initContainers: # steps run before the main container starts
- name: elasticsearch-logging-init
image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] #添加mmap计数限制,太低可能造成内存不足的错误
securityContext: #仅应用到指定的容器上,并且不会影响Volume
privileged: true #运行特权容器
- name: increase-fd-ulimit
image: busybox
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "ulimit -n 65536"] #修改文件描述符最大数量
securityContext:
privileged: true
- name: elasticsearch-volume-init # initialize the ES data directory with 777 permissions
image: alpine:3.6
command:
- chmod
- -R
- "777"
- /usr/share/elasticsearch/data/
volumeMounts:
- name: elasticsearch-logging
mountPath: /usr/share/elasticsearch/data/
# create the namespace
kubectl create ns kube-logging
# create the resources
kubectl create -f es.yaml
vi logstash.yaml
---
apiVersion: v1
kind: Service
metadata:
name: logstash
namespace: kube-logging
spec:
ports:
- port: 5044
targetPort: beats
selector:
type: logstash
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash
namespace: kube-logging
spec:
selector:
matchLabels:
type: logstash
template:
metadata:
labels:
type: logstash
srv: srv-logstash
spec:
containers:
- image: docker.io/kubeimages/logstash:7.9.3 # this image supports both arm64 and amd64
name: logstash
ports:
- containerPort: 5044
name: beats
command:
- logstash
- '-f'
- '/etc/logstash_c/logstash.conf'
env:
- name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
value: "http://elasticsearch-logging:9200"
volumeMounts:
- name: config-volume
mountPath: /etc/logstash_c/
- name: config-yml-volume
mountPath: /usr/share/logstash/config/
- name: timezone
mountPath: /etc/localtime
resources: # always set resource limits on logstash so it cannot starve other workloads
limits:
cpu: 1000m
memory: 2048Mi
requests:
cpu: 512m
memory: 512Mi
volumes:
- name: config-volume
configMap:
name: logstash-conf
items:
- key: logstash.conf
path: logstash.conf
- name: timezone
hostPath:
path: /etc/localtime
- name: config-yml-volume
configMap:
name: logstash-yml
items:
- key: logstash.yml
path: logstash.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-conf
namespace: kube-logging
labels:
type: logstash
data:
logstash.conf: |-
input {
beats {
port => 5044
}
}
filter {
# handle ingress logs
if [kubernetes][container][name] == "nginx-ingress-controller" {
json {
source => "message"
target => "ingress_log"
}
if [ingress_log][requesttime] {
mutate {
convert => ["[ingress_log][requesttime]", "float"]
}
}
if [ingress_log][upstremtime] {
mutate {
convert => ["[ingress_log][upstremtime]", "float"]
}
}
if [ingress_log][status] {
mutate {
convert => ["[ingress_log][status]", "float"]
}
}
if [ingress_log][httphost] and [ingress_log][uri] {
mutate {
add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"}
}
mutate {
split => ["[ingress_log][entry]","/"]
}
if [ingress_log][entry][1] {
mutate {
add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"}
remove_field => "[ingress_log][entry]"
}
} else {
mutate {
add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"}
remove_field => "[ingress_log][entry]"
}
}
}
}
# handle logs from business services whose container names start with srv
if [kubernetes][container][name] =~ /^srv*/ {
json {
source => "message"
target => "tmp"
}
if [kubernetes][namespace] == "kube-logging" {
drop{}
}
if [tmp][level] {
mutate{
add_field => {"[applog][level]" => "%{[tmp][level]}"}
}
if [applog][level] == "debug"{
drop{}
}
}
if [tmp][msg] {
mutate {
add_field => {"[applog][msg]" => "%{[tmp][msg]}"}
}
}
if [tmp][func] {
mutate {
add_field => {"[applog][func]" => "%{[tmp][func]}"}
}
}
if [tmp][cost]{
if "ms" in [tmp][cost] {
mutate {
split => ["[tmp][cost]","m"]
add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"}
convert => ["[applog][cost]", "float"]
}
} else {
mutate {
add_field => {"[applog][cost]" => "%{[tmp][cost]}"}
}
}
}
if [tmp][method] {
mutate {
add_field => {"[applog][method]" => "%{[tmp][method]}"}
}
}
if [tmp][request_url] {
mutate {
add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"}
}
}
if [tmp][meta._id] {
mutate {
add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"}
}
}
if [tmp][project] {
mutate {
add_field => {"[applog][project]" => "%{[tmp][project]}"}
}
}
if [tmp][time] {
mutate {
add_field => {"[applog][time]" => "%{[tmp][time]}"}
}
}
if [tmp][status] {
mutate {
add_field => {"[applog][status]" => "%{[tmp][status]}"}
convert => ["[applog][status]", "float"]
}
}
}
mutate {
rename => ["kubernetes", "k8s"]
remove_field => "beat"
remove_field => "tmp"
remove_field => "[k8s][labels][app]"
}
}
output {
elasticsearch {
hosts => ["http://elasticsearch-logging:9200"]
codec => json
index => "logstash-%{+YYYY.MM.dd}" #索引名称以logstash+日志进行每日新建
}
}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-yml
namespace: kube-logging
labels:
type: logstash
data:
logstash.yml: |-
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
kubectl create -f logstash.yaml
vi filebeat.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-logging
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
enabled: true
paths:
- /var/log/containers/*.log # the log directory mounted into the pod that filebeat reads from
processors:
- add_kubernetes_metadata: # add k8s metadata fields for later processing
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
#output.kafka: # if log volume is high and ES lags behind, kafka can be placed between filebeat and logstash
# hosts: ["kafka-log-01:9092", "kafka-log-02:9092", "kafka-log-03:9092"]
# topic: 'topic-test-log'
# version: 2.0.0
output.logstash: # logstash does the data cleanup, so filebeat ships its data to logstash
hosts: ["logstash:5044"]
enabled: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-logging
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-logging
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-logging
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.io/kubeimages/filebeat:7.9.3 # this image supports both arm64 and amd64
args: [
"-c", "/etc/filebeat.yml",
"-e","-httpprof","0.0.0.0:6060"
]
#ports:
# - containerPort: 6060
# hostPort: 6068
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: ELASTICSEARCH_HOST
value: elasticsearch-logging
- name: ELASTICSEARCH_PORT
value: "9200"
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 1000Mi
cpu: 1000m
requests:
memory: 100Mi
cpu: 100m
volumeMounts:
- name: config # mounts the filebeat config file
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data # persists filebeat data on the host
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers # mounts the host's source log directory into the filebeat container; if docker/containerd uses the standard log path, mountPath can stay /var/lib
mountPath: /var/lib
readOnly: true
- name: varlog # mounts the host's /var/log/pods and /var/log/containers symlinks into the filebeat container
mountPath: /var/log/
readOnly: true
- name: timezone
mountPath: /etc/localtime
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath: # if docker/containerd uses the standard log path, path can stay /var/lib
path: /var/lib
- name: varlog
hostPath:
path: /var/log/
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /data/filebeat-data
type: DirectoryOrCreate
- name: timezone
hostPath:
path: /etc/localtime
tolerations: # tolerations so filebeat can be scheduled onto every node
- effect: NoExecute
key: dedicated
operator: Equal
value: gpu
- effect: NoSchedule
operator: Exists
kubectl create -f filebeat.yaml
vi kibana.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: kube-logging
name: kibana-config
labels:
k8s-app: kibana
data:
kibana.yml: |-
server.name: kibana
server.host: "0"
i18n.locale: zh-CN # set the default UI language to Chinese
elasticsearch:
hosts: ${ELASTICSEARCH_HOSTS} # ES connection address; since everything runs in k8s in the same namespace, the service name can be used directly
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: kube-logging
labels:
k8s-app: kibana
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Kibana"
srv: srv-kibana
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: kube-logging
labels:
k8s-app: kibana
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
srv: srv-kibana
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana
template:
metadata:
labels:
k8s-app: kibana
spec:
containers:
- name: kibana
image: docker.io/kubeimages/kibana:7.9.3 # this image supports both arm64 and amd64
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch-logging:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
volumeMounts:
- name: config
mountPath: /usr/share/kibana/config/kibana.yml
readOnly: true
subPath: kibana.yml
volumes:
- name: config
configMap:
name: kibana-config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
namespace: kube-logging
spec:
ingressClassName: nginx
rules:
- host: kibana.wolfcode.cn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana
port:
number: 5601
kubectl create -f kibana.yaml
# check pod status
kubectl get pod -n kube-logging
# check the svc and note kibana's NodePort
kubectl get svc -n kube-logging
// access it via any node IP plus that port
192.168.190.11:32036
First, create an index pattern under Stack Management.
Then open Discover, select the index pattern you created, and all the logs show up.
Step 10: Visual dashboard (KubeSphere)
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
vi cluster-configuration.yaml
Set storageClass: "managed-nfs-storage"
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
# check the installation logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# check the port
kubectl get svc/ks-console -n kubesphere-system
# the default port is 30880; if you are on a cloud provider or a firewall is enabled, remember to open it
# log in to the console with admin/P@88w0rd
-------- End of the k8s operations section --------
-------- Next: building services with DevOps --------
Step 1: Install GitLab
# download the package
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-15.9.1-ce.0.el7.x86_64.rpm
# install
rpm -i gitlab-ce-15.9.1-ce.0.el7.x86_64.rpm
# edit /etc/gitlab/gitlab.rb
# set external_url to http://<ip>:28080
# other settings as follows
gitlab_rails['time_zone'] = 'Asia/Shanghai'
puma['worker_processes'] = 2
sidekiq['max_concurrency'] = 8
postgresql['shared_buffers'] = "128MB"
postgresql['max_worker_processes'] = 4
prometheus_monitoring['enable'] = false
# apply the config and restart
gitlab-ctl reconfigure
gitlab-ctl restart
sudo systemctl enable gitlab-runsvdir.service
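Once reconfigure finishes, all components should report as running:
# Every line should start with "run:"; if not, check gitlab-ctl tail for errors
gitlab-ctl status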
Open 192.168.190.10:28080
Username: root
Password: cat /etc/gitlab/initial_root_password
In the web UI:
# after logging in, change the default password: avatar (top right) > Preferences > Password > set it to wolfcode
# allow webhooks to reach the local network
# Settings > Network > Outbound requests > check "Allow requests to the local network from web hooks and services"
# set the language to Chinese (instance-wide)
# Settings > Preferences > Localization > Default language > Simplified Chinese > Save changes
# set the current user's language to Chinese
# avatar (top right) > Preferences > Localization > Language > Simplified Chinese > Save changes
Step 2: Install the Harbor image registry
sudo curl -L "https://github.com/docker/compose/releases/download/v2.20.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
wget https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz
tar -xzf harbor-offline-installer-v2.5.0.tgz
cd harbor-offline-installer-v2.5.0
cp harbor.yml.tmpl harbor.yml
vi harbor.yml
hostname: 192.168.190.10
port: 8858
Comment out everything related to https:
# https related config
#https:
# https port for harbor, default is 443
# port: 443
# The path of cert and key files for nginx
# certificate: /your/certificate/path
# private_key: /your/private/key/path
harbor_admin_password: wolfcode
./install.sh
# create the namespace used by the devops tooling first, then the pull secret
kubectl create ns kube-devops
kubectl create secret docker-registry harbor-secret --docker-server=192.168.190.10:8858 --docker-username=admin --docker-password=wolfcode -n kube-devops
Open 192.168.190.10:8858
Username: admin
Password: wolfcode
Create a project named wolfcode.
// repeat this on every machine
vi /etc/docker/daemon.json
Add: "insecure-registries": ["192.168.190.10:8858","registry.cn-hangzhou.aliyuncs.com"],
sudo systemctl daemon-reload
sudo systemctl restart docker
// try docker login; if it succeeds, the registry setup works
docker login -uadmin 192.168.190.10:8858
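A quick tag-and-push test against the new registry, using the wolfcode project created above (the test image is arbitrary):
# Pushing a throwaway image confirms login, the insecure-registries setting and project permissions
docker pull hello-world
docker tag hello-world 192.168.190.10:8858/wolfcode/hello-world:test
docker push 192.168.190.10:8858/wolfcode/hello-world:test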
Step 3: Jenkins
mkdir jenkins
cd jenkins
vi Dockerfile
FROM jenkins/jenkins:lts-jdk11
ADD ./sonar-scanner-cli-4.8.0.2856-linux.zip /usr/local/
USER root
WORKDIR /usr/local/
RUN unzip sonar-scanner-cli-4.8.0.2856-linux.zip
RUN mv sonar-scanner-4.8.0.2856-linux sonar-scanner-cli
RUN ln -s /usr/local/sonar-scanner-cli/bin/sonar-scanner /usr/bin/sonar-scanner
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.8.0.2856-linux.zip
unzip sonar-scanner-cli-4.8.0.2856-linux.zip
# build the jenkins image
docker build -t 192.168.190.10:8858/library/jenkins-jdk11:jdk-11 .
# log in to harbor
docker login --username=admin 192.168.190.10:8858
# push the image to harbor
docker push 192.168.190.10:8858/library/jenkins-jdk11:jdk-11
kubectl create secret docker-registry aliyun-secret --docker-server=192.168.190.10:8858 --docker-username=admin --docker-password=wolfcode -n kube-devops
vi jenkins-serviceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins-admin
namespace: kube-devops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: jenkins-admin
namespace: kube-devops
vi jenkins-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc
namespace: kube-devops
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
vi jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
name: jenkins-service
namespace: kube-devops
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8080'
spec:
selector:
app: jenkins-server
type: NodePort
ports:
- name: http
port: 8080
targetPort: 8080
- name: agent
port: 50000
protocol: TCP
targetPort: 50000
vi jenkins-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: kube-devops
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
serviceAccountName: jenkins-admin
imagePullSecrets:
- name: harbor-secret # harbor pull secret; change it to your own Aliyun registry secret if needed
containers:
- name: jenkins
image: 192.168.190.10:8858/library/jenkins-jdk11:jdk-11
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
runAsUser: 0 # run the container as root
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-data
mountPath: /var/jenkins_home
- name: docker
mountPath: /run/docker.sock
- name: docker-home
mountPath: /usr/bin/docker
- name: daemon
mountPath: /etc/docker/daemon.json
subPath: daemon.json
- name: kubectl
mountPath: /usr/bin/kubectl
volumes:
- name: kubectl
hostPath:
path: /usr/bin/kubectl
- name: jenkins-data
persistentVolumeClaim:
claimName: jenkins-pvc
- name: docker
hostPath:
path: /run/docker.sock # map the host's docker socket into the container
- name: docker-home
hostPath:
path: /usr/bin/docker
- name: daemon
hostPath:
path: /etc/docker/
# from the jenkins directory, install jenkins
kubectl apply -f manifests/
# check it is running
kubectl get po -n kube-devops
# check the service port and open it in a browser
kubectl get svc -n kube-devops
# check the container logs to get the initial admin password
// it is printed just above the line "This may also be found at: /var/jenkins_home/secrets/initialAdminPassword"
kubectl logs -f <pod-name> -n kube-devops
// access it via your node IP and the mapped NodePort
192.168.190.11:31697
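If scrolling the log is awkward, the password can also be read straight from the Jenkins home volume (the pod name will differ):
# The initial admin password is stored under the mounted jenkins_home
kubectl exec -n kube-devops <jenkins-pod-name> -- cat /var/jenkins_home/secrets/initialAdminPassword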
Choose "Install suggested plugins".
When prompted, set the admin account to admin/wolfcode.
Go to Manage Jenkins > Plugins > Available plugins and install the plugins you need:
Build Authorization Token Root
Gitlab
Node and Label parameter
Kubernetes
Config File Provider
Git Parameter
Jenkins + k8s environment configuration
Go to Dashboard > Manage Jenkins > Nodes > Clouds > New Cloud
Configure the k8s cluster:
Name: kubernetes
Kubernetes URL: https://kubernetes.default
Disable the https certificate check
Jenkins URL: http://jenkins-service.kube-devops:8080
Save once the configuration is done.
Credentials > System > Global credentials > Add credentials
Kind: Username with password
Enter your GitLab username and password.
ID: git-user-pass
Add the Harbor credentials the same way.
Step 4: Build services with Jenkins CI/CD
In your GitLab UI, import https://github.com/wjw1758548031/resource/k8s-go-demo.
My repository is named resource.
cat ~/.kube/config
Manage Jenkins > Managed files > create a new config and copy the content above into it.
In Jenkins, click New Item.
Choose Pipeline.
Leave the GitLab connection at its default.
Under "Build when", enable the trigger and check:
Push Events
Opened Merge Requests
Advanced
Generate a Secret token.
Pipeline definition: Pipeline script from SCM
Fill in the git repository details.
Script path: Jenkinsfile
Then copy the URL shown next to "Build when".
Open the GitLab project > Settings > Webhooks.
Paste the URL, disable SSL verification, and enable push events.
Add the Secret token as well.
Every push to the project now builds the image and redeploys the container; the API can then be reached at the URL below:
http://192.168.190.11:30001/
The Jenkinsfile, Dockerfile and deployment.yaml in the project are already written; use them as-is.