Deploying SkyWalking on a Kubernetes cluster with Helm

1. Install Helm

Option 1: install via the download script

bash
~# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
~# chmod 700 get_helm.sh
~# ./get_helm.sh

Option 2: install from a release tarball

bash
~# wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
~# tar -zxvf helm-canary-linux-amd64.tar.gz
~# mv linux-amd64/helm /usr/local/bin/helm
~# chmod +x /usr/local/bin/helm
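
Either way, confirm the client is installed and on the PATH:

bash
~# helm version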

2. Install NFS

bash
### The NFS server is installed on the master node here

# Install the nfs-utils and rpcbind packages (=== on all nodes ===)
yum -y install nfs-utils rpcbind

# Create the shared directory
sudo mkdir -p /data/nfs

# Open up permissions
sudo chmod -R 777 /data/nfs

# Edit the exports file and add the following line
sudo vim /etc/exports
/data/nfs 172.16.10.0/24(rw,no_root_squash,sync)

# Start and enable the services (on all nodes)
systemctl start rpcbind && systemctl enable rpcbind
systemctl start nfs && systemctl enable nfs

# Apply the export configuration
exportfs -rv

# List the exported directories (replace the address with your NFS server's IP)
sudo showmount -e 192.168.10.100
# Output like the following indicates the export was created successfully
Export list for 192.168.10.100:
/data/nfs	172.16.10.*
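
As an optional sanity check, you can mount the export manually from a worker node; the server address below assumes the NFS server IP used later in this post (172.16.10.20), so adjust it to your environment:

bash
mount -t nfs 172.16.10.20:/data/nfs /mnt
ls /mnt
umount /mnt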

3. Install the SkyWalking service
Reference: the official skywalking-helm chart documentation

1) Set environment variables

bash
# change the release version according to your need
export SKYWALKING_RELEASE_VERSION=4.5.0 
# change the release name according to your scenario 
export SKYWALKING_RELEASE_NAME=skywalking
# change the namespace to where you want to install SkyWalking
export SKYWALKING_RELEASE_NAMESPACE=skywalking

2) Install SkyWalking from the Docker Hub OCI Helm repository

Edit the values file

bash
cd /root/skywalking-kubernetes/chart
cat  skywalking/values-my-es.yaml 
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Default values for skywalking.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

oap:
  image:
    tag: 9.6.0
  storageType: elasticsearch

ui:
  image:
    tag: 9.6.0

elasticsearch:
  enabled: false
  config:               # For users of an existing elasticsearch cluster, takes effect when `elasticsearch.enabled` is false
    host: elasticsearch-es-http
    port:
      http: 9200
    user: "xxx"         # [optional]
    password: "xxx"     # [optional]

Create the namespace and install the chart

bash
kubectl create namespace skywalking
helm install "${SKYWALKING_RELEASE_NAME}" \
  oci://registry-1.docker.io/apache/skywalking-helm \
  --version "${SKYWALKING_RELEASE_VERSION}" \
  -n "${SKYWALKING_RELEASE_NAMESPACE}" \
  --set oap.image.tag=9.6.0 \
  --set oap.storageType=elasticsearch \
  --set ui.image.tag=9.6.0

The output looks like this

bash
NAME: skywalking
LAST DEPLOYED: Sun Dec 24 03:18:33 2023
NAMESPACE: skywalking
STATUS: deployed
REVISION: 1
NOTES:
************************************************************************
*                                                                      *
*                 SkyWalking Helm Chart by SkyWalking Team             *
*                                                                      *
************************************************************************

Thank you for installing skywalking-helm.

Your release is named skywalking.

Learn more, please visit https://skywalking.apache.org/

Get the UI URL by running these commands:
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward svc/skywalking-skywalking-helm-ui 8080:80 --namespace skywalking
#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the SkyWalking's storage ES pod is terminated.            #####
#################################################################################
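
The --set flags above mirror the values-my-es.yaml shown earlier; the same install can instead be driven by that file (a sketch, assuming you are still in the /root/skywalking-kubernetes/chart directory):

bash
helm install "${SKYWALKING_RELEASE_NAME}" \
  oci://registry-1.docker.io/apache/skywalking-helm \
  --version "${SKYWALKING_RELEASE_VERSION}" \
  -n "${SKYWALKING_RELEASE_NAMESPACE}" \
  -f skywalking/values-my-es.yaml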

Check the pods

bash
kubectl get pods -n skywalking
NAME                                              READY   STATUS      RESTARTS   AGE
elasticsearch-master-0                            1/1     Running     0          34h
elasticsearch-master-1                            1/1     Running     0          34h
elasticsearch-master-2                            1/1     Running     0          34h
skywalking-skywalking-helm-oap-5c7bc85f97-48rbt   1/1     Running     0          34h
skywalking-skywalking-helm-oap-5c7bc85f97-v74ls   1/1     Running     0          34h
skywalking-skywalking-helm-oap-init-ftc8m         0/1     Completed   0          34h
skywalking-skywalking-helm-ui-b9f69c6fc-wgjvr     1/1     Running     0          34h
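
It is also worth listing the Services the chart created, since their names are referenced by the ingress and the agents later on:

bash
kubectl get svc -n skywalking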

3) Review the generated manifests

A. Manifests for skywalking-skywalking-helm-ui

Deployment

bash
~# kubectl edit deployment skywalking-skywalking-helm-ui -n skywalking

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  generation: 1
  labels:
    app: skywalking
    app.kubernetes.io/managed-by: Helm
    chart: skywalking-helm-4.5.0
    component: ui
    heritage: Helm
    release: skywalking
  name: skywalking-skywalking-helm-ui
  namespace: skywalking
  resourceVersion: "46556645"
  uid: c0e3217e-4456-4252-b28c-cb53c26e5bf1
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: skywalking
      component: ui
      release: skywalking
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: skywalking
        component: ui
        release: skywalking
    spec:
      containers:
      - env:
        - name: SW_OAP_ADDRESS
          value: http://skywalking-skywalking-helm-oap:12800
        image: skywalking.docker.scarf.sh/apache/skywalking-ui:9.6.0
        imagePullPolicy: IfNotPresent
        name: ui
        ports:
        - containerPort: 8080
          name: page
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30        

Service

bash
~# kubectl edit svc skywalking-skywalking-helm-ui -n skywalking

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  labels:
    app: skywalking
    app.kubernetes.io/managed-by: Helm
    chart: skywalking-helm-4.5.0
    component: ui
    heritage: Helm
    release: skywalking
  name: skywalking-skywalking-helm-ui
  namespace: skywalking
  resourceVersion: "46556519"
  uid: b4e3da64-da1a-4dcb-9324-537773ce251a
spec:
  clusterIP: 10.68.222.100
  clusterIPs:
  - 10.68.222.100
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: skywalking
    component: ui
    release: skywalking
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
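
A quick way to confirm the Service selector actually matches the UI pod is to check its endpoints:

bash
kubectl get endpoints skywalking-skywalking-helm-ui -n skywalking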

B. Manifests for skywalking-skywalking-helm-oap

Deployment

bash
~# kubectl edit deployment skywalking-skywalking-helm-oap -n skywalking

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  generation: 1
  labels:
    app: skywalking
    app.kubernetes.io/managed-by: Helm
    chart: skywalking-helm-4.5.0
    component: oap
    heritage: Helm
    release: skywalking
  name: skywalking-skywalking-helm-oap
  namespace: skywalking
  resourceVersion: "46557028"
  uid: 83562701-0540-479d-b45c-72445e0d5bcf
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: skywalking
      component: oap
      release: skywalking
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: skywalking
        component: oap
        release: skywalking
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: skywalking
                  component: oap
                  release: skywalking
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - env:
        - name: JAVA_OPTS
          value: -Dmode=no-init -Xmx2g -Xms2g
        - name: SW_CLUSTER
          value: kubernetes
        - name: SW_CLUSTER_K8S_NAMESPACE
          value: skywalking
        - name: SW_CLUSTER_K8S_LABEL
          value: app=skywalking,release=skywalking,component=oap
        - name: SKYWALKING_COLLECTOR_UID
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.uid
        - name: SW_STORAGE
          value: elasticsearch
        - name: SW_STORAGE_ES_CLUSTER_NODES
          value: elasticsearch-master:9200
        image: skywalking.docker.scarf.sh/apache/skywalking-oap-server:9.6.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 12800
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - for i in $(seq 1 60); do nc -z -w3 elasticsearch-master 9200 && exit 0 ||
          sleep 5; done; exit 1
        image: busybox:1.30
        imagePullPolicy: IfNotPresent
        name: wait-for-elasticsearch
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: skywalking-skywalking-helm-oap
      serviceAccountName: skywalking-skywalking-helm-oap
      terminationGracePeriodSeconds: 30 

Service

bash
~# kubectl edit svc skywalking-skywalking-helm-oap -n skywalking

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  labels:
    app: skywalking
    app.kubernetes.io/managed-by: Helm
    chart: skywalking-helm-4.5.0
    component: oap
    heritage: Helm
    release: skywalking
  name: skywalking-skywalking-helm-oap
  namespace: skywalking
  resourceVersion: "46556523"
  uid: 386c9503-7003-447a-94a0-4f9dcd2fa06d
spec:
  clusterIP: 10.68.169.25
  clusterIPs:
  - 10.68.169.25
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: grpc
    port: 11800
    protocol: TCP
    targetPort: 11800
  - name: rest
    port: 12800
    protocol: TCP
    targetPort: 12800
  selector:
    app: skywalking
    component: oap
    release: skywalking
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}  
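
Before wiring up agents, confirm the OAP Service has endpoints behind both the gRPC (11800) and REST (12800) ports:

bash
kubectl get endpoints skywalking-skywalking-helm-oap -n skywalking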

C. Ingress configuration

Secret

bash
~# kubectl edit secret abc.com-ssl -n skywalking

apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUdMakNDQlJhZ0F3SUJBZ0lRQzAzRDZSSTlvbnBXZmhYTkJvNUZmakFOQmdrcWhraUc5dzBCQVFzRkFEQmcKTVFzd0NRWURWUVFHRXdKVlV6RVZNQk1HQTFVRUNoTU1SR2xuYVVObGNuUWdTVzVqTVJrd0Z3WURWUVFMRXhCMwpkM2N1WkdsbmFXTmxjblF1WTI5dE1SOHdIUVlEVlFRREV4WkhaVzlVY25WemRDQlVURk1nVWxOQklFTkJJRWN4Ck1CNFhEVEl6TURVd016QXdNREF3TUZvWERUSTBNRFV3TWpJek5UazFPVm93R3pFWk1CY0dBMVVFQXd3UUtpNWoKZFdsM2FuSndZM1pwTG1OdmJUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUw1cQpCZzlGcWt6dmgzV1pBTmtBVHdlYmlzcStyYVV1R2hPUUM3Q1dEN3VFVE5MSU9lMFRJelZkYXhNZ1pCeDRDaHBDClk4MmY5VzhHaVlTTFBvLzkvVWNqNklKaXdsRDFGZmVrY2NyN1duWXpXUk1GSldQMFQ3c3luVTUrRDhGTHNSVTMKL0ZLVC9EVHIzY3NrQ2RJcS9XcWU5RFAzNitjSlZJbURXYTZ2QjR4T1BYT2FlT1FHTjl5cTNLZ3FWVUU2MnZDWQpRSWhyL3lDSmQ5RjRadEM5MS95dWFlUEJqckdCMzV5N25TZjVVOFhxRGkyNTZqaU5ubFpBQ2NZZHgzYjhNOFNpClFOaHpha0xhbmNPajlNZ005djNrcHFNY253OXZQT3lEVUNpZ1lxWFdHNk5Xdzh2azVWV2ZtZmFNdmFjRERPYWwKdVFCc1MxbzdKZmRzNnNIK1U0VUNBd0VBQWFPQ0F5Y3dnZ01qTUI4R0ExVWRJd1FZTUJhQUZKUlAxRjJMNUtUaQpwb0QrL2RqNUFPK2p2Z0pYTUIwR0ExVWREZ1FXQkJTdGsvaUNFV3B3aTJBWDRmMCtjLy9sc3RrQkZUQXJCZ05WCkhSRUVKREFpZ2hBcUxtTjFhWGRxY25CamRta3VZMjl0Z2c1amRXbDNhbkp3WTNacExtTnZiVEFPQmdOVkhROEIKQWY4RUJBTUNCYUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01EOEdBMVVkSHdRNApNRFl3TktBeW9EQ0dMbWgwZEhBNkx5OWpaSEF1WjJWdmRISjFjM1F1WTI5dEwwZGxiMVJ5ZFhOMFZFeFRVbE5CClEwRkhNUzVqY213d1BnWURWUjBnQkRjd05UQXpCZ1puZ1F3QkFnRXdLVEFuQmdnckJnRUZCUWNDQVJZYmFIUjAKY0RvdkwzZDNkeTVrYVdkcFkyVnlkQzVqYjIwdlExQlRNSFlHQ0NzR0FRVUZCd0VCQkdvd2FEQW1CZ2dyQmdFRgpCUWN3QVlZYWFIUjBjRG92TDNOMFlYUjFjeTVuWlc5MGNuVnpkQzVqYjIwd1BnWUlLd1lCQlFVSE1BS0dNbWgwCmRIQTZMeTlqWVdObGNuUnpMbWRsYjNSeWRYTjBMbU52YlM5SFpXOVVjblZ6ZEZSTVUxSlRRVU5CUnpFdVkzSjAKTUFrR0ExVWRFd1FDTUFBd2dnRi9CZ29yQmdFRUFkWjVBZ1FDQklJQmJ3U0NBV3NCYVFCMkFPN04wR1RWMnhyTwp4VnkzbmJUTkU2SXloMFo4dk96ZXcxRklXVVp4SDdXYkFBQUJoK01JQU1VQUFBUURBRWN3UlFJaEFON1BuOEhGClBJT0VwenV4Ky9wUS9YYW1CS1VKWHhIbER2bll1eU9FTVd6bkFpQTZHUzdDRjNWcGVzeWFwVzR1RDEzQjcrbTgKVVN3R3lBcXNGak9ORkRkNWlBQjNBSFBabm9rYlRKWjRvQ0I5UjUzbXNzWWMwRkZlY1JrcWpHdUFFSHJCZDNLMQpBQUFCaCtNSUFQOEFBQVFEQUVnd1JnSWhBTXROa3ZEams1U0E5MWtlWVVsS090ZzUvRkJZTjFNbG9KTUdSYmdNCmY2R1NBaUVBMWpETnhZa0ZXZTFQVXp3QmtBaG92M3ptaXdEZ1hPSmxVSElISXRhN3Jic0FkZ0JJc09OcjJxWkgKTkEvbGFnTDZuVERySEZJQnkxYmRMSUhadTcrck9kaUVjd0FBQVlmakNBRFNBQUFFQXdCSE1FVUNJQkxkUXRNegowdkE4SEt3bTJQamgxNmNxT2M5MmlWOFFreHRzSUVWZDdEQlpBaUVBNWdJWnlEWEs3MjV3dXFBZFhLZFA0N0JmCllrTE9hRFI4UlpCeUkvTjljY0V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUhHK2FYTnJHL0lpRkdYbGsrczQKa3FJL1JUV1FsOXVwdnA3ejk5U3ZNamFFSEJETUJ5Z21LS1p1K0l2bjlkOWhVdnZDUTFpM2xFOE1OZkpocGh1awo2MFgwcm5pSU1ucmdETExMYURSc3pSeXVhazZvcGV4QkF6ejAzNGhBVnM3UnMzTlYzVGVpZGV6RlJmZEd2ZEVNCjk5NG1BUDV6a3A0V1BTVFpFbldBS0FBR2o4Wk4ydk9ZSENLaCtrYTFGb29oRzNGMVV0d3hWSG9Gc042L3EvKzgKa2FweXRCTGFSamxvSnZxcFRtbyt4TzRTMU9oNHJkRmdOa3JOYmVoTTU2VDJOSkF5c0p0c25WancyanRnVFhrbApaQzdyUy85RlFUT0RKbTlKYzIyN0xMTjBFbnp6SG03RERDaHBGaTI4akVYM2NiVDNaWDNxb1drSmw4VjdKWlVTCkZKRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJRWpUQ0NBM1dnQXdJQkFnSVFEUWQ0S2hNL3h2bWxjcGJoTWYvUmVUQU5CZ2txaGtpRzl3MEJBUXNGQURCaApNUXN3Q1FZRFZRUUdFd0pWVXpFVk1CTUdBMVVFQ2hNTVJHbG5hVU5sY25RZ1NXNWpNUmt3RndZRFZRUUxFeEIzCmQzY3VaR2xuYVdObGNuUXVZMjl0TVNBd0hnWURWUVFERXhkRWFXZHBRMlZ5ZENCSGJHOWlZV3dnVW05dmRDQkgKTWpBZUZ3MHhOekV4TURJeE1qSXpNemRhRncweU56RXhNREl4TWpJek16ZGFNR0F4Q3pBSkJnTlZCQVlUQWxWVApNUlV3RXdZRFZRUUtFd3hFYVdkcFEyVnlkQ0JKYm1NeEdUQVhCZ05WQkFzVEVIZDNkeTVrYVdkcFkyVnlkQzVqCmIyMHhIekFkQmdOVkJBTVRGa2RsYjFSeWRYTjBJRlJNVXlCU1UwRWdRMEVnUnpFd2dnRWlNQTBHQ1NxR1NJYjMKRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDK0YranN2aWtL
eS82NUxXRXgvVE1rQ0RJdVdlZ2gxTmd3dm00UQp5SVNnUDdvVTVkNzllb3lTRzN2T2hDM3cvM2pFTXVpcG9IMWZCdHA3bTB0VHBzWWJBaGNoNFhBN3JmdUQ2d2hVCmdhamVFckxWeG9pV01Qa0MvRG5VdmJnaTc0QkptZEJpdUdIUVNkN0x3c3VYcFRFR0c5ZllYY2JUVk41U0FUWXEKRGZiZXhiWXhUTXdWSldvVmI2bHJCRWdNM2dCQnFpaUFpeTgwMHh1MU5xMDdKZENJUWtCc05wRnRaYklaaHNEUwpmemxHV1A0d0VtQlEzTzY3YytaWGtGcjJEY3JYQkV0SGFtODBHcDJTTmhvdTJVNVU3VWVzREwveGdMSzYvMGQ3CjZUbkVWTVNVVkprWjhWZVpyK0lVSWx2b0xydGpMYnF1Z2IwVDNPWVhXK0NRVTBrQkFnTUJBQUdqZ2dGQU1JSUIKUERBZEJnTlZIUTRFRmdRVWxFL1VYWXZrcE9LbWdQNzkyUGtBNzZPK0FsY3dId1lEVlIwakJCZ3dGb0FVVGlKVQpJQmlWNXVOdTVnLzYrcmtTN1FZWGp6a3dEZ1lEVlIwUEFRSC9CQVFEQWdHR01CMEdBMVVkSlFRV01CUUdDQ3NHCkFRVUZCd01CQmdnckJnRUZCUWNEQWpBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQU1EUUdDQ3NHQVFVRkJ3RUIKQkNnd0pqQWtCZ2dyQmdFRkJRY3dBWVlZYUhSMGNEb3ZMMjlqYzNBdVpHbG5hV05sY25RdVkyOXRNRUlHQTFVZApId1E3TURrd042QTFvRE9HTVdoMGRIQTZMeTlqY213ekxtUnBaMmxqWlhKMExtTnZiUzlFYVdkcFEyVnlkRWRzCmIySmhiRkp2YjNSSE1pNWpjbXd3UFFZRFZSMGdCRFl3TkRBeUJnUlZIU0FBTUNvd0tBWUlLd1lCQlFVSEFnRVcKSEdoMGRIQnpPaTh2ZDNkM0xtUnBaMmxqWlhKMExtTnZiUzlEVUZNd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQgpBSUljQkRxQzZjV3B5R1VTWEFqakFjWXdzSzRpaUdGN0t3ZUc5N2kxUkp6MWt3WmhSb282b3JVMUp0QlluanpCCmM0Ky9zWG1uSEprM21sUHlMMXh1SUF0OXNNZUM3K3ZyZVJJRjV3RkJDME1DTjVzYkh3aE5OMUp6S2JpZk5lUDUKb3pwWmRRRm1rQ28rbmVCaUtSNkhxSUErTE1UTUNNTXV2MmtoR0d1UEhtdER6ZTRHbUVHWnRZTHlGOEVRcGE1WQpqUHVWNmsyQ3IvTjNYeEZwVDNoUnB0LzN1c1UvWmI5d2ZLUHRXcG96blo0LzQ0YzFwOXJ6RmNaWXJXa2ozQSs3ClROQkpFMEdtUDJmaFhoUDFEL1hWZklXL2gweUNKR0VpVjlHbG0vdUdPYTNEWEhsbWJBY3hTeUNScmFHK1pCa0EKN2g0U2VNNlk4bC83TUJScFBDejZsOFk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tAED
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdm1vR0QwV3FUTytIZFprQTJRQlBCNXVLeXI2dHBTNGFFNUFMc0pZUHU0Uk0wc2c1CjdSTWpOVjFyRXlCa0hIZ0tHa0pqelovMWJ3YUpoSXMrai8zOVJ5UG9nbUxDVVBVVjk2Unh5dnRhZGpOWkV3VWwKWS9SUHV6S2RUbjRQd1V1eEZUZjhVcFA4Tk92ZHl5UUowaXI5YXA3ME0vZnI1d2xVaVlOWnJxOEhqRTQ5YzVwNAo1QVkzM0tyY3FDcFZRVHJhOEpoQWlHdi9JSWwzMFhobTBMM1gvSzVwNDhHT3NZSGZuTHVkSi9sVHhlb09MYm5xCk9JMmVWa0FKeGgzSGR2d3p4S0pBMkhOcVF0cWR3NlAweUF6Mi9lU21veHlmRDI4ODdJTlFLS0JpcGRZYm8xYkQKeStUbFZaK1o5b3k5cHdNTTVxVzVBR3hMV2pzbDkyenF3ZjVUaFFJREFRQUJBb0lCQUJkb0FRYXZrWmVUZWh0QwppNUFoTVpYRjBQSExMcDAzWlkweUQ3M05OSEhnZVhFUG04OUFvdnRVV0cwcGRpVHB2SlF0eFFicHVzbkRDL1IzCkNXRzUzd0Izc1lVVmpyMVU0elpseUhjakhxT1kvRUlTUjk1WmtkTjVEVTB3d2M4STl1T2MxaTl3Y1hndjVqdXEKV21xelRpTmxGcStzc2hyY1VyLzBuWG1UbW1Ic3BYRW1meDF6TVc3YVA5TDJNT01xazRVamJDblNVZ1c4QnJ6SAo2dlMyLzFpWG5rdEJWK0pvV2ZhYUpmNFljM21RT3J0SFB1R3hCbEhCMk5ISkcxQitQeWN2ckVyLzFhOE10RzlUCjFINlg2aGxmY0hpUUZCRmdpa25mWTJ6Zk5HcElORGpaejFPK2VSRjRpdlphb0ozR3ZtY2RVUldjbHpibmlSQXcKcFJyb2pnRUNnWUVBKzlGK0FqcXVaWnp5T1o5ZWNZYlJwVXd5TWNiNXYwZ2hhdjNoQWdYOXhaclA4dUtCejFqTApBdUcrQVRlL3BDVE1sTVlzL3Rabk1pMVR3Z2Rld2dEM3VqR2plRmZ3TFdINnVWb0kzR3JxbFhscjVGSWYwVjlRCjJYRG5LdXdVOWh6eFpycG52K2xOS01QVWMwdGh1ZjlJVU9LL2Q4LzA3V0lhTDg3OE4yRmUyOUVDZ1lFQXdaTisKMXFHa29hQzNhR1huZWk3ODlBLzNyMEdmSDgvVE1HUkpHaWgrM01FOERhcVNSUEtDbTY3cFlUU28vQlh3VnpvOApoR1R6RWRLOHFMM3NIRW85OEtHRjZ2MjJVSDBrN2ZIY0RkMGZEZGFKT1l1eDZsNWVGTm9nYlF1bWxhdlVPdG0rCk12NWZacVVRS0h1MGxJeExRTWxQZnQ4d2lsakZzb3RIR0RDbVRYVUNnWUVBa1BzWU5XaGJxQjBvU0l5ODcxZlQKcy91ZW1wSVlrRXlURU9xQ2RZdkZnOU9TRGlEaTQrSVhYOTFnYTRzUlJ3djR6VjhiNE16SU5WZHJkYmFRazluQwp4dXgxVllBcnc3VTVpU1dSSEhaaUFSVWJUU1VMTkp4UURDQzR0em1kQitXNkJvOGoxSllaMm5LRkNxeEg0N0phCmpGZEFMVmNKaVBLR2FTZ2VoalFGVFVFQ2dZQktYam9LaDB0U2RRWkJhM1VFc2V5b2IwSCs2TDBUWWFxSEd1QWkKMW8vMmk1NWd1Yms4RjljcHJJY291eXg0dkl6N1ZmcE4rdUtQWkdEcWl4eWN1Y0VXSTFmcHNkTkxGT2tOS1RBYgplMm9rek5rbmJJM0x0cmw2VlZyRHlnZ1QxRkhTMGppS0tzUElFWDRscjNEdTZQODRRcDd4NVJrbTdYZjJZaC9NCklWU2l5UUtCZ0JiK0plU0lFRmM1czRyRklaYnN0eENCK1dqdk1PczR0RW5NNkp3WXh4TkJmYUdQQUtMWmNCNDMKWTVQblNoT2w0T21RYnZ5RWdHK1o4RFcrN0J2cVVjVE92SGlhb2YzTzZPUjBQdmV4Y3Z0Sldha3RHRVlPT0g2bQpGTWZmZ3QvUkEraUFQay91QllCZVNLWHNGNDhCY3Nsck15TU9rUThpaU02OTQ1WEIxaE5RCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tAVse
immutable: false
kind: Secret
metadata:
  creationTimestamp: "2023-12-21T14:30:06Z"
  name: abc.com-ssl
  namespace: skywalking
  resourceVersion: "46095322"
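
If you need to (re)create this TLS secret from certificate files, a sketch of the command (the file names are placeholders for your own certificate and key):

bash
kubectl create secret tls abc.com-ssl \
  --cert=abc.com.crt \
  --key=abc.com.key \
  -n skywalking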

Ingress

bash
~# kubectl edit ingress skywalking-ui -n skywalking

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2023-12-22T11:51:21Z"
  generation: 3
  labels:
    app: skywalking
    app.kubernetes.io/managed-by: Helm
    chart: skywalking-4.3.0
    component: ui
    heritage: Helm
    k8s.kuboard.cn/name: skywalking-ui
    release: skywalking
  name: skywalking-ui
  namespace: skywalking
  resourceVersion: "46541768"
  uid: 9331ed1a-40e0-4fb2-b6a5-370207fb196b
spec:
  ingressClassName: biking-ingress
  rules:
  - host: skywalking-oap-test.abc.com
    http:
      paths:
      - backend:
          service:
            name: skywalking-skywalking-helm-ui
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - skywalking-oap-test.abc.com
    secretName: abc.com-ssl
status:
  loadBalancer:
    ingress:
    - ip: 172.16.10.202
    - ip: 172.16.10.203
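
With the ingress in place, a quick external check against the hostname (this assumes DNS or a local hosts entry pointing at the ingress controller):

bash
curl -kI https://skywalking-oap-test.abc.com/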

D. Elasticsearch configuration

StatefulSet

yaml
~# kubectl edit StatefulSet elasticsearch-master -n skywalking

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    esMajorVersion: "7"
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  generation: 1
  labels:
    app: elasticsearch-master
    app.kubernetes.io/managed-by: Helm
    chart: elasticsearch
    heritage: Helm
    release: skywalking
  name: elasticsearch-master
  namespace: skywalking
  resourceVersion: "46556881"
  uid: 6181350e-d165-43b6-b1df-e2473b928f47
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: elasticsearch-master
  serviceName: elasticsearch-master-headless
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: elasticsearch-master
        chart: elasticsearch
        release: skywalking
      name: elasticsearch-master
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - elasticsearch-master
            topologyKey: kubernetes.io/hostname
      automountServiceAccountToken: true
      containers:
      - env:
        - name: node.name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,
        - name: discovery.seed_hosts
          value: elasticsearch-master-headless
        - name: cluster.name
          value: elasticsearch
        - name: network.host
          value: 0.0.0.0
        - name: cluster.deprecation_indexing.enabled
          value: "false"
        - name: ES_JAVA_OPTS
          value: -Xmx1g -Xms1g
        - name: node.data
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.master
          value: "true"
        - name: node.ml
          value: "true"
        - name: node.remote_cluster_client
          value: "true"
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.3
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: http 
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - |
              set -e
              # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
              # Once it has started only check that the node itself is responding
              START_FILE=/tmp/.es_start_file

              # Disable nss cache to avoid filling dentry cache when calling curl
              # This is required with Elasticsearch Docker using nss < 3.52
              export NSS_SDB_USE_CACHE=no

              http () {
                local path="${1}"
                local args="${2}"
                set -- -XGET -s

                if [ "$args" != "" ]; then
                  set -- "$@" $args
                fi

                if [ -n "${ELASTIC_PASSWORD}" ]; then
                  set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
                fi

                curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
              }

              if [ -f "${START_FILE}" ]; then
                echo 'Elasticsearch is already running, lets check the node is healthy'
                HTTP_CODE=$(http "/" "-w %{http_code}")
                RC=$?
                if [[ ${RC} -ne 0 ]]; then
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
                  exit ${RC}
                fi
                # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                if [[ ${HTTP_CODE} == "200" ]]; then
                  exit 0
                elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
                  exit 0
                else
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                  exit 1
                fi

              else
                echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
                  touch ${START_FILE}
                  exit 0
                else
                  echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                  exit 1
                fi
              fi
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 2Gi
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      initContainers:
      - command:
        - sysctl
        - -w    
        - vm.max_map_count=262144
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.3
        imagePullPolicy: IfNotPresent
        name: configure-sysctl
        resources: {}
        securityContext:
          privileged: true
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      terminationGracePeriodSeconds: 120
  updateStrategy:
    type: RollingUpdate   

Service

yaml
~# kubectl edit service elasticsearch-master -n skywalking

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: skywalking
    meta.helm.sh/release-namespace: skywalking
  creationTimestamp: "2023-12-23T19:18:33Z"
  labels:
    app: elasticsearch-master
    app.kubernetes.io/managed-by: Helm
    chart: elasticsearch
    heritage: Helm
    release: skywalking
  name: elasticsearch-master
  namespace: skywalking
  resourceVersion: "46556527"
  uid: 7e6f70f2-de8f-46e6-8b2d-0421e32ec4f7
spec:
  clusterIP: 10.68.47.138
  clusterIPs:
  - 10.68.47.138
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  selector:
    app: elasticsearch-master
    chart: elasticsearch
    release: skywalking
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}  
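
To check cluster health directly, port-forward the Elasticsearch Service and query the health endpoint:

bash
kubectl port-forward svc/elasticsearch-master 9200:9200 -n skywalking &
sleep 3
curl "http://127.0.0.1:9200/_cluster/health?pretty"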

The readiness-probe script above, extracted here for readability:

bash
bash -c 'set -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
  local path="${1}"
  local args="${2}"
  set -- -XGET -s

  if [ "$args" != "" ]; then
    set -- "$@" $args
  fi

  if [ -n "${ELASTIC_PASSWORD}" ]; then
    set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
  fi

  curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
}

if [ -f "${START_FILE}" ]; then
  echo 'Elasticsearch is already running, lets check the node is healthy'
  HTTP_CODE=$(http "/" "-w %{http_code}")
  RC=$?
  if [[ ${RC} -ne 0 ]]; then
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
    exit ${RC}
  fi
  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
  if [[ ${HTTP_CODE} == "200" ]]; then
    exit 0
  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
    exit 0
  else
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
    exit 1
  fi

else
  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
  if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
    touch ${START_FILE}
    exit 0
  else
    echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
    exit 1
  fi
fi
'

4. Add the client agent

Download the apache-skywalking-apm-es7 package and extract it. Copy the agent directory from the extracted archive onto the NFS share, then mount that share into the Java pods so the agent directory appears inside each pod at /usr/skywalking/agent/.

Version 8.7.0

bash
wget https://archive.apache.org/dist/skywalking/8.7.0/apache-skywalking-apm-es7-8.7.0.tar.gz
tar -xf apache-skywalking-apm-es7-8.7.0.tar.gz
mkdir -p /data/nfs/skywalking-agent
mv apache-skywalking-apm-es7-8.7.0/agent/* /data/nfs/skywalking-agent/

Version 9.1.0

bash
wget https://archive.apache.org/dist/skywalking/java-agent/9.1.0/apache-skywalking-java-agent-9.1.0.tgz
tar -xf apache-skywalking-java-agent-9.1.0.tgz
mv skywalking-agent /data/nfs/
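
Either way, the deployment below expects the agent under /data/nfs/skywalking-agent/ on the share (it is mounted with subPath: skywalking-agent), so check the layout first:

bash
ls /data/nfs/skywalking-agent/
# expect skywalking-agent.jar plus the config/ and plugins/ directories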

Launch a pod that uses the agent

Deployment

bash
~# kubectl edit deployment financial-management -n test

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "9"
    k8s.kuboard.cn/displayName: financial-management
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"k8s.kuboard.cn/displayName":"financial-management"},"labels":{"k8s.kuboard.cn/layer":"platform","k8s.kuboard.cn/name":"financial-management"},"name":"financial-management","namespace":"biking"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"k8s.kuboard.cn/name":"financial-management"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"k8s.kuboard.cn/name":"financial-management"}},"spec":{"containers":[{"env":[{"name":"TZ","value":"Asia/Shanghai"},{"name":"NACOS","value":"172.16.10.20"}],"image":"harbor.cuiwjrpcvi.com/bktest/financial-management:test","imagePullPolicy":"Always","name":"financial-management","ports":[{"containerPort":8081,"name":"msag7","protocol":"TCP"}],"resources":{},"terminationMessagePath":"/test/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","hostAliases":[{"hostnames":["rocketmq","kafka","redis"],"ip":"172.16.10.20"}],"imagePullSecrets":[{"name":"harbor-secret"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}}}
  creationTimestamp: "2023-06-27T08:19:05Z"
  generation: 11
  labels:
    k8s.kuboard.cn/layer: platform
    k8s.kuboard.cn/name: financial-management
  name: financial-management
  namespace: test
  resourceVersion: "46557189"
  uid: 89bc052f-e343-489d-bef9-caa7585c8d1e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.kuboard.cn/name: financial-management
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2023-12-23T23:21:18+04:00"
      creationTimestamp: null
      labels:
        k8s.kuboard.cn/name: financial-management
    spec:
      containers:
      - env:
        - name: TZ
          value: Asia/Shanghai
        - name: NACOS
          value: 172.16.10.20
        - name: BUILD_TAG
          value: ${BUILD_TAG}
        - name: NAMESPACE
          value: test
        - name: JAVA_TOOL_OPTIONS
          value: -javaagent:/usr/skywalking/agent/skywalking-agent.jar
        - name: SW_AGENT_NAME
          value: financial-management
        - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES
          value: skywalking-skywalking-helm-oap.skywalking.svc.cluster.local:11800
        - name: SW_AGENT_FORCE_TLS
          value: "true"
        image: harbor.cuiwjrpcvi.com/bktest/financial-management:test
        imagePullPolicy: Always
        name: financial-management
        ports:
        - containerPort: 8081
          name: msag7
          protocol: TCP
        resources: {}
        terminationMessagePath: /test/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/skywalking/agent/
          name: volume-2bhb2
          subPath: skywalking-agent
      dnsPolicy: ClusterFirst
      hostAliases:
      - hostnames:
        - rocketmq
        - kafka
        - redis
        ip: 172.16.10.20
      imagePullSecrets:
      - name: harbor-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: volume-2bhb2
        nfs:
          path: /data/nfs
          server: 172.16.10.20       
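
After the pod restarts, check that the JVM picked up the agent; the JVM echoes JAVA_TOOL_OPTIONS at startup and the agent logs its boot sequence, so grepping the pod log is a quick, non-authoritative check:

bash
kubectl -n test logs deploy/financial-management | grep -i "skywalking\|java_tool_options"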

Service

bash
~# kubectl edit svc financial-management -n test

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"financial-management","k8s.kuboard.cn/name":"financial-management"},"name":"financial-management","namespace":"biking"},"spec":{"ports":[{"name":"kjmfca","port":8081,"protocol":"TCP","targetPort":8081}],"selector":{"k8s.kuboard.cn/name":"financial-management"},"sessionAffinity":"None","type":"ClusterIP"}}
  creationTimestamp: "2023-06-27T08:19:05Z"
  labels:
    app.kubernetes.io/instance: financial-management
    k8s.kuboard.cn/name: financial-management
  name: financial-management
  namespace: test
  resourceVersion: "5855"
  uid: 4620bbc0-f8bd-48d9-8ad6-41a69939c31d
spec:
  clusterIP: 10.68.140.221
  clusterIPs:
  - 10.68.140.221
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: kjmfca
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    k8s.kuboard.cn/name: financial-management
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Deploy the remaining pods in the same way.

5. Log in to the UI console

https://skywalking-oap-test.abc.com/General-Service/Services

Service view (screenshot omitted)

Topology view (screenshot omitted)

Instance view (screenshot omitted)
