Server Migration Campaign (Part 4): Deploying a Service Gateway Based on Apache APISIX

With the server hardware setup and continuous integration covered in the previous posts, the next step is to build a unified service gateway.

An API gateway is a server that acts as the single entry point into a system. It encapsulates the internal architecture and provides a tailored API for each client. It may also take on other responsibilities such as authentication, monitoring, load balancing, caching, protocol translation, rate limiting and circuit breaking, and static response handling. The core idea of the API gateway pattern is that all clients and consumers access services through one unified gateway, and all non-business concerns are handled at the gateway layer. Typically, the gateway exposes REST/HTTP APIs.

There are many mainstream gateway solutions today; here is a brief comparison:

| Gateway | Maintainer | Language | Pros | Cons | Resource footprint |
| --- | --- | --- | --- | --- | --- |
| APISIX | Apache | Lua (OpenResty) | Plugin mechanism; high performance; lightweight; ships a visual admin dashboard | Depends on etcd for storage | |
| Nginx | nginx | C | High performance, reliable, lightweight | Relatively complex configuration with no ready-made management UI; config changes are not reloaded automatically; no cluster management | |
| Zuul | Netflix, Inc. | Java | Built on Netflix's experience; suits microservice architectures | Potential performance bottlenecks; relatively heavyweight | ☆☆☆ |
| Spring Cloud | spring-cloud | Java | Native to the Spring ecosystem; fits the mainstream Java stack | Steeper learning curve; some scenarios require custom development | ☆☆ |
| Kong | Kong | Lua (on Nginx) | Highly extensible via its plugin mechanism | Relatively small community; depends on PostgreSQL or Cassandra; potential performance bottlenecks | |
| Traefik | Traefik Labs | Go | Simple to use; automatic service discovery and configuration | Relatively limited feature set | |

Introduction

(Architecture diagram source: the APISIX official website)

Apache APISIX is a top-level project of the Apache Software Foundation, developed by API7.ai and donated to the foundation. It is a dynamic, real-time, high-performance cloud-native API gateway.

You can use APISIX as the traffic entry point for all of your services. It provides dynamic routing, dynamic upstreams, dynamic certificates, A/B testing, canary releases, blue-green deployment, rate limiting, attack protection, metrics collection, monitoring and alerting, observability, service governance, and more.

Although APISIX is a relative latecomer, its ecosystem and community are well established. It supports many protocols and operating systems, and also supports writing plugins in multiple languages; Java, Golang, Node.js, and Python are already officially supported (diagram source: the APISIX official website).

Deployment

APISIX offers several deployment options, including binary, container, and cluster deployments; this post covers only deployment on Kubernetes.

Deploying etcd

PV and PVC

First, define a PersistentVolume (using the storage class manual-etcds-0) to store the etcd data. Other storage types can be substituted as appropriate.

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcds-pv-1gi-0
  labels:
    type: local
spec:
  storageClassName: manual-etcds-0
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce # the volume can be mounted read-write by a single node
  hostPath:
    path: "/mnt/data"


# deploy etcd pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-apisix-etcds-0
  namespace: gpt
spec:
  storageClassName: manual-etcds-0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

etcd StatefulSet and Services

Anyone familiar with basic operations work can understand the YAML at a glance.

yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: apisix-etcd
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  podManagementPolicy: Parallel
  replicas: 1 # 3 replicas are recommended; one is enough for a personal dev environment
  serviceName: apisix-etcd-headless
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-etcd
      app.kubernetes.io/name: apisix-etcd
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: apisix-etcd
        app.kubernetes.io/name: apisix-etcd
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: apisix-etcd
                    app.kubernetes.io/name: apisix-etcd
                topologyKey: kubernetes.io/hostname
              weight: 1
      containers:
        - name: apisix-etcd-app
          #image: bitnami/etcd:3.5.14-debian-12-r1
          image: registry.cn-hangzhou.aliyuncs.com/xxxxx/etcd:3.5.10-debian-11-r2 # Docker Hub is blocked in mainland China; copy the etcd image to a domestic registry first
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2379
              name: client
              protocol: TCP
            - containerPort: 2380
              name: peer
              protocol: TCP
          env:
            - name: BITNAMI_DEBUG
              value: 'false'
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: MY_STS_NAME
              value: apisix-etcd
            - name: ETCDCTL_API
              value: '3'
            - name: ETCD_ON_K8S
              value: 'yes'
            - name: ETCD_START_FROM_SNAPSHOT
              value: 'no'
            - name: ETCD_DISASTER_RECOVERY
              value: 'no'
            - name: ETCD_NAME
              value: $(MY_POD_NAME)
            - name: ETCD_DATA_DIR
              value: /bitnami/etcd/data
            - name: ETCD_LOG_LEVEL
              value: info
            - name: ALLOW_NONE_AUTHENTICATION
              value: 'yes'
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: http://$(MY_POD_NAME).apisix-etcd-headless.gpt.svc.cluster.local:2379
            - name: ETCD_LISTEN_CLIENT_URLS
              value: http://0.0.0.0:2379
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: http://$(MY_POD_NAME).apisix-etcd-headless.gpt.svc.cluster.local:2380
            - name: ETCD_LISTEN_PEER_URLS
              value: http://0.0.0.0:2380
            - name: ETCD_INITIAL_CLUSTER_TOKEN
              value: apisix-etcd-cluster-k8s
            - name: ETCD_INITIAL_CLUSTER_STATE
              value: new
            - name: ETCD_INITIAL_CLUSTER
              value: apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.gpt.svc.cluster.local:2380 # single member; 3 replicas are recommended, but one is enough for a personal dev environment
              #value: apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.gpt.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.gpt.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.gpt.svc.cluster.local:2380
            - name: ETCD_CLUSTER_DOMAIN
              value: apisix-etcd-headless.gpt.svc.cluster.local
          volumeMounts:
            - name: data
              mountPath: /bitnami/etcd
          lifecycle:
            preStop:
              exec:
                command:
                  - /opt/bitnami/scripts/etcd/prestop.sh
          livenessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 5
      securityContext:
        fsGroup: 1001
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-apisix-etcds-0
yaml
---
# apisix-etcd-headless
apiVersion: v1
kind: Service
metadata:
  name: apisix-etcd-headless
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  ports:
    - name: client
      port: 2379
      protocol: TCP
      targetPort: 2379
    - name: peer
      port: 2380
      protocol: TCP
      targetPort: 2380
  clusterIP: None
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  publishNotReadyAddresses: true


---
# apisix-etcd
apiVersion: v1
kind: Service
metadata:
  name: apisix-etcd
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  ports:
    - name: client
      port: 2379
      protocol: TCP
      targetPort: 2379
    - name: peer
      port: 2380
      protocol: TCP
      targetPort: 2380
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd

Once the manifests are ready, apply them:

shell
# apply the manifests
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml
kubectl apply -f svc.yaml
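
If the manifests apply cleanly, the etcd pod should become Ready within a minute or two. A minimal verification sketch, assuming the gpt namespace and the resource names used above:

shell
# watch the StatefulSet come up
kubectl -n gpt get pods -l app.kubernetes.io/name=apisix-etcd

# run a health check inside the pod (the bitnami image ships etcdctl)
kubectl -n gpt exec apisix-etcd-0 -- etcdctl endpoint health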

Notes:

  • Prepare the required images in advance (Docker Hub has recently been blocked in mainland China). Two options:
    • copy the images to a domestic registry such as Alibaba Cloud ACR
    • load local images with docker load, as shown below
shell
docker pull rancher/mirrored-pause:3.6 # pull the image on a dev machine with unrestricted network access
docker save -o ./mirrored-pause.tar rancher/mirrored-pause:3.6 # export the image to a tar file
docker load -i ./mirrored-pause.tar # upload the tar to the server and load it with docker load
  • If the etcd pod keeps failing to start and its log reports a permissions error, the host path mounted by the PVC probably lacks write permission. Run sudo chmod -R 777 <path> on the directory, then restart the pod.

Deploying APISIX

APISIX Deployment and Services

yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix
      app.kubernetes.io/name: apisix
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix
        app.kubernetes.io/name: apisix
    spec:
      volumes:
        - name: apisix-config
          configMap:
            name: apisix
            defaultMode: 420
      initContainers:
        - name: wait-etcd
          image: registry.cn-hangzhou.aliyuncs.com/xxxx/busybox:1.28 # replace with your own mirrored image
          command:
            - sh
            - '-c'
            - >-
              until nc -z apisix-etcd.gpt.svc.cluster.local 2379; do echo
              waiting for etcd `date`; sleep 2; done;
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      containers:
        - name: apisix
          image: registry.cn-hangzhou.aliyuncs.com/xxxx/apisix:2.10.0-alpine # replace with your own mirrored image
          ports:
            - name: http
              containerPort: 9080
              protocol: TCP
            - name: tls
              containerPort: 9443
              protocol: TCP
            - name: admin
              containerPort: 9180
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: apisix-config
              mountPath: /usr/local/apisix/conf/config.yaml
              subPath: config.yaml
          readinessProbe:
            tcpSocket:
              port: 9080
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - sleep 30
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix
  namespace: gpt
data:
  config.yaml: >-
    #

    # Licensed to the Apache Software Foundation (ASF) under one or more

    # contributor license agreements.  See the NOTICE file distributed with

    # this work for additional information regarding copyright ownership.

    # The ASF licenses this file to You under the Apache License, Version 2.0

    # (the "License"); you may not use this file except in compliance with

    # the License.  You may obtain a copy of the License at

    #

    #     http://www.apache.org/licenses/LICENSE-2.0

    #

    # Unless required by applicable law or agreed to in writing, software

    # distributed under the License is distributed on an "AS IS" BASIS,

    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

    # See the License for the specific language governing permissions and

    # limitations under the License.

    #

    apisix:
      node_listen: 9080             # APISIX listening port
      enable_heartbeat: true
      enable_admin: true
      enable_admin_cors: true
      enable_debug: false
      enable_dev_mode: false          # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true          # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true
      config_center: etcd             # etcd: use etcd to store the config value
                                      # yaml: fetch the config value from local yaml file `/your_path/conf/apisix.yaml`

      #proxy_protocol:                # Proxy Protocol configuration
      #  listen_http_port: 9181       # The port with proxy protocol for http; it differs from node_listen and port_admin.
      #                               # This port can only receive http requests with proxy protocol, while node_listen & port_admin
      #                               # can only receive plain http requests. If you enable proxy protocol, you must use this port
      #                               # to receive http requests with proxy protocol.
      #  listen_https_port: 9182      # The port with proxy protocol for https
      #  enable_tcp_pp: true          # Enable the proxy protocol for tcp proxy; it works for the stream_proxy.tcp option
      #  enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server

      proxy_cache:                     # Proxy Caching configuration
        cache_ttl: 10s                 # The default caching time if the upstream does not specify the cache time
        zones:                         # The parameters of a cache
          - name: disk_cache_one       # The name of the cache; administrators can specify
                                       # which cache to use by name in the admin API
            memory_size: 50m             # The size of shared memory, it's used to store the cache index
            disk_size: 1G                # The size of disk, it's used to store the cache data
            disk_path: "/tmp/disk_cache_one" # The path to store the cache data
            cache_levels: "1:2"           # The hierarchy levels of a cache
      #  - name: disk_cache_two
      #    memory_size: 50m
      #    disk_size: 1G
      #    disk_path: "/tmp/disk_cache_two"
      #    cache_levels: "1:2"

      allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
        - 127.0.0.1/24
      #   - "::/64"
      port_admin: 9180

      # Default token when use API to call for Admin API.
      # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
      # Disabling this configuration item means that the Admin API does not
      # require any authentication.
      admin_key:
        # admin: can everything for configuration data
        - name: "admin"
          key: edd1c9f034335f136f87ad84b625c8f1
          role: admin
        # viewer: only can view configuration data
        - name: "viewer"
          key: 4054f7cf07e344346cd3f287985e76a2
          role: viewer
      router:
        http: 'radixtree_uri'         # radixtree_uri: match route by uri(base on radixtree)
        # radixtree_host_uri: match route by host + uri(base on radixtree)
        ssl: 'radixtree_sni'          # radixtree_sni: match route by SNI(base on radixtree)
      # dns_resolver:
      #
      #   - 127.0.0.1
      #
      #   - 172.20.0.10
      #
      #   - 114.114.114.114
      #
      #   - 223.5.5.5
      #
      #   - 1.1.1.1
      #
      #   - 8.8.8.8
      #
      dns_resolver_valid: 30
      resolver_timeout: 5
      ssl:
        enable: false
        enable_http2: true
        listen_port: 9443
        ssl_protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
        ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"

    nginx_config:                     # config for rendering the template to generate nginx.conf
      error_log: "/dev/stderr"
      error_log_level: "warn"         # warn,error
      worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
      event:
        worker_connections: 10620
      http:
        access_log: "/dev/stdout"
        keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
        client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
        client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
        send_timeout: 10s              # timeout for transmitting a response to the client.then the connection is closed
        underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
        real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
        real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
          - 127.0.0.1
          - 'unix:'

    etcd:
      host:                                 # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
        - "http://apisix-etcd.gpt.svc.cluster.local:2379"
      prefix: "/apisix"     # apisix configurations prefix
      timeout: 30   # 30 seconds
    plugins:                          # plugin list
      - api-breaker
      - authz-keycloak
      - basic-auth
      - batch-requests
      - consumer-restriction
      - cors
      - echo
      - fault-injection
      - grpc-transcode
      - hmac-auth
      - http-logger
      - ip-restriction
      - ua-restriction
      - jwt-auth
      - kafka-logger
      - key-auth
      - limit-conn
      - limit-count
      - limit-req
      - node-status
      - openid-connect
      - authz-casbin
      - prometheus
      - proxy-cache
      - proxy-mirror
      - proxy-rewrite
      - redirect
      - referer-restriction
      - request-id
      - request-validation
      - response-rewrite
      - serverless-post-function
      - serverless-pre-function
      - sls-logger
      - syslog
      - tcp-logger
      - udp-logger
      - uri-blocker
      - wolf-rbac
      - zipkin
      - server-info
      - traffic-split
      - gzip
      - real-ip
    stream_plugins:
      - mqtt-proxy
      - ip-restriction
      - limit-conn
    plugin_attr:
      server-info:
        report_interval: 60
        report_ttl: 3600
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-admin
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  ports:
    - name: apisix-admin
      protocol: TCP
      port: 9180
      targetPort: 9180
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-gateway
  namespace: gpt
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  ports:
    - name: apisix-gateway
      protocol: TCP
      port: 80
      targetPort: 9080
      nodePort: 31684
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
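
Before deploying the dashboard, it is worth smoke-testing the Admin API. A minimal sketch, assuming the apisix-admin Service above and the default admin key from the ConfigMap (change that key in any real deployment):

shell
# forward the Admin API to localhost
kubectl -n gpt port-forward svc/apisix-admin 9180:9180 &

# list routes; a successful (even empty) response means the gateway can reach etcd
curl http://127.0.0.1:9180/apisix/admin/routes \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'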

Deploying apisix-dashboard

yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix-dashboard
  namespace:  gpt
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-dashboard
      app.kubernetes.io/name: apisix-dashboard
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix-dashboard
        app.kubernetes.io/name: apisix-dashboard
    spec:
      volumes:
        - name: apisix-dashboard-config
          configMap:
            name: apisix-dashboard
            defaultMode: 420
      containers:
        - name: apisix-dashboard
          image: registry.cn-hangzhou.aliyuncs.com/xxx/apisix-dashboard:2.15.0-alpine # replace with your own mirrored image
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: apisix-dashboard-config
              mountPath: /usr/local/apisix-dashboard/conf/conf.yaml
              subPath: conf.yaml
          livenessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: apisix-dashboard
      serviceAccount: apisix-dashboard
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-dashboard
  namespace:  gpt
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
spec:
  ports:
    - name: http
      protocol: TCP
      #port: 80
      port: 9055 # the dashboard is exposed on port 9055
      targetPort: http
  selector:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
  type: LoadBalancer # do not expose the dashboard like this in production; it is a security risk
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix-dashboard
  namespace:  gpt
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 3.0.1
data:
  conf.yaml: |-
    conf:
      listen:
        host: 0.0.0.0
        port: 9000
      etcd:
        endpoints:
          - apisix-etcd:2379
      log:
        error_log:
          level: warn
          file_path: /dev/stderr
        access_log:
          file_path: /dev/stdout
    authentication:
      secret: secret # JWT signing secret for the dashboard; change this default value
      expire_time: 3600
      users:
        - username: admin 
          password: admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-dashboard
  namespace:  gpt

Since this is a personal development environment on an internal network, the dashboard is simply exposed through a LoadBalancer Service, with the port set to 9055 for easy configuration. Once deployed, it should be reachable at http://<internal-ip>:9055.
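
A quick reachability check; /ping is the same endpoint the probes use, and <internal-ip> is a placeholder for your node or LoadBalancer address:

shell
curl http://<internal-ip>:9055/ping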

Configuration examples

Upstream configuration

Click the Create button in the dashboard and fill in the basic service information; the same thing can also be done through the Admin API, as shown below.
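
A minimal sketch, reusing the apisix-admin port-forward from the smoke test earlier; the upstream ID, name, and backend address are illustrative placeholders:

shell
curl http://127.0.0.1:9180/apisix/admin/upstreams/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "name": "demo-backend",
  "type": "roundrobin",
  "nodes": {
    "demo-svc.gpt.svc.cluster.local:8080": 1
  }
}'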

Route configuration

Click the Create button and fill in the basic route information. Plugins such as authentication and rate limiting can be configured per route.

The most important setting is the path: requests whose paths match are routed through the APISIX gateway.
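
For reference, an equivalent route created through the Admin API; the route ID, name, and path are illustrative, and the last command assumes the gateway NodePort (31684) defined earlier:

shell
# bind the path /demo/* to the upstream created above
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "name": "demo-route",
  "uri": "/demo/*",
  "upstream_id": "1"
}'

# any request matching /demo/* now goes through the gateway
curl http://<node-ip>:31684/demo/hello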

Authentication (jwt-auth)

For more detailed configuration options, see the jwt-auth plugin documentation.

  1. The last step of route configuration prompts for plugins. Taking the most common JWT authentication as an example, first enable the jwt-auth plugin on the route.
  2. jwt-auth must be used together with a Consumer, so create a new Consumer.

The Consumer-side parameters are as follows:

| Name | Type | Required | Default | Valid values | Description |
| --- | --- | --- | --- | --- | --- |
| key | string | yes | | | The Consumer's access_key; it must be unique. If different Consumers use the same access_key, request matching will behave incorrectly. |
| secret | string | no | | | Signing secret; auto-generated by the backend if not specified. Supports APISIX Secret resources for storing the value in a secret manager. |
| public_key | string | yes (RS256/ES256) | | | RSA or ECDSA public key; required when algorithm is RS256 or ES256. Supports APISIX Secret resources. |
| private_key | string | yes (RS256/ES256) | | | RSA or ECDSA private key; required when algorithm is RS256 or ES256. Supports APISIX Secret resources. |
| algorithm | string | no | "HS256" | ["HS256", "HS512", "RS256", "ES256"] | Signing algorithm. |
| exp | integer | no | 86400 | [1,...] | Token expiration time, in seconds. |
| base64_secret | boolean | no | false | | Set to true if the secret is base64-encoded. |
| lifetime_grace_period | integer | no | 0 | [0,...] | Clock skew, in seconds, tolerated between the server that issues the JWT and the server that validates it; must be zero or a positive integer. |

Note: the schema also defines encrypt_fields = {"secret", "private_key"}, which means these fields are stored encrypted in etcd. See the documentation on encrypted storage fields.

  3. Requests to the protected route must carry an Authorization header; if the token is missing or invalid, the gateway returns 401 directly. A minimal end-to-end sketch follows.
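
The sketch below assumes the demo route from earlier; the consumer name, key, and secret are illustrative placeholders, and the /apisix/plugin/jwt/sign endpoint is exposed on the data plane in APISIX 2.x (newer releases require the public-api plugin to expose it):

shell
# 1. enable jwt-auth on the route
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PATCH -d '{"plugins": {"jwt-auth": {}}}'

# 2. create a Consumer with jwt-auth credentials
curl http://127.0.0.1:9180/apisix/admin/consumers \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "username": "demo_user",
  "plugins": {
    "jwt-auth": { "key": "demo-key", "secret": "my-secret-key-placeholder" }
  }
}'

# 3. sign a token, then call the route with it
TOKEN=$(curl -s "http://<node-ip>:31684/apisix/plugin/jwt/sign?key=demo-key")
curl -i http://<node-ip>:31684/demo/hello -H "Authorization: $TOKEN"

# without a valid token the gateway answers 401
curl -i http://<node-ip>:31684/demo/hello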

