08 - Event Sources and Sink Architecture

1 PingSource → Kubernetes Service Sink

  • Architecture model

  • Example 1

    • Deploy a plain Kubernetes sink, again using event-display as the example; the resource manifest is below

      yaml
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: event-display
      spec:
        replicas: 1
        selector:
          matchLabels: &labels
            app: event-display
        template:
          metadata:
            labels: *labels
          spec:
            containers:
              - name: event-display
                image: ikubernetes/event_display
      ---
      kind: Service
      apiVersion: v1
      metadata:
        name: event-display
      spec:
        selector:
          app: event-display
        ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
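
      To create these objects, save the manifest and apply it (the filename event-display.yaml is only illustrative):

      sh
      # apply the Deployment and Service above, then confirm the Pod is running
      kubectl apply -f event-display.yaml
      kubectl get pods -l app=event-display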
    • Define the PingSource resource and point its sink at the event-display Service

      yaml
      apiVersion: sources.knative.dev/v1
      kind: PingSource
      metadata:
        name: ping-00001
      spec:
        schedule: "* * * * *"
        contentType: "application/json"
        data: '{"message": "Hello Eventing!"}'
        sink:
          ref:
            apiVersion: v1
            kind: Service
            name: event-display

      Afterwards, a PingSource Pod will show up in the knative-eventing namespace.
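
      A quick way to confirm this (the pingsource-mt-adapter name is what recent Knative releases use; older releases may differ):

      sh
      # the shared PingSource adapter runs in the knative-eventing namespace
      kubectl get pods -n knative-eventing | grep pingsource
      # the PingSource itself should report READY=True and show the resolved sink
      kubectl get pingsources ping-00001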

    • Check the logs of event-display to verify that events are actually being delivered

      sh
      kubectl logs -f event-display-6496b6c66d-7drk5
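
      The Pod name hash changes whenever the Deployment rolls out, so referencing the Deployment is a more stable way to follow the logs:

      sh
      # follow the logs of whichever Pod currently backs the Deployment
      kubectl logs -f deploy/event-display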

2 PingSource → Knative Service Sink

  • Architecture

  • Example 2

    • Deploy a Knative Service (kservice) sink, again using event-display as the example; the resource manifest is below

      yaml
      ---
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/min-scale: "1"
          spec:
            containers:
              - image: ikubernetes/event_display
                ports:
                  - containerPort: 8080
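
      Once this manifest is applied, the Knative Service should become Ready and expose a URL; a quick check (assuming the default namespace):

      sh
      # READY should be True, and URL shows the address the sink is reachable at
      kubectl get ksvc event-display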
    • Define the PingSource resource and point its sink at the event-display Knative Service

      yaml
      ---
      apiVersion: sources.knative.dev/v1
      kind: PingSource
      metadata:
        name: pingsource-00001
      spec:
        schedule: "* * * * *"
        contentType: "application/json"
        data: '{"message": "Hello Eventing!"}'
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display
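
      The PingSource controller resolves the serving.knative.dev/v1 Service reference to a URL and records it in the resource status, which can be verified with:

      sh
      # the SINK column should point at the event-display Knative Service
      kubectl get pingsources pingsource-00001
      kubectl get pingsources pingsource-00001 -o jsonpath='{.status.sinkUri}'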
    • Check the logs of event-display to verify that events are actually being delivered

      bash
      kubectl logs -f event-display-00001-deployment-7f488cb57b-grsc6

3 ContainerSource → Knative Service Sink

  • ContainerSource is intended to collect events from a container, for example events carrying the container's metadata; the architecture is as follows:

  • Example 3

    • Deploy a Knative Service (kservice) sink, again using event-display as the example; the resource manifest is below

      yaml
      ---
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/min-scale: "1"
          spec:
            containers:
              - image: ikubernetes/event_display
                ports:
                  - containerPort: 8080
    • Define the ContainerSource resource and point its sink at the event-display Knative Service

      yaml
      apiVersion: sources.knative.dev/v1
      kind: ContainerSource
      metadata:
        name: containersource-heartbeat
      spec:
        template:
          spec:
            containers:
              - image: ikubernetes/containersource-heartbeats:latest
                name: heartbeats
                env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display

      A ContainerSource Pod is created in the same namespace as event-display.
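
      The ContainerSource controller wraps the template above in a Deployment and injects the resolved sink address into the container as the K_SINK environment variable, which is how the heartbeats image knows where to POST its CloudEvents. To verify (the grep pattern simply matches the generated Pod name):

      sh
      # the source should report READY=True
      kubectl get containersources containersource-heartbeat
      # the generated heartbeats Pod runs next to event-display in the same namespace
      kubectl get pods | grep containersource-heartbeat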

    • Check the logs of event-display to verify that events are actually being delivered

      bash
      kubectl logs -f event-display-00001-deployment-645944b779-2b4vt

4 ApiServerSource → Knative Service Sink

  • ApiServerSource mainly captures events from the Kubernetes API server, such as create/delete/update/get operations; the architecture is as follows:

  • Example 4:

    • Deploy a Knative Service (kservice) sink, again using event-display as the example; the resource manifest is below

      yaml
      ---
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: event-display
      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/min-scale: "1"
          spec:
            containers:
              - image: ikubernetes/event_display
                ports:
                  - containerPort: 8080
    • Define a ServiceAccount and grant it get, watch, and list permissions on the target resources

      yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: pod-watcher
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: pod-reader
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        verbs:
        - get
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: pod-reader
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: pod-reader
      subjects:
      - kind: ServiceAccount
        name: pod-watcher
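
      Before creating the source, the binding can be sanity-checked with kubectl auth can-i (assuming everything above lives in the default namespace):

      sh
      # all three verbs should answer "yes" for the pod-watcher ServiceAccount
      kubectl auth can-i get pods --as=system:serviceaccount:default:pod-watcher -n default
      kubectl auth can-i list pods --as=system:serviceaccount:default:pod-watcher -n default
      kubectl auth can-i watch pods --as=system:serviceaccount:default:pod-watcher -n default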
    • Define the ApiServerSource resource, using the ServiceAccount above so that it collects events for the Pods it is authorized to watch

      yaml
      apiVersion: sources.knative.dev/v1
      kind: ApiServerSource
      metadata:
        name: pods-event
      spec:
        serviceAccountName: pod-watcher
        mode: Reference
        resources:
        - apiVersion: v1
          kind: Pod
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display

      You will see that an ApiServerSource Pod has been created.
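
      Because the source watches Pod objects in its own namespace (the default scope), any Pod lifecycle change produces events; creating and then deleting a throwaway Pod (the name test-pod is arbitrary) is an easy way to generate some:

      sh
      # create and then remove a Pod to trigger add/update/delete events
      kubectl run test-pod --image=busybox --restart=Never -- sleep 5
      kubectl delete pod test-pod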

    • Check the logs of event-display to verify that events are actually being delivered

      bash
      kubectl logs -f event-display-00001-deployment-9469968cd-vb4l2 

5 GitLab Source

  • About GitLabSource
    • Converts events from a GitLab repository into CloudEvents
    • GitLabSource creates a webhook for the specified event types, listens for incoming events, and forwards them to the consumer
  • The event types supported by GitLabSource include the following:
    • Push events: push_events
      • The corresponding CloudEvents type is "dev.knative.sources.gitlab.push"; the same naming pattern applies to the types below
    • Tag push events: tag_push_events
    • Issue events: issues_events
    • Merge request events: merge_requests_events
    • Confidential issue events: confidential_issues_events
    • Confidential comment events: confidential_note_events
    • Deployment events: deployment_events
    • Job events: job_events
    • Comment events: note_events
    • Pipeline events: pipeline_events
    • Wiki page events: wiki_page_events
5.1 GitLab Source in Practice
  • Example environment

    • A deployed, reachable GitLab service
    • A code repository (e.g. myproject) on that GitLab service, owned by some user (e.g. root)
    • A kservice/event-display responsible for receiving the CloudEvents
  • Overall steps

    • Deploy GitLab
    • Operations on GitLab
      • Create a Personal Access Token for the GitLab user
      • Prepare the sample repository myproject
    • Deploy GitLabSource on Knative
    • Deploy the KService/event-display on Knative
    • Create a Secret resource containing two data items
      • The Personal Access Token from GitLab
      • The webhook secret token GitLab uses when calling the GitLabSource webhook
    • Create the GitLabSource resource (a rough manifest sketch for the Secret and the GitLabSource appears at the end of this section)
      • Loads events from the GitLab repository
      • Converts the events into CloudEvents and sends them to the sink
  • Concrete operations

    • Deploy GitLab
    • Operations on GitLab
      • General → Visibility: set the "Custom Git clone URL for HTTP(S)"
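
      The two resources from the steps above can be sketched roughly as follows (the names gitlabsecret and gitlabsource-sample, the token values, and the project URL are placeholders; the apiVersion should be checked against the GitLabSource CRD actually installed, which was sources.knative.dev/v1alpha1 in the eventing-gitlab extension at the time of writing):

      yaml
      ---
      # Secret holding the Personal Access Token and the webhook secret token
      apiVersion: v1
      kind: Secret
      metadata:
        name: gitlabsecret
      type: Opaque
      stringData:
        accessToken: "<personal-access-token>"
        secretToken: "<webhook-secret-token>"
      ---
      # GitLabSource: registers a webhook for push events on the repository and
      # forwards them to the event-display Knative Service as CloudEvents
      apiVersion: sources.knative.dev/v1alpha1
      kind: GitLabSource
      metadata:
        name: gitlabsource-sample
      spec:
        eventTypes:
          - push_events
        projectUrl: "https://gitlab.example.com/root/myproject"
        accessToken:
          secretKeyRef:
            name: gitlabsecret
            key: accessToken
        secretToken:
          secretKeyRef:
            name: gitlabsecret
            key: secretToken
        sink:
          ref:
            apiVersion: serving.knative.dev/v1
            kind: Service
            name: event-display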