Setting Up a Serverless Cluster: Knative


1. Preparation

● The bash steps in this guide apply to macOS and Linux environments; for Windows, some commands may need adjustment.

● This guide assumes you have an existing Kubernetes cluster on which you can easily install and run alpha-level software.

● Knative requires a Kubernetes cluster v1.14 or newer, plus a compatible version of kubectl.

Install Kubernetes

This needs no elaboration; install it directly. (This article uses Kubernetes 1.23.) Verify the cluster is healthy:

[root@master ~]# kubectl get pods -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-64cc74d646-7tjcg   1/1     Running   0          27s
kube-system   calico-node-24lwx                          1/1     Running   0          27s
kube-system   calico-node-cnhbg                          1/1     Running   0          27s
kube-system   calico-node-x6hxn                          1/1     Running   0          27s
kube-system   coredns-6d8c4cb4d-rxdlv                    1/1     Running   0          2m8s
kube-system   coredns-6d8c4cb4d-wwk7v                    1/1     Running   0          2m8s
kube-system   etcd-master                                1/1     Running   0          2m24s
kube-system   kube-apiserver-master                      1/1     Running   0          2m24s
kube-system   kube-controller-manager-master             1/1     Running   0          2m24s
kube-system   kube-proxy-dtntn                           1/1     Running   0          2m2s
kube-system   kube-proxy-fdzk4                           1/1     Running   0          2m7s
kube-system   kube-proxy-mzj6c                           1/1     Running   0          2m9s
kube-system   kube-scheduler-master                      1/1     Running   0          2m24s

Install Istio

Knative relies on Istio for traffic routing and ingress. You can optionally inject the Istio sidecar and enable the Istio service mesh, but not all Knative components require it. If your cloud platform offers a managed Istio installation, that is the recommended approach unless you need custom installation features. If you prefer to install Istio manually, if your cloud provider does not offer managed Istio, or if you are installing Knative locally with Minikube or similar, follow the steps below.

Node    IP              Components
Master  192.168.100.10  docker, kubectl, kubeadm, kubelet
Node1   192.168.100.20  docker, kubectl, kubeadm, kubelet
Node2   192.168.100.30  docker, kubectl, kubeadm, kubelet

This was covered during the Kubernetes setup, so we won't repeat it here.

Download Istio. The download includes the installation files, samples, and the istioctl command-line tool. My Kubernetes cluster is v1.23.0, so I chose Istio 1.17.0.

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.17.0 TARGET_ARCH=x86_64 sh -

Add the istioctl client path to the PATH environment variable:

[root@master ~]# vi /etc/profile
export PATH=/root/istio-1.17.0/bin:$PATH
[root@master ~]# source /etc/profile

The demo profile quickly brings up an Istio environment with basic functionality. It deploys Istio's core control-plane components (consolidated into istiod in modern releases; historically Pilot, Citadel, and Galley) and automatically injects the Envoy sidecar proxy into each application Pod.

[root@master ]# cd istio-1.17.0/
[root@master istio-1.17.0]# istioctl install --set profile=demo

Verify the installation:

[root@master istio-1.17.0]# istioctl verify-install

Check the Pods

Make sure the associated Kubernetes Pods have been deployed and their STATUS is Running:

[root@master istio-1.17.0]# kubectl get pod -n istio-system

Expose the istio-ingressgateway externally

[root@master ~]# kubectl edit svc istio-ingressgateway -n istio-system
## change spec.type to NodePort
service/istio-ingressgateway edited
[root@master ~]# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   NodePort   10.107.99.250   <none>        15021:31668/TCP,80:31296/TCP,443:31680/TCP,31400:30111/TCP,15443:31671/TCP   7d19h
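The same change can also be made non-interactively with kubectl patch, which is handier in scripts than an interactive edit (a sketch; adjust the namespace or service name if yours differ):

```shell
# Switch the ingress gateway Service from LoadBalancer to NodePort
kubectl patch svc istio-ingressgateway -n istio-system \
  --type merge -p '{"spec":{"type":"NodePort"}}'

# Verify the new type and allocated node ports
kubectl get svc istio-ingressgateway -n istio-system
```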

2. Deploy Knative

Before applying the net-istio configuration, make sure the knative-serving namespace has been created.

Install the networking-layer component that integrates Knative with Istio (net-istio). It lets Knative manage traffic through Istio (gateways, routing, security policies) and creates the related resources (Gateway, Service, Webhook, and so on) so that Knative services can be exposed externally through the Istio ingress.

[root@master ~]# kubectl create namespace knative-serving
namespace/knative-serving created
[root@master ~]# kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.7.1/net-istio.yaml 
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/knative-local-gateway created
service/knative-local-gateway created
configmap/config-istio created
peerauthentication.security.istio.io/webhook created
peerauthentication.security.istio.io/domainmapping-webhook created
peerauthentication.security.istio.io/net-istio-webhook created
deployment.apps/net-istio-controller created
deployment.apps/net-istio-webhook created
secret/net-istio-webhook-certs created
service/net-istio-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created

Install the custom resource definitions (CRDs) for Knative Serving v1.7.1. These CRDs (Configurations, Revisions, Routes, and others) extend the Kubernetes API so the cluster can recognize and manage Knative's core resources. They underpin the core serverless features (traffic routing, revision management, autoscaling) and lay the groundwork for deploying the Knative Serving components next.

[root@master ~]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml    
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
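A quick way to confirm the CRDs registered is to filter the CRD list by the knative.dev group suffix:

```shell
# All Knative CRD names end with a knative.dev API-group suffix
kubectl get crd | grep knative.dev
```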

Deploy the Knative Serving v1.7.1 core components, including RBAC configuration, the autoscaler, the activator (traffic management), domain-mapping, and other core functionality.

[root@master istio-1.17.0]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-core.yaml
Warning: resource namespaces/knative-serving is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/knative-serving configured
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
secret/serving-certs-ctrl-ca created
secret/knative-serving-certs created
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-features created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+
horizontalpodautoscaler.autoscaling/activator created
poddisruptionbudget.policy/activator-pdb created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/domain-mapping created
deployment.apps/domainmapping-webhook created
service/domainmapping-webhook created
horizontalpodautoscaler.autoscaling/webhook created
poddisruptionbudget.policy/webhook-pdb created
deployment.apps/webhook created
service/webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
secret/domainmapping-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created

What if images fail to pull?

If you have access to a proxy, you can configure a network proxy for Docker alone:

[root@master ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@master ~]# vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.180.83:7890"
Environment="HTTPS_PROXY=http://192.168.180.83:7890"
Environment="NO_PROXY=localhost,127.0.0.1,.cluster.local,.svc,.internal.gcr.io"
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker

An overseas proxy endpoint works best. Note that Docker must be restarted (as above) before the proxy settings take effect.

Check the Knative Pod status

[root@master istio-1.17.0]# kubectl get pods -n knative-serving
NAME                                     READY   STATUS    RESTARTS   AGE
activator-54cdf744fb-bgkcr               1/1     Running   0          61s
autoscaler-684495f859-f6nt8              1/1     Running   0          61s
controller-865d96c97f-gj5gp              1/1     Running   0          61s
domain-mapping-5d488c9654-pthss          1/1     Running   0          61s
domainmapping-webhook-54d46d9b6c-4bb2q   1/1     Running   0          61s
net-istio-controller-549f854f4-rh6k8     1/1     Running   0          51s
net-istio-webhook-f9bdbc6f9-wdjk5        1/1     Running   0          51s
webhook-65984d8585-4jp5l                 1/1     Running   0          61s

Deploying the serving-hpa.yaml component in Knative Serving enables Kubernetes-native HPA (Horizontal Pod Autoscaler) scaling for Knative services. Deploy serving-hpa:

[root@master ]# kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-hpa.yaml  
deployment.apps/autoscaler-hpa created
service/autoscaler-hpa created
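With the HPA extension installed, a Knative Service can opt into CPU-based HPA scaling through annotations on its revision template. A minimal sketch (the service name is a placeholder; gcr.io/knative-samples/helloworld-go is the standard Knative sample image):

```shell
# Hypothetical example: scale this service with the HPA autoscaler class
# on CPU utilization, instead of the default KPA (concurrency) class.
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cpu-scaled-demo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: cpu
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
EOF
```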

Install Knative Eventing

Install the Eventing core components.
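The original does not show the install commands; assuming the v1.7.1 Eventing release assets follow the same naming pattern as the Serving installs above (eventing-crds.yaml, eventing-core.yaml), they would be:

```shell
# Install the Knative Eventing CRDs, then the core components
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.7.1/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.7.1/eventing-core.yaml
```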

After deployment, check the Pod status:

[root@master ~]# kubectl get pods -n knative-eventing
NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-578d46cb89-b5c79   1/1     Running   0          19s
eventing-webhook-54bc4585b5-hmdmc      1/1     Running   0          19s

Install the Kafka controller
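The command is not listed in the original; assuming the eventing-kafka-broker v1.7.1 release asset name, it would look like:

```shell
# Kafka controller, shared by KafkaChannel, KafkaBroker, and KafkaSource
kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-controller.yaml
```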

Install the KafkaChannel data plane
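Again assuming the v1.7.1 release asset name, and assuming a reachable Kafka cluster is already running (its bootstrap servers need to be set in the channel's ConfigMap in knative-eventing):

```shell
# KafkaChannel receiver and dispatcher
kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-channel.yaml
```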

Install the Broker layer
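Under the same assumption about the v1.7.1 release asset name, the Kafka Broker data plane would be installed with:

```shell
# Kafka Broker receiver and dispatcher
kubectl apply -f https://github.com/knative-sandbox/eventing-kafka-broker/releases/download/knative-v1.7.1/eventing-kafka-broker.yaml
```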

Check the cluster status:

[root@master ~]# kubectl get pods -n knative-eventing
NAME                                       READY   STATUS    RESTARTS   AGE
eventing-controller-578d46cb89-b5c79       1/1     Running   0          7m49s
eventing-webhook-54bc4585b5-hmdmc          1/1     Running   0          7m49s
kafka-broker-dispatcher-57df55bb4-d2rjq    1/1     Running   0          2m4s
kafka-broker-receiver-69f4dcfd97-4gfxn     1/1     Running   0          2m4s
kafka-channel-dispatcher-fffc6796f-kvlmj   1/1     Running   0          2m12s
kafka-channel-receiver-7655dcc69d-826qg    1/1     Running   0          2m12s
kafka-controller-5875747ddc-tvzq5          1/1     Running   0          2m18s
kafka-webhook-eventing-57cfbd8b44-rqw2h    1/1     Running   0          2m18s

If any of the above YAML files cannot be applied directly from within the cluster, you can download them all through a proxy, upload them to the master node, and apply them one by one. But every node must have the Docker proxy configured, or image pulls will fail!

At this point, the Knative cluster setup is complete.
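As a quick smoke test, you can deploy a minimal Knative Service (a sketch: the service name is arbitrary, and gcr.io/knative-samples/helloworld-go is the standard sample image, which requires working image-pull access):

```shell
# Deploy a minimal Knative Service and watch a revision come up
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
EOF

# Show the service URL and readiness
kubectl get ksvc helloworld-go
```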

A running Knative cluster marks the arrival of an enterprise-grade platform for serverless and event-driven architectures. Built on the deep integration of Kubernetes and Istio, it delivers automatic scaling, precise traffic routing, and canary releases, while the integration of Knative Eventing with Kafka provides a reliable event bus for real-time stream processing and asynchronous task scheduling. The platform sharply reduces development and operations complexity, letting developers focus on business logic, and its dynamic resource optimization significantly improves infrastructure utilization, providing an out-of-the-box cloud-native foundation for agile microservice iteration, burst-traffic handling, and cross-system event-driven scenarios.

We'll take the rest of this learning journey step by step.
