K8s Service
1. K8s Service Concepts
A Kubernetes Service (K8s Service for short) is a core Kubernetes component responsible for stable service discovery and load balancing for applications in the cluster. A Service is an abstraction layer: it groups a set of Pods (the containers running an application) into one logical service, giving internal and external clients a stable access point.
2. K8s Service Principles
- Service discovery:
  - A Service selects the Pods that serve it through a Label Selector. The selector is essentially a matching rule from which the Service's Endpoints are generated; Endpoints is a Kubernetes resource object, stored in etcd, that records all the Pod IPs behind a Service IP.
  - When Pod IPs change (for example, a Pod is deleted and recreated), the control plane's endpoints controller observes these changes through the watch mechanism and updates the Pod IPs in the Endpoints object in real time, so the Service always forwards requests to the right Pods.
- Load balancing:
  - A Service typically fronts a group of Pods. When a request reaches the Service, it is directed to one of those Pods according to a load-balancing policy (random, round-robin, or client-IP-based session affinity); a minimal manifest sketch follows this list.
  - The default Service type is ClusterIP, which allocates a virtual IP address (VIP) for the Service inside the Kubernetes cluster. This IP is valid only inside the cluster and is used to distribute requests across the backend Pods.
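As a minimal sketch of that relationship (all names here are illustrative, not from the examples later in this document), a Service selects its backends by label, and the control plane materializes the matching Pod IPs into an Endpoints object of the same name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc        # hypothetical name, for illustration only
spec:
  selector:
    app: demo           # every Pod carrying this label becomes a backend
  ports:
    - port: 80          # port exposed on the ClusterIP (VIP)
      targetPort: 8080  # port the Pods actually listen on
# `kubectl get endpoints demo-svc` would then list the matching Pod IP:port pairs
```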
3. How a K8s Service Handles Requests
- A client sends a request to a Service IP in the Kubernetes cluster.
- Kernel-level rules (iptables or IPVS) programmed by kube-proxy according to the Service's configuration forward the request to the right Pod.
- The application in the Pod processes the request and sends a response.
- The response is returned to the client; in the legacy userspace mode it travels back through kube-proxy, while in iptables/IPVS mode the kernel rewrites the addresses directly and kube-proxy stays out of the data path.
Throughout this process kube-proxy plays the key role: it watches the state and labels of Pods (via the Service and Endpoints objects) and dynamically updates the iptables or IPVS rules so that traffic is always forwarded correctly.
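You can watch this bookkeeping directly: listing a Service's Endpoints shows exactly which Pod IP:port pairs traffic will be forwarded to (a quick sketch; `myapp-clusterip` here is just a placeholder Service name):

```bash
# Show the Pod IP:port pairs currently backing the Service
kubectl get endpoints myapp-clusterip
# Keep watching: entries appear and disappear as Pods pass or fail readiness
kubectl get endpoints myapp-clusterip -w
```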
In Kubernetes (k8s), how a Service works involves several components and modes. userspace, iptables, and ipvs are the three main working modes supported by kube-proxy, each with its own mechanism and trade-offs.
3.1. Userspace Mode
- How it works:
  - In userspace mode, kube-proxy listens on a local port on the node for every Service.
  - When a request arrives on that port, kube-proxy accepts it and, following the Service configuration and a load-balancing algorithm (such as round-robin), picks a backend Pod to handle it.
  - kube-proxy then opens a connection to the chosen Pod and forwards the request to it.
  - Because the whole forwarding path runs in user space, data must be copied between kernel space and user space, which incurs a noticeable performance cost.
- Characteristics:
  - Relatively robust, because everything happens in user space without directly manipulating the kernel network stack; a failed connection can simply be retried against another Pod.
  - Inefficient, though, and unsuitable for high-concurrency scenarios.
3.2. iptables Mode
- How it works:
  - In iptables mode, kube-proxy does not handle requests itself; instead it uses the Linux kernel's iptables facility to create and manage network rules.
  - When a request arrives, the iptables rules redirect it to one of the backend Pods.
  - kube-proxy watches Service and Endpoints objects for changes and updates the iptables rules to reflect them.
  - Because forwarding happens entirely in kernel space, this mode is efficient.
- Characteristics:
  - Efficient: it eliminates the data copies between kernel space and user space.
  - But it offers no sophisticated load-balancing algorithms (backends are chosen essentially at random) and no failover mechanism; see the inspection commands after this list.
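To see these rules on a node, you can dump the NAT chains kube-proxy programs (a sketch; assumes kube-proxy is running in iptables mode on that node):

```bash
# Service-level dispatch rules live in the KUBE-SERVICES chain of the nat table
iptables -t nat -L KUBE-SERVICES -n | head -n 20
# Per-endpoint DNAT rules live in KUBE-SEP-* chains
iptables -t nat -S | grep 'KUBE-SEP' | head -n 10
```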
3.3. IPVS Mode (Layer-4 Load Balancing)
- How it works:
  - IPVS (IP Virtual Server) is a load balancer built into the Linux kernel; it forwards traffic more efficiently than iptables and offers a richer set of load-balancing algorithms.
  - In IPVS mode, kube-proxy watches Service and Endpoints objects for changes and programs IPVS rules accordingly (via the kernel's netlink interface; the same rules can be inspected and managed with the ipvsadm tool).
  - When a request arrives, the IPVS rules forward it to a backend Pod selected by the configured load-balancing algorithm (round-robin, least connections, and so on).
- Characteristics:
  - Efficient; forwarding performance is better than iptables.
  - Supports a rich set of load-balancing algorithms and failover mechanisms.
  - But problems can arise in some environments, for example when the kernel does not support IPVS or IPVS is misconfigured; see the module check below.
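Before enabling IPVS mode, it is worth verifying that the kernel modules it depends on are present (a sketch; exact module names can differ slightly across kernel versions, e.g. nf_conntrack_ipv4 on older kernels):

```bash
# Check that the IPVS core and scheduler modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
# Load them if missing (-a loads every module listed)
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
```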
Summary
userspace, iptables, and ipvs are the three main working modes of kube-proxy, each with pros and cons. Userspace mode is robust but slow; iptables mode is efficient but functionally limited; IPVS mode provides both higher efficiency and richer features. Administrators can choose whichever proxy mode fits their cluster's needs. As Kubernetes has matured, IPVS mode has become an increasingly popular choice because it beats the other two on both performance and features (note that iptables is still the upstream default).
3.4. Changing the Proxy Mode
```bash
# how to use kubectl edit
[root@master 6]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-demo 10/10 10 10 3m4s
[root@master 6]# kubectl edit deploy/deployment-demo
# change the kube-proxy working mode
[root@master ~]# kubectl edit configmap kube-proxy -n kube-system
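# In the editor, find the KubeProxyConfiguration and set its mode field, e.g.
#     mode: "ipvs"
# (an empty value falls back to the default iptables mode; ipvs assumes the
#  ip_vs kernel modules from section 3.3 are available)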
[root@master 6]# kubectl get pod -n kube-system --show-labels
[root@master 6]# kubectl get pod -n kube-system -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE
kube-proxy-ddqbc 1/1 Running 8 (75m ago) 3d12h
kube-proxy-lr5qj 1/1 Running 13 (75m ago) 3d12h
kube-proxy-p6hlv 1/1 Running 9 (75m ago) 3d12h
# delete the kube-proxy Pods so they are recreated with the new mode
[root@master 6]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy
pod "kube-proxy-ddqbc" deleted
pod "kube-proxy-lr5qj" deleted
pod "kube-proxy-p6hlv" deleted
[root@master 6]# kubectl get pod -n kube-system -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE
kube-proxy-jwvjm 1/1 Running 0 10s
kube-proxy-rf7hf 1/1 Running 0 10s
kube-proxy-s885z 1/1 Running 0 10s
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.1:443 rr
-> 10.0.17.100:6443 Masq 1 1 0
TCP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
TCP 10.0.0.10:9153 rr
-> 10.244.219.85:9153 Masq 1 0 0
-> 10.244.219.87:9153 Masq 1 0 0
TCP 10.7.184.4:5473 rr
-> 10.0.17.101:5473 Masq 1 0 0
TCP 10.11.211.253:80 rr
-> 10.244.104.34:80 Masq 1 0 0
-> 10.244.104.47:80 Masq 1 0 0
-> 10.244.104.50:80 Masq 1 0 0
-> 10.244.104.51:80 Masq 1 0 0
-> 10.244.104.52:80 Masq 1 0 0
-> 10.244.166.128:80 Masq 1 0 0
-> 10.244.166.132:80 Masq 1 0 0
-> 10.244.166.133:80 Masq 1 0 0
-> 10.244.166.136:80 Masq 1 0 0
-> 10.244.166.188:80 Masq 1 0 0
UDP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
[root@master 6]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-demo ClusterIP 10.11.211.253 <none> 80/TCP 24h
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d13h
[root@master 6]# kubectl get pod -l app=deployment-demo -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-demo-6465d4c5c9-2mc6s 1/1 Running 0 10m 10.244.104.50 node2 <none> <none>
deployment-demo-6465d4c5c9-6r7kc 1/1 Running 0 10m 10.244.166.133 node1 <none> <none>
deployment-demo-6465d4c5c9-7gvb6 1/1 Running 0 10m 10.244.166.136 node1 <none> <none>
deployment-demo-6465d4c5c9-9xbnq 1/1 Running 0 10m 10.244.104.34 node2 <none> <none>
deployment-demo-6465d4c5c9-c2mkl 1/1 Running 0 10m 10.244.166.188 node1 <none> <none>
deployment-demo-6465d4c5c9-gt954 1/1 Running 0 10m 10.244.104.52 node2 <none> <none>
deployment-demo-6465d4c5c9-nfhz4 1/1 Running 0 10m 10.244.166.132 node1 <none> <none>
deployment-demo-6465d4c5c9-p275v 1/1 Running 0 10m 10.244.104.47 node2 <none> <none>
deployment-demo-6465d4c5c9-tv94p 1/1 Running 0 10m 10.244.104.51 node2 <none> <none>
deployment-demo-6465d4c5c9-w4tqg 1/1 Running 0 10m 10.244.166.128 node1 <none> <none>
[root@master 6]# curl 10.11.211.253:80
www.xinxianghf.com | hello MyAPP | version v2.0
```
4. K8s Service Types (Layer-4 Load Balancing)
There are four main Service types in Kubernetes (K8s), each suited to different access scenarios and requirements. They are explained in detail below:
4.1. ClusterIP (Cluster-Internal Access)
- Default type: ClusterIP is the default Service type in Kubernetes.
- Characteristics: it automatically allocates a virtual IP address reachable only from inside the cluster. Clients (such as Pods or other services) access the group of Pods behind the Service through this virtual IP.
- Typical use: in-cluster access to backend services such as databases and caches.
```yaml
vim myapp-clusterip-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-clusterip-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
      svc: clusterip
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
        svc: clusterip
    spec:
      containers:
        - name: myapp-container
          image: wangyanglinux/myapp:v1.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
          readinessProbe:
            httpGet:
              port: 80
              path: /index1.html   # nonexistent path: the Pods stay NotReady until the probe is fixed below
            initialDelaySeconds: 1
            periodSeconds: 3
[root@master 6]# kubectl edit deployment/myapp-clusterip-deploy
# fix the probe path so the Pods can become Ready:
        readinessProbe:
          httpGet:
            port: 80
            path: /index.html
          initialDelaySeconds: 1
          periodSeconds: 3
vim myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stabel
    svc: clusterip
  ports:
    - name: http
      port: 80
      targetPort: 80
[root@master 6]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-clusterip-deploy-8569f76579-fgkrw 1/1 Running 0 6m44s 10.244.104.56 node2 <none> <none>
myapp-clusterip-deploy-8569f76579-kkhss 1/1 Running 0 6m49s 10.244.104.54 node2 <none> <none>
myapp-clusterip-deploy-8569f76579-wnlzt 1/1 Running 0 6m46s 10.244.166.130 node1 <none> <none>
[root@master 6]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d14h <none>
myapp-clusterip ClusterIP 10.10.227.188 <none> 80/TCP 8m22s app=myapp,release=stabel,svc=clusterip
[root@master 6]# kubectl describe svc myapp
Name: myapp-clusterip
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp,release=stabel,svc=clusterip
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.10.227.188
IPs: 10.10.227.188
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.104.54:80,10.244.104.56:80,10.244.166.130:80
Session Affinity: None
Events: <none>
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.1:443 rr
-> 10.0.17.100:6443 Masq 1 0 0
TCP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
TCP 10.0.0.10:9153 rr
-> 10.244.219.85:9153 Masq 1 0 0
-> 10.244.219.87:9153 Masq 1 0 0
TCP 10.7.184.4:5473 rr
-> 10.0.17.101:5473 Masq 1 0 0
TCP 10.10.227.188:80 rr
-> 10.244.104.54:80 Masq 1 0 4
-> 10.244.104.56:80 Masq 1 0 3
-> 10.244.166.130:80 Masq 1 0 4
UDP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-wnlzt
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-kkhss
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-fgkrw
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-wnlzt
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-kkhss
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-fgkrw
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-wnlzt
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-kkhss
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-fgkrw
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-wnlzt
[root@master 6]# curl 10.10.227.188/hostname.html
myapp-clusterip-deploy-8569f76579-kkhss
# DNS resolution
[root@master 6]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d14h
myapp-clusterip ClusterIP 10.10.227.188 <none> 80/TCP 31m
[root@master 6]# kubectl get pod -n kube-system -o wide | grep dns
coredns-857d9ff4c9-6cb2b 1/1 Running 8 (174m ago) 3d2h 10.244.219.87 master <none> <none>
coredns-857d9ff4c9-tvrff 1/1 Running 8 (174m ago) 3d2h 10.244.219.85 master <none> <none>
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
UDP 10.0.0.10:53 rr
-> 10.244.219.85:53 Masq 1 0 0
-> 10.244.219.87:53 Masq 1 0 0
# Service DNS name format
svcName.nsName.svc.domainName
# domainName defaults to cluster.local.
[root@master 6]# yum install -y bind-utils
# dig -t <type> queries a specific DNS record type; A is an address (A) record
[root@master 6]# dig -t A myapp-clusterip.default.svc.cluster.local. @10.0.0.10
; <<>> DiG 9.16.23-RH <<>> -t A myapp-clusterip.default.svc.cluster.local. @10.0.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17330
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: e9f21e5c248bb712 (echoed)
;; QUESTION SECTION:
;myapp-clusterip.default.svc.cluster.local. IN A
;; ANSWER SECTION:
myapp-clusterip.default.svc.cluster.local. 30 IN A 10.10.227.188
;; Query time: 2 msec
;; SERVER: 10.0.0.10#53(10.0.0.10)
;; WHEN: Sat Aug 24 12:46:50 CST 2024
;; MSG SIZE rcvd: 139
[root@master 6]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d14h
myapp-clusterip ClusterIP 10.10.227.188 <none> 80/TCP 31m
[root@master 6]# kubectl exec -it myapp-clusterip-deploy-8569f76579-fgkrw -- bash
myapp-clusterip-deploy-8569f76579-fgkrw:/# curl myapp-clusterip.default.svc.cluster.local.
www.xinxianghf.com | hello MyAPP | version v1.0
```
Session Affinity (IPVS Persistent Connections)
```shell
[root@master 6]# kubectl get svc myapp-clusterip -o yaml
# internalTrafficPolicy: Local routes in-cluster traffic only to endpoints on the client's own node
internalTrafficPolicy: Cluster | Local
# enable session affinity (persistent connections) in k8s
service.spec.sessionAffinity: ClientIP
# the affinity timeout defaults to 10800 seconds
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.10.227.188:80 rr persistent 10800
-> 10.244.104.54:80 Masq 1 0 0
-> 10.244.104.56:80 Masq 1 0 0
-> 10.244.166.130:80 Masq 1 0 0
# enable persistence directly with ipvsadm
ipvsadm -A -t ip:port -s rr -p timeout
# -p: persistence timeout in seconds
```
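The same affinity can also be declared in the Service manifest instead of patched at runtime (a minimal sketch with the selector and ports abbreviated; the timeout field is optional and defaults to 10800 seconds):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
spec:
  selector:
    app: myapp
  sessionAffinity: ClientIP     # pin each client IP to a single backend Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800     # affinity window in seconds (the IPVS "persistent" value above)
  ports:
    - port: 80
      targetPort: 80
```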
4.2. NodePort (Binds a Port on Each Node's Physical NIC for Access from Outside the Cluster)
- Extended access: building on ClusterIP, NodePort allocates a static port on every node of the cluster to bring external traffic inside.
- Access pattern: external clients reach the Service through any node's IP address plus that static port.
- Typical use: NodePort is common in development and test environments because it is a simple, direct way to reach a service from outside the cluster. It is not recommended in production, since the port range is limited and the same port must be opened on every node.
```yaml
vim myapp-nodeport-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nodeport-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
      svc: nodeport
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
        svc: nodeport
    spec:
      containers:
        - name: myapp-container
          image: wangyanglinux/myapp:v1.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
vim myapp-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
    svc: nodeport
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30010
[root@master 6]# kubectl get po
NAME READY STATUS RESTARTS AGE
myapp-nodeport-deploy-685dcc6ddf-4ns8j 1/1 Running 0 11s
myapp-nodeport-deploy-685dcc6ddf-tx426 1/1 Running 0 11s
myapp-nodeport-deploy-685dcc6ddf-xsj4t 1/1 Running 0 11s
[root@master 6]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d17h
myapp-nodeport NodePort 10.9.237.11 <none> 80:30010/TCP 10s
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.17.100:30010 rr
-> 10.244.104.39:80 Masq 1 0 5
-> 10.244.104.46:80 Masq 1 1 5
-> 10.244.166.134:80 Masq 1 0 6
TCP 10.9.237.11:80 rr
-> 10.244.104.39:80 Masq 1 0 0
-> 10.244.104.46:80 Masq 1 0 0
-> 10.244.166.134:80 Masq 1 0 0
[root@master 6]# kubectl exec -it myapp-nodeport-deploy-685dcc6ddf-4ns8j -- bash
myapp-nodeport-deploy-685dcc6ddf-4ns8j:/# curl myapp-nodeport.default.svc.cluster.local./hostname.html
myapp-nodeport-deploy-685dcc6ddf-tx426
[root@master 6]# kubectl edit svc myapp-nodeport
externalTrafficPolicy: Cluster | Local
# Cluster: hosts outside the cluster can reach the service via any node's physical NIC ip:port,
#          and cluster nodes can reach it via ip:port as well
# Local:   only nodes that actually run a backend Pod answer on ip:port; after switching to
#          Local, requests sent to other nodes get no response
internalTrafficPolicy: Cluster | Local
# Cluster: cluster nodes and Pods anywhere in the cluster can reach the virtual IP
# Local:   the virtual IP is served only by endpoints on the client's own node
```
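Both policies are ordinary Service fields, so they can also be set declaratively. A minimal sketch pinning both to Local (which preserves the client source IP for external traffic, at the cost of dropping requests on nodes without a local backend):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  externalTrafficPolicy: Local   # only nodes running a backend Pod answer on the NodePort
  internalTrafficPolicy: Local   # in-cluster clients are served only by Pods on their own node
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30010
```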
4.3. LoadBalancer
- Cloud integration: LoadBalancer builds on NodePort and additionally uses a cloud provider's load balancer for external access.
- Automatic provisioning: when a LoadBalancer Service is created in a cloud environment that supports it, the provider automatically creates a load balancer and forwards external traffic into the cluster.
- Typical use: production environments that need high reliability and availability; the load balancer spreads traffic evenly across nodes, improving responsiveness and fault tolerance (a minimal manifest sketch follows this list).
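This type has no manifest in the original examples, so here is a minimal sketch (assumes a cloud provider, or an in-cluster implementation such as MetalLB, is available to actually allocate the external IP; the name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer    # hypothetical name, for illustration
spec:
  type: LoadBalancer          # builds on NodePort; the cloud controller provisions the LB
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 80
# once provisioned, `kubectl get svc` shows the allocated address under EXTERNAL-IP
```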
4.4. ExternalName
- Bringing in external services: ExternalName maps a service outside the cluster into it, so the external service can be reached through a Service name.
- Characteristics: an ExternalName Service is allocated no ClusterIP and creates no proxy or load balancer; instead, the mapping is implemented purely in DNS by returning a CNAME record.
- Typical use: simplifying service discovery and access when in-cluster applications need to reach services outside the cluster. Note that ExternalName relies on cluster DNS support (kube-dns 1.7+ or CoreDNS).
```yaml
vi my-external-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: www.baidu.com
[root@master 6]# kubectl exec -it myapp-nodeport-deploy-685dcc6ddf-4ns8j bash
myapp-nodeport-deploy-685dcc6ddf-4ns8j:/# ping my-external-service.default.svc.cluster.local.
PING my-external-service.default.svc.cluster.local. (220.181.38.149): 56 data bytes
64 bytes from 220.181.38.149: seq=0 ttl=127 time=18.446 ms
64 bytes from 220.181.38.149: seq=1 ttl=127 time=17.791 ms
64 bytes from 220.181.38.149: seq=2 ttl=127 time=21.961 ms
64 bytes from 220.181.38.149: seq=3 ttl=127 time=19.518 ms
# Baidu's IP: 220.181.38.149
```
5. Using Endpoints
Endpoints is the Kubernetes resource object that associates a Service with concrete Pod IP addresses. Through Endpoints, the Pod IPs behind a Service are managed dynamically, ensuring the Service routes traffic to the correct backend Pods.
Endpoints objects are normally not created by users directly; Kubernetes creates them automatically from the Service object and its Label Selector. In special cases, however, a user can create or modify an Endpoints object by hand to meet a specific need, as the selector-less Service example below does.
```shell
[root@master 6]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.17.100:6443 3d18h
myapp-nodeport 10.244.104.39:80,10.244.104.46:80,10.244.166.134:80 47m
[root@master 6]# kubectl edit ep kubernetes
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2024-08-20T14:05:33Z"
  labels:
    endpointslice.kubernetes.io/skip-mirror: "true"
  name: kubernetes
  namespace: default
  resourceVersion: "124820"
  uid: de808267-dc60-4de8-8a0b-6aefc6f694b0
subsets:
  - addresses:
      - ip: 10.0.17.100
    ports:
      - name: https
        port: 6443
        protocol: TCP
```
```yaml
vi nginx-noselectt.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-noselectt
spec:
  ports:
    - protocol: TCP
      port: 6666
      targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-noselectt
subsets:
  - addresses:
      - ip: 10.0.17.100
    ports:
      - port: 80
```
```shell
[root@master 6]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.7.214.139:6666 rr
-> 10.0.17.100:80 Masq 1 0 0
[root@master 6]# docker run --name nginx -p 80:80 -d nginx
[root@master 6]# curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
6. publishNotReadyAddresses
publishNotReadyAddresses is a field in the Service spec that controls whether the addresses of Pods that are not Ready are associated with the Service. It defaults to false, meaning only Pods in the Ready state have their addresses attached to the Service.
Setting the field to true attaches the addresses of not-ready Pods as well. This is usually not recommended: a not-ready Pod may fail to handle requests correctly, making the service unstable or returning error responses.
```yaml
vim readiness-httpget-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
  labels:
    app: myapp
    env: test
spec:
  containers:
    - name: readiness-httpget-container
      image: wangyanglinux/myapp:v1.0
      imagePullPolicy: IfNotPresent
      readinessProbe:
        httpGet:
          port: 80
          path: /index1.html   # nonexistent path: the Pod never becomes Ready
        initialDelaySeconds: 1
        periodSeconds: 3
vim myapp.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
    - name: 80-80
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
[root@master 6]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d18h
myapp ClusterIP 10.5.69.148 <none> 80/TCP 87s
[root@master 6]# curl 10.5.69.148
curl: (7) Failed to connect to 10.5.69.148 port 80: Connection refused
kubectl patch service myapp -p '{"spec":{"publishNotReadyAddresses": true}}'
```
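The same setting can be declared in the manifest instead of patched in afterwards (a sketch of the relevant field added to the Service above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  publishNotReadyAddresses: true   # also publish Pods whose readiness probes are failing
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
```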
In summary, a K8s Service gives applications in a Kubernetes cluster stable access through its service-discovery and load-balancing mechanisms, while configuration options such as Endpoints and publishNotReadyAddresses let users further customize and tune Service behavior.