84. Kubernetes Pod basics + HTTPS Harbor

1. Pod basics:

Pod advanced topics (later): probes (a frequent interview question), scaling, and volume mounts.

1.1 The definition of a pod

A pod is the smallest unit in Kubernetes; it is also the smallest resource object that runs containers.

Containers work inside a Kubernetes cluster through pods.

In a Kubernetes cluster, a pod represents a running process. Most Kubernetes components revolve around pods, supporting and extending them.

Deployments and Services are both built around pods.
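
For example, a Deployment does nothing more than wrap an ordinary pod template and keep the requested number of pod replicas running. A minimal sketch (the name web-test is made up for illustration only):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-test                #hypothetical name, illustration only
spec:
  replicas: 2                   #the Deployment keeps 2 pods of this template running
  selector:
    matchLabels:
      app: web-test
  template:                     #this part is an ordinary pod spec
    metadata:
      labels:
        app: web-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.22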

1.2 Two ways to use pods in Kubernetes:

1. One container per pod. This is the most common pattern; Kubernetes manages pods, not individual containers.

2. Multiple containers in one pod. The containers share the network and the mounted volumes.

Current container technology requires that all containers in a pod run on the same node.

Sharing the network and the volumes is not a capability provided by the pod itself.

It is the pause container that provides the shared network and the shared volume mounts.

[root@master01 ~]# kubectl run --image=nginx:1.22 test1
pod/test1 created
[root@master01 ~]# kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
test1   1/1     Running   0          61s   10.244.2.70   node02   <none>           <none>
The pause container is an important infrastructure component that the system maintains for every pod.
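
A quick way to see these pause containers is to list the containers on a worker node. This is only a sketch, assuming the nodes use Docker as the container runtime (as in this cluster); the exact image name varies by installation:

[root@node02 ~]# docker ps | grep pause    #one pause container per pod scheduled on this node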

1.3 Types of containers in a pod

1. The base (infrastructure) container: pause

It provides the shared network and the shared volume mounts.

2. Init containers:

Init containers live inside the pod, are one of its constituent parts, and belong to one phase of the pod lifecycle: the startup phase.

When a pod is brought up, the pause container is created first. After that, if the pod declares init containers, the application containers are deployed only after all init containers have completed.

1.3.1 How to view the three containers

kubectl exec -it init-pod(pod name) -c centos2(container name) bash

Once an init container has completed its task it exits. The exited container still remains on the node, but at that point only the running application container can be entered; the finished init containers can only be queried through their logs and status.

An init container must exit after it finishes running; otherwise the subsequent containers cannot be built.

View logs: kubectl logs -f init-pod(pod name) -c centos2(container name)
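
The init containers can also be checked from the pod status itself, without entering them; a sketch using jsonpath (the field names are standard, the output formatting depends on the kubectl version):

kubectl get pod init-pod -o jsonpath='{.status.initContainerStatuses[*].name}'
kubectl get pod init-pod -o jsonpath='{.status.initContainerStatuses[*].state}'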

1.3.2 Startup order

Init containers run first, then the application containers are started.

#If an init container fails to start, can the pod reach the Ready state?

[root@master01 k8s-yaml]# vim init.yml


Case 1: application container declared after the init containers
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  labels:
    app: test1
spec:
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: centos1
    image: centos:7
    command: ["/bin/bash","-c","echo 123 > /opt/123.txt && sleep 2"]
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: centos2
    image: centos:7
    command: ["/bin/bash","-c","echo 345 > /opt/345.txt && sleep 50"]
  containers:
  - name: centos3
    image: centos:7
    command: ["/bin/bash","-c","echo system is running && sleep 3600"]




[root@master01 k8s-yaml]# kubectl apply -f init.yml 
pod/init-pod created
[root@master01 k8s-yaml]# kubectl get pod -o wide
[root@master01 k8s-yaml]# kubectl describe pod init-pod 


Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  55s   default-scheduler  Successfully assigned default/init-pod to node02
  Normal  Pulled     54s   kubelet            Container image "centos:7" already present on machine
  Normal  Created    54s   kubelet            Created container centos1
  Normal  Started    54s   kubelet            Started container centos1
  Normal  Pulled     52s   kubelet            Container image "centos:7" already present on machine
  Normal  Created    52s   kubelet            Created container centos2
  Normal  Started    52s   kubelet            Started container centos2
  Normal  Pulled     2s    kubelet            Container image "centos:7" already present on machine
  Normal  Created    2s    kubelet            Created container centos3
  Normal  Started    2s    kubelet            Started container centos3

Case 2: application container declared before the init containers
[root@master01 k8s-yaml]# vim init.yml

apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos:7
    command: ["/bin/bash","-c","echo system is running && sleep 3600"]
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: centos1
    image: centos:7
    command: ["/bin/bash","-c","echo 123 > /opt/123.txt && sleep 2"]
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: centos2
    image: centos:7
    command: ["/bin/bash","-c","echo 345 > /opt/345.txt && sleep 50"]
[root@master01 k8s-yaml]# kubectl apply -f init.yml 

[root@master01 k8s-yaml]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
init-pod    1/1     Running   0          107s

[root@master01 k8s-yaml]# kubectl describe pod init-pod 

Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m6s   default-scheduler  Successfully assigned default/init-pod to node02
  Normal  Pulled     3m6s   kubelet            Container image "centos:7" already present on machine
  Normal  Created    3m6s   kubelet            Created container centos1
  Normal  Started    3m6s   kubelet            Started container centos1
  Normal  Pulled     3m4s   kubelet            Container image "centos:7" already present on machine
  Normal  Created    3m4s   kubelet            Created container centos2
  Normal  Started    3m4s   kubelet            Started container centos2
  Normal  Pulled     2m13s  kubelet            Container image "centos:7" already present on machine
  Normal  Created    2m13s  kubelet            Created container centos3
  Normal  Started    2m13s  kubelet            Started container centos3
Enter a container:
kubectl exec -it init-pod(pod name) -c centos2(container name) bash
View logs:
kubectl logs -f init-pod(pod name) -c centos2(container name)
An nginx init container with no command keeps running and never exits, so the pod cannot finish starting:
[root@master01 k8s-yaml]# vim init1.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init1-pod
  labels:
    app: test1
spec:
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: nginx1
    image: nginx:1.22
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: nginx2
    image: nginx:1.22
  containers:
  - name: nginx3
    image: nginx:1.22

[root@master01 k8s-yaml]# kubectl apply -f init1.yml 
[root@master01 k8s-yaml]# kubectl get pod -o wide
[root@master01 k8s-yaml]# kubectl describe pod init1-pod
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m33s  default-scheduler  Successfully assigned default/init1-pod to node01
  Normal  Pulled     2m33s  kubelet            Container image "nginx:1.22" already present on machine
  Normal  Created    2m33s  kubelet            Created container nginx1
  Normal  Started    2m33s  kubelet            Started container nginx1
Use command (/bin/bash) to override the image's default startup command:
[root@master01 k8s-yaml]# vim init1.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init1-pod
  labels:
    app: test1
spec:
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: nginx1
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: nginx2
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]
  containers:
  - name: nginx3
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]


[root@master01 k8s-yaml]# kubectl apply -f init1.yml --force
[root@master01 k8s-yaml]# kubectl get pod -o wide
[root@master01 k8s-yaml]# kubectl describe pod init1-pod 

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  86s                default-scheduler  Successfully assigned default/init1-pod to node01
  Normal   Pulled     86s                kubelet            Container image "nginx:1.22" already present on machine
  Normal   Created    85s                kubelet            Created container nginx1
  Normal   Started    85s                kubelet            Started container nginx1
  Normal   Pulled     85s                kubelet            Container image "nginx:1.22" already present on machine
  Normal   Created    85s                kubelet            Created container nginx2
  Normal   Started    84s                kubelet            Started container nginx2
  Normal   Pulled     39s (x4 over 84s)  kubelet            Container image "nginx:1.22" already present on machine
  Normal   Created    39s (x4 over 83s)  kubelet            Created container nginx3
  Normal   Started    39s (x4 over 83s)  kubelet            Started container nginx3
  Warning  BackOff    14s (x7 over 81s)  kubelet            Back-off restarting failed container
  
  
[root@master01 k8s-yaml]# kubectl exec -it init1-pod -c nginx3 bash  ##the container has already exited, so exec cannot get in


[root@master01 k8s-yaml]# vim init1.yml 


apiVersion: v1
kind: Pod
metadata:
  name: init1-pod
  labels:
    app: test1
spec:
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: nginx1
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: nginx2
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]
  containers:
  - name: nginx3
    image: nginx:1.22



[root@master01 k8s-yaml]# kubectl apply -f init1.yml --force
[root@master01 k8s-yaml]# kubectl get pod -o wide
[root@master01 k8s-yaml]# kubectl describe pod init1-pod 
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned default/init1-pod to node01
  Normal  Pulled     43s   kubelet            Container image "nginx:1.22" already present on machine
  Normal  Created    43s   kubelet            Created container nginx1
  Normal  Started    43s   kubelet            Started container nginx1
  Normal  Pulled     43s   kubelet            Container image "nginx:1.22" already present on machine
  Normal  Created    43s   kubelet            Created container nginx2
  Normal  Started    43s   kubelet            Started container nginx2
  Normal  Pulled     42s   kubelet            Container image "nginx:1.22" already present on machine
  Normal  Created    42s   kubelet            Created container nginx3
  Normal  Started    42s   kubelet            Started container nginx3

[root@master01 k8s-yaml]# kubectl exec -it init1-pod -c nginx3 bash  ##-c selects which container to enter
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@init1-pod:/# 
root@init1-pod:/# exit
exit

[root@master01 k8s-yaml]# kubectl logs -f init1-pod -c nginx3      ##view the container's logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/08/30 03:06:28 [notice] 1#1: using the "epoll" event method
2024/08/30 03:06:28 [notice] 1#1: nginx/1.22.1
2024/08/30 03:06:28 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2024/08/30 03:06:28 [notice] 1#1: OS: Linux 3.10.0-957.el7.x86_64
2024/08/30 03:06:28 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2024/08/30 03:06:28 [notice] 1#1: start worker processes
2024/08/30 03:06:28 [notice] 1#1: start worker process 29
2024/08/30 03:06:28 [notice] 1#1: start worker process 30
2024/08/30 03:06:28 [notice] 1#1: start worker process 31
2024/08/30 03:06:28 [notice] 1#1: start worker process 32
If a faulty init container fails to start (here the touch path is wrong), the subsequent application containers cannot start either:
[root@master01 k8s-yaml]# vim init1.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init1-pod
  labels:
    app: test1
spec:
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: nginx1
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /uipt/123.txt"]
#Multiple commands can be chained: || is logical OR, && is logical AND
  - name: nginx2
    image: nginx:1.22
    command: ["/bin/bash","-c","touch /opt/123.txt"]
  containers:
  - name: nginx3
    image: nginx:1.22
[root@master01 k8s-yaml]# kubectl apply -f init1.yml --force
[root@master01 k8s-yaml]# kubectl describe pod init1-pod 



Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  7s               default-scheduler  Successfully assigned default/init1-pod to node01
  Normal   Pulled     6s (x2 over 7s)  kubelet            Container image "nginx:1.22" already present on machine
  Normal   Created    6s (x2 over 7s)  kubelet            Created container nginx1
  Normal   Started    6s (x2 over 7s)  kubelet            Started container nginx1
  Warning  BackOff    4s (x2 over 5s)  kubelet            Back-off restarting failed container
[root@master01 k8s-yaml]# kubectl get pod
NAME        READY   STATUS                  RESTARTS   AGE
init-pod    1/1     Running                 0          39m
init1-pod   0/1     Init:CrashLoopBackOff   2          35s

1.4 What init containers are for:

1. When a pod is created, init containers can prepare the runtime conditions for the application containers and provide environment variables and custom software.

2. Permissions: init containers can access Secrets without extra configuration, whereas application containers must be explicitly configured before they can access a Secret.

Before the application containers run, init containers can provide the necessary preconditions; only after those preconditions are met do the application containers start.
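
A typical pattern is an init container that waits for a dependency to become available before the application starts. A minimal sketch, not part of this lab; the Service name mydb is hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: wait-pod
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
#loop until the Service DNS name resolves, then exit so the app container can start
    command: ["sh","-c","until nslookup mydb; do echo waiting for mydb; sleep 2; done"]
  containers:
  - name: app
    image: nginx:1.22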

1.5 Image pull policy:

1. IfNotPresent: if the image already exists locally, it is not pulled from the registry again. (The default.)

2. Always: the image is pulled every time the pod is created.

3. Never: never pull from the registry; only use the local image.

Image tags: nginx:1.22 means image nginx with tag 1.22.

Without a tag, nginx defaults to nginx:latest, where latest means the newest version.

If no pull policy is declared, the default is IfNotPresent; but if the image has no tag (i.e. it resolves to latest), then even without declaring a pull policy the default becomes Always.

nginx:1.22 ---- IfNotPresent ---- with an explicit version tag the default pull policy is IfNotPresent

nginx:latest ---- Always ---- without a version tag the default pull policy is Always
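
To confirm which policy the API server actually defaulted in for a given pod (for example the init2-pod created further below), the pod spec can be queried directly; a sketch:

kubectl get pod init2-pod -o jsonpath='{.spec.containers[0].imagePullPolicy}'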

The following example continues the init-container demo: the init containers and the application container share data through an emptyDir volume.
[root@master01 k8s-yaml]# vim init.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  labels:
    app: test1
spec:
  volumes:
  - name: testdata
    emptyDir: {}
  initContainers:
#Define the init containers inside the pod (one pod can contain multiple containers; these are the init containers)
  - name: centos1
    image: centos:7
    command: ["/bin/bash","-c","echo 123 > /opt/data/123.txt && sleep 2"]
#Multiple commands can be chained: || is logical OR, && is logical AND
    volumeMounts:
    - name: testdata
      mountPath: /opt/data
  - name: centos2
    image: centos:7
    command: ["/bin/bash","-c","echo 345 > /opt/data/345.txt && sleep 50"]
    volumeMounts:
    - name: testdata
      mountPath: /opt/data
  containers:
  - name: centos3
    image: centos:7
    command: ["/bin/bash","-c","echo 567 > /opt/data/567.txt && sleep 3600"]
    volumeMounts:
    - name: testdata
      mountPath: /opt/data


[root@master01 k8s-yaml]# kubectl apply -f init.yml --force

[root@master01 k8s-yaml]# kubectl describe pod init-pod 

[root@master01 k8s-yaml]# kubectl get pod
NAME        READY   STATUS                  RESTARTS   AGE
init-pod    0/1     Init:1/2                0          44s
init1-pod   0/1     Init:CrashLoopBackOff   7          11m
[root@master01 k8s-yaml]# kubectl get pod
NAME        READY   STATUS                  RESTARTS   AGE
init-pod    0/1     PodInitializing         0          54s
init1-pod   0/1     Init:CrashLoopBackOff   7          11m
[root@master01 k8s-yaml]# kubectl exec -it init-pod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@init-pod /]# cd /opt/
[root@init-pod opt]# ls
data
[root@init-pod opt]# cd data/
[root@init-pod data]# ls
123.txt  345.txt  567.txt
[root@init-pod data]# cat 123.txt 
123
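
On the node that runs the pod, the emptyDir data lives under the kubelet's pod directory; a sketch, with the pod UID left as a placeholder to fill in:

[root@master01 k8s-yaml]# kubectl get pod init-pod -o jsonpath='{.metadata.uid}'
#then on the node hosting the pod:
[root@node02 ~]# ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/testdata/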
Image pull policy imagePullPolicy: Always
[root@master01 k8s-yaml]# vim init2.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init2-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos:7
    imagePullPolicy: Always
~                             
[root@master01 k8s-yaml]# kubectl apply -f init2.yml --force
[root@master01 k8s-yaml]# kubectl describe pod init2-pod 


Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m19s                 default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulled     2m31s                 kubelet            Successfully pulle
Image pull policy imagePullPolicy: Never
[root@master01 k8s-yaml]# vim init2.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init2-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos
    imagePullPolicy: Never
[root@master01 k8s-yaml]# kubectl apply -f init2.yml --force
[root@master01 k8s-yaml]# kubectl describe pod init2-pod 
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  10s               default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulled     9s (x2 over 10s)  kubelet            Container image "centos" already present on machine
  Normal   Created    9s (x2 over 10s)  kubelet            Created container centos3
  Normal   Started    9s (x2 over 10s)  kubelet            Started container centos3
  Warning  BackOff    7s (x2 over 8s)   kubelet            Back-off restarting failed container
Image pull policy imagePullPolicy: IfNotPresent
[root@master01 k8s-yaml]# vim init2.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init2-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos:7
    imagePullPolicy: IfNotPresent    


[root@master01 k8s-yaml]# kubectl describe pod init2-pod 

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulled     16s (x3 over 34s)  kubelet            Container image "centos:7" already present on machine
Image pull policy imagePullPolicy: IfNotPresent, with no image tag specified
[root@master01 k8s-yaml]# vim init2.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init2-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos
    imagePullPolicy: IfNotPresent
[root@master01 k8s-yaml]# kubectl apply -f init2.yml --force
[root@master01 k8s-yaml]# kubectl describe pod init2-pod 
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  14s                default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulled     12s (x2 over 13s)  kubelet            Container image "centos" already present on machine
  Normal   Created    12s (x2 over 13s)  kubelet            Created container centos3
  Normal   Started    12s (x2 over 13s)  kubelet            Started container centos3
  Warning  BackOff    10s (x2 over 11s)  kubelet            Back-off restarting failed container
Image pull policy: no image tag and no policy specified
[root@master01 k8s-yaml]# vim init2.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init2-pod
  labels:
    app: test1
spec:
  containers:
  - name: centos3
    image: centos
   # imagePullPolicy: IfNotPresent 

[root@master01 k8s-yaml]# kubectl apply -f init2.yml --force
[root@master01 k8s-yaml]# kubectl describe pod init2-pod 
Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  7s               default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulling    5s (x2 over 7s)  kubelet            Pulling image "centos"
[root@master01 k8s-yaml]# kubectl describe pod init2-pod 
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  86s                default-scheduler  Successfully assigned default/init2-pod to node02
  Normal   Pulled     84s                kubelet            Successfully pulled image "centos" in 1.127308104s
  Normal   Pulled     83s                kubelet            Successfully pulled image "centos" in 1.190259011s
  Normal   Pulled     66s                kubelet            Successfully pulled image "centos" in 1.472428673s

2. HTTPS is HTTP with encryption

Port 443.

It runs over TCP, and the connection is established the same way as any ordinary TCP connection.

TCP three-way handshake -------> SSL/TLS handshake; the handshake establishes a secure, encrypted communication channel.

The SSL/TLS handshake process:

1. The client sends a message to the server containing the SSL/TLS protocol versions it supports, a list of cipher suites, a random number, and so on.

2. After receiving the message, the server replies to the client, confirming the SSL/TLS version and cipher suite to use, and sends its own random number to the client.

The random numbers are exchanged so that the two sides can confirm each other (they are also used later when deriving the session keys).

3. The server sends its digital certificate to the client; the certificate contains the server's public key.

A digital certificate can be purchased (from a CA) or, as a second option, self-signed by the server itself. Once the client has the server's public key, it can verify the server's real identity.

4. Key exchange: the server and client negotiate a symmetric key for the subsequent encrypted traffic. How the key is produced: the client generates the symmetric key material and encrypts it with the server's public key; the server decrypts it with its private key, so both ends now hold the same key.

5. Only after all of the above has completed can the server and client communicate over encryption. The essence of the encryption is the key material that the server and client have mutually agreed on and authenticated.
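
Once the Harbor registry is serving HTTPS (set up in the next section), the handshake and the server certificate can be inspected with openssl's test client; a sketch:

openssl s_client -connect hub.test.com:443 -showcerts </dev/null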

3. HTTPS with the Docker Harbor registry:

------------------Install Docker------------------

[root@k8s4 ~]#systemctl stop firewalld

[root@k8s4 ~]#setenforce 0

yum install -y yum-utils device-mapper-persistent-data lvm2 

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

yum install -y docker-ce-24.0.1 docker-ce-cli-24.0.1 containerd.io

[root@k8s4 ~]#vim /etc/docker/daemon.json 

{
  "registry-mirrors": [
                "https://hub-mirror.c.163.com",
                "https://docker.m.daocloud.io",
                "https://ghcr.io",
                "https://mirror.baidubce.com",
                "https://docker.nju.edu.cn"
   ]
}


[root@k8s4 ~]# systemctl daemon-reload 
[root@k8s4 ~]# systemctl restart docker
[root@k8s4 ~]# systemctl enable docker
-------------------------------------------------

-----------------Install docker-compose and harbor-offline-installer-------------------------
[root@k8s4 ~]# cd /opt/
[root@k8s4 opt]# rz -E
rz waiting to receive.
[root@k8s4 opt]# rz -E
rz waiting to receive.
[root@k8s4 opt]# ls
containerd                   harbor-offline-installer-v2.8.1.tgz  test
docker-compose-linux-x86_64  jenkins-2.396-1.1.noarch.rpm
[root@k8s4 opt]#  mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
[root@k8s4 opt]# chmod +x /usr/local/bin/docker-compose
[root@k8s4 opt]# tar zxvf harbor-offline-installer-v2.8.1.tgz
harbor/harbor.v2.8.1.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
--------------------Installation complete---------------------


------------------Set up HTTPS-------------------
[root@k8s4 harbor]# cp harbor.yml.tmpl harbor.yml
[root@k8s4 harbor]# mkdir -p /data/cert
[root@k8s4 harbor]# cd /data/cert/
[root@k8s4 cert]# ls
[root@k8s4 cert]# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
................................................................+++
...............................................................................+++
e is 65537 (0x10001)
Enter pass phrase for server.key:123456
Verifying - Enter pass phrase for server.key:123456


Explanation:
openssl genrsa -des3 -out server.key 2048

genrsa: generate an RSA private key

-des3: encrypt the private key with the Triple DES algorithm (you will be prompted for a pass phrase)

-out server.key: file name for the private key

2048: RSA key length of 2048 bits
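
If you do not want a pass phrase on the private key at all, omitting -des3 produces an unencrypted key directly; a sketch (the walkthrough below instead removes the pass phrase later with openssl rsa):

openssl genrsa -out server.key 2048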


Generate a certificate signing request (CSR) from the private key:
[root@k8s4 cert]# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:123456
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:JS
Locality Name (eg, city) [Default City]:NJ
Organization Name (eg, company) [Default Company Ltd]:XY
Organizational Unit Name (eg, section) []:XY
Common Name (eg, your name or your server's hostname) []:hub.dn.com
Email Address []:admin@qq.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@k8s4 cert]# ls
server.csr  server.key
[root@k8s4 cert]# cp server.key server.key.org
[root@k8s4 cert]# ls
server.csr  server.key  server.key.org
[root@k8s4 cert]# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:123456
writing RSA key
[root@k8s4 cert]# openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=CN/ST=JS/L=NJ/O=XY/OU=XY/CN=hub.dn.com/emailAddress=admin@qq.com
Getting Private key

Explanation:
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

Sign the certificate:

x509: X.509 is the standard public-key certificate format handled by openssl

-req: the input file is a certificate signing request (CSR), which is turned into a signed certificate

-days 1000: the certificate is valid for 1000 days

-in server.csr: the certificate signing request file (.csr)

-signkey server.key: self-sign the generated certificate with the private key; this key matches the public key embedded in the CSR

-out server.crt: the resulting self-signed certificate file
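
Before wiring the certificate into Harbor, it can be sanity-checked; a sketch:

openssl x509 -in server.crt -noout -subject -dates    #print the certificate subject and validity period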

[root@k8s4 cert]# chmod 777 /data/cert/*
[root@k8s4 cert]# ls
server.crt  server.csr  server.key  server.key.org
[root@k8s4 cert]# cd /opt/harbor/
[root@k8s4 harbor]# ls
common.sh             harbor.yml       install.sh  prepare
harbor.v2.8.1.tar.gz  harbor.yml.tmpl  LICENSE
[root@k8s4 harbor]# vim harbor.yml
harbor.yml       harbor.yml.tmpl  
[root@k8s4 harbor]# vim harbor.yml

  5 hostname: hub.test.com
 17   certificate: /data/cert/server.crt
 18   private_key: /data/cert/server.key
 34 harbor_admin_password: 123456
[root@k8s4 harbor]# ./prepare 
[root@k8s4 harbor]# ./install.sh 
[root@k8s4 /]# scp -r /data root@192.168.168.81:/
[root@k8s4 /]# scp -r /data root@192.168.168.82:/
[root@k8s4 /]# scp -r /data root@192.168.168.83:/
------------------HTTPS setup complete-----------------------


------------Run on all three Kubernetes nodes-----------------------
[root@master01 k8s-yaml]# mkdir -p /etc/docker/certs.d/hub.test.com
[root@master01 k8s-yaml]# cd /data/cert/
[root@master01 cert]# ls
server.crt  server.csr  server.key  server.key.org
[root@master01 cert]# cp server.crt server.csr server.key /etc/docker/certs.d/hub.test.com/
[root@master01 cert]# cd /etc/docker/certs.d/hub.test.com/
[root@master01 hub.test.com]# ls
server.crt  server.csr  server.key
[root@master01 hub.test.com]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.168.81 master01
192.168.168.82 node01
192.168.168.83 node02
192.168.168.84 hub.test.com
[root@master01 hub.test.com]# vim /lib/systemd/system/docker.service

With the hosts mapping already in place, point Docker at the registry:
 13 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry=hub.test.com
[root@master01 hub.test.com]# systemctl daemon-reload 
[root@master01 hub.test.com]# systemctl restart docker
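
An alternative to editing docker.service is to declare the registry in /etc/docker/daemon.json; a sketch (merge it with the registry-mirrors block that already exists there, then reload and restart Docker as above):

{
  "insecure-registries": ["hub.test.com"]
}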

[root@master01 hub.test.com]# docker login -u admin -p 123456 https://hub.test.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
--------------------End of the per-node steps---------------------
[root@node02 hub.test.com]# docker images
[root@node02 hub.test.com]# docker tag nginx:1.22 hub.test.com/test1/nginx:v1
[root@node02 hub.test.com]# docker push hub.test.com/test1/nginx:v1


##delete the tagged image locally so that it will be pulled from the remote registry
[root@node02 hub.test.com]# docker rmi -f hub.test.com/test1/nginx:v1
Untagged: hub.test.com/test1/nginx:v1
Untagged: hub.test.com/test1/nginx@sha256:9081064712674ffcff7b7bdf874c75bcb8e5fb933b65527026090dacda36ea8b

[root@master01 k8s-yaml]# vim init1.yml 

apiVersion: v1
kind: Pod
metadata:
  name: init1-pod
  labels:
    app: test1
spec:
  containers:
  - name: nginx1
    image: hub.test.com/test1/nginx:v1
[root@master01 k8s-yaml]# kubectl apply -f init1.yml
[root@master01 k8s-yaml]# kubectl get pod
[root@master01 k8s-yaml]# kubectl describe pod init1-pod 


Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/init1-pod to node02
  Normal  Pulling    10s   kubelet            Pulling image "hub.test.com/test1/nginx:v1"
  Normal  Pulled     10s   kubelet            Successfully pulled image "hub.test.com/test1/nginx:v1" in 84.327079ms
  Normal  Created    10s   kubelet            Created container nginx1
  Normal  Started    10s   kubelet            Started container nginx1
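
This pull succeeds because every node has already run docker login against hub.test.com. The credentials can also be handed to Kubernetes explicitly as an image pull Secret; a sketch, with the Secret name harbor-login made up for illustration:

kubectl create secret docker-registry harbor-login --docker-server=hub.test.com --docker-username=admin --docker-password=123456

#then reference it in the pod spec:
spec:
  imagePullSecrets:
  - name: harbor-login
  containers:
  - name: nginx1
    image: hub.test.com/test1/nginx:v1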