Table of Contents
[2. Deploying Kuboard on K8S 1.29 (Method 1)](#2-deploying-kuboard-on-k8s-129-method-1)
[3. Deploying Kuboard on K8S 1.29 (Method 2)](#3-deploying-kuboard-on-k8s-129-method-2)
[4. Using Kuboard on K8S 1.29](#4-using-kuboard-on-k8s-129)
I. Lab
1. Environment
(1) Hosts
Table 1: Hosts
| Host   | Role            | Version | IP             | Notes |
|--------|-----------------|---------|----------------|-------|
| master | K8S master node | 1.29.0  | 192.168.204.8  |       |
| node1  | K8S worker node | 1.29.0  | 192.168.204.9  |       |
| node2  | K8S worker node | 1.29.0  | 192.168.204.10 |       |
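The scp commands later in this walkthrough reach node1 by hostname, so the nodes need to resolve one another's names. A minimal sketch of the assumed /etc/hosts entries, built from the IPs in Table 1 (skip this if DNS or existing hosts entries already resolve the names):
bash
# /etc/hosts on every node (assumed setup, not part of the original environment check)
192.168.204.8   master
192.168.204.9   node1
192.168.204.10  node2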
(2) View the cluster from the master node
bash
1) View the nodes
kubectl get node
2) View detailed node information
kubectl get node -o wide
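As a quick sanity check before installing Kuboard, the sketch below (plain kubectl and awk, not a step from the original procedure) prints each node's name and status; all three nodes should report Ready:
bash
# print only the NAME and STATUS columns
kubectl get nodes --no-headers | awk '{print $1, $2}'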
2. Deploying Kuboard on K8S 1.29 (Method 1)
(1) References
bash
1) Official site
https://kuboard.cn/
2) Installation guide
https://kuboard.cn/install/v3/install.html
(2) Download the YAML file
bash
1) Method 1
wget https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
2) Method 2
# Huawei Cloud's image registry is used instead of Docker Hub to distribute the images Kuboard needs
wget https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
Method 1 is used here.
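Before pulling anything, it can help to confirm which images the manifest actually references; a small sketch using plain grep (not one of the original steps):
bash
# list the images referenced by the manifest; expect eipwork/kuboard:v3 and eipwork/etcd-host:3.4.16-2
grep -n 'image:' kuboard-v3.yaml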
(3) Pull the kuboard image on node2
bash
[root@node2 ~]# docker pull eipwork/kuboard:v3
View the images:
bash
[root@node2 ~]# docker images
(4) Pull the etcd-host image on node2
bash
[root@node2 ~]# docker pull eipwork/etcd-host:3.4.16-2
View the images:
bash
[root@node2 ~]# docker images
(5) Export the Docker images
bash
[root@node2 ~]# docker save -o etcd-host.tar eipwork/etcd-host:3.4.16-2
[root@node2 ~]# docker save -o kuboard.tar eipwork/kuboard:v3
(6) Copy the Docker images to node1
bash
[root@node2 ~]# scp etcd-host.tar root@node1:~
[root@node2 ~]# scp kuboard.tar root@node1:~
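Optionally, verify that the archives were not corrupted in transit; a sketch that compares checksums on both nodes:
bash
# on node2: record the checksums of the exported archives
[root@node2 ~]# sha256sum etcd-host.tar kuboard.tar
# on node1 after the copy: the output should match node2's exactly
[root@node1 ~]# sha256sum etcd-host.tar kuboard.tar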
(7) Import the Docker images on node1
bash
[root@node1 ~]# docker load -i etcd-host.tar
[root@node1 ~]# docker load -i kuboard.tar
View the images:
bash
[root@node1 ~]# docker images
(8) Modify the image pull policy
Before:
After:
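The before/after comparison refers to the manifest's imagePullPolicy: with the images already loaded on the nodes, the policy is switched so Kubernetes uses the local copies instead of pulling again. A hedged sketch of the edit (assuming the downloaded kuboard-v3.yaml sets the policy to Always; check the file before running the sed):
bash
# switch the pull policy so the locally loaded images are used
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' kuboard-v3.yaml
# confirm the result
grep -n 'imagePullPolicy' kuboard-v3.yaml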
(9) Create the resources from the master node
bash
[root@master ~]# kubectl apply -f kuboard-v3.yaml
(10) Watch the pods and delete them
Note: if kuboard-v3 stays at 0/1 Ready, the configuration needs to be changed (see the troubleshooting sketch after the command below, and Method 2 later in this article for the actual fix).
bash
[root@master ~]# kubectl get pods -n kuboard -o wide -w
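If the pod really does stay at 0/1 Ready, the commands below (a sketch; the pod name is the generated kuboard-v3-xxxx name shown by the first command) help locate the cause:
bash
[root@master ~]# kubectl -n kuboard get pods
# events (scheduling, image pulls, failed probes) for the pod listed above
[root@master ~]# kubectl -n kuboard describe pod <kuboard-v3-pod-name>
# tail of the container log; the error near the end usually names the problem
[root@master ~]# kubectl -n kuboard logs <kuboard-v3-pod-name> --tail=50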
Delete:
bash
[root@master ~]# kubectl delete -f kuboard-v3.yaml
3. Deploying Kuboard on K8S 1.29 (Method 2)
(1) References
bash
1) Official site
https://kuboard.cn/
2) Installation guide
https://kuboard.cn/install/v3/install.html
(2) Download the YAML file
bash
1) Method 1
wget https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
2) Method 2
# Huawei Cloud's image registry is used instead of Docker Hub to distribute the images Kuboard needs
wget https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
Method 2 is used here.
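As in Method 1, a quick grep (a sketch, not an original step) confirms which images this manifest pulls from the Huawei Cloud registry:
bash
# expect image references under swr.cn-east-2.myhuaweicloud.com/kuboard/
grep -n 'image:' kuboard-v3-swr.yaml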
(3) Pull the images on node2
bash
[root@node2 ~]# docker pull swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
bash
[root@node2 ~]# docker pull swr.cn-east-2.myhuaweicloud.com/kuboard/etcd-host:3.4.16-2
View the images:
bash
[root@node2 ~]# docker images
bash
[root@node2 ~]# docker images | grep myhuaweicloud
(4) Export the Docker images
bash
[root@node2 ~]# docker save -o etcd-host.tar swr.cn-east-2.myhuaweicloud.com/kuboard/etcd-host:3.4.16-2
[root@node2 ~]# docker save -o kuboard.tar swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
(5) Copy the Docker images to node1
bash
[root@node2 ~]# scp etcd-host.tar root@node1:~
[root@node2 ~]# scp kuboard.tar root@node1:~
(6) Import the Docker images on node1
bash
[root@node1 ~]# docker load -i etcd-host.tar
[root@node1 ~]# docker load -i kuboard.tar
View the images:
bash
[root@node1 ~]# docker images
(7) Modify the image pull policy
Before:
After:
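As in Method 1, the before/after refers to the manifest's imagePullPolicy, this time in kuboard-v3-swr.yaml; a hedged sketch, again assuming the file sets the policy to Always:
bash
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' kuboard-v3-swr.yaml
grep -n 'imagePullPolicy' kuboard-v3-swr.yaml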
(8) Create the resources from the master node
bash
[root@master ~]# kubectl apply -f kuboard-v3-swr.yaml
(9) Watch the pods
bash
[root@master ~]# kubectl get pods -n kuboard -o wide -w
(10) View the logs
bash
[root@master ~]# kubectl logs -f kuboard-v3-6fdbd869b7-5g8lv -n kuboard
生成 KUBOARD_SSO_CLIENT_SECRET: 76425626219e02eb20931235
设置 KuboardAdmin 的默认密码(仅第一次启动时设置) Kuboard123
KUBOARD_ENDPOINT http://192.168.204.10:30080
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVIUE9lVzRIYXpKTWxGLVRuQVJGSzJVVzQyTGdpMjdleW52RTdoOHM3bTAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ0OTQzMzc2LCJpYXQiOjE3MTM0MDczNzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJvYXJkIiwicG9kIjp7Im5hbWUiOiJrdWJvYXJkLXYzLTZmZGJkODY5YjctNWc4bHYiLCJ1aWQiOiI3ZTFhMTNlZS1mZjQ1LTQ1ZjctYjBhZS05OTA4NjE5ZmRlNzAifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6Imt1Ym9hcmQtYm9vc3RyYXAiLCJ1aWQiOiI0MTRhNDI1Ni0zMDFhLTQ0Y2YtYmY0ZC00ZjgzMGQwZWQ5Y2EifSwid2FybmFmdGVyIjoxNzEzNDEwOTgzfSwibmJmIjoxNzEzNDA3Mzc2LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3Vib2FyZDprdWJvYXJkLWJvb3N0cmFwIn0.PmcgfI18gRbNl8xFIYz8xHVfaPlZJxlmNSddRHVlWpCGXD6-lw_9jj-6nD0ANfNnbY5PEM4PzR187xjIBTNcTt6jZMwwto4wifvzHHoMEdfNFPO-1EmDEl_rJS4tJaaaKHMRGuz4Prok28giK_MSPf8ceMARh4ZXhFt2xwMRbY5hgNz1z1YLZl_mQdl4cPe-BK_eH4nZZOHIntuOVXRRFr8E36jyC86IlsothCa65tGHm5_NplZdCG7cz-kJMnNY4r7-ew_FMUPKXqVMMyBgPsIWNQRnrvxHRPs8Drik88t4xmNMg2oWD0qjP6-DHJhzqq3Bnx88EfQ-hUf9a7NHKA
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 113 100 113 0 0 7253 0 --:--:-- --:--:-- --:--:-- 7533
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 113 100 113 0 0 7475 0 --:--:-- --:--:-- --:--:-- 7533
[{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "15695"
},
"items": []
},{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "15695"
},
"items": []
}]
start kuboard-agent-server
当前 Kuboard 在 K8S 中运行,etcd 独立部署
启动内置的 QuestDB
___ _ ____ ____
/ _ \ _ _ ___ ___| |_| _ \| __ )
| | | | | | |/ _ \/ __| __| | | | _ \
| |_| | |_| | __/\__ \ |_| |_| | |_) |
\__\_\\__,_|\___||___/\__|____/|____/
www.questdb.io
/questdb/bin/questdb.sh: line 66: ps: command not found
JAVA: /questdb/bin/java
QuestDB server 6.0.4
Copyright (C) 2014-2024, all rights reserved.
认证模块:使用本地用户库
启动 kuboard-sso
设置日志级别为 info
time="2024-04-18T02:34:22Z" level=info msg="config using log level: info"
time="2024-04-18T02:34:22Z" level=info msg="config issuer: http://192.168.204.10:30080/sso"
time="2024-04-18T02:34:22Z" level=info msg="config storage: etcd"
time="2024-04-18T02:34:22Z" level=info msg="config static client: KuboardApp"
time="2024-04-18T02:34:22Z" level=info msg="config connector: default"
time="2024-04-18T02:34:22Z" level=info msg="config skipping approval screen"
time="2024-04-18T02:34:22Z" level=info msg="config signing keys expire after: 6h0m0s"
time="2024-04-18T02:34:22Z" level=info msg="config id tokens valid for: 168h0m0s"
time="2024-04-18T02:34:22Z" level=info msg="config device requests valid for: 5m0s"
设置日志级别为 info
[LOG] 2024/04/18 - 10:34:25.917 | /common/etcd.client_config 24 | info | KUBOARD_ETCD_ENDPOINTS=[]
[LOG] 2024/04/18 - 10:34:25.917 | /common/etcd.client_config 52 | info | {[] 0s 1s 0s 0s 0 0 <nil> false [] <nil> <nil> <nil> false}
[LOG] 2024/04/18 - 10:34:25.918 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/audit-policy-once.yaml
{"level":"warn","ts":"2024-04-18T10:34:27.818+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-50d42282-d986-40b3-9355-434ead47ba3d/","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp: missing address\""}
failed to initialize server: server: failed to list connector objects from storage: context deadline exceeded
^C
The important part is the error at the end: KUBOARD_ETCD_ENDPOINTS is empty ([] in the log above), so the kuboard-sso component has no etcd address to dial ("missing address") and the server fails to start. The fix is to adjust the Kuboard ConfigMap in the next step.
(11) Edit the configuration file
bash
[root@master ~]# kubectl edit cm kuboard-v3-config -n kuboard
View:
(12) Search for this configuration item and replace it
Before:
bash
KUBOARD_SERVER_NODE_PORT: '30080'
After: (use the IP of any K8S node)
bash
KUBOARD_ENDPOINT: 'http://192.168.204.10:30080'
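Instead of the interactive kubectl edit, the same change can be applied non-interactively; a sketch using kubectl patch (in a JSON merge patch, setting a key to null removes it):
bash
# add KUBOARD_ENDPOINT and drop KUBOARD_SERVER_NODE_PORT in one command
[root@master ~]# kubectl -n kuboard patch cm kuboard-v3-config --type merge \
  -p '{"data":{"KUBOARD_ENDPOINT":"http://192.168.204.10:30080","KUBOARD_SERVER_NODE_PORT":null}}'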
(13) Delete the previous pod and let it be recreated
bash
[root@master ~]# kubectl delete pod kuboard-v3-6fdbd869b7-5g8lv -n kuboard
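Deleting the pod by name works because the Deployment recreates it; an equivalent that avoids looking up the generated pod name (a sketch using the deployment created by the manifest):
bash
# restart all pods managed by the kuboard-v3 deployment
[root@master ~]# kubectl -n kuboard rollout restart deployment kuboard-v3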
(14) Check the pods
bash
[root@master ~]# kubectl get pods -n kuboard
Detailed information:
bash
[root@master ~]# kubectl get pods -n kuboard -o wide
(15) View the logs
bash
[root@master ~]# kubectl logs -f kuboard-v3-6fdbd869b7-7xh4t -n kuboard
生成 KUBOARD_SSO_CLIENT_SECRET: 09003557b425f87463067e35
设置 KuboardAdmin 的默认密码(仅第一次启动时设置) Kuboard123
start kuboard-agent-server
启动内置的 QuestDB
___ _ ____ ____
/ _ \ _ _ ___ ___| |_| _ \| __ )
| | | | | | |/ _ \/ __| __| | | | _ \
| |_| | |_| | __/\__ \ |_| |_| | |_) |
\__\_\\__,_|\___||___/\__|____/|____/
www.questdb.io
/questdb/bin/questdb.sh: line 66: ps: command not found
JAVA: /questdb/bin/java
{"level":"info","ts":"2024-04-18T10:38:50.230+0800","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["http://0.0.0.0:2380"]}
{"level":"info","ts":"2024-04-18T10:38:50.230+0800","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["http://0.0.0.0:2379"]}
{"level":"info","ts":"2024-04-18T10:38:50.231+0800","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.14","git-sha":"8a03d2e96","go-version":"go1.12.17","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"kuboard-01","data-dir":"/data/etcd-data","wal-dir":"","wal-dir-dedicated":"","member-dir":"/data/etcd-data/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://0.0.0.0:2380"],"listen-peer-urls":["http://0.0.0.0:2380"],"advertise-client-urls":["http://0.0.0.0:2379"],"listen-client-urls":["http://0.0.0.0:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"kuboard-01=http://0.0.0.0:2380","initial-cluster-state":"new","initial-cluster-token":"tkn","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2024-04-18T10:38:50.233+0800","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/data/etcd-data/member/snap/db","took":"1.203708ms"}
{"level":"info","ts":"2024-04-18T10:38:50.237+0800","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"59a9c584ea2c3f35","cluster-id":"f9f44c4ba0e96dd8"}
{"level":"info","ts":"2024-04-18T10:38:50.237+0800","caller":"raft/raft.go:1530","msg":"59a9c584ea2c3f35 switched to configuration voters=()"}
{"level":"info","ts":"2024-04-18T10:38:50.238+0800","caller":"raft/raft.go:700","msg":"59a9c584ea2c3f35 became follower at term 0"}
{"level":"info","ts":"2024-04-18T10:38:50.238+0800","caller":"raft/raft.go:383","msg":"newRaft 59a9c584ea2c3f35 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2024-04-18T10:38:50.238+0800","caller":"raft/raft.go:700","msg":"59a9c584ea2c3f35 became follower at term 1"}
{"level":"info","ts":"2024-04-18T10:38:50.238+0800","caller":"raft/raft.go:1530","msg":"59a9c584ea2c3f35 switched to configuration voters=(6460912315094810421)"}
{"level":"warn","ts":"2024-04-18T10:38:50.253+0800","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2024-04-18T10:38:50.258+0800","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2024-04-18T10:38:50.259+0800","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"59a9c584ea2c3f35","local-server-version":"3.4.14","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-04-18T10:38:50.260+0800","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"59a9c584ea2c3f35","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2024-04-18T10:38:50.261+0800","caller":"raft/raft.go:1530","msg":"59a9c584ea2c3f35 switched to configuration voters=(6460912315094810421)"}
{"level":"info","ts":"2024-04-18T10:38:50.261+0800","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"f9f44c4ba0e96dd8","local-member-id":"59a9c584ea2c3f35","added-peer-id":"59a9c584ea2c3f35","added-peer-peer-urls":["http://0.0.0.0:2380"]}
{"level":"info","ts":"2024-04-18T10:38:50.263+0800","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"59a9c584ea2c3f35","initial-advertise-peer-urls":["http://0.0.0.0:2380"],"listen-peer-urls":["http://0.0.0.0:2380"],"advertise-client-urls":["http://0.0.0.0:2379"],"listen-client-urls":["http://0.0.0.0:2379"],"listen-metrics-urls":[]}
{"level":"info","ts":"2024-04-18T10:38:50.263+0800","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"[::]:2380"}
QuestDB server 6.0.4
Copyright (C) 2014-2024, all rights reserved.
{"level":"info","ts":"2024-04-18T10:38:50.738+0800","caller":"raft/raft.go:923","msg":"59a9c584ea2c3f35 is starting a new election at term 1"}
{"level":"info","ts":"2024-04-18T10:38:50.738+0800","caller":"raft/raft.go:713","msg":"59a9c584ea2c3f35 became candidate at term 2"}
{"level":"info","ts":"2024-04-18T10:38:50.738+0800","caller":"raft/raft.go:824","msg":"59a9c584ea2c3f35 received MsgVoteResp from 59a9c584ea2c3f35 at term 2"}
{"level":"info","ts":"2024-04-18T10:38:50.738+0800","caller":"raft/raft.go:765","msg":"59a9c584ea2c3f35 became leader at term 2"}
{"level":"info","ts":"2024-04-18T10:38:50.738+0800","caller":"raft/node.go:325","msg":"raft.node: 59a9c584ea2c3f35 elected leader 59a9c584ea2c3f35 at term 2"}
{"level":"info","ts":"2024-04-18T10:38:50.739+0800","caller":"etcdserver/server.go:2528","msg":"setting up initial cluster version","cluster-version":"3.4"}
{"level":"info","ts":"2024-04-18T10:38:50.740+0800","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"f9f44c4ba0e96dd8","local-member-id":"59a9c584ea2c3f35","cluster-version":"3.4"}
{"level":"info","ts":"2024-04-18T10:38:50.740+0800","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":"2024-04-18T10:38:50.740+0800","caller":"etcdserver/server.go:2560","msg":"cluster version is updated","cluster-version":"3.4"}
{"level":"info","ts":"2024-04-18T10:38:50.740+0800","caller":"etcdserver/server.go:2037","msg":"published local member to cluster through raft","local-member-id":"59a9c584ea2c3f35","local-member-attributes":"{Name:kuboard-01 ClientURLs:[http://0.0.0.0:2379]}","request-path":"/0/members/59a9c584ea2c3f35/attributes","cluster-id":"f9f44c4ba0e96dd8","publish-timeout":"7s"}
{"level":"info","ts":"2024-04-18T10:38:50.741+0800","caller":"embed/serve.go:139","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"[::]:2379"}
认证模块:使用本地用户库
启动 kuboard-sso
设置日志级别为 info
time="2024-04-18T02:39:05Z" level=info msg="config using log level: info"
time="2024-04-18T02:39:05Z" level=info msg="config issuer: http://192.168.204.10:30080/sso"
time="2024-04-18T02:39:05Z" level=info msg="config storage: etcd"
time="2024-04-18T02:39:05Z" level=info msg="config static client: KuboardApp"
time="2024-04-18T02:39:05Z" level=info msg="config connector: default"
time="2024-04-18T02:39:05Z" level=info msg="config skipping approval screen"
time="2024-04-18T02:39:05Z" level=info msg="config signing keys expire after: 6h0m0s"
time="2024-04-18T02:39:05Z" level=info msg="config id tokens valid for: 168h0m0s"
time="2024-04-18T02:39:05Z" level=info msg="config device requests valid for: 5m0s"
time="2024-04-18T02:39:05Z" level=info msg="keys expired, rotating"
time="2024-04-18T02:39:06Z" level=info msg="keys rotated, next rotation: 2024-04-18 08:39:06.27155897 +0000 UTC"
time="2024-04-18T02:39:06Z" level=info msg="listening (http) on 0.0.0.0:5556"
设置日志级别为 info
[LOG] 2024/04/18 - 10:39:08.860 | /common/etcd.client_config 24 | info | KUBOARD_ETCD_ENDPOINTS=[127.0.0.1:2379]
[LOG] 2024/04/18 - 10:39:08.861 | /common/etcd.client_config 52 | info | {[127.0.0.1:2379] 0s 1s 0s 0s 0 0 <nil> false [] <nil> <nil> <nil> false}
[LOG] 2024/04/18 - 10:39:08.862 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/audit-policy-once.yaml
[LOG] 2024/04/18 - 10:39:08.867 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuditPolicy GLOBAL/GLOBAL
[LOG] 2024/04/18 - 10:39:08.871 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/bran-settings-once.yaml
[LOG] 2024/04/18 - 10:39:08.876 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardBrandSettings GLOBAL/KuboardBrandSettings
[LOG] 2024/04/18 - 10:39:08.886 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/login-policy-once.yaml
[LOG] 2024/04/18 - 10:39:08.887 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardLoginPolicySettings GLOBAL/KuboardLoginPolicySettings
[LOG] 2024/04/18 - 10:39:08.891 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/roles.yaml
[LOG] 2024/04/18 - 10:39:08.893 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthRole GLOBAL/administrator
[LOG] 2024/04/18 - 10:39:08.897 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthRole GLOBAL/viewer
[LOG] 2024/04/18 - 10:39:08.902 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthRole GLOBAL/authenticated
[LOG] 2024/04/18 - 10:39:08.905 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthRole GLOBAL/anonymous
[LOG] 2024/04/18 - 10:39:08.909 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthRole GLOBAL/sso-user
[LOG] 2024/04/18 - 10:39:08.912 | /initializekuboard.InitializeEtcd 39 | info | 初始化 ./init-etcd-scripts/user.yaml
[LOG] 2024/04/18 - 10:39:08.913 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthUser GLOBAL/admin
[LOG] 2024/04/18 - 10:39:08.917 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthGroup GLOBAL/administrators
[LOG] 2024/04/18 - 10:39:08.920 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthUserInGroup GLOBAL/admin.administrators
[LOG] 2024/04/18 - 10:39:08.924 | /initializekuboard.processEtcdObject 115 | info | - 创建对象:KuboardAuthGlobalRoleBinding GLOBAL/group.administrators.administrator
{"level":"warn","ts":"2024-04-18T10:39:08.927+0800","caller":"grpclog/grpclog.go:60","msg":"transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:2379->127.0.0.1:49530: read: connection reset by peer"}
[LOG] 2024/04/18 - 10:39:08.929 | main.main 115 | info | 使用 http, 端口: 80
[LOG] 2024/04/18 - 10:39:08.930 | /audit/audit_common.questDbDriverURI 49 | info | QuestDB连接参数: postgresql://admin:quest@127.0.0.1:8812/qdb
[LOG] 2024/04/18 - 10:39:08.943 | /kuboard/k8scluster.UpdateNginxConfig 108 | info | updated /etc/nginx/nginx.conf and reloaded nginx.conf
[LOG] 2024/04/18 - 10:39:09.041 | /audit/audit_runtime.doInit 145 | info | CREATE TABLE IF NOT EXISTS audit_events.
[LOG] 2024/04/18 - 10:39:09.041 | /audit/audit_runtime.doInit 146 | info | QuestDB Initilized.
[LOG] 2024/04/18 - 10:39:09.041 | /audit/audit_runtime.initConnection 152 | info | Create QuestDB Connection.
[LOG] 2024/04/18 - 10:39:09.041 | /audit/audit_common.questDbDriverURI 49 | info | QuestDB连接参数: postgresql://admin:quest@127.0.0.1:8812/qdb
[GIN] 2024/04/18 - 10:39:29 | 200 | 917.589µs | 192.168.204.10 | GET "/kuboard-resources/version.json"
[GIN] 2024/04/18 - 10:39:29 | 200 | 1.505643ms | 192.168.204.10 | GET "/kuboard-resources/version.json"
[GIN] 2024/04/18 - 10:39:39 | 200 | 180.381µs | 192.168.204.10 | GET "/kuboard-resources/version.json"
[GIN] 2024/04/18 - 10:39:39 | 200 | 188.272µs | 192.168.204.10 | GET "/kuboard-resources/version.json"
[GIN] 2024/04/18 - 10:39:49 | 200 | 155.596µs | 192.168.204.10 | GET "/kuboard-resources/version.json"
[GIN] 2024/04/18 - 10:39:49 | 200 | 147.34µs | 192.168.204.10 | GET "/kuboard-resources/version.json"
^C
4. Using Kuboard on K8S 1.29
(1) Access the system
bash
http://192.168.204.10:30080
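If the page does not load, a quick reachability test from any host (a sketch) confirms the NodePort is answering before suspecting the browser:
bash
# expect an HTTP response from the Kuboard NodePort
curl -I http://192.168.204.10:30080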
(2) Enter the initial username and password, then log in
bash
Username: admin
Password: Kuboard123
(3) View the cluster list
It is currently empty.
bash
http://192.168.204.10:30080/kuboard/cluster
(4) Add a K8S cluster
Click Add.
A dialog appears.
Check the kubeconfig on the master node:
bash
[root@master ~]# cat /etc/kubernetes/admin.conf
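If copying /etc/kubernetes/admin.conf directly is inconvenient, the same kubeconfig (with inline certificate data) can also be printed via kubectl; a sketch:
bash
# prints the kubeconfig kubectl is currently using, with credentials un-redacted
[root@master ~]# kubectl config view --raw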
Paste the content and confirm.
The cluster is added.
bash
http://192.168.204.10:30080/kubernetes/K8S-1.29/cluster/import
(5) View the cluster list again
(6) Access the cluster
Choose one of the offered access methods.
(7) View the overview
bash
http://192.168.204.10:30080/kubernetes/K8S-1.29/cluster/overview
(8) View node information
bash
http://192.168.204.10:30080/kubernetes/K8S-1.29/cluster/node
(9) View the namespaces
bash
http://192.168.204.10:30080/kubernetes/K8S-1.29/cluster/namespace
II. Issues
1. How to move Docker images between nodes
(1) Commands
bash
1) Export the Docker image on the first server
docker save -o <image-file>.tar <image-name>
2) Copy the exported image file from the first server to the second server
scp <image-file>.tar <user>@<second-server-ip>:<target-path>
To connect over a non-default SSH port, pass it to scp with -P:
scp -P <port> <image-file>.tar <user>@<server-address>:<target-path>
3) Import the Docker image on the second server
docker load -i <image-file>.tar
4) Other
If scp complains that the target directory does not exist, create it first:
mkdir <directory-name>
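As a worked example with the images used in this article, the export/copy/import steps can also be collapsed into a single pipeline, assuming node2 can reach node1 over SSH:
bash
# stream both images from node2 straight into node1's Docker daemon, no intermediate .tar files
[root@node2 ~]# docker save eipwork/kuboard:v3 eipwork/etcd-host:3.4.16-2 | ssh root@node1 'docker load'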