Kubernetes Cluster Installation

Environment Preparation

192.168.1.53 k8s-master

192.168.1.52 k8s-node-1

192.168.1.51 k8s-node-2

Set the hostname on each of the three machines:

On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On node1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
On node2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

Set up /etc/hosts on all three machines by running:

echo '192.168.1.53    k8s-master
192.168.1.53    etcd
192.168.1.53    registry
192.168.1.52    k8s-node-1
192.168.1.51    k8s-node-2' >> /etc/hosts

cat /etc/hosts
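The append above is not idempotent: re-running it duplicates the entries. A small guard makes it safe to re-run. This is a sketch; `add_host` is a hypothetical helper, demonstrated against a scratch file rather than the real /etc/hosts.

```shell
# Idempotent host-entry append, demonstrated on a scratch copy.
# Point HOSTS_FILE at /etc/hosts to use it on a real node.
HOSTS_FILE="$(mktemp)"

add_host() {
    # add_host <ip> <name>: append the entry only if <name> is not already present
    grep -qw "$2" "$HOSTS_FILE" || printf '%s    %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

for pair in "192.168.1.53 k8s-master" "192.168.1.53 etcd" "192.168.1.53 registry" \
            "192.168.1.52 k8s-node-1" "192.168.1.51 k8s-node-2"; do
    add_host $pair   # word-splitting on purpose: ip then name
    add_host $pair   # second call is a no-op
done

wc -l < "$HOSTS_FILE"
```

Despite the double call, the file ends up with exactly five entries.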

Disable the firewall on all three machines:

systemctl disable firewalld.service
systemctl stop firewalld.service

Installing the Tools

Run the following on k8s-master, k8s-node-1, and k8s-node-2:

yum install -y kubelet kubeadm kubectl kubernetes-cni

If the default repositories are unreachable, add the Aliyun mirror repositories first:

# Docker yum repo
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

# Kubernetes yum repo
cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

If DNS resolution fails, append a public nameserver at the end of /etc/resolv.conf:
nameserver 8.8.8.8

yum install -y kubelet kubeadm kubectl kubernetes-cni
# To remove all nodes later: kubectl delete node --all
# To uninstall: sudo yum remove kubelet kubeadm kubectl kubernetes-cni

Verify the versions and enable the services:
kubelet --version
kubeadm version
systemctl enable docker.service
systemctl enable kubelet.service

systemctl start kubelet
systemctl status kubelet    # the kubelet fails to start at this point; that is expected before kubeadm init
Disable SELinux so containers can read the host filesystem:
setenforce 0
Then make the change permanent in /etc/sysconfig/selinux:
vim /etc/sysconfig/selinux
SELINUX=disabled
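`setenforce 0` only lasts until the next reboot; the config-file edit is what makes it permanent. The edit can be scripted with `sed`, shown here against a scratch copy of the file:

```shell
# Persist SELINUX=disabled; CFG stands in for /etc/sysconfig/selinux.
CFG="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

# Flip whatever mode is configured (enforcing or permissive) to disabled.
# The pattern anchors on "SELINUX=" so SELINUXTYPE is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"

grep '^SELINUX=' "$CFG"
```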

The kubelet's cgroup driver only needs to match Docker's. If /usr/lib/systemd/system/docker.service already starts Docker with the option below (the systemd driver), no change is needed:

--exec-opt native.cgroupdriver=systemd

Run on k8s-master:

kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=swap

If init fails with a bridge netfilter preflight error, enable the setting and retry:
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

systemctl enable docker.service     # may print warnings; they are harmless
systemctl enable kubelet.service    # may print warnings; they are harmless
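The `echo "1" > /proc/sys/...` write is also lost on reboot. A sysctl fragment persists it. This is a sketch that writes to a scratch file; on the real hosts the fragment belongs in /etc/sysctl.d/ (k8s.conf is a name chosen here, not mandated), followed by `sysctl --system`:

```shell
# Stand-in for /etc/sysctl.d/k8s.conf on a real host.
SYSCTL_FRAG="$(mktemp)"
cat > "$SYSCTL_FRAG" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system    # reload all sysctl fragments on the real host
```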

On k8s-master, edit the kubelet drop-in (compare with the node version of this file below; some of these settings may be redundant):

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS


systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
journalctl -xefu kubelet    # view the logs

Configure a registry mirror for Docker:
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://68e02ab9.m.daocloud.io"]
}
systemctl restart docker
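If the file does not exist yet, it can be written with a heredoc. A sketch writing to a scratch path; on the real hosts use /etc/docker/daemon.json and then restart Docker:

```shell
# Stand-in for /etc/docker/daemon.json on a real host.
DAEMON_JSON="$(mktemp)"
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["http://68e02ab9.m.daocloud.io"]
}
EOF
# systemctl restart docker    # pick up the new mirror on the real host
```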

Mirror images: https://hub.docker.com/r/warrior/

Download the images first. (The versions in this first list do not match v1.11.3; use the v1.11.3 list further down instead.)

docker pull warrior/pause-amd64:3.0
docker tag warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull warrior/etcd-amd64:3.0.17
docker tag warrior/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker pull warrior/kube-apiserver-amd64:v1.6.0
docker tag warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker pull warrior/kube-scheduler-amd64:v1.6.0
docker tag warrior/kube-scheduler-amd64:v1.6.0 gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
docker pull warrior/kube-controller-manager-amd64:v1.6.0
docker tag warrior/kube-controller-manager-amd64:v1.6.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker pull warrior/kube-proxy-amd64:v1.6.0
docker tag warrior/kube-proxy-amd64:v1.6.0  gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker pull gysan/dnsmasq-metrics-amd64:1.0
docker tag gysan/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
docker pull warrior/k8s-dns-kube-dns-amd64:1.14.1
docker tag warrior/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
docker pull warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker tag warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker pull warrior/k8s-dns-sidecar-amd64:1.14.1
docker tag warrior/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 
docker pull awa305/kube-discovery-amd64:1.0
docker tag awa305/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker pull gysan/exechealthz-amd64:1.2
docker tag gysan/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2


With the images in place, run kubeadm init again:
kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all


kubeadm config images list --kubernetes-version=v1.11.3    # list the images kubeadm expects

Download the images on all three machines:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker tag  mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
                
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.3  k8s.gcr.io/kube-proxy-amd64:v1.11.3     
docker pull mirrorgooglecontainers/pause:3.1 
docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3  k8s.gcr.io/coredns:1.1.3
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.11
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.11
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
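Every pull/tag pair above follows the same pattern, so the list can be driven by a small helper. A sketch: `mirror_to_k8s` is a hypothetical function, and `docker` is stubbed with `echo` so the name mapping can be shown without a Docker daemon; drop the stub on a real host.

```shell
DOCKER="${DOCKER:-echo docker}"    # stub for demonstration; set DOCKER=docker on a real host

mirror_to_k8s() {
    # mirror_to_k8s <mirror-image>: pull it, then retag it under k8s.gcr.io
    src="$1"
    dst="k8s.gcr.io/${src#*/}"     # strip the mirror account prefix
    $DOCKER pull "$src"
    $DOCKER tag "$src" "$dst"
}

mirror_to_k8s mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
mirror_to_k8s coredns/coredns:1.1.3
```

With the stub in place each call just prints the `docker pull` and `docker tag` commands it would run.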

Output like the following indicates a successful installation:
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228

After a successful init, run the following commands as prompted:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
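For a root shell, an alternative to copying the file is pointing kubectl at the admin kubeconfig directly via the KUBECONFIG environment variable:

```shell
# Tell kubectl where the admin kubeconfig lives (root session only).
export KUBECONFIG=/etc/kubernetes/admin.conf
```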

Run on k8s-node-1 and k8s-node-2:

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

These test hosts also run other services, and disabling swap could affect them, so instead the kubelet is started with --fail-swap-on=false to lift the restriction:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_DNS_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl start kubelet    # a start failure here is fine; the join command below starts the kubelet and generates its config files
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

Then check the status; the kubelet should now be running:
systemctl status kubelet
journalctl -xefu kubelet    # view the logs

Run on the master:

Install flannel (reference: https://blog.csdn.net/zhuchuangang/article/details/76572157/).

kubectl get nodes    # nodes show NotReady because pod networking is not up yet

mkdir /docker
cd /docker/
yum -y install wget
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml

cat kube-flannel.yml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg



kubectl --namespace kube-system apply -f ./kube-flannel.yml
kubectl get cs

kubectl get nodes    # after a short while the nodes report Ready
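Instead of re-running `kubectl get nodes` by hand, a small poll loop can wait until every node is Ready. A sketch: `kubectl` is stubbed with a shell function here so the loop logic can run without a cluster; remove the stub on the master.

```shell
# Stub kubectl so the loop can run standalone; delete this function on a real cluster.
kubectl() { printf 'k8s-master Ready\nk8s-node-1 Ready\nk8s-node-2 Ready\n'; }

# Loop while any node line is something other than Ready.
while kubectl get nodes --no-headers | grep -vw Ready | grep -q .; do
    sleep 5
done
echo "all nodes Ready"
```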

Reference:

https://blog.csdn.net/zhuchuangang/article/details/76572157/
