Kubernetes / Docker version compatibility

k8s v1.18.0 => Docker v18.x

k8s v1.19.0 => Docker v19.x

Software     Version
Linux OS     CentOS 7.9.2009 (Core) x64
Kubernetes   1.18.0
Docker       18.06.3-ce
Role     IP               Components                         Recommended (minimum)
master   192.168.137.101  kubelet, kubeadm, kubectl, docker  CPU 2 cores+, RAM 2 GB+
node1    192.168.137.102  kubelet, kubeadm, kubectl, docker  CPU 2 cores+, RAM 2 GB+
node2    192.168.137.103  kubelet, kubeadm, kubectl, docker  CPU 2 cores+, RAM 2 GB+
# Set the hostname (or edit /etc/hostname directly)
# on 192.168.137.101:
hostnamectl set-hostname master
# on 192.168.137.102:
hostnamectl set-hostname node1
# on 192.168.137.103:
hostnamectl set-hostname node2

# Map each IP to its hostname (on every machine)
vi /etc/hosts
192.168.137.101 master
192.168.137.102 node1
192.168.137.103 node2
reboot # reboot (this can wait until all of the preparation steps are done)
# Open the required ports on the master node
# Kubernetes API server: 6443
firewall-cmd --zone=public --add-port=6443/tcp --permanent
# etcd server client API: 2379-2380
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
# kubelet 10250, kube-scheduler 10251, kube-controller-manager 10252
firewall-cmd --zone=public --add-port=10250-10252/tcp --permanent
# Open the required ports on the worker nodes
# kubelet API: 10250
firewall-cmd --zone=public --add-port=10250/tcp --permanent
# NodePort services: 30000-32767
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports

# Alternatively, disable the firewall entirely (test environments only)
systemctl disable firewalld
systemctl stop firewalld
# Install wget
yum install -y wget
# Download the Docker repo definition from the Aliyun mirror
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# List the available Docker versions
yum list docker-ce --showduplicates | sort -r
# Install Docker (latest)...
yum -y install docker-ce
# ...or pin the version used in this guide
yum -y install docker-ce-18.06.3.ce-3.el7
# Enable Docker at boot and start it
systemctl enable docker && systemctl start docker
# Verify the version
docker --version
Docker version 18.06.3-ce, build d7080c1
vi /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://f1361db2.m.daocloud.io",
    "https://registry.docker-cn.com"
  ],
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
# Reload the configuration
systemctl daemon-reload
# Restart Docker
systemctl restart docker

The addresses in the official documentation are unreachable from mainland China, so this guide substitutes the Aliyun mirror. Just run the following:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
# Note: the two URLs after gpgkey are separated by a space, not a newline; a line break introduced while copying will make the install fail.

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Or pin the version
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 --disableexcludes=kubernetes
# If you hit "[Errno -1] repomd.xml signature could not be verified for kubernetes", the repo's GPG check failed; set repo_gpgcheck=0 in /etc/yum.repos.d/kubernetes.repo to skip it.
systemctl enable kubelet && systemctl start kubelet
# Let iptables see bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Note: everything above must also be executed on each Node machine; make sure values such as the hostname differ per machine.
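The replay on the worker nodes can be scripted from the master. This is a minimal dry-run sketch that only prints the per-node SSH commands (the IP:hostname pairs follow the cluster table above and are assumptions about your environment); drop the `echo` once you have verified the output:

```shell
# Dry run: print the hostname command to run on each worker node.
# IP:name pairs follow the cluster table above -- adjust for your environment.
nodes="192.168.137.102:node1 192.168.137.103:node2"
for n in $nodes; do
  ip=${n%%:*}
  name=${n##*:}
  # Remove 'echo' to actually execute over SSH.
  echo "ssh root@$ip hostnamectl set-hostname $name"
done
```

The same loop can carry any of the preparation steps (hosts file, firewall, sysctl) by swapping in the relevant command.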

kubeadm config print init-defaults > kubeadm-init.yaml
vi kubeadm-init.yaml
#################################################################
localAPIEndpoint:
  #advertiseAddress: 1.2.3.4
  advertiseAddress: 192.168.137.101 # this machine's IP
nodeRegistration:
  #name: localhost.localdomain
  name: master
#imageRepository: k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers # image registry mirror
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # added: Pod subnet
#################################################################
:wq
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  #advertiseAddress: 1.2.3.4
  advertiseAddress: 192.168.137.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  #name: localhost.localdomain
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
#imageRepository: k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
# Alternative: initialize with command-line flags instead of the config file
kubeadm init \
  --apiserver-advertise-address=192.168.137.101 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

# Pre-pull the required control-plane images
kubeadm config images pull --config kubeadm-init.yaml
# Allow kubelet to run with swap enabled (note: do not run this sed twice)
sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
# Turn swap off for the current session
swapoff -a

kubeadm init --config kubeadm-init.yaml
# If a port is reported as already in use, reset first:
kubeadm reset
kubeadm init --config kubeadm-init.yaml --ignore-preflight-errors=Swap
# If init after a reset complains that files already exist:
rm -rf /etc/kubernetes/manifests
rm -rf /var/lib/etcd
# The following output indicates a successful init:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.137.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:d126a8ec9cb47ac4bfae5a2d7501172da937d91b1ccf0eae093a9a3687c841f2
# Set up the kubectl environment
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# List the cluster nodes with kubectl
kubectl get node
-----------------------------------------
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   48m   v1.18.8

Install Calico with the following commands:

wget https://docs.projectcalico.org/manifests/calico.yaml
# Find the active network interface
firewall-cmd --get-active-zones
public
  interfaces: eth0
vi calico.yaml # starting around line 3639; append any settings that are missing
#####################################################################
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
# IP automatic detection.
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  #value: "Always"
  value: "Never"
#####################################################################
# Apply the Calico manifests
kubectl apply -f calico.yaml
# Check the result
kubectl get po -n kube-system -o wide | grep calico

Check that the master's status has become Ready:

kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   5m20s   v1.18.0

Installation docs: Web UI (Dashboard)

Deployment docs: Web UI (Dashboard)

Working around the unreachable raw.githubusercontent.com:

1. Open https://site.ip138.com/raw.Githubusercontent.com/

2. Query raw.githubusercontent.com and note the resolved IP, e.g. 151.101.108.133

3. Edit /etc/hosts and add the mapping: 151.101.108.133 raw.githubusercontent.com
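Step 3 can be made idempotent so repeated runs never duplicate the entry. This sketch demonstrates on a temporary file; point HOSTS_FILE at /etc/hosts for real use, and re-check the IP first since the address behind raw.githubusercontent.com changes over time:

```shell
# Demonstrated on a temp copy; use HOSTS_FILE=/etc/hosts on the real machine.
HOSTS_FILE=$(mktemp)
ENTRY="151.101.108.133 raw.githubusercontent.com"
# Append only if the exact line is not already present.
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
# Running it a second time does not duplicate the entry.
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -c "raw.githubusercontent.com" "$HOSTS_FILE"   # -> 1
```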

# Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Create the pods
kubectl apply -f recommended.yaml
# Check pod status
kubectl get pods --all-namespaces | grep dashboard
# Expose the dashboard outside the cluster as a NodePort service on port 30443
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
# Confirm the service is now of type NodePort
kubectl -n kubernetes-dashboard get svc
# The panel is now reachable at https://192.168.137.101:30443, but logging in is not yet possible
# Delete the existing dashboard deployment
kubectl delete -f recommended.yaml
# Rename recommended.yaml
mv recommended.yaml dashboard-svc.yaml
# Edit the Service definition
vi dashboard-svc.yaml
#####################################################################
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # change the service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443 # expose port 30443
  selector:
    k8s-app: kubernetes-dashboard
#####################################################################
:wq
# Recreate the pods
kubectl apply -f dashboard-svc.yaml

Docs: Creating sample user

vi dashboard-svc-account.yaml
#####################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
#####################################################################
:wq
# Apply it
kubectl apply -f dashboard-svc-account.yaml

The official documentation covers logging in to versions 1.7.x and above, but its steps are unclear; the following works:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
# You will be prompted for an export password; press Enter twice to leave it empty.
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
# kubecfg.p12 is the certificate to import on the client machine. Copy it over; if you skipped the password above, just press Enter when asked for it during import.
scp root@192.168.137.101:/root/.kube/kubecfg.p12 ./
# You can now open https://192.168.137.101:30443; the browser prompts you to select the certificate, then asks for a user name and password (those of the client machine's OS account).

Docs: Bearer Token

# Get the token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
# Paste the token into the login page and click Sign in.
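To see exactly what the `grep | awk` pipeline above extracts, here it is run against simulated `kubectl get secret` output (the secret name suffix is illustrative, not from a real cluster):

```shell
# Simulated 'kubectl -n kube-system get secret' output (illustrative names):
sample='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-x7k2q   kubernetes.io/service-account-token   3      5m'
# Same extraction as above: keep the dashboard-admin line, take column 1.
secret=$(echo "$sample" | grep dashboard-admin | awk '{print $1}')
echo "$secret"   # -> dashboard-admin-token-x7k2q
```

The extracted secret name is what `kubectl describe secret` is then called with.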
# Turn off swap
swapoff -a
# If you did not save the kubeadm join command printed by kubeadm init, regenerate it:
kubeadm token create --print-join-command
kubeadm join 192.168.137.101:6443 --token ngqaor.ayhyq00qb3o0gxjk --discovery-token-ca-cert-hash sha256:4c18ecc6e9bd4457308b028123cbd16b2d3cbdefb14ec1e61b43a15e05ab63b3
# Run the join command to add the Node to the cluster:
kubeadm join 192.168.137.101:6443 --token ngqaor.ayhyq00qb3o0gxjk \
    --discovery-token-ca-cert-hash sha256:4c18ecc6e9bd4457308b028123cbd16b2d3cbdefb14ec1e61b43a15e05ab63b3

After the nodes have joined, check their status on the master:

# List all nodes
kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6h38m   v1.18.0
node1    Ready    <none>   32m     v1.18.0
node2    Ready    <none>   32m     v1.18.0
# List all pods
kubectl get po --all-namespaces
NAMESPACE              NAME                                         READY   STATUS            RESTARTS   AGE
kube-system            calico-kube-controllers-65d7476764-zgfp2     1/1     Running           0          5h44m
kube-system            calico-node-dk6v2                            0/1     Running           0          5h44m
kube-system            calico-node-rgt4x                            0/1     PodInitializing   0          9m19s
kube-system            calico-node-tzvn2                            0/1     Running           0          9m29s
kube-system            coredns-7ff77c879f-5hgb6                     1/1     Running           0          6h15m
kube-system            coredns-7ff77c879f-l7wpq                     1/1     Running           0          6h15m
kube-system            etcd-master                                  1/1     Running           0          6h15m
kube-system            kube-apiserver-master                        1/1     Running           0          6h15m
kube-system            kube-controller-manager-master               1/1     Running           0          6h15m
kube-system            kube-proxy-6jf4p                             1/1     Running           0          6h15m
kube-system            kube-proxy-nrsr2                             1/1     Running           0          9m19s
kube-system            kube-proxy-sfh7l                             1/1     Running           0          9m29s
kube-system            kube-scheduler-master                        1/1     Running           0          6h15m
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-kh88n   1/1     Running           0          124m
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-csfkz        1/1     Running           0          124m

Copyright notice: this is an original article by Run2948, released under the CC 4.0 BY-SA license; include the original source link and this notice when republishing.
Original link: https://www.cnblogs.com/Run2948/p/Setup_Kubernetes_On_CentOS7.html