Building a highly available K8S cluster: 3 masters + 3 nodes + keepalived + haproxy
Video: https://www.bilibili.com/video/BV1w4411y7Go?p=66
The required installation packages are linked in the video's comment section.
Installation preparation
OS:
CentOS-7-x86_64-Minimal-1810.iso
VM configuration:
Network:
The host NIC is shared to VMnet1, and every VM uses VMnet1.
VMnet1 / VMware virtual network settings:
Disable DHCP and change the subnet to 192.168.66.0.
The OS installation itself is skipped here: minimal install, all defaults, set the root password.
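Since DHCP is disabled on VMnet1, each VM needs a static address matching the hosts table below. A minimal sketch of the NIC config, assuming the interface is named ens33 and the gateway sits at 192.168.66.1 (both are assumptions; adjust to the actual environment):
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.66.10        # this host's address, e.g. k8s-master01
NETMASK=255.255.255.0
GATEWAY=192.168.66.1        # assumed gateway on the VMnet1 subnet
systemctl restart network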
Set the system hostname and add mutual resolution entries to the hosts file (do the same on the other machines):
hostnamectl set-hostname k8s-master01
echo "192.168.66.10 k8s-master01
192.168.66.11 k8s-master02
192.168.66.12 k8s-master03
192.168.66.20 k8s-node01
192.168.66.21 k8s-node02
192.168.66.22 k8s-node03
192.168.66.100 k8s-harbor" >> /etc/hosts
vi /etc/selinux/config
SELINUX=disabled
Install dependency packages
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Switch the firewall to iptables and set empty rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELINUX
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Adjust kernel parameters for K8S
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # forbid using swap space; only allow it when the system is OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
Adjust the system time zone
# set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# keep the hardware clock in UTC
timedatectl set-local-rtc 0
# restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
Disable system services that are not needed
systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd journald
mkdir /var/log/journal # directory for persistent logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# maximum disk usage 10G
SystemMaxUse=10G
# maximum size of a single log file 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
Upgrade the system kernel to 4.4
The 3.10.x kernel that ships with CentOS 7.x has some bugs that make Docker and Kubernetes unstable.
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check whether the kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if not, install once more!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# set the new kernel as the default boot entry
grub2-set-default "CentOS Linux (4.4.248-1.el7.elrepo.x86_64) 7 (Core)"
# after reboot, install the kernel source files
yum --enablerepo=elrepo-kernel install kernel-lt-devel-$(uname -r) kernel-lt-headers-$(uname -r)
Disable NUMA
cp /etc/default/grub{,.bak}
vim /etc/default/grub # add the `numa=off` parameter to the GRUB_CMDLINE_LINUX line, as shown below:
diff /etc/default/grub.bak /etc/default/grub
< GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet"
---
> GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rhgb quiet numa=off"
cp /boot/grub2/grub.cfg{,.bak}
grub2-mkconfig -o /boot/grub2/grub.cfg
——————————
All nodes
Prerequisites for enabling ipvs in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum-config-manager \
--add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum update -y && yum install -y docker-ce
reboot
grub2-set-default "CentOS Linux (4.4.248-1.el7.elrepo.x86_64) 7 (Core)"
reboot
## create the /etc/docker directory
mkdir /etc/docker
# configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
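To confirm the daemon picked up the systemd cgroup driver (a quick check, not part of the original steps):
docker info 2>/dev/null | grep -i cgroup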
master01:
Start the HAProxy and Keepalived containers on the master node
Import the scripts > run them > check the available nodes
mkdir -p /usr/local/kubernetes/install
cd !$
yum install lrzsz -y
Upload the following files (via lrzsz):
haproxy.tar
keepalived.tar
kubeadm-basic.images.tar.gz
load-images.sh
start.keep.tar.gz
tar zxvf kubeadm-basic.images.tar.gz
cat load-images.sh
#!/bin/bash
cd /usr/local/kubernetes/install/kubeadm-basic.images
ls /usr/local/kubernetes/install/kubeadm-basic.images | grep -v load-images.sh > /tmp/k8s-images.txt
for i in $( cat /tmp/k8s-images.txt )
do
docker load -i $i
done
rm -rf /tmp/k8s-images.txt
chmod a+x load-images.sh
./load-images.sh
docker load -i haproxy.tar
docker load -i keepalived.tar
tar zxvf start.keep.tar.gz
Modify the haproxy configuration file
vim data/lb/etc/haproxy.cfg
To make sure traffic goes to the first master only for now, fill in just one IP and add the rest later (sketched below).
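A hedged sketch of the backend section at this stage (the backend and server names are illustrative, not necessarily those in the downloaded file); only master01 is listed so apiserver traffic lands on the node being initialized:
backend k8s-apiserver
    mode tcp
    balance roundrobin
    server k8s-master01 192.168.66.10:6443 check    # only the first master for now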
vim data/lb/start-haproxy.sh
vim data/lb/start-keepalived.sh
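In both start scripts the addresses must match this environment. A hedged sketch of the variables to adjust (the variable and interface names are assumptions; verify against the actual scripts):
# data/lb/start-haproxy.sh
MasterIP1=192.168.66.10
MasterIP2=192.168.66.11
MasterIP3=192.168.66.12
# data/lb/start-keepalived.sh
VIRTUAL_IP=192.168.66.100    # the VIP that kubeadm will later reach as 192.168.66.100:6444
INTERFACE=ens33              # assumed NIC name; may differ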
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
Initialize the master node
mkdir images
mv * images/
cd images
kubeadm config print init-defaults > kubeadm-config.yaml
vim kubeadm-config.yaml
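A hedged sketch of the fields typically changed in kubeadm-config.yaml for this setup (values follow the environment above; verify against the generated file and the video):
localAPIEndpoint:
  advertiseAddress: 192.168.66.10            # this master's own IP
kubernetesVersion: v1.15.1
controlPlaneEndpoint: "192.168.66.100:6444"  # keepalived VIP + haproxy port
networking:
  podSubnet: "10.244.0.0/16"                 # required by flannel
---
# appended so kube-proxy runs in ipvs mode (an assumption based on the ipvs prerequisites above)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs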
Create the directory on master02 and master03:
mkdir -p /usr/local/kubernetes/install/images
Then, on master01:
scp -r * root@k8s-master02:/usr/local/kubernetes/install/images
scp -r * root@k8s-master03:/usr/local/kubernetes/install/images
master01:
mv data/ /
cd /data/lb
ls
./start-haproxy.sh
netstat -antlup | grep 6444
./start-keepalived.sh
ip addr show
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat ~/.kube/config
The server IP information can be seen here.
kubectl get node
master02 and master03:
cd /usr/local/kubernetes/install/images
docker load -i haproxy.tar
docker load -i keepalived.tar
cat load-images.sh
#!/bin/bash
cd /usr/local/kubernetes/install/images/kubeadm-basic.images
ls /usr/local/kubernetes/install/images/kubeadm-basic.images | grep -v load-images.sh > /tmp/k8s-images.txt
for i in $( cat /tmp/k8s-images.txt )
do
docker load -i $i
done
rm -rf /tmp/k8s-images.txt
chmod a+x load-images.sh
./load-images.sh
mv data/ /
cd /data/lb/
./start-haproxy.sh
netstat -antlup | grep 6444
./start-keepalived.sh
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
Initialize the master nodes
cd /usr/local/kubernetes/install/images
vim kubeadm-config.yaml
Join the cluster
kubeadm join 192.168.66.100:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:824dd354a9785a0ca2c624ffcee1cea77b4931dbf82a123d4a5d32bffd6f4cf4 \
--control-plane --certificate-key 9d6240640a9164e5161cededde0527e94ecdc8044c0c45b5565bf2662ac7120a
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
After master03 has joined:
kubectl get pod -n kube-system
The statuses are still NotReady; modify haproxy (adding the remaining masters, as sketched below).
vim /data/lb/etc/haproxy.cfg
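Now all three control-plane endpoints go into the backend; a hedged sketch with the same illustrative names as above:
backend k8s-apiserver
    mode tcp
    balance roundrobin
    server k8s-master01 192.168.66.10:6443 check
    server k8s-master02 192.168.66.11:6443 check
    server k8s-master03 192.168.66.12:6443 check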
docker ps
docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh
scp etc/haproxy.cfg root@k8s-master02:/data/lb/etc/
scp etc/haproxy.cfg root@k8s-master03:/data/lb/etc/
On master02 and master03:
docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh
Deploy the flannel network
master01:
cd /usr/local/kubernetes/install/images
echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
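The downloaded manifest then has to be applied so the flannel DaemonSet starts (a step implied by the node check that follows):
kubectl apply -f kube-flannel.yml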
kubectl get node
Shut down master01:
shutdown -h now
master02/03:
Try several times:
kubectl get node
Modify the server address:
vim ~/.kube/config
Change it to the node's own IP.
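A hedged sketch of the line to change in ~/.kube/config on master02 (use 192.168.66.12 on master03), assuming the port stays 6444 so kubectl keeps going through the local haproxy:
server: https://192.168.66.11:6444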
kubectl get node
Start master01 again.
—————-
Check the cluster status:
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
The active controller-manager is on master02; the other two are blocked in standby.
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
The active kube-scheduler is on master03.
Check the etcd cluster status
kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
--endpoints=https://192.168.66.10:2379 \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--key-file=/etc/kubernetes/pki/etcd/server.key cluster-health
This runs etcdctl inside the etcd-k8s-master01 container in the kube-system namespace, points it at https://192.168.66.10:2379 (2379 is the etcd client port; 2380 is the peer port used for communication between nodes), passes the CA certificate plus the server certificate and key, and checks the cluster health.
Join the worker nodes
node:
The nodes must have Docker installed beforehand; then install the K8s packages.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
mkdir -p /usr/local/kubernetes/install/images/
On master01: scp -r kubeadm-basic.images load-images.sh root@192.168.66.20:/usr/local/kubernetes/install/images/
cd /usr/local/kubernetes/install/images/
vim load-images.sh
#!/bin/bash
cd /usr/local/kubernetes/install/images/kubeadm-basic.images
ls /usr/local/kubernetes/install/images/kubeadm-basic.images | grep -v load-images.sh > /tmp/k8s-images.txt
for i in $( cat /tmp/k8s-images.txt )
do
docker load -i $i
done
rm -rf /tmp/k8s-images.txt
./load-images.sh
On master01: cat /usr/local/kubernetes/install/images/kubeadm-init.log (to retrieve the worker join command)
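The worker join command in that log looks like the control-plane join above but without the --control-plane / --certificate-key flags; a sketch using the token and hash shown earlier:
kubeadm join 192.168.66.100:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:824dd354a9785a0ca2c624ffcee1cea77b4931dbf82a123d4a5d32bffd6f4cf4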
Pulling from the official registry is slow; either wait for the images to finish downloading or manually export them from the master to the nodes.
kubectl get pod -n kube-system -o wide
kubectl get pod -n kube-system