Core Steps for Setting Up a Kubernetes Cluster
Preface
Environment: Ubuntu 18.04. All of the following operations are performed as the root user.
At its core, installing Kubernetes comes down to a few key steps:
- Install Docker
- Install kubeadm
- Create the master node with kubeadm init
- Install a network plugin on the master
- Add worker nodes with kubeadm join
1. Install Docker on all nodes
export LC_ALL=C
apt-get -y autoremove docker docker-engine docker.io docker-ce
apt-get update -y
apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update -y
# Configure an Aliyun Docker registry mirror
mkdir -p /etc/docker
echo '{"registry-mirrors":["https://pee6w651.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
# Install Docker
apt-get install docker.io -y
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
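With Docker installed, it is worth a quick sanity check that the daemon is up and the registry mirror from daemon.json took effect:

```shell
# Show the running Docker server version
docker version --format '{{.Server.Version}}'
# The mirror configured in /etc/docker/daemon.json should be listed here
docker info | grep -A1 'Registry Mirrors'
# Smoke test: pull and run a throwaway container
docker run --rm hello-world
```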
2. Install kubeadm on all nodes
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Configure the Aliyun Kubernetes apt source
tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# Hold the packages to prevent automatic upgrades
apt-mark hold kubelet kubeadm kubectl
# Enable kubelet at boot
systemctl enable kubelet && systemctl start kubelet
Installing kubeadm also pulls in kubelet, kubectl, and kubernetes-cni.
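Since this walkthrough pins v1.11.3, it is worth confirming the installed tool versions before proceeding:

```shell
kubeadm version -o short
kubectl version --client --short
kubelet --version
```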
3. Install the master node
The installation command is:
kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address 172.18.0.1 --pod-network-cidr=10.244.0.0/16
Flag descriptions:
--kubernetes-version: specifies the Kubernetes version.
--apiserver-advertise-address: specifies which of the master's network interfaces to use for cluster communication; if omitted, kubeadm automatically picks the interface with the default gateway.
--pod-network-cidr: specifies the Pod network range. How this flag is used depends on the network add-on chosen; this article uses the classic flannel add-on.
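If you are unsure which address to pass to --apiserver-advertise-address, one way to find the IP on the default-gateway interface (the device name eth0 below is only an example; yours may differ):

```shell
# Name of the interface carrying the default route
ip route | awk '/^default/ {print $5}'
# IPv4 address assigned to that interface
ip -4 addr show eth0 | awk '/inet / {print $2}'
```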
Running this reported a preflight error:
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Following the hint, we add --ignore-preflight-errors=all and run again (note this skips all preflight checks; when possible, it is better to fix the specific failure instead):
kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address 172.18.0.1 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
If you cannot reach Google's registries directly, the image pulls may fail; the following commands work around it:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
After pulling, re-tag the images to the names kubeadm expects:
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
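The pull/tag pairs above can be collapsed into a loop; this sketch is equivalent to the individual commands:

```shell
images=(
  kube-apiserver-amd64:v1.11.3
  kube-controller-manager-amd64:v1.11.3
  kube-scheduler-amd64:v1.11.3
  kube-proxy-amd64:v1.11.3
  pause:3.1
  etcd-amd64:3.2.18
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
# coredns lives in its own namespace and is renamed on tagging
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
```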
On success, the output includes a kubeadm join --token ... command. This is the command we later use to add nodes, so make sure to record it:
kubeadm join 172.18.0.1:6443 --token 9fvv8z.7vmiehcohwlll9tj --discovery-token-ca-cert-hash sha256:dc583faabe8bc130137507cca893fbead73a8879e5540545f2122f9a0839d6dc
The output also shows the commands for first-time use of the cluster, which save the cluster's admin credentials into the user's .kube directory; by default kubectl uses the credentials in this directory to access the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
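Alternatively, since we are already working as root, kubeadm's output also suggests simply pointing kubectl at the admin config for the current shell instead of copying it:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```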
After running these three commands as prompted, kubectl works normally. Use kubectl get nodes to check the node status:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-0-8-ubuntu NotReady master 6m v1.11.3
The master node shows NotReady because we have not installed a network plugin yet, which can be confirmed with kubectl describe node:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl describe node vm-0-8-ubuntu
...
ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It reports that the network plugin is not ready.
Now look at the contents of kube-system, Kubernetes' reserved namespace:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-vb9pv 0/1 Pending 0 7m
coredns-78fcdf6894-zlkfp 0/1 Pending 0 7m
etcd-vm-0-8-ubuntu 1/1 Running 0 6m
kube-apiserver-vm-0-8-ubuntu 1/1 Running 1 7m
kube-controller-manager-vm-0-8-ubuntu 1/1 Running 2 8m
kube-proxy-jcfpt 1/1 Running 0 7m
kube-scheduler-vm-0-8-ubuntu 1/1 Running 0
The coredns Pods are stuck in Pending.
4. Deploy the network plugin
Kubernetes supports multiple network plugins, such as flannel, weave, and calico.
Since we use flannel here, the --pod-network-cidr flag is required. 10.244.0.0/16 is the default network configured in kube-flannel.yml; to use a different range, set kubeadm init's --pod-network-cidr and the network in kube-flannel.yml to the same CIDR.
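If you do pick a different CIDR, one way to keep the manifest in sync is to download it and rewrite the network before applying. This assumes the default 10.244.0.0/16 appears verbatim in the file, and 10.245.0.0/16 is just an example replacement:

```shell
curl -fsSL -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
# Swap the default Pod CIDR for your own
sed -i 's#10.244.0.0/16#10.245.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```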
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Once applied, check the node status again:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm-0-8-ubuntu Ready master 9m v1.11.3
The master node has become Ready. Check the kube-system Pods again:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-fndmg 1/1 Running 0 2m
coredns-78fcdf6894-nm282 1/1 Running 0 2m
etcd-vm-0-8-ubuntu 1/1 Running 0 2m
kube-apiserver-vm-0-8-ubuntu 1/1 Running 0 1m
kube-controller-manager-vm-0-8-ubuntu 1/1 Running 0 1m
kube-flannel-ds-85wxw 1/1 Running 0 22s
kube-proxy-95v4f 1/1 Running 0 2m
kube-scheduler-vm-0-8-ubuntu 1/1 Running 0
coredns is now running normally too.
By default the master node is given a "taint", using Kubernetes' Taint/Toleration mechanism:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl describe node vm-0-8-ubuntu
...
Taints: node-role.kubernetes.io/master:NoSchedule
If you only need a single-node cluster, you can remove this taint:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl taint nodes --all node-role.kubernetes.io/master-
node/vm-0-8-ubuntu untainted
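If worker nodes are added later and you want to keep workloads off the master again, the taint can be restored (vm-0-8-ubuntu is this article's node name; substitute your own):

```shell
kubectl taint nodes vm-0-8-ubuntu node-role.kubernetes.io/master=:NoSchedule
```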
5. Install the worker nodes
Both master and worker nodes run kubelet; the only difference is that the master needs kubeadm init, which starts the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
With Docker and kubeadm already installed, a worker node only needs to run the kubeadm join command:
kubeadm join 172.18.0.1:6443 --token 9fvv8z.7vmiehcohwlll9tj --discovery-token-ca-cert-hash sha256:dc583faabe8bc130137507cca893fbead73a8879e5540545f2122f9a0839d6dc
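Join tokens expire after 24 hours by default, so if a node is added later and the recorded command no longer works, a fresh join command can be generated on the master:

```shell
kubeadm token create --print-join-command
```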
6. Run a demo
Create a demo.yaml using the nginx:alpine image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
Create it:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl apply -f demo.yaml
deployment.apps/demo-deployment created
Check the Pods:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl get po
NAME READY STATUS RESTARTS AGE
demo-deployment-555958bc44-drwvh 1/1 Running 0 26s
demo-deployment-555958bc44-llhr9 1/1 Running 0 26s
As shown, the Pods are running normally; replicas: 2 created two Pod replicas (not two nodes). We can use kubectl exec -it to step into a container and look around:
root@VM-0-8-ubuntu:/home/ubuntu# kubectl exec -it demo-deployment-555958bc44-drwvh sh
/ # ls /
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
OK, a simple demo is now running.
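To verify that nginx actually serves traffic, the Deployment can be exposed as a NodePort Service and curled from the node (the assigned port varies per cluster, so the placeholders below are left as placeholders):

```shell
kubectl expose deployment demo-deployment --port=80 --type=NodePort
kubectl get svc demo-deployment
# Then: curl http://<node-ip>:<node-port>
```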