Related article: The Three Musketeers of Docker: Docker Swarm

This post builds on the Docker Swarm article above, adding overlay-based networking so that Docker containers can communicate across hosts.

There are roughly three ways for containers on different hosts to communicate with each other:

  • Port mapping: map the container's service port directly onto its host, and let other hosts communicate through the mapped port (see the sketch after this list).
  • Putting containers on the host's network segment: change Docker's IP allocation range to match the host network, which also requires changes to the host's network configuration.
  • Third-party projects: flannel, weave, pipework, and the like. These solutions generally build an overlay network via SDN to achieve container-to-container communication.
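
As a minimal sketch of the first approach (port mapping), assuming host A runs an nginx container and host B can reach host A's IP (both names here are placeholders):

  $ docker run -d -p 8080:80 nginx    # on host A: publish container port 80 as host port 8080
  $ curl http://<host-A-ip>:8080      # on host B: reach the container through the mapped port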

Before setting up overlay networking, install Docker and Docker Machine (on Linux):

  $ sudo sh -c 'curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine'
  $ sudo chmod +x /usr/local/bin/docker-machine
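
To confirm the binary is installed and on your PATH:

  $ docker-machine version    # should report version 0.13.0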

The following script sets up the whole cluster in one shot (the Docker registry mirror address can be swapped for your own):

  #!/bin/bash
  set -e

  # Create the key-value store machine and run Consul on it,
  # exposing its HTTP API on port 8500.
  create_kv() {
      echo "Creating kvstore machine."
      docker-machine create -d virtualbox \
          --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
          kvstore
      docker $(docker-machine config kvstore) run -d \
          -p "8500:8500" \
          progrium/consul --server -bootstrap-expect 1
  }

  # Create the Swarm master, pointing both Swarm discovery and the
  # engine's cluster store at the Consul instance.
  create_master() {
      echo "Creating cluster master."
      kvip=$(docker-machine ip kvstore)
      docker-machine create -d virtualbox \
          --swarm --swarm-master \
          --swarm-discovery="consul://${kvip}:8500" \
          --engine-opt="cluster-store=consul://${kvip}:8500" \
          --engine-opt="cluster-advertise=eth1:2376" \
          --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
          swarm-manager
  }

  # Create two Swarm worker nodes with the same discovery settings.
  create_nodes() {
      kvip=$(docker-machine ip kvstore)
      echo "Creating cluster nodes."
      for i in 1 2; do
          docker-machine create -d virtualbox \
              --swarm \
              --swarm-discovery="consul://${kvip}:8500" \
              --engine-opt="cluster-store=consul://${kvip}:8500" \
              --engine-opt="cluster-advertise=eth1:2376" \
              --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
              swarm-node${i}
      done
  }

  # Remove every machine created above.
  teardown() {
      docker-machine rm -y kvstore
      docker-machine rm -y swarm-manager
      for i in 1 2; do
          docker-machine rm -y swarm-node${i}
      done
  }

  case $1 in
      up)
          create_kv
          create_master
          create_nodes
          ;;
      down)
          teardown
          ;;
      *)
          echo "Unknown command..."
          exit 1
          ;;
  esac
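
The script takes a single subcommand: `up` builds the cluster and `down` tears it down.

  $ chmod +x cluster.sh
  $ ./cluster.sh up      # create kvstore, swarm-manager, swarm-node1 and swarm-node2
  $ ./cluster.sh down    # remove all four machines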

Running ./cluster.sh up automatically creates four machines:

  • one kvstore machine running the Consul service (a quick health check follows this list)
  • one swarm master machine running the Swarm manager service
  • two swarm node machines, each running the Swarm node agent alongside the Docker daemon
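
To sanity-check the key-value store, you can query Consul's standard HTTP status API from the host (the port mapping comes from the script above):

  $ curl http://$(docker-machine ip kvstore):8500/v1/status/leader    # prints the Consul leader's address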

Check the details of the four machines:

  $ docker-machine ls
  NAME            ACTIVE      DRIVER       STATE     URL                         SWARM                    DOCKER        ERRORS
  kvstore         -           virtualbox   Running   tcp://192.168.99.100:2376                            v18.03.1-ce
  swarm-manager   * (swarm)   virtualbox   Running   tcp://192.168.99.101:2376   swarm-manager (master)   v18.03.1-ce
  swarm-node1     -           virtualbox   Running   tcp://192.168.99.102:2376   swarm-manager            v18.03.1-ce
  swarm-node2     -           virtualbox   Running   tcp://192.168.99.103:2376   swarm-manager            v18.03.1-ce

Next, verify that the cluster is installed correctly by running the commands below on the host (your own machine, not one of the Docker Machine VMs):

  $ eval $(docker-machine env --swarm swarm-manager)
  $ docker info
  Containers: 6
   Running: 6
   Paused: 0
   Stopped: 0
  Images: 5
  Server Version: swarm/1.2.8
  Role: primary
  Strategy: spread
  Filters: health, port, containerslots, dependency, affinity, constraint, whitelist
  Nodes: 3
   swarm-manager: 192.168.99.101:2376
    ID: K6WX:ZYFT:UEHA:KM66:BYHD:ROBF:Z5KG:UHNE:U37V:4KX2:S5SV:YSCA|192.168.99.101:2376
    Status: Healthy
    Containers: 2 (2 Running, 0 Paused, 0 Stopped)
    Reserved CPUs: 0 / 1
    Reserved Memory: 0 B / 1.021 GiB
    Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
    UpdatedAt: 2018-05-08T10:20:39Z
    ServerVersion: 18.03.1-ce
   swarm-node1: 192.168.99.102:2376
    ID: RPRC:AVBX:7CBJ:HUTI:HI3B:RI6B:QI6O:M2UQ:ZT2I:HZ6J:33BL:HY76|192.168.99.102:2376
    Status: Healthy
    Containers: 2 (2 Running, 0 Paused, 0 Stopped)
    Reserved CPUs: 0 / 1
    Reserved Memory: 0 B / 1.021 GiB
    Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
    UpdatedAt: 2018-05-08T10:21:09Z
    ServerVersion: 18.03.1-ce
   swarm-node2: 192.168.99.103:2376
    ID: MKQ2:Y7EO:CKOJ:MGFH:B77S:3EWX:7YJT:2MBQ:CJSN:XENJ:BSKO:RAZP|192.168.99.103:2376
    Status: Healthy
    Containers: 2 (2 Running, 0 Paused, 0 Stopped)
    Reserved CPUs: 0 / 1
    Reserved Memory: 0 B / 1.021 GiB
    Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
    UpdatedAt: 2018-05-08T10:21:06Z
    ServerVersion: 18.03.1-ce
  Plugins:
   Volume:
   Network:
   Log:
  Swarm:
   NodeID:
   Is Manager: false
   Node Address:
  Kernel Version: 4.9.93-boot2docker
  Operating System: linux
  Architecture: amd64
  CPUs: 3
  Total Memory: 3.063GiB
  Name: 85be09a37044
  Docker Root Dir:
  Debug Mode (client): false
  Debug Mode (server): false
  Experimental: false
  Live Restore Enabled: false
  WARNING: No kernel memory limit support

This output shows the details of the whole cluster.
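
Note that the `--swarm` flag points the Docker client at the Swarm manager endpoint; omit it to talk to a single engine instead:

  $ eval $(docker-machine env swarm-node1)              # target swarm-node1's Docker daemon directly
  $ docker info                                         # now reports only that single engine
  $ eval $(docker-machine env --swarm swarm-manager)    # switch back to the swarm endpoint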

Next, create an overlay network (again from the host):

  $ docker network create -d overlay net1
  d6a8a22298485a044b19fcbb62033ff1b4c3d4bd6a8a2229848
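
If the default address range clashes with an existing network, you can choose the subnet yourself; a sketch with an assumed 10.10.0.0/24 range and network name net2:

  $ docker network create -d overlay --subnet=10.10.0.0/24 net2
  # --subnet (and optionally --gateway) control the IPAM config reported by `docker network inspect`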

Then list the networks to see the newly created overlay network named net1:

  $ docker network ls
  NETWORK ID     NAME                          DRIVER    SCOPE
  d6a8a2229848   net1                          overlay   global
  9c7f0e793838   swarm-manager/bridge          bridge    local
  93787d9ba7ed   swarm-manager/host            host      local
  72fd1e63889e   swarm-manager/none            null      local
  c73e00c4c76c   swarm-node1/bridge            bridge    local
  95983d8f1ef1   swarm-node1/docker_gwbridge   bridge    local
  a8a569d55cc9   swarm-node1/host              host      local
  e7fa8403b226   swarm-node1/none              null      local
  7f1d219b5c08   swarm-node2/bridge            bridge    local
  e7463ae8c579   swarm-node2/docker_gwbridge   bridge    local
  9a1f0d2bbdf5   swarm-node2/host              host      local
  bea626348d6d   swarm-node2/none              null      local

Next, create two containers (on the host; Docker Swarm makes this convenient) to test whether they can reach each other over the net1 network:

  $ docker run -d --net=net1 --name=c1 -e constraint:node==swarm-node1 busybox top
  dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a
  $ docker run -d --net=net1 --name=c2 -e constraint:node==swarm-node2 busybox top
  583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d

Check that the newly created containers are running:

  $ docker ps
  CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS   NAMES
  583fde42182a   busybox   "top"     3 seconds ago    Up 2 seconds            swarm-node2/c2
  dab080b33e76   busybox   "top"     18 seconds ago   Up 18 seconds           swarm-node1/c1
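
The node prefix in the NAMES column already shows where each container was scheduled. With classic Swarm you can also read the placement from the inspect output; a sketch assuming the Swarm API's extra Node field:

  $ docker inspect -f '{{ .Node.Name }}' c1
  swarm-node1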

Next, look at the details of the net1 network:

  $ docker network inspect net1
  [
      {
          "Name": "net1",
          "Id": "d6a8a2229848d40ce446f8f850a0e713a6c88a9b43583cc463f437f306724f28",
          "Created": "2018-05-08T09:21:42.408840755Z",
          "Scope": "global",
          "Driver": "overlay",
          "EnableIPv6": false,
          "IPAM": {
              "Driver": "default",
              "Options": {},
              "Config": [
                  {
                      "Subnet": "10.0.0.0/24",
                      "Gateway": "10.0.0.1"
                  }
              ]
          },
          "Internal": false,
          "Attachable": false,
          "Ingress": false,
          "ConfigFrom": {
              "Network": ""
          },
          "ConfigOnly": false,
          "Containers": {
              "583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d": {
                  "Name": "c2",
                  "EndpointID": "b7fcb0039ab21ff06b36ef9ba008c324fabf24badfe45dfa6a30db6763716962",
                  "MacAddress": "",
                  "IPv4Address": "10.0.0.3/24",
                  "IPv6Address": ""
              },
              "dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a": {
                  "Name": "c1",
                  "EndpointID": "8a80a83230edfdd9921357f08130fa19ef0b917bc4426aa49cb8083af9edb7f6",
                  "MacAddress": "",
                  "IPv4Address": "10.0.0.2/24",
                  "IPv6Address": ""
              }
          },
          "Options": {},
          "Labels": {}
      }
  ]

As you can see, the two containers we just created are listed there, along with their IP addresses.
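
If you only want the name-to-IP mapping, `docker network inspect` accepts a Go template through `-f`; a minimal sketch:

  $ docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' net1
  c2 10.0.0.3/24
  c1 10.0.0.2/24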

Finally, test whether the two containers can reach each other (pinging by container name works as well):

  $ docker exec c1 ping -c 3 10.0.0.3
  PING 10.0.0.3 (10.0.0.3): 56 data bytes
  64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.903 ms
  64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.668 ms
  64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.754 ms
  --- 10.0.0.3 ping statistics ---
  3 packets transmitted, 3 packets received, 0% packet loss
  round-trip min/avg/max = 0.668/0.775/0.903 ms
  $ docker exec c2 ping -c 3 10.0.0.2
  PING 10.0.0.2 (10.0.0.2): 56 data bytes
  64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.827 ms
  64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.702 ms
  64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.676 ms
  --- 10.0.0.2 ping statistics ---
  3 packets transmitted, 3 packets received, 0% packet loss
  round-trip min/avg/max = 0.676/0.735/0.827 ms
  $ docker exec c2 ping -c 3 c1
  PING c1 (10.0.0.2): 56 data bytes
  64 bytes from 10.0.0.2: seq=0 ttl=64 time=1.358 ms
  64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.663 ms
  64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.761 ms
  --- c1 ping statistics ---
  3 packets transmitted, 3 packets received, 0% packet loss
  round-trip min/avg/max = 0.663/0.927/1.358 ms

[Figure: Docker Swarm architecture diagram]


Copyright notice: This is an original article by xishuai, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://www.cnblogs.com/xishuai/p/docker-swarm-overlay.html