Setting Up a Docker Swarm Cluster


1. Prepare the Environment

A fault-tolerant Docker Swarm cluster needs at least three machines (so the managers keep a quorum when one fails), and each machine must have Docker installed in advance. I have prepared four machines for this cluster:

DockerSwarm-Node1

lemon@DockerSwarm-Node1:~$ docker --version
Docker version 28.0.1, build 068a01e
lemon@DockerSwarm-Node1:~$

DockerSwarm-Node2

lemon@DockerSwarm-Node2:~$ docker --version
Docker version 28.0.1, build 068a01e
lemon@DockerSwarm-Node2:~$

DockerSwarm-Node3

lemon@DockerSwarm-Node3:~$ docker --version
Docker version 28.0.1, build 068a01e
lemon@DockerSwarm-Node3:~$

DockerSwarm-Node4

lemon@DockerSwarm-Node4:~$ docker --version
Docker version 28.0.1, build 068a01e
lemon@DockerSwarm-Node4:~$

Docker Swarm ships as a built-in feature of Docker, so once Docker is installed, all that remains is to activate the feature and configure the cluster.
Check the Swarm status with the command: sudo docker info | grep -i 'Swarm:' — it currently reports inactive, meaning Swarm mode is not yet enabled.

lemon@DockerSwarm-Node1:~$ sudo docker info | grep -i 'Swarm:'
 Swarm: inactive
lemon@DockerSwarm-Node1:~$

Next, we activate Docker Swarm and build the cluster.
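
Before initializing the swarm, also make sure the nodes can reach each other on the swarm ports. A minimal sketch, assuming Ubuntu hosts running ufw (skip this if no firewall is active):

# Run on every node: open the ports Docker Swarm uses
sudo ufw allow 2377/tcp   # cluster management (swarm init/join)
sudo ufw allow 7946/tcp   # node-to-node gossip
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN traffic for overlay networks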

2. Build the Cluster

The steps for activating Docker Swarm and building a cluster are also documented in detail in the official manual: https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

Activate Docker Swarm on Node1 (any node will do).

Command: docker swarm init --advertise-addr <machine IP address>

lemon@DockerSwarm-Node1:~$ docker swarm init --advertise-addr 192.168.31.115
Swarm initialized: current node (5x45oj5l6oqnvav19gnni36s7) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-27kyz0d17dsylyfsuvl9qa0x5d7u7uqmq0yeoevgoqhsjy3z3t-ai36j3kzh7xmwxbwygs3v28gx 192.168.31.115:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
lemon@DockerSwarm-Node1:~$

The init output already includes the command for joining this swarm. We can also check the Swarm status again.
Command: sudo docker info | grep -i 'Swarm:'

lemon@DockerSwarm-Node1:~$ sudo docker info | grep -i 'Swarm:'
 Swarm: active
lemon@DockerSwarm-Node1:~$

Docker Swarm is now active, so the next step is to run the join command shown above on the other nodes. If the command printed by docker swarm init has been lost, a fresh join command can be generated on any machine where Swarm is already active, as follows.

Command: docker swarm join-token worker

lemon@DockerSwarm-Node1:~$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-27kyz0d17dsylyfsuvl9qa0x5d7u7uqmq0yeoevgoqhsjy3z3t-ai36j3kzh7xmwxbwygs3v28gx 192.168.31.115:2377

lemon@DockerSwarm-Node1:~$
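
The join token is effectively a credential. If it ever leaks, it can be invalidated and reissued with the standard --rotate flag; a quick example:

# Invalidate the current worker token and print a fresh one
docker swarm join-token --rotate worker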

Run the join command on Node2 through Node4 in turn:

DockerSwarm-Node2

lemon@DockerSwarm-Node2:~$  docker swarm join --token SWMTKN-1-27kyz0d17dsylyfsuvl9qa0x5d7u7uqmq0yeoevgoqhsjy3z3t-ai36j3kzh7xmwxbwygs3v28gx 192.168.31.115:2377
This node joined a swarm as a worker.
lemon@DockerSwarm-Node2:~$

DockerSwarm-Node3

lemon@DockerSwarm-Node3:~$ docker swarm join --token SWMTKN-1-27kyz0d17dsylyfsuvl9qa0x5d7u7uqmq0yeoevgoqhsjy3z3t-ai36j3kzh7xmwxbwygs3v28gx 192.168.31.115:2377
This node joined a swarm as a worker.
lemon@DockerSwarm-Node3:~$

DockerSwarm-Node4

lemon@DockerSwarm-Node4:~$ docker swarm join --token SWMTKN-1-27kyz0d17dsylyfsuvl9qa0x5d7u7uqmq0yeoevgoqhsjy3z3t-ai36j3kzh7xmwxbwygs3v28gx 192.168.31.115:2377
This node joined a swarm as a worker.
lemon@DockerSwarm-Node4:~$

Node2, Node3, and Node4 have now all joined the swarm headed by Node1. Note that whichever machine you run docker swarm init on becomes the initial manager (Leader) of the cluster. We can confirm this by listing the cluster's nodes on that machine, as follows:

Command: docker node ls

lemon@DockerSwarm-Node1:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7 *   DockerSwarm-Node1   Ready     Active         Leader           28.0.1
y08p0vgxh42lgarxqkdn6hata     DockerSwarm-Node2   Ready     Active                          28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active                          28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active                          28.0.1
lemon@DockerSwarm-Node1:~$

As the output shows, the cluster has four nodes, and Node1 is the Leader, i.e. the manager node.

One thing to note: docker node ls can only be run on a manager node. Running it on a worker produces an error, as shown below.

Run the command on Node2: docker node ls

lemon@DockerSwarm-Node2:~$ docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
lemon@DockerSwarm-Node2:~$
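
If you are ever unsure whether the node you are logged into is a manager, docker info can report it directly; a small sketch using Go-template output:

# Prints "true" on a manager node, "false" on a worker
docker info --format '{{.Swarm.ControlAvailable}}'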

At this point the cluster is not yet highly available: if the manager node Node1 goes down, the whole cluster fails. We need to promote the other nodes to managers so that, if the current Leader dies, a new one can be elected.

On the manager node, command: docker node promote <node>

lemon@DockerSwarm-Node1:~$ docker node promote DockerSwarm-Node2
Node DockerSwarm-Node2 promoted to a manager in the swarm.
lemon@DockerSwarm-Node1:~$ docker node promote DockerSwarm-Node3
Node DockerSwarm-Node3 promoted to a manager in the swarm.
lemon@DockerSwarm-Node1:~$ docker node promote DockerSwarm-Node4
Node DockerSwarm-Node4 promoted to a manager in the swarm.
lemon@DockerSwarm-Node1:~$
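
The inverse operation also exists: docker node demote turns a manager back into a worker, useful if you later want an odd number of managers (an even count adds no extra failure tolerance). For example:

# Turn a manager back into a worker (run on any manager)
docker node demote DockerSwarm-Node4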

Now check the cluster state again:

lemon@DockerSwarm-Node1:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7 *   DockerSwarm-Node1   Ready     Active         Leader           28.0.1
y08p0vgxh42lgarxqkdn6hata     DockerSwarm-Node2   Ready     Active         Reachable        28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active         Reachable        28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active         Reachable        28.0.1
lemon@DockerSwarm-Node1:~$

The manager status of Node2, Node3, and Node4 is now Reachable, meaning they participate in manager consensus and high availability is in place. Running the cluster-state query on these nodes no longer produces the error shown above.

Run docker node ls on Node2:

lemon@DockerSwarm-Node2:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7     DockerSwarm-Node1   Ready     Active         Leader           28.0.1
y08p0vgxh42lgarxqkdn6hata *   DockerSwarm-Node2   Ready     Active         Reachable        28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active         Reachable        28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active         Reachable        28.0.1
lemon@DockerSwarm-Node2:~$

Node2 can now also view the cluster state.

Let's shut down Node1 on purpose to verify that leader election works.

Power off Node1:

lemon@DockerSwarm-Node1:~$ sudo poweroff
[sudo] password for lemon:

Broadcast message from root@DockerSwarm-Node1 on pts/1 (Sun 2025-03-16 03:29:32 CST):

The system will power off now!

lemon@DockerSwarm-Node1:~$ client_loop: send disconnect: Connection reset

Check the cluster state from Node2:

lemon@DockerSwarm-Node2:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7     DockerSwarm-Node1   Down      Active         Unreachable      28.0.1
y08p0vgxh42lgarxqkdn6hata *   DockerSwarm-Node2   Ready     Active         Reachable        28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active         Leader           28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active         Reachable        28.0.1
lemon@DockerSwarm-Node2:~$

As we can see, leadership of the cluster has moved to Node3.

Now start Node1 back up. After it rejoins, it does not automatically take leadership back:

lemon@DockerSwarm-Node1:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7 *   DockerSwarm-Node1   Ready     Active         Reachable        28.0.1
y08p0vgxh42lgarxqkdn6hata     DockerSwarm-Node2   Ready     Active         Reachable        28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active         Leader           28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active         Reachable        28.0.1
lemon@DockerSwarm-Node1:~$
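
For planned maintenance there is a gentler option than powering a manager off: draining the node tells the scheduler to move its tasks elsewhere first. A minimal sketch:

# Stop scheduling tasks on Node1 and migrate its existing tasks away
docker node update --availability drain DockerSwarm-Node1
# After maintenance, make the node schedulable again
docker node update --availability active DockerSwarm-Node1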

3. Create a Cluster Network

Create an overlay-type cluster network on any node (the command must run on a manager; here all four nodes are managers).

Command: docker network create -d overlay --subnet=172.28.0.0/24 --gateway=172.28.0.1 --attachable cluster-net

lemon@DockerSwarm-Node1:~$ docker network create  -d overlay --subnet=172.28.0.0/24 --gateway=172.28.0.1 --attachable cluster-net
r458yfy9skk33ph542vams5ew
lemon@DockerSwarm-Node1:~$
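
You can confirm that the subnet and gateway were applied as requested by inspecting the network, for example filtering the output down to the IPAM configuration:

# Show only the IP address management config of the new network
docker network inspect cluster-net --format '{{json .IPAM.Config}}'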

Once created, the network is visible on all four nodes of the cluster:

DockerSwarm-Node1

lemon@DockerSwarm-Node1:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
1d863041e51a   bridge            bridge    local
r458yfy9skk3   cluster-net       overlay   swarm
2f2d3796d3f1   docker_gwbridge   bridge    local
2d406107084e   host              host      local
w30cnc780ylf   ingress           overlay   swarm
67cd71d55df3   none              null      local
lemon@DockerSwarm-Node1:~$

DockerSwarm-Node2

lemon@DockerSwarm-Node2:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
e6e5e02bd761   bridge            bridge    local
r458yfy9skk3   cluster-net       overlay   swarm
6d28186bb844   docker_gwbridge   bridge    local
2d406107084e   host              host      local
w30cnc780ylf   ingress           overlay   swarm
67cd71d55df3   none              null      local
lemon@DockerSwarm-Node2:~$

DockerSwarm-Node3

lemon@DockerSwarm-Node3:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
2f2cf8b5ef91   bridge            bridge    local
r458yfy9skk3   cluster-net       overlay   swarm
ea3b1805de92   docker_gwbridge   bridge    local
2d406107084e   host              host      local
w30cnc780ylf   ingress           overlay   swarm
67cd71d55df3   none              null      local
lemon@DockerSwarm-Node3:~$

DockerSwarm-Node4

lemon@DockerSwarm-Node4:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
77bb31af5d42   bridge            bridge    local
r458yfy9skk3   cluster-net       overlay   swarm
66a8009034c9   docker_gwbridge   bridge    local
2d406107084e   host              host      local
w30cnc780ylf   ingress           overlay   swarm
67cd71d55df3   none              null      local
lemon@DockerSwarm-Node4:~$

The cluster network is now ready.

4. Start a Service

With the cluster in place, we can start services on it. One thing to prepare: every node needs the service's image available ahead of time. In production you would typically pair the swarm with a private Docker registry and let nodes pull the image at deploy time; pulling from Docker Hub is currently unreliable from mainland China and may require workarounds.
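
As a sketch of the private-registry approach just mentioned (the registry address below is hypothetical, reusing Node1's IP; a production registry would also need TLS, or an insecure-registries entry in /etc/docker/daemon.json on every node):

# Run a throwaway registry as a swarm service
docker service create --name registry --publish published=5000,target=5000 registry:2
# Tag and push an image so every node can pull it at deploy time
docker tag nginx:latest 192.168.31.115:5000/nginx:latest
docker push 192.168.31.115:5000/nginx:latest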

Here I instead pulled the latest Nginx image on every node in advance:

# Node1
lemon@DockerSwarm-Node1:~$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
7cf63256a31a: Pull complete
bf9acace214a: Pull complete
513c3649bb14: Pull complete
d014f92d532d: Pull complete
9dd21ad5a4a6: Pull complete
943ea0f0c2e4: Pull complete
103f50cb3e9f: Pull complete
Digest: sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
lemon@DockerSwarm-Node1:~$

# Node2
lemon@DockerSwarm-Node2:~$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
7cf63256a31a: Pull complete
bf9acace214a: Pull complete
513c3649bb14: Pull complete
d014f92d532d: Pull complete
9dd21ad5a4a6: Pull complete
943ea0f0c2e4: Pull complete
103f50cb3e9f: Pull complete
Digest: sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
lemon@DockerSwarm-Node2:~$

# Node3
lemon@DockerSwarm-Node3:~$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
7cf63256a31a: Pull complete
bf9acace214a: Pull complete
513c3649bb14: Pull complete
d014f92d532d: Pull complete
9dd21ad5a4a6: Pull complete
943ea0f0c2e4: Pull complete
103f50cb3e9f: Pull complete
Digest: sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
lemon@DockerSwarm-Node3:~$

# Node4
lemon@DockerSwarm-Node4:~$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
7cf63256a31a: Pull complete
bf9acace214a: Pull complete
513c3649bb14: Pull complete
d014f92d532d: Pull complete
9dd21ad5a4a6: Pull complete
943ea0f0c2e4: Pull complete
103f50cb3e9f: Pull complete
Digest: sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
lemon@DockerSwarm-Node4:~$

Start an Nginx service in the cluster.

Command: docker service create --name cluster-nginx --replicas 2 -p 8888:80 --network=cluster-net nginx:latest

lemon@DockerSwarm-Node1:~$ docker service create --name cluster-nginx --replicas 2 -p 8888:80 --network=cluster-net nginx:latest
yupu5nib5kh36qhrbsh48f68z
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service yupu5nib5kh36qhrbsh48f68z converged
lemon@DockerSwarm-Node1:~$

Nginx is now running as two replicas in the cluster, which we can confirm with docker service ls:

lemon@DockerSwarm-Node1:~$ docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
yupu5nib5kh3   cluster-nginx   replicated   2/2        nginx:latest   *:8888->80/tcp
lemon@DockerSwarm-Node1:~$

To see which nodes the replicas are running on, use docker service ps:

lemon@DockerSwarm-Node1:~$ docker service ps yupu5nib5kh3
ID             NAME              IMAGE          NODE                DESIRED STATE   CURRENT STATE           ERROR     PORTS
b5wimzvc3mbq   cluster-nginx.1   nginx:latest   DockerSwarm-Node1   Running         Running 3 minutes ago
vzdw8j7l5pcc   cluster-nginx.2   nginx:latest   DockerSwarm-Node2   Running         Running 3 minutes ago
lemon@DockerSwarm-Node1:~$

The output shows the two replicas running on Node1 and Node2.

Now let's do some damage: power off Node1 and see whether the service fails over.

lemon@DockerSwarm-Node1:~$ sudo poweroff
[sudo] password for lemon:

Broadcast message from root@DockerSwarm-Node1 on pts/1 (Sun 2025-03-16 04:36:51 CST):

The system will power off now!

lemon@DockerSwarm-Node1:~$ client_loop: send disconnect: Connection reset

Now query the cluster state from Node2:

lemon@DockerSwarm-Node2:~$ docker node ls
ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
5x45oj5l6oqnvav19gnni36s7     DockerSwarm-Node1   Down      Active         Unreachable      28.0.1
y08p0vgxh42lgarxqkdn6hata *   DockerSwarm-Node2   Ready     Active         Reachable        28.0.1
mols5vh46baeiageeab12snpp     DockerSwarm-Node3   Ready     Active         Reachable        28.0.1
l9f1wm3p04wjyiporn4h5cknl     DockerSwarm-Node4   Ready     Active         Leader           28.0.1
lemon@DockerSwarm-Node2:~$

Node1 is down, and Node4 has been elected as the new Leader.

Check our service again:

lemon@DockerSwarm-Node2:~$ docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
yupu5nib5kh3   cluster-nginx   replicated   3/2        nginx:latest   *:8888->80/tcp
lemon@DockerSwarm-Node2:~$
lemon@DockerSwarm-Node2:~$ docker service ps yupu5nib5kh3
ID             NAME                  IMAGE          NODE                DESIRED STATE   CURRENT STATE            ERROR     PORTS
k1hgye8tl6ws   cluster-nginx.1       nginx:latest   DockerSwarm-Node4   Running         Running 2 minutes ago
b5wimzvc3mbq    \_ cluster-nginx.1   nginx:latest   DockerSwarm-Node1   Shutdown        Running 38 minutes ago
6951l9nrkato   cluster-nginx.2       nginx:latest   DockerSwarm-Node2   Running         Running 2 minutes ago
vzdw8j7l5pcc    \_ cluster-nginx.2   nginx:latest   DockerSwarm-Node2   Shutdown        Complete 2 minutes ago
lemon@DockerSwarm-Node2:~$

As the output shows, the task has moved from Node1 to Node4.

Now we can start the stopped Node1 back up.

5. Access the Service

The Nginx service can be reached through the IP and port of any of the four nodes, courtesy of the swarm's ingress routing mesh.

(Screenshots: the default Nginx welcome page loads from each of Node1, Node2, Node3, and Node4.)
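
The same check can be done from the command line, for example against Node1 (its IP is the one used above; any node's address should answer thanks to the mesh):

# The routing mesh answers on every node, not only where tasks run
curl -s http://192.168.31.115:8888 | grep '<title>'
# Expected output: <title>Welcome to nginx!</title>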

6. Cluster Crash

A Docker Swarm cluster requires more than half of its manager nodes to be online; otherwise the managers lose quorum and the cluster breaks down. With four managers, a majority is three, so the cluster tolerates only one manager failure.

To demonstrate, I will stop both Node1 and Node2:

# Node1
lemon@DockerSwarm-Node1:~$ sudo poweroff
[sudo] password for lemon:

Broadcast message from root@DockerSwarm-Node1 on pts/1 (Sun 2025-03-16 04:36:51 CST):

The system will power off now!

lemon@DockerSwarm-Node1:~$ client_loop: send disconnect: Connection reset

# Node2
lemon@DockerSwarm-Node2:~$ sudo poweroff
[sudo] password for lemon:

Broadcast message from root@DockerSwarm-Node2 on pts/1 (Sun 2025-03-16 04:37:41 CST):

The system will power off now!

lemon@DockerSwarm-Node2:~$ client_loop: send disconnect: Connection reset

Querying from Node3 now shows the quorum error:

lemon@DockerSwarm-Node3:~$ docker node ls
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
lemon@DockerSwarm-Node3:~$
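
If the lost managers can never be brought back, Docker's documented last-resort recovery is to rebuild the control plane from a surviving manager; sketched here for completeness (substitute the surviving node's own IP):

# On a surviving manager: form a new single-manager cluster that
# keeps existing services, then re-join or re-promote the other nodes
docker swarm init --force-new-cluster --advertise-addr <surviving node IP>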

Although cluster management has broken down, the tasks that were already running keep running: our two replicas currently sit on Node3 and Node4, so the service itself remains reachable through those nodes. Node1 and Node2 are powered off and can no longer serve it.

(Screenshots: the Nginx welcome page still loads from Node3 and Node4.)

7. Elastic Scaling

Check the current number of replicas in the cluster.

On DockerSwarm-Node1:

lemon@DockerSwarm-Node1:~$ docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
yupu5nib5kh3   cluster-nginx   replicated   2/2        nginx:latest   *:8888->80/tcp
lemon@DockerSwarm-Node1:~$

The cluster-nginx service currently has two replicas; let's scale it out to three.

Either of two commands will do this:

Command 1: docker service update --replicas <replica count> <service name>

lemon@DockerSwarm-Node1:~$ docker service update --replicas 3 cluster-nginx
cluster-nginx
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service cluster-nginx converged
lemon@DockerSwarm-Node1:~$

Command 2: docker service scale <service name>=<replica count>

lemon@DockerSwarm-Node1:~$ docker service scale cluster-nginx=3
cluster-nginx scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service cluster-nginx converged
lemon@DockerSwarm-Node1:~$

After scaling, verify that it succeeded.

Command: docker service ls

lemon@DockerSwarm-Node1:~$ docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
yupu5nib5kh3   cluster-nginx   replicated   3/3        nginx:latest   *:8888->80/tcp
lemon@DockerSwarm-Node1:~$
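
When you are done experimenting, the environment can be torn down in reverse order; a minimal sketch:

# On a manager: remove the service and the overlay network
docker service rm cluster-nginx
docker network rm cluster-net
# On each worker node:
docker swarm leave
# On manager nodes, departure must be forced:
docker swarm leave --force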