Building a Docker Cluster with Swarm

Published: 2025-06-03

Preface: You will need three virtual machines, each with a working Docker environment.

For installing Docker on CentOS 7 (online or offline), see: 在Centos7中,在线/离线安装Docker_centos7在线安装docker-CSDN博客


Table of Contents

1. Configure the cluster environment
2. Set the hostname on each node
3. Configure the IP-address-to-hostname mapping
(1) Configure the mapping on the master node first
(2) Copy the configuration file from the master node to node1 and node2
4. Test connectivity between the hosts
5. Test connectivity to the Internet and synchronize the clocks (required on all three hosts)
6. Install Docker on all three nodes and configure a registry mirror
7. Edit the docker.service unit file to open TCP port 2375 and restart the Docker service
8. Pull the swarm image on each node
9. Create the cluster on the master node
10. Join node1 and node2 to the cluster
11. Verify the cluster: view the node information from the master node
12. Deploy a service in the swarm, using nginx as an example
(1) Pull the nginx image on each node
(2) On the master node, create the overlay network "nginx_net" so that containers on different hosts can communicate
(3) On the master node, create an nginx service with 1 replica
(4) Scale the service
13. Handling a node failure
(1) Simulate a node failure by stopping the Docker service on node1
(2) On the master node, check the status of the nodes in the swarm
14. Using volumes in the swarm
(1) Create a volume named "volume-test" on all three hosts
(2) On all three hosts, add an index.html file to /var/lib/docker/volumes/volume-test/_data
(3) On the master node, create the service swarm-nginx with 3 replicas, mount volume-test at /usr/share/nginx/html inside the containers, and publish a port
(4) Verify the result


1. Configure the cluster environment

Hostname   IP address         Role
master     192.32.20.10/24    Manager node
node1      192.32.20.20/24    Worker node
node2      192.32.20.30/24    Worker node

2. Set the hostname on each node

[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash
[root@master ~]#
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]#
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]#

3. Configure the IP-address-to-hostname mapping

(1) Configure the mapping on the master node first

[root@master ~]# vi /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.32.20.10 master
192.32.20.20 node1
192.32.20.30 node2

(2) Copy the configuration file from the master node to node1 and node2

[root@master ~]# scp /etc/hosts root@node1:/etc/hosts
The authenticity of host 'node1 (192.32.20.20)' can't be established.
ECDSA key fingerprint is SHA256:bsaRja0KRPLlvRorcIv/oxGF6ER1tE6qzNHYKc8oL8U.
ECDSA key fingerprint is MD5:ba:a3:51:96:6d:1e:3d:a3:88:77:01:04:25:b9:fb:73.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.32.20.20' (ECDSA) to the list of known hosts.
root@node1's password:
hosts                                                                                                       100%  216   244.4KB/s   00:00

[root@master ~]# scp /etc/hosts root@node2:/etc/hosts
The authenticity of host 'node2 (192.32.20.30)' can't be established.
ECDSA key fingerprint is SHA256:bsaRja0KRPLlvRorcIv/oxGF6ER1tE6qzNHYKc8oL8U.
ECDSA key fingerprint is MD5:ba:a3:51:96:6d:1e:3d:a3:88:77:01:04:25:b9:fb:73.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.32.20.30' (ECDSA) to the list of known hosts.
root@node2's password:
hosts   

4. Test connectivity between the hosts

  • On the master node
[root@master ~]# ping node1 -c4
PING node1 (192.32.20.20) 56(84) bytes of data.
64 bytes from node1 (192.32.20.20): icmp_seq=1 ttl=64 time=0.420 ms
64 bytes from node1 (192.32.20.20): icmp_seq=2 ttl=64 time=0.555 ms
64 bytes from node1 (192.32.20.20): icmp_seq=3 ttl=64 time=5.54 ms
64 bytes from node1 (192.32.20.20): icmp_seq=4 ttl=64 time=2.68 ms

--- node1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 0.420/2.302/5.546/2.077 ms

[root@master ~]# ping node2 -c4
PING node2 (192.32.20.30) 56(84) bytes of data.
64 bytes from node2 (192.32.20.30): icmp_seq=1 ttl=64 time=0.599 ms
64 bytes from node2 (192.32.20.30): icmp_seq=2 ttl=64 time=0.545 ms
64 bytes from node2 (192.32.20.30): icmp_seq=3 ttl=64 time=1.04 ms
64 bytes from node2 (192.32.20.30): icmp_seq=4 ttl=64 time=1.15 ms

--- node2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.545/0.835/1.155/0.267 ms
  • On node1
[root@node1 ~]# ping master -c4
PING master (192.32.20.10) 56(84) bytes of data.
64 bytes from master (192.32.20.10): icmp_seq=1 ttl=64 time=0.395 ms
64 bytes from master (192.32.20.10): icmp_seq=2 ttl=64 time=0.723 ms
64 bytes from master (192.32.20.10): icmp_seq=3 ttl=64 time=0.282 ms
64 bytes from master (192.32.20.10): icmp_seq=4 ttl=64 time=0.757 ms

--- master ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.282/0.539/0.757/0.205 ms

[root@node1 ~]# ping node2 -c4
PING node2 (192.32.20.30) 56(84) bytes of data.
64 bytes from node2 (192.32.20.30): icmp_seq=1 ttl=64 time=1.22 ms
64 bytes from node2 (192.32.20.30): icmp_seq=2 ttl=64 time=0.269 ms
64 bytes from node2 (192.32.20.30): icmp_seq=3 ttl=64 time=0.829 ms
64 bytes from node2 (192.32.20.30): icmp_seq=4 ttl=64 time=0.520 ms

--- node2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.269/0.710/1.224/0.357 ms
  • On node2
[root@node2 ~]# ping master -c4
PING master (192.32.20.10) 56(84) bytes of data.
64 bytes from master (192.32.20.10): icmp_seq=1 ttl=64 time=0.562 ms
64 bytes from master (192.32.20.10): icmp_seq=2 ttl=64 time=0.292 ms
64 bytes from master (192.32.20.10): icmp_seq=3 ttl=64 time=0.393 ms
64 bytes from master (192.32.20.10): icmp_seq=4 ttl=64 time=1.02 ms

--- master ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.292/0.567/1.021/0.279 ms

[root@node2 ~]# ping node1 -c4
PING node1 (192.32.20.20) 56(84) bytes of data.
64 bytes from node1 (192.32.20.20): icmp_seq=1 ttl=64 time=0.673 ms
64 bytes from node1 (192.32.20.20): icmp_seq=2 ttl=64 time=0.878 ms
64 bytes from node1 (192.32.20.20): icmp_seq=3 ttl=64 time=0.333 ms
64 bytes from node1 (192.32.20.20): icmp_seq=4 ttl=64 time=2.37 ms

--- node1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.333/1.065/2.379/0.783 ms

5. Test connectivity to the Internet and synchronize the clocks (required on all three hosts)

## Only the master node's operations are shown here

[root@master ~]# ping aliyun.com -c4
PING aliyun.com (106.11.172.9) 56(84) bytes of data.
64 bytes from 106.11.172.9 (106.11.172.9): icmp_seq=1 ttl=128 time=25.2 ms
64 bytes from 106.11.172.9 (106.11.172.9): icmp_seq=2 ttl=128 time=23.7 ms
64 bytes from 106.11.172.9 (106.11.172.9): icmp_seq=3 ttl=128 time=22.3 ms
64 bytes from 106.11.172.9 (106.11.172.9): icmp_seq=4 ttl=128 time=24.8 ms

--- aliyun.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 22.380/24.045/25.245/1.119 ms

(Any NTP server can be used for clock synchronization; Alibaba Cloud's NTP server is used here.)
[root@master ~]# ntpdate ntp1.aliyun.com
 2 Jun 04:52:42 ntpdate[2949]: adjust time server 118.31.3.89 offset -0.149026 sec
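
(Optional) To keep the clocks aligned after this one-off sync, a cron entry can re-run ntpdate periodically. This is only a minimal sketch, assuming ntpdate is installed at /usr/sbin/ntpdate and crond is running; a persistent chronyd or ntpd configuration would be the more robust choice.

## Append a cron job that re-syncs the clock every 30 minutes (run on each of the three hosts)
[root@master ~]# (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1") | crontab -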

6. Install Docker on all three nodes and configure a registry mirror

## Only the master node's operations are shown here

## Check the Docker version
[root@master ~]# docker version
Client: Docker Engine - Community
 Version:           26.1.4
 API version:       1.45
 Go version:        go1.21.11
 Git commit:        5650f9b
 Built:             Wed Jun  5 11:32:04 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.1.4
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       de5c9cf
  Built:            Wed Jun  5 11:31:02 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.33
  GitCommit:        d2d58213f83a351ca8f528a95fbd145f5654e957
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

## Check the registry mirror configuration
[root@master ~]# cat /etc/docker/daemon.json
{
"registry-mirrors":["https://docker.1panel.live"]
}
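
If the mirror still needs to be configured on a node, a sketch like the following writes daemon.json and restarts Docker to apply it (the mirror URL is the one shown above; substitute any accelerator address you prefer):

## Write the registry mirror configuration and apply it
[root@master ~]# mkdir -p /etc/docker
[root@master ~]# cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.1panel.live"]
}
EOF
[root@master ~]# systemctl restart docker
[root@master ~]# docker info | grep -A1 "Registry Mirrors"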

7. Edit the docker.service unit file to open TCP port 2375 and restart the Docker service

## Modify the ExecStart parameter in the [Service] section of the unit file
[root@master ~]# vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --containerd=/run/containerd/containerd.sock

## Copy the docker.service unit file to node1 and node2
[root@master ~]# scp /lib/systemd/system/docker.service root@node1:/lib/systemd/system/docker.service
root@node1's password:
docker.service                                                                                100% 2006   962.0KB/s   00:00
[root@master ~]# scp /lib/systemd/system/docker.service root@node2:/lib/systemd/system/docker.service
root@node2's password:
docker.service                                                                                100% 2006     1.4MB/s   00:00

## Restart the Docker service
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2025-06-02 05:02:53 EDT; 6s ago
     Docs: https://docs.docker.com
 Main PID: 3116 (dockerd)
    Tasks: 8
   Memory: 34.6M
   CGroup: /system.slice/docker.service
           └─3116 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cont...
## Check that port 2375 is listening on each node
[root@master ~]# netstat -ntlp | grep 2375
tcp6       0      0 :::2375                 :::*                    LISTEN      3116/dockerd      
[root@node1 ~]# netstat -ntlp | grep 2375
tcp6       0      0 :::2375                 :::*                    LISTEN      3700/dockerd      
[root@node2 ~]# netstat -ntlp | grep 2375
tcp6       0      0 :::2375                 :::*                    LISTEN      3661/dockerd      
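
Note that port 2375 exposes the Docker API without TLS or authentication, so it should only be opened on a trusted lab network. As a quick sanity check (a sketch, assuming no firewall rule blocks the port), the remote API can be queried from the master node:

## Query the Docker API of the worker nodes remotely (unauthenticated; lab use only)
[root@master ~]# docker -H tcp://192.32.20.20:2375 version
[root@master ~]# curl -s http://192.32.20.30:2375/version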

8. Pull the swarm image on each node

## Only the master node's operations are shown here

[root@master ~]# docker pull swarm
Using default tag: latest
latest: Pulling from library/swarm
38e5683d7755: Pull complete
083aff163606: Pull complete
2064f1a73c6b: Pull complete
Digest: sha256:2de8883e2933840ed7ee7360ea1eed314bf8aeac37c0692b9ca651630fde3b7f
Status: Downloaded newer image for swarm:latest
docker.io/library/swarm:latest

[root@master ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
swarm        latest    1a5eb59a410f   4 years ago   12.7MB

9. Create the cluster on the master node

## Initialize the swarm on the master node; this prints the join token that worker nodes use to join the cluster

[root@master ~]# docker swarm init --advertise-addr 192.32.20.10
Swarm initialized: current node (jfntj7166fm5etglkvqte74p5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-497b4igdcpt085my2by9yztfo7p3bq8jk3bgfohy6us5xh8s6j-7rzyhyyj219t29foozk6c7fi7 192.32.20.10:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
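
If the join command is misplaced later, it can be printed again at any time on the manager:

## Re-print the worker or manager join command when needed
[root@master ~]# docker swarm join-token worker
[root@master ~]# docker swarm join-token manager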

10. Join node1 and node2 to the cluster

## Use the join token and manager address printed in the previous step: SWMTKN-1-497b4igdcpt085my2by9yztfo7p3bq8jk3bgfohy6us5xh8s6j-7rzyhyyj219t29foozk6c7fi7 192.32.20.10:2377

[root@node1 ~]# docker swarm join --token SWMTKN-1-497b4igdcpt085my2by9yztfo7p3bq8jk3bgfohy6us5xh8s6j-7rzyhyyj219t29foozk6c7fi7 192.32.20.10:2377
This node joined a swarm as a worker.

[root@node2 ~]# docker swarm join --token SWMTKN-1-497b4igdcpt085my2by9yztfo7p3bq8jk3bgfohy6us5xh8s6j-7rzyhyyj219t29foozk6c7fi7 192.32.20.10:2377
This node joined a swarm as a worker.

11. Verify the cluster: view the node information from the master node

[root@master ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
jfntj7166fm5etglkvqte74p5 *   master     Ready     Active         Leader           26.1.4
zo37xed5971opkvdy1a86b7qn     node1      Ready     Active                          26.1.4
inrtx3wi6wzycmr9v18gk2064     node2      Ready     Active                          26.1.4
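
Beyond docker node ls, individual nodes can be examined from the manager; a short sketch:

## Show a readable summary of a single node
[root@master ~]# docker node inspect node1 --pretty
## Confirm the local node's swarm state on any host (should print "active")
[root@master ~]# docker info --format '{{.Swarm.LocalNodeState}}'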

12. Deploy a service in the swarm, using nginx as an example

(1) Pull the nginx image on each node

## Only the master node's operations are shown here

[root@master ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
61320b01ae5e: Pull complete
670a101d432b: Pull complete
405bd2df85b6: Pull complete
cc80efff8457: Pull complete
2b9310b2ee4b: Pull complete
6c4aa022e8e1: Pull complete
abddc69cb49d: Pull complete
Digest: sha256:fb39280b7b9eba5727c884a3c7810002e69e8f961cc373b89c92f14961d903a0
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

[root@master ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
nginx        latest    be69f2940aaf   6 weeks ago   192MB
swarm        latest    1a5eb59a410f   4 years ago   12.7MB

(2) On the master node, create the overlay network "nginx_net" so that containers on different hosts can communicate

[root@master ~]# docker network create -d overlay nginx_net
bd6cwi4qvpnjwpx170c2aq2k0

[root@master ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
e02fef1cacd1   bridge            bridge    local
d4e8f7158806   docker_gwbridge   bridge    local
d70f62f8a958   host              host      local
y05cypqmfx11   ingress           overlay   swarm
bd6cwi4qvpnj   nginx_net         overlay   swarm
de80db81b634   none              null      local
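
The new overlay network can be inspected to confirm that its scope is "swarm" and to see the subnet that was allocated automatically (a --subnet option could have been supplied instead):

[root@master ~]# docker network inspect nginx_net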

(3) On the master node, create an nginx service with 1 replica

[root@master ~]# docker service create --replicas 1 --network nginx_net --name my-test -p 9999:80 nginx
psk0lm34x9z7ivtefshfwzx44
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service psk0lm34x9z7ivtefshfwzx44 converged

## Check the my-test service tasks; the container is running on node1
[root@master ~]# docker service ps my-test
ID             NAME        IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
nyaejni07zdt   my-test.1   nginx:latest   node1     Running         Running 4 minutes ago

## View detailed information about the my-test service
[root@master ~]# docker service inspect --pretty my-test
ID:             psk0lm34x9z7ivtefshfwzx44
Name:           my-test
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:latest@sha256:fb39280b7b9eba5727c884a3c7810002e69e8f961cc373b89c92f14961d903a0
 Init:          false
Resources:
Networks: nginx_net
Endpoint Mode:  vip
Ports:
 PublishedPort = 9999
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress
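
Because the port is published through the ingress routing mesh, port 9999 answers on every node in the swarm, not only on the node that actually runs the task. A quick check (a sketch, assuming the service has converged):

## The routing mesh makes the published port reachable on any node
[root@master ~]# curl -s http://192.32.20.10:9999 | head -n 4
[root@master ~]# curl -s http://192.32.20.20:9999 | head -n 4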

(4) Scale the service

## First scale my-test up to 5 replicas

[root@master ~]# docker service scale my-test=5
my-test scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service my-test converged

[root@master ~]# docker service ps my-test
ID             NAME        IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
nyaejni07zdt   my-test.1   nginx:latest   node1     Running         Running 5 minutes ago
rfvuw88zf912   my-test.2   nginx:latest   master    Running         Running 19 seconds ago
t5mbd4prb40y   my-test.3   nginx:latest   node2     Running         Running 18 seconds ago
y9s99m84p18y   my-test.4   nginx:latest   node2     Running         Running 18 seconds ago
6clcwgwowkyk   my-test.5   nginx:latest   node1     Running         Running 20 seconds ago

## Then scale my-test back down to 2 replicas

[root@master ~]# docker service scale my-test=2
my-test scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service my-test converged

[root@master ~]# docker service ps my-test
ID             NAME        IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
nyaejni07zdt   my-test.1   nginx:latest   node1     Running         Running 6 minutes ago
rfvuw88zf912   my-test.2   nginx:latest   master    Running         Running 55 seconds ago
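
docker service update --replicas achieves the same effect as docker service scale, which can be handy when other service options are being changed at the same time:

## Equivalent way to set the replica count
[root@master ~]# docker service update --replicas 2 my-test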

13. Handling a node failure

If a node goes down, Swarm marks it as unavailable, and the containers that were running on it are rescheduled onto other nodes so that the specified number of replicas keeps running.

(1) Simulate a node failure by stopping the Docker service on node1

[root@node1 ~]# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket

[root@node1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2025-06-02 05:26:38 EDT; 6s ago
     Docs: https://docs.docker.com
  Process: 3661 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --containerd=/run/containerd/containerd.sock (code=exited, status=0/SUCCESS)
 Main PID: 3661 (code=exited, status=0/SUCCESS)

(2) On the master node, check the status of the nodes in the swarm

[root@master ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
jfntj7166fm5etglkvqte74p5 *   master     Ready     Active         Leader           26.1.4
zo37xed5971opkvdy1a86b7qn     node1      Down      Active                          26.1.4
inrtx3wi6wzycmr9v18gk2064     node2      Ready     Active                          26.1.4

## The container "my-test.1" that was running on node1 has been rescheduled onto node2

[root@master ~]# docker service ps my-test
ID             NAME            IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
na0tu98bpz9h   my-test.1       nginx:latest   node2     Running         Running 28 seconds ago     
nyaejni07zdt    \_ my-test.1   nginx:latest   node1     Shutdown        Running 12 minutes ago     
rfvuw88zf912   my-test.2       nginx:latest   master    Running         Running 6 minutes ago      

## After the Docker service on node1 is restarted, the rescheduled containers do not automatically move back to node1

[root@node1 ~]# systemctl restart docker
[root@node1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2025-06-02 05:31:53 EDT; 13s ago
     Docs: https://docs.docker.com
 Main PID: 5003 (dockerd)
    Tasks: 9
   Memory: 35.2M
   CGroup: /system.slice/docker.service
           └─5003 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --contai...

[root@master ~]# docker service ps my-test
ID             NAME            IMAGE          NODE      DESIRED STATE   CURRENT STATE             ERROR     PORTS
na0tu98bpz9h   my-test.1       nginx:latest   node2     Running         Running 2 minutes ago
nyaejni07zdt    \_ my-test.1   nginx:latest   node1     Shutdown        Shutdown 40 seconds ago
rfvuw88zf912   my-test.2       nginx:latest   master    Running         Running 8 minutes ago
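
For planned maintenance, a node can be drained instead of having its Docker service stopped, and a service can be forced to rebalance after the node returns. A hedged sketch (note that --force restarts the tasks and causes a brief interruption):

## Drain node1 so its tasks are rescheduled gracefully, then reactivate it
[root@master ~]# docker node update --availability drain node1
[root@master ~]# docker node update --availability active node1
## Optionally redistribute tasks across the nodes afterwards
[root@master ~]# docker service update --force my-test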

14. Using volumes in the swarm

(1) Create a volume named "volume-test" on all three hosts

## Only the master node's operations are shown here

[root@master ~]# docker volume create --name volume-test
volume-test

[root@master ~]# docker volume ls
DRIVER    VOLUME NAME
local     volume-test

[root@master ~]# docker volume inspect volume-test
[
    {
        "CreatedAt": "2025-06-02T05:34:17-04:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/volume-test/_data",
        "Name": "volume-test",
        "Options": null,
        "Scope": "local"
    }
]
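
The volume must also exist on node1 and node2, because "local" volumes are created per host. If SSH access to the workers is available, a small loop from the master can create it remotely (a sketch; otherwise run docker volume create on each host by hand):

## Create the same volume on the worker nodes (assumes SSH access to root@node1 and root@node2)
[root@master ~]# for h in node1 node2; do ssh root@$h "docker volume create --name volume-test"; done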

(2) On all three hosts, add an index.html file to /var/lib/docker/volumes/volume-test/_data

[root@master ~]# cd /var/lib/docker/volumes/volume-test/_data
[root@master _data]# echo "This is nginx-test in master" > index.html
[root@master _data]# cat index.html
This is nginx-test in master
 
[root@node1 ~]# cd /var/lib/docker/volumes/volume-test/_data
[root@node1 _data]# echo "This is nginx-test in node1" > index.html
[root@node1 _data]# cat index.html
This is nginx-test in node1

[root@node2 ~]# cd /var/lib/docker/volumes/volume-test/_data
[root@node2 _data]# echo "This is nginx-test in node2" > index.html
[root@node2 _data]# cat index.html
This is nginx-test in node2

(3) On the master node, create the service swarm-nginx with 3 replicas, mount volume-test at /usr/share/nginx/html inside the containers, and publish a port

[root@master ~]# docker service create --replicas 3 --mount type=volume,src=volume-test,dst=/usr/share/nginx/html --name swarm-nginx -p 8001:80 nginx
2xbxqamfrvw45vbvz45irsc8h
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service 2xbxqamfrvw45vbvz45irsc8h converged

[root@master ~]# docker service ps swarm-nginx
ID             NAME            IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
pk3x25l5qv4d   swarm-nginx.1   nginx:latest   master    Running         Running 31 seconds ago
pfpfte8sthoh   swarm-nginx.2   nginx:latest   node2     Running         Running 31 seconds ago
bqjbhb5i44dh   swarm-nginx.3   nginx:latest   node1     Running         Running 31 seconds ago
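
To confirm that the volume is really mounted into the task containers, the Mounts section of the local task container can be inspected on each node; a sketch:

## On any node, inspect the mounts of the local swarm-nginx task container
[root@master ~]# docker inspect -f '{{json .Mounts}}' $(docker ps -q --filter name=swarm-nginx)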

(4) Verify the result

## Access the service through the published port with curl (the routing mesh distributes requests across the replicas)

[root@master ~]# for i in {1..10};do curl node1:8001;done
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master

[root@master ~]# for i in {1..10};do curl node2:8001;done
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master
This is nginx-test in node2
This is nginx-test in node1
This is nginx-test in master

## Access the service from a web browser

(The browser screenshots are not included here.)
