Complete Lab Command Walkthrough: From Cluster Setup to Load-Balancing Configuration (Part 2)

Published: 2025-08-28

1. Environment Preparation and Basic Network Configuration

1.1 Node Roles and Network Plan

| Role | Hostname | Subnet | IP address | Gateway | Core function |
|---|---|---|---|---|---|
| Web server | web1 | 10.1.8.0/24 | 10.1.8.11 | 10.1.8.10 (later changed to 10.1.8.20) | Runs Nginx/HTTPD, serves web content |
| Web server | web2 | 10.1.8.0/24 | 10.1.8.12 | 10.1.8.10 (later changed to 10.1.8.20) | Runs Nginx/HTTPD, serves web content |
| Web server | web3 | 10.1.8.0/24 | 10.1.8.13 | 10.1.8.10 (later changed to 10.1.8.20) | Runs Nginx/HTTPD, serves web content |
| Load balancer (initial) | lb | 10.1.1.0/24 | 10.1.1.10 | none needed | Runs LVS for load balancing |
| Client | client1 | 10.1.8.0/24 | 10.1.8.21 | 10.1.8.10 (later changed to 10.1.8.20) | Tests web service access |
| Client | client2 | 10.1.1.0/24 | 10.1.1.21 | 10.1.1.10 (later changed to 10.1.1.20) | Tests load balancing and high availability |
| Router | router | multiple (10.1.8.0/24, 10.1.1.0/24, 10.1.2.0/24) | 10.1.8.20, 10.1.1.20, 10.1.2.20 | - | Routes between segments |
| NFS server | nfs | 10.1.2.0/24 | 10.1.2.100 | 10.1.2.20 | Shared storage for static web content |
| HA load balancer | ha1 | 10.1.8.0/24 | 10.1.8.14 | 10.1.8.20 | Runs HAProxy + Keepalived, master load-balancing node |
| HA load balancer | ha2 | 10.1.8.0/24 | 10.1.8.15 | 10.1.8.20 | Runs HAProxy + Keepalived, backup load-balancing node |

1.2 Basic Network Configuration (all nodes)

1.2.1 Nodes on 10.1.8.0/24 (web1, web2, web3, client1)
  • Web servers

 [root@web1-3 ~]#
 yum install -y nginx
 systemctl enable nginx --now
 echo Welcome to $(hostname) > /usr/share/nginx/html/index.html
  • LVS server (lb)

 [root@lb ~]# 
 yum install -y ipvsadm
 # systemctl enable ipvsadm
 # start the ipvsadm service only after the IPVS rules have been configured
  • Gateway configuration (web nodes and clients)

 # gateway for the 10.1.8.0/24 segment is 10.1.8.10 (web1-3, client1)
 nmcli connection modify ens33 ipv4.gateway 10.1.8.10
 nmcli connection up ens33  

 # gateway for the 10.1.1.0/24 segment is 10.1.1.10 (client2; lb needs no gateway)
 nmcli connection modify ens33 ipv4.gateway 10.1.1.10
 nmcli connection up ens33  

2. Web Service Deployment (web1, web2, web3)

  • Configure the web service

 [root@web1-3 ~]#

 # deploy the web service
 yum install -y nginx
 echo Welcome to $(hostname) > /usr/share/nginx/html/index.html 
 systemctl enable nginx.service --now

 # verify web service availability (from client1)
 [root@client1 ~ 10:56:20]# curl 10.1.8.11
 Welcome to web1.laoma.cloud
 [root@client1 ~ 10:57:49]# curl 10.1.8.12
 Welcome to web2.laoma.cloud
 [root@client1 ~ 10:57:53]# curl 10.1.8.13
 Welcome to web3.laoma.cloud

3. LVS Load-Balancer Deployment (lb node)

  • Configure LVS

 [root@lb ~ 10:55:16]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
 [root@lb ~ 10:59:15]# sysctl -p
 net.ipv4.ip_forward = 1

 [root@lb ~ 10:59:21]# yum install -y ipvsadm
 # the service will not start without its rules file, so create it first
 [root@lb ~ 10:59:38]# touch /etc/sysconfig/ipvsadm
 [root@lb ~ 10:59:38]# systemctl enable ipvsadm --now

 # create a round-robin (rr) virtual service and add the real servers in NAT mode (-m)
 [root@lb ~ 10:59:38]# ipvsadm -A -t 10.1.1.10:80 -s rr
 [root@lb ~ 11:00:03]# ipvsadm -a -t 10.1.1.10:80 -r 10.1.8.11 -m
 [root@lb ~ 11:00:03]# ipvsadm -a -t 10.1.1.10:80 -r 10.1.8.12 -m
 [root@lb ~ 11:00:03]# ipvsadm -a -t 10.1.1.10:80 -r 10.1.8.13 -m
 [root@lb ~ 11:00:03]# ipvsadm-save -n > /etc/sysconfig/ipvsadm

 # confirm the configuration took effect
 [root@lb ~ 11:00:04]# ipvsadm -Ln
 [root@lb ~ 11:00:59]# for i in {1..90};do curl -s 10.1.1.10 ;done|sort|uniq -c
      30 Welcome to web1.laoma.cloud
      30 Welcome to web2.laoma.cloud
      30 Welcome to web3.laoma.cloud
  • Switch the scheduler to weighted round-robin (wrr)

 [root@lb ~ 11:01:12]# ipvsadm -E -t 10.1.1.10:80 -s wrr
 [root@lb ~ 11:01:33]# ipvsadm -e -t 10.1.1.10:80 -r 10.1.8.12 -m -w 2
 [root@lb ~ 11:01:33]# ipvsadm -e -t 10.1.1.10:80 -r 10.1.8.13 -m -w 3
 [root@lb ~ 11:01:34]# ipvsadm -Ln

 # verify access (web1 keeps the default weight of 1, so the split is 1:2:3)
 [root@lb ~ 11:01:46]# for i in {1..90};do curl -s 10.1.1.10 ;done|sort|uniq -c
      15 Welcome to web1.laoma.cloud
      30 Welcome to web2.laoma.cloud
      45 Welcome to web3.laoma.cloud
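The 15/30/45 split follows directly from the weights: each real server gets total × weight / sum-of-weights requests. A quick sketch of that arithmetic, using a hypothetical `wrr_share` helper (not part of ipvsadm):

```shell
# expected requests per real server = total * weight / sum_of_weights
wrr_share() {  # args: total_requests weight weight_sum
  echo $(( $1 * $2 / $3 ))
}

total=90
sum=$(( 1 + 2 + 3 ))
echo "web1 $(wrr_share "$total" 1 "$sum")"   # web1 15
echo "web2 $(wrr_share "$total" 2 "$sum")"   # web2 30
echo "web3 $(wrr_share "$total" 3 "$sum")"   # web3 45
```

This only models the steady-state share; ipvsadm's wrr scheduler interleaves the servers rather than sending blocks of requests.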
Discussion

At this point, can client1 reach the back-end servers through 10.1.1.10?

Answer: no.

Packets from client1 are processed by the IPVS module on 10.1.1.10 and forwarded to a back-end web server. The web server sees the packet's source address, 10.1.8.21, which is on its own subnet, so it replies to client1 directly instead of routing the reply back through the LVS. Client1 then receives a reply from the real server's address rather than from 10.1.1.10 and discards it, so the connection never completes.

If it cannot, what configuration is needed to make access work?

 # on web1-3: add a /32 host route so replies to client1 go back through the LVS (10.1.8.10)
 [root@web1 ~ 10:57:32]# nmcli connection modify ens33 ipv4.routes '10.1.8.21 255.255.255.255 10.1.8.10'
 [root@web1 ~ 11:02:32]# nmcli connection up ens33

 [root@web2 ~ 10:57:32]# nmcli connection modify ens33 ipv4.routes '10.1.8.21 255.255.255.255 10.1.8.10'
 [root@web2 ~ 11:02:32]# nmcli connection up ens33

 [root@web3 ~ 10:57:32]# nmcli connection modify ens33 ipv4.routes '10.1.8.21 255.255.255.255 10.1.8.10'
 [root@web3 ~ 11:02:32]# nmcli connection up ens33
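This fix works because of longest-prefix matching: the /32 host route to client1 is more specific than the connected 10.1.8.0/24 route, so replies to 10.1.8.21 take the 10.1.8.10 next hop. A toy sketch of the selection rule (`best_route` is a hypothetical helper, not the kernel's actual algorithm):

```shell
# pick the matching route with the longest prefix (the number after "/")
best_route() {
  printf '%s\n' "$@" | sort -t/ -k2,2nr | head -n1
}

best_route "10.1.8.0/24 dev ens33" "10.1.8.21/32 via 10.1.8.10"
# prints: 10.1.8.21/32 via 10.1.8.10
```

On a real node the same question is answered by `ip route get 10.1.8.21`.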

 # clear the previous experiment's rules
 [root@lb ~ 11:02:09]# > /etc/sysconfig/ipvsadm
 [root@lb ~ 11:16:16]# systemctl restart ipvsadm.service 

 # in the hypervisor console, detach lb's second network adapter

 # run on web1-3 and client1
 # gateway for 10.1.8.0/24 becomes 10.1.8.20
 nmcli connection modify ens33 ipv4.gateway 10.1.8.20
 nmcli connection up ens33

 # run on client2
 # gateway for 10.1.1.0/24 becomes 10.1.1.20
 nmcli connection modify ens33 ipv4.gateway 10.1.1.20
 nmcli connection up ens33

4. LVS-DR Mode Deployment (all nodes)

4.1 Dummy Interface Configuration

  • Web node (web1, web2, web3) configuration

# add a dummy interface carrying the VIP; the prefix must be /32
# (the dummy was first added on client2 by mistake and is deleted again below)
[root@client2 ~ 11:20:31]# nmcli connection add type dummy ifname dummy con-name dummy ipv4.method manual ipv4.addresses 10.1.8.100/32
Connection "dummy" (cafa29cd-6424-4356-9dc0-edc6b044be44) successfully added.
[root@client2 ~ 11:32:10]# nmcli connection up dummy

[root@web1 ~ 11:20:31]# nmcli connection add type dummy ifname dummy con-name dummy ipv4.method manual ipv4.addresses 10.1.8.100/32
Connection "dummy" (c1d840b5-f6f9-4aa9-9688-2318a45628e1) successfully added.
[root@web1 ~ 11:34:09]# nmcli connection up dummy


[root@web2 ~ 11:20:31]# nmcli connection add type dummy ifname dummy con-name dummy ipv4.method manual ipv4.addresses 10.1.8.100/32
Connection "dummy" (c1d840b5-f6f9-4aa9-9688-2318a45628e1) successfully added.
[root@web2 ~ 11:34:09]# nmcli connection up dummy


[root@web3 ~ 11:20:31]# nmcli connection add type dummy ifname dummy con-name dummy ipv4.method manual ipv4.addresses 10.1.8.100/32
Connection "dummy" (c1d840b5-f6f9-4aa9-9688-2318a45628e1) successfully added.
[root@web3 ~ 11:34:09]# nmcli connection up dummy



[root@client2 ~ 11:32:11]# nmcli connection delete dummy 
Successfully deleted connection "dummy" (cafa29cd-6424-4356-9dc0-edc6b044be44).
[root@client2 ~ 11:35:56]# nmcli c
NAME   UUID                                  TYPE      DEVICE 
ens33  555eece5-af4c-45ae-bab9-c07e68d0e649  ethernet  ens33  

# verify the dummy interface on each web node
[root@web1 ~ 11:33:40]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.11/24 fe80::20c:29ff:feb2:fcae/64 
dummy0           DOWN           
dummy            UNKNOWN        10.1.8.100/32 fe80::6f18:c0cb:74d0:ea0d/64 


[root@web2 ~ 11:34:04]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.12/24 fe80::20c:29ff:fefe:b2c/64 
dummy0           DOWN           
dummy            UNKNOWN        10.1.8.100/32 fe80::f4aa:4d23:ac32:7129/64


[root@web3 ~ 11:36:41]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.13/24 fe80::20c:29ff:fe1d:4a9c/64 
dummy0           DOWN           
dummy            UNKNOWN        10.1.8.100/32 fe80::30ca:ef21:2f8:fe5c/64 
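One step most LVS-DR guides add on the real servers, which this lab omits: suppress ARP replies for the VIP so that only the lb node answers ARP for 10.1.8.100 on the shared segment. A commonly used sysctl fragment (assumption: the file name is arbitrary; apply it with `sysctl --system`):

```
# /etc/sysctl.d/90-lvs-dr.conf  -- on web1, web2, web3
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.ens33.arp_ignore = 1
net.ipv4.conf.ens33.arp_announce = 2
```

With `arp_ignore=1` the web nodes only answer ARP for addresses configured on the receiving interface, and `arp_announce=2` stops them from advertising the VIP as a source address.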

4.2 LVS-DR Rule Configuration (lb node)

[root@lb ~ 11:36:58]# nmcli connection add type dummy ifname dummy con-name dummy ipv4.method manual ipv4.addresses 10.1.8.100/32
Connection "dummy" (8cdb619b-460e-4b83-afe3-5f855a601d4d) successfully added.
[root@lb ~ 11:40:46]# nmcli connection up dummy
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@lb ~ 11:40:48]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.10/24 fe80::20c:29ff:fe55:d621/64 
dummy0           DOWN           
dummy            UNKNOWN        10.1.8.100/32 fe80::d941:efae:a684:17ea/64 

# add the DR virtual service; real servers added without -m use the default direct-routing (Route) mode
[root@lb ~ 11:40:59]# ipvsadm -A -t 10.1.8.100:80 -s rr
[root@lb ~ 11:42:26]# ipvsadm -a -t 10.1.8.100:80 -r 10.1.8.11:80
[root@lb ~ 11:42:26]# ipvsadm -a -t 10.1.8.100:80 -r 10.1.8.12:80
[root@lb ~ 11:42:26]# ipvsadm -a -t 10.1.8.100:80 -r 10.1.8.13:80
[root@lb ~ 11:42:26]# ipvsadm-save -n > /etc/sysconfig/ipvsadm
[root@lb ~ 11:42:27]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.10:80 wrr
  -> 10.1.8.11:80                 Masq    1      0          0         
  -> 10.1.8.12:80                 Masq    2      0          0         
  -> 10.1.8.13:80                 Masq    3      0          0         
TCP  10.1.8.100:80 rr
  -> 10.1.8.11:80                 Route   1      0          0         
  -> 10.1.8.12:80                 Route   1      0          0         
  -> 10.1.8.13:80                 Route   1      0          0         
# remove the leftover NAT virtual service
[root@lb ~ 11:42:59]# ipvsadm -D -t 10.1.1.10:80
[root@lb ~ 11:43:20]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.8.100:80 rr
  -> 10.1.8.11:80                 Route   1      0          0         
  -> 10.1.8.12:80                 Route   1      0          0         
  -> 10.1.8.13:80                 Route   1      0          0         

# DR-mode availability test (client1 and client2)
[root@client2 ~ 11:36:58]# curl http://10.1.8.100
Welcome to web3.laoma.cloud
[root@client2 ~ 11:44:04]# curl http://10.1.8.100
Welcome to web2.laoma.cloud
[root@client2 ~ 11:44:06]# curl http://10.1.8.100
Welcome to web1.laoma.cloud

[root@client1 ~ 11:36:58]# curl http://10.1.8.100
Welcome to web3.laoma.cloud
[root@client1 ~ 11:45:11]# curl http://10.1.8.100
Welcome to web2.laoma.cloud
[root@client1 ~ 11:45:12]# curl http://10.1.8.100
Welcome to web1.laoma.cloud


[root@client1 ~ 11:45:12]# for i in {1..90};do curl -s 10.1.8.100 ;done|sort|uniq
Welcome to web1.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web3.laoma.cloud

[root@client2 ~ 11:46:31]# for i in {1..90};do curl -s 10.1.8.100 ;done|sort|uniq -c
     30 Welcome to web1.laoma.cloud
     30 Welcome to web2.laoma.cloud
     30 Welcome to web3.laoma.cloud

5. Keepalived High-Availability Deployment (web1/web2 at first, later moved to ha1/ha2)

5.1 Initial Web-Node HA Configuration (web1 master, web2 backup)

# remove the DR-mode dummy interfaces, then reboot
[root@web1 ~ 14:02:06]# nmcli connection delete dummy 
Successfully deleted connection "dummy" (6a249f96-28ab-41c5-8f22-e9f0f3e395bc).

[root@lb ~ 14:05:28]# nmcli connection delete dummy 


reboot 
  • Configure web2 as the backup node

[root@web2 ~]# 
yum install -y keepalived
cp /etc/keepalived/keepalived.conf{,.ori}
vim /etc/keepalived/keepalived.conf

 ! Configuration File for keepalived

global_defs {
   router_id web2
}
vrrp_instance nginx {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}

[root@web2 ~ 14:17:54]# systemctl enable keepalived.service --now

[root@web2 ~ 14:18:09]# systemctl restart keepalived.service 

[root@web2 ~ 14:19:28]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.12/24 10.1.8.100/24 fe80::20c:29ff:fefe:b2c/64 


[root@client1 ~ 14:05:28]# while true;do curl -s http://10.1.8.100/;sleep 1;done
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
......

[root@client2 ~ 14:05:27]# while true;do curl -s http://10.1.8.100/;sleep 1;done
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
.....
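With only the backup running, web2 holds the VIP; a higher-priority master will preempt it as soon as it starts. VRRP's election rule (highest priority wins; ties are broken in favor of the higher IP) can be sketched with a hypothetical `pick_master` helper — a toy model, not Keepalived code:

```shell
# Toy model of VRRP master election. Each argument is "ip,priority".
# Highest priority wins; on a tie the higher IP string wins
# (a simplification of the real numeric address comparison).
pick_master() {
  printf '%s\n' "$@" | sort -t, -k2,2nr -k1,1r | head -n1 | cut -d, -f1
}

pick_master "10.1.8.11,200" "10.1.8.12,100"   # both alive: prints 10.1.8.11
pick_master "10.1.8.12,100"                   # master down: prints 10.1.8.12
```

This is why the configs below give web1 priority 200 and web2 priority 100 for the same virtual_router_id.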
5.1.2 Master Node Configuration (web1)
[root@web2 ~ 14:19:38]# scp /etc/keepalived/keepalived.conf web1:/etc/keepalived/keepalived.conf
Warning: Permanently added 'web1,10.1.8.11' (ECDSA) to the list of known hosts.
keepalived.conf                                            100%  320   998.8KB/s   00:00   


# web1 node configuration: adapt the copied file (master for 10.1.8.100)
[root@web1 ~ 14:16:44]# vim /etc/keepalived/keepalived.conf
[root@web1 ~ 14:36:08]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id web1
}

vrrp_instance web {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}

[root@web1 ~ 14:36:19]# systemctl restart keepalived.service 
# the client loop now shows
.......
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
......
[root@web2 ~ 14:34:11]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.12/24 fe80::20c:29ff:fefe:b2c/64 


[root@web1 ~ 14:38:42]# systemctl stop keepalived.service 

# the client loop shows
.......
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
......

# Dual-VIP configuration — web1 node (master for 10.1.8.100, backup for 10.1.8.200)
[root@web1 ~ 14:49:14]# vim /etc/keepalived/keepalived.conf
[root@web1 ~ 14:49:18]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id web1
}

vrrp_instance web_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}
vrrp_instance web_2 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.200/24
    }
}



# web2 node configuration (backup for 10.1.8.100, master for 10.1.8.200)
[root@web2 ~ 14:37:35]# vim /etc/keepalived/keepalived.conf
[root@web2 ~ 14:50:03]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id web2
}


vrrp_instance web_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}

vrrp_instance web_2 {
    state MASTER
    interface ens33
    virtual_router_id 52
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.200/24
    }
}



[root@web1 ~ 14:51:38]# systemctl restart keepalived.service 
[root@web1 ~ 14:52:10]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.11/24 10.1.8.100/24 fe80::20c:29ff:feb2:fcae/64 

[root@web2 ~ 14:51:12]# systemctl restart keepalived.service 
[root@web2 ~ 14:52:16]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.12/24 10.1.8.200/24 fe80::20c:29ff:fefe:b2c/64 


# verify: HA failover test (client1)
[root@client1 ~ 14:40:09]# while true;do curl -s http://10.1.8.100/;sleep 1;done
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web1.laoma.cloud
^C
[root@client1 ~ 14:53:26]# while true;do curl -s http://10.1.8.200/;sleep 1;done
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web2.laoma.cloud
^C

# stop keepalived on web1, then check web2
[root@web1 ~ 14:52:26]# systemctl stop keepalived.service 
[root@web2 ~ 14:55:19]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.12/24 10.1.8.200/24 10.1.8.100/24 fe80::20c:29ff:fefe:b2c/64 
# the newly added second NIC appears as "有线连接 1" (Wired connection 1); rename it to ens36 and give it a 10.1.2.0/24 address
[root@web1 ~ 14:55:07]# nmcli c
NAME        UUID                                  TYPE      DEVICE 
有线连接 1  57b39c8d-d270-3ce9-95e4-48b5823381a6  ethernet  ens36  
ens33       555eece5-af4c-45ae-bab9-c07e68d0e649  ethernet  ens33  

[root@web1 ~ 15:05:28]# nmcli connection modify 有线连接\ 1 ipv4.method manual ipv4.addresses 10.1.2.11/24 connection.id ens36;nmcli connection up ens36

[root@web2 ~ 15:07:10]# nmcli connection modify 有线连接\ 1 ipv4.method manual ipv4.addresses 10.1.2.12/24 connection.id ens36;nmcli connection up ens36

[root@web3 ~ 15:07:31]# nmcli connection modify 有线连接\ 1 ipv4.method manual ipv4.addresses 10.1.2.13/24 connection.id ens36;nmcli connection up ens36



[root@web1 ~ 15:07:04]# systemctl stop keepalived
[root@web2 ~ 15:07:04]# systemctl stop keepalived
[root@web3 ~ 15:07:04]# systemctl stop keepalived
[root@lb ~ 14:05:28]# init 0

[root@web1-3 ~ 15:13:37]# ip -br a


Clone three VMs from the cluster-tpl template: nfs (switch its NIC to the VMnet host-only 2 network), plus ha1 and ha2 (NIC unchanged).

# nfs (10.1.2.100) sits on the 10.1.2.0/24 segment
Add a third NIC to the router VM (set to VMnet host-only 2), for three NICs in total.

# run on the nfs VM console
hostnamectl set-hostname nfs.lyk.cloud
nmcli c
nmcli connection modify ens33 ipv4.addresses 10.1.2.100/24 ipv4.gateway 10.1.2.20
nmcli connection up ens33
bash
ping 1.1.1.1


# run via Xshell
[root@router ~ 15:26:51]# nmcli c
NAME        UUID                                  TYPE      DEVICE 
ens33       555eece5-af4c-45ae-bab9-c07e68d0e649  ethernet  ens33  
ens36       c4a81250-34ce-3a67-a3ea-bacfb0289b97  ethernet  ens36  
有线连接 1  e7758ba8-0110-3e86-883c-8aa1bbdf1f2b  ethernet  --     
[root@router ~ 15:27:05]# nmcli connection modify 有线连接\ 1 ipv4.method manual ipv4.addresses 10.1.2.20/24 connection.id ens37
[root@router ~ 15:29:39]# nmcli connection up ens37
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/11)

[root@router ~ 15:31:07]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
ens33            UP             10.1.8.20/24 fe80::20c:29ff:fe62:b97a/64 
ens36            UP             10.1.1.20/24 fe80::f529:7e26:4c51:56cc/64 
ens37            UP             10.1.2.20/24 fe80::1334:fcc3:8f69:5e5a/64 

[root@nfs ~ 15:28:08]# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=2 ttl=127 time=144 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=127 time=186 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=127 time=390 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=127 time=145 ms
......

# run on the ha1 VM console
hostnamectl set-hostname ha1.lyk.cloud
nmcli connection modify ens33 ipv4.addresses 10.1.8.14/24 ipv4.gateway 10.1.8.20
nmcli connection up ens33
bash

# run on the ha2 VM console
hostnamectl set-hostname ha2.lyk.cloud
nmcli connection modify ens33 ipv4.addresses 10.1.8.15/24 ipv4.gateway 10.1.8.20
nmcli connection up ens33
bash

[root@client2 ~ 15:52:26]# vim /etc/hosts
[root@client2 ~ 15:52:35]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

########### cluster ################
10.1.8.100      www.laoma.cloud         www
10.1.8.10       lb.laoma.cloud          lb
10.1.8.11       web1.laoma.cloud        web1
10.1.8.12       web2.laoma.cloud        web2
10.1.8.13       web3.laoma.cloud        web3
10.1.8.14       ha1.laoma.cloud         ha1
10.1.8.15       ha2.laoma.cloud         ha2
10.1.8.20       router.laoma.cloud      router
10.1.8.21       client1.laoma.cloud     client1
10.1.1.21       client2.laoma.cloud     client2

# push the hosts file from client2 to the other nodes
[root@client2 ~ 15:59:47]# for host in 10.1.8.1{1..5} 10.1.8.20 10.1.8.21 ; do scp /etc/hosts $host:/etc/hosts; done

6. NFS Shared Storage Deployment (nfs and web nodes)

6.1 NFS Server Configuration (nfs node)

6.1.1 Install the NFS Service and Create the Shared Directory
  • The three web nodes get a second NIC on VMnet host-only 2 (configured as ens36 above)

# install the software on web1-3 and nfs
yum install -y nfs-utils

# append the nfs line to /etc/hosts, then push it out again
[root@client2 ~ 16:01:05]# vim /etc/hosts
[root@client2 ~ 16:00:37]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

########### cluster ################
10.1.8.100      www.laoma.cloud         www
10.1.8.10       lb.laoma.cloud          lb
10.1.8.11       web1.laoma.cloud        web1
10.1.8.12       web2.laoma.cloud        web2
10.1.8.13       web3.laoma.cloud        web3
10.1.8.14       ha1.laoma.cloud         ha1
10.1.8.15       ha2.laoma.cloud         ha2
10.1.8.20       router.laoma.cloud      router
10.1.8.21       client1.laoma.cloud     client1
10.1.1.21       client2.laoma.cloud     client2
10.1.2.100      nfs.laoma.cloud         nfs
  • Create the shared directory and export it

[root@nfs ~ 16:19:02]# mkdir /var/www/html/ -p
[root@nfs ~ 16:19:55]# echo Welcome to www.lyk.cloud > /var/www/html/index.html
[root@nfs ~ 16:20:01]# echo '/var/www 10.1.2.0/24(rw,sync)' >> /etc/exports
[root@nfs ~ 16:20:06]# systemctl enable nfs-server.service --now
[root@nfs ~ 16:21:12]# systemctl status nfs
[root@nfs ~ 16:27:46]# systemctl restart nfs-server

[root@web1 ~ 16:22:14]# systemctl disable nginx.service --now
Removed symlink /etc/systemd/system/multi-user.target.wants/nginx.service.
[root@web1 ~ 16:26:11]# systemctl start httpd


[root@web2 ~ 16:26:32]# yum install -y httpd
[root@web2 ~ 16:26:54]# echo Welcome to $(hostname) > /var/www/html/index.html 
[root@web2 ~ 16:26:54]# systemctl enable httpd.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.


[root@web3 ~ 16:27:03]# systemctl disable nginx --now
Removed symlink /etc/systemd/system/multi-user.target.wants/nginx.service.
[root@web3 ~ 16:27:09]# systemctl enable httpd.service --now

# verify the NFS export (from web1)
[root@web1 ~ 16:28:00]# showmount -e nfs
Export list for nfs:
/var/www 10.1.2.0/24

# Mount the NFS export on the web nodes (web1, web2, web3)
# Configure a persistent mount (/etc/fstab)
[root@web1 ~ 16:35:19]# vim /etc/fstab 
# append as the last line:
echo 'nfs.laoma.cloud:/var/www /var/www/ nfs defaults         0 0' >> /etc/fstab
# then verify:
[root@web1 ~ 17:01:10]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Fri Aug  1 15:45:32 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=b54b3764-2b2b-4a76-a0ec-83e308071ae5 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
nfs.laoma.cloud:/var/www /var/www/ nfs defaults         0 0


[root@web1 ~ 17:01:10]# mount /var/www/
[root@web1 ~ 17:02:20]# df -h /var/www/
Filesystem                 Size  Used Avail Use% Mounted on
nfs.laoma.cloud:/var/www    50G  1.6G   49G   4% /var/www
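A note beyond the lab transcript: for network filesystems in /etc/fstab it is common to add the `_netdev` mount option, so the mount waits for the network at boot, and `nofail`, so boot continues if the NFS server is unreachable. The line would then read:

```
nfs.laoma.cloud:/var/www  /var/www/  nfs  defaults,_netdev,nofail  0 0
```

Without these, a web node rebooted while nfs is down can hang waiting for the mount.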


# web2 and web3
[root@web2 ~ 16:26:54]# echo 'nfs.laoma.cloud:/var/www /var/www/ nfs defaults         0 0' >> /etc/fstab
[root@web2 ~ 16:42:10]# mount /var/www/
[root@web2 ~ 17:10:11]# df -h /var/www/
Filesystem                 Size  Used Avail Use% Mounted on
nfs.laoma.cloud:/var/www    50G  1.6G   49G   4% /var/www

[root@web3 ~ 16:26:54]# echo 'nfs.laoma.cloud:/var/www /var/www/ nfs defaults         0 0' >> /etc/fstab
[root@web3 ~ 16:42:10]# mount /var/www/
[root@web3 ~ 17:10:11]# df -h /var/www/
Filesystem                 Size  Used Avail Use% Mounted on
nfs.laoma.cloud:/var/www    50G  1.6G   49G   4% /var/www

# verify (from client2)
[root@client2 ~ 16:58:02]# curl 10.1.8.13
Welcome to www.lyk.cloud
[root@client2 ~ 17:12:12]# curl 10.1.8.12
Welcome to www.lyk.cloud
[root@client2 ~ 17:12:15]# curl 10.1.8.11
Welcome to www.lyk.cloud

7. HAProxy Load-Balancer Deployment (ha1, ha2)

7.1 Basic HAProxy Configuration (ha1, ha2)

# unmount NFS so each web node serves its own page again
[root@web1 ~ 17:09:54]# umount /var/www 
[root@web2 ~ 17:09:54]# umount /var/www 
[root@web3 ~ 17:09:54]# umount /var/www 

# verify
[root@client2 ~ 17:12:17]# curl 10.1.8.11
Welcome to web1.laoma.cloud
[root@client2 ~ 17:14:47]# curl 10.1.8.12
Welcome to web2.laoma.cloud
[root@client2 ~ 17:14:50]# curl 10.1.8.13
Welcome to web3.laoma.cloud
7.1.1 Install HAProxy and Back Up Its Configuration
[root@ha1-2 ~]# 

yum install -y haproxy
# back up the haproxy configuration file
cp /etc/haproxy/haproxy.cfg{,.ori}

# append the following to the end of the haproxy configuration file
echo '
########### web 代理 ###########
frontend http_front
    bind *:80
    use_backend http_back
backend http_back
    balance     roundrobin
    server  node1 10.1.8.11:80 check
    server  node2 10.1.8.12:80 check
    server  node3 10.1.8.13:80 check
' >> /etc/haproxy/haproxy.cfg

# enable and start the service
systemctl enable haproxy.service --now

# verify HAProxy load balancing (from client2)
[root@client2 ~ 17:14:51]# curl 10.1.8.14
Welcome to web1.laoma.cloud
[root@client2 ~ 17:24:28]# curl 10.1.8.14
Welcome to web2.laoma.cloud
[root@client2 ~ 17:24:30]# curl 10.1.8.14
Welcome to web3.laoma.cloud
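HAProxy also ships a built-in statistics page, handy for watching the `check` state of each backend server. A minimal fragment that could be appended to haproxy.cfg (assumptions: port 8404 is free and the URI is arbitrary):

```
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 5s
```

After a reload, browsing to http://10.1.8.14:8404/stats shows each server's up/down status and check history.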

7.2 HAProxy + Keepalived High-Availability Configuration (ha1 master, ha2 backup)

7.2.1 Master Node Configuration (ha1)
[root@ha1 ~]# yum install -y keepalived
[root@ha1 ~ 17:25:35]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@ha1 ~ 17:25:35]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id ha1
}

vrrp_instance nginx {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}



[root@ha1 ~ 17:26:07]# systemctl enable keepalived.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.


7.2.2 Backup Node Configuration (ha2)
yum install -y keepalived ipvsadm
cp /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf


! Configuration File for keepalived

global_defs {
   router_id ha2
}

vrrp_instance nginx {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lyk@123
    }
    virtual_ipaddress {
        10.1.8.100/24
    }
}

systemctl enable keepalived.service --now
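As configured, Keepalived only fails over when the whole node (or keepalived itself) dies; if haproxy crashes while keepalived stays up, the VIP remains on a broken balancer. The usual remedy, not shown in this lab, is a `vrrp_script` that tracks the haproxy process. A sketch for both nodes (assumption: `weight -30` is chosen so a failed check drops ha1's priority of 110 below ha2's 100):

```
vrrp_script chk_haproxy {
    script "/usr/sbin/pidof haproxy"
    interval 2
    weight -30
}

vrrp_instance nginx {
    # ...existing state/interface/priority settings unchanged...
    track_script {
        chk_haproxy
    }
}
```

With this in place, killing haproxy on the master moves the VIP to the backup within a few advertisement intervals.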

8. Cluster Function and High-Availability Tests

8.1 Functional Tests (client nodes)

[root@client1 ~ 17:30:02]# while true ;do curl -s www.laoma.cloud;sleep 1;done
# responses rotate through web1 / web2 / web3

# stop keepalived on ha1 — the client loop shows no visible change (the VIP fails over to ha2)
[root@ha1 ~ 17:26:12]# systemctl stop keepalived.service

# stop httpd on web3
[root@web3 ~ 17:20:45]# systemctl stop httpd.service 

Welcome to web1.laoma.cloud
Welcome to web2.laoma.cloud

......
# web3 no longer appears in the rotation

# remount NFS on web1
[root@web1 ~ 17:19:57]# mount -a
Welcome to web2.laoma.cloud
Welcome to www.lyk.cloud
Welcome to web2.laoma.cloud
....
# web1 now returns the shared www page

# restore web1 and web3
[root@web1 ~ 17:35:32]# umount /var/www 
[root@web3 ~ 17:34:32]# systemctl restart httpd.service 

8.2 High-Availability Failover Tests

8.2.1 Load-Balancer Master Failure Test
[root@client2 ~ 17:29:31]# while true ;do curl -s http://10.1.8.100;sleep 1;done
Welcome to web3.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web3.laoma.cloud
......

Test 1: stop the keepalived service on ha1.

[root@ha1 ~ 17:32:39]# systemctl stop keepalived.service

[root@client2 ~ 17:29:31]# while true ;do curl -s http://10.1.8.100;sleep 1;done
Welcome to web3.laoma.cloud
Welcome to web2.laoma.cloud
Welcome to web1.laoma.cloud
Welcome to web3.laoma.cloud
......
no impact on the client

Result: the client does not notice the failure; access to the cluster continues normally.

Test 2: restore the keepalived service on ha1.

[root@ha1 ~ 17:46:04]# systemctl start keepalived.service
Result: the client does not notice the change; access to the cluster continues normally.

8.2.2 Back-End Server Failover Test

Test 1: stop httpd.service on web2 and watch client access.

[root@web2 ~ 17:48:27]# systemctl stop httpd.service 

Result: after roughly 15 seconds, HAProxy's health check removes web2 from the backend pool.

Test 2: start httpd.service on web2 and watch client access.

[root@web2 ~]# systemctl start httpd.service 

Result: after roughly 5 seconds, HAProxy adds web2 back into the backend pool.
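These removal and recovery times are governed by HAProxy's health-check parameters: by default a check runs every 2 s (`inter`), a server is marked down after 3 consecutive failures (`fall`) and up again after 2 consecutive successes (`rise`). They can be tuned per server line, e.g.:

```
server  node1 10.1.8.11:80 check inter 2s fall 3 rise 2
```

Shorter `inter` and smaller `fall` detect failures faster at the cost of more check traffic and more sensitivity to transient errors.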

