Keepalived + LVS + Nginx + NFS High-Availability Architecture
Built in DR (Direct Routing) mode
Keepalived is a service used in cluster management to keep a cluster highly available and to prevent single points of failure.
LVS (Linux Virtual Server) works in the kernel, so it is highly efficient and can handle large numbers of concurrent requests; it supports multiple load-balancing algorithms and working modes to suit different application scenarios.
- Lab objectives:
- A client can reach the content on the NFS business server by accessing the LVS HA cluster VIP 192.168.98.100.
- When the LVS-master load balancer goes out of service, LVS-backup takes over so user access to the business is not affected.
- When Web1 or Web2 is shut down, the other server continues to serve the business normally.
Every host must have the firewall and SELinux turned off:
systemctl disable --now firewalld
Temporarily disable SELinux:
setenforce 0
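setenforce 0 only lasts until the next reboot; to disable SELinux permanently, a sketch assuming the standard /etc/selinux/config layout:
# Permanently disable SELinux (takes effect on the next boot)
sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config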
Host | Role | Software | IP |
---|---|---|---|
nfs | NFS business server | nfs-utils | 192.168.98.138 |
Web1 | Web service | nfs-utils, nginx | 192.168.98.41 |
Web2 | Web service | nfs-utils, nginx | 192.168.98.42 |
LVS-master | Load balancer | ipvsadm, keepalived | 192.168.98.31, VIP: 192.168.98.100 |
LVS-backup | Load balancer | ipvsadm, keepalived | 192.168.98.32, VIP: 192.168.98.100 |
client | Client | - | 192.168.98. |
Note: in DR mode each RS keeps its normal gateway (192.168.98.2 here); only in NAT mode would the RS gateway point at the LVS IP.
1. NFS business server (192.168.98.138)
- Mount the install media and install nfs-utils
- Create the shared directory
- Configure /etc/exports and start the service:
systemctl start nfs-server
- Show the exported shares (start the service first, otherwise showmount fails with an RPC error):
showmount -e <NFS host IP>
- Write an .html file to share with the Web hosts:
echo $(hostname -I) > /nfs/web/index.html
- Then do the related nginx configuration on the Web hosts
# 1. Mount the install media and install nfs-utils
[root@nfs ~]# mount /dev/sr0 /mnt/
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@nfs ~]# dnf install nfs-utils -y
# 2. Create the shared directory
[root@nfs ~]# mkdir /nfs/web -p
# 3. Configure /etc/exports and start the service
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/nfs/web 192.168.98.41(rw,no_root_squash) 192.168.98.42(rw,no_root_squash)
Or:
/nfs/web 192.168.98.*(rw,no_root_squash) # rw = writable; sync can also be added
[root@nfs ~]# systemctl start nfs-server
# 4. Show the exported shares (the service must be started first, otherwise showmount fails with an RPC error)
[root@nfs ~]# showmount -e 192.168.98.138
Export list for 192.168.98.138:
/nfs/web 192.168.98.42,192.168.98.41
# 5. Write an .html file to share with the Web hosts
[root@nfs ~]# echo $(hostname -I) > /nfs/web/index.html
[root@nfs ~]# cd /nfs/web/
[root@nfs web]# ls
index.html
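If /etc/exports is edited again later while nfs-server is running, the exports can be re-read without restarting the service; a sketch using the standard exportfs tool:
# Re-export everything in /etc/exports (-r, -a) and list the result (-v)
exportfs -rav
exportfs -v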
2. Web service cluster (setting up the RS servers)
- IP:
[root@Web1 ~]# nmcli device show ens160
GENERAL.DEVICE: ens160
GENERAL.TYPE: ethernet
GENERAL.HWADDR: 00:0C:29:BA:BD:60
GENERAL.MTU: 1500
GENERAL.STATE: 100 (connected)
GENERAL.CONNECTION: ens160
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER: on
IP4.ADDRESS[1]: 192.168.98.41/24
IP4.GATEWAY: 192.168.98.2
IP4.ROUTE[1]: dst = 192.168.98.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]: dst = 0.0.0.0/0, nh = 192.168.98.2, mt = 100
IP4.DNS[1]: 223.5.5.5
IP6.ADDRESS[1]: fe80::20c:29ff:feba:bd60/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 1024
- Mount the install media and install nfs-utils and nginx
- Mount the NFS share on the site root /usr/share/nginx/html/:
mount -t nfs <NFS host IP>:<NFS shared dir> <local mount point>
- Start the services and check that the files are in sync with the nfs host
- Add the kernel parameters (the arp_ignore/arp_announce settings, applied in step 4 below):
vim /etc/sysctl.conf
sysctl -p
- Web1 (192.168.98.41)
# 1. Mount the install media and install nfs-utils and nginx
[root@Web1 ~]# mount /dev/sr0 /mnt/
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@Web1 ~]# dnf install nginx nfs-utils -y
# 2. Mount the site root on the NFS share
[root@Web1 ~]# mount -t nfs 192.168.98.138:/nfs/web /usr/share/nginx/html/
[root@Web1 ~]# df /usr/share/nginx/html/
Filesystem 1K-blocks Used Available Use% Mounted on
192.168.98.138:/nfs/web 46587904 1754112 44833792 4% /usr/share/nginx/html
# 3. Start the services and verify the files are in sync with the nfs host
[root@Web1 ~]# systemctl start nfs-server nginx
[root@Web1 ~]# showmount -e 192.168.98.138
Export list for 192.168.98.138:
/nfs/web 192.168.98.42,192.168.98.41
[root@Web1 ~]# cd /usr/share/nginx/html/
[root@Web1 html]# ls
index.html
[root@Web1 ~]# curl localhost
nfs 192.168.98.138
You can configure the second Web host efficiently by cloning: shut Web1 down, clone it, then change the clone's hostname and IP (start Web2 first and only then bring Web1 back up, to avoid an IP conflict).
- Web2 (192.168.98.42)
[root@Web2 ~]# mount /dev/sr0 /mnt/
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@Web2 ~]# dnf install nginx nfs-utils -y
[root@Web2 ~]# mount -t nfs 192.168.98.138:/nfs/web /usr/share/nginx/html/
[root@Web2 ~]# df /usr/share/nginx/html/
Filesystem 1K-blocks Used Available Use% Mounted on
192.168.98.138:/nfs/web 46587904 1754112 44833792 4% /usr/share/nginx/html
[root@Web2 ~]# systemctl start nfs-server
[root@Web2 ~]# systemctl start nginx
[root@Web2 ~]# ls /usr/share/nginx/html/
index.html
[root@Web2 ~]# curl localhost
nfs 192.168.98.138
- Check which directory nginx serves web pages from:
[root@Web1 ~]# rpm -ql nginx | grep html
/usr/share/nginx/html/404.html
/usr/share/nginx/html/50x.html
/usr/share/nginx/html/icons
/usr/share/nginx/html/icons/poweredby.png
/usr/share/nginx/html/index.html
/usr/share/nginx/html/nginx-logo.png
/usr/share/nginx/html/poweredby.png
/usr/share/nginx/html/system_noindex_logo.png
Enable the services at boot
[root@Web1 ~]# systemctl enable nginx nfs-server
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
[root@Web2 ~]# systemctl enable nginx nfs-server
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
# After a reboot, verify that the services started automatically
[root@Web1 ~]# ps -ef | grep nginx
root 1742 1 0 13:28 ? 00:00:00 nginx: master process /usr/sbin/nginx
nginx 1743 1742 0 13:28 ? 00:00:00 nginx: worker process
nginx 1744 1742 0 13:28 ? 00:00:00 nginx: worker process
nginx 1745 1742 0 13:28 ? 00:00:00 nginx: worker process
nginx 1746 1742 0 13:28 ? 00:00:00 nginx: worker process
root 2142 1501 0 15:07 pts/0 00:00:00 grep --color=auto nginx
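A quicker check than grepping the process list, assuming systemd as above:
# Confirm both units are enabled and running
systemctl is-enabled nginx nfs-server
systemctl is-active nginx nfs-server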
Automatic mounting
- Add the NFS mount to /etc/fstab so it persists across reboots:
[root@Web1 ~]# vim /etc/fstab
[root@Web1 ~]# cat /etc/fstab
........
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=a656d423-6d9a-4a0a-b794-9161d8d66b0b /boot xfs defaults 0 0
UUID=EDBD-EDDF /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap none swap defaults 0 0
192.168.98.138:/nfs/web /usr/share/nginx/html/ nfs defaults 0 0
[root@Web1 ~]# systemctl daemon-reload
[root@Web1 ~]# mount -a # mount everything listed in fstab
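Because this is a network filesystem, the mount can race with the network at boot. A hedged variant of the fstab line uses the standard _netdev and nofail options (an alternative, not what was used above):
192.168.98.138:/nfs/web /usr/share/nginx/html nfs defaults,_netdev,nofail 0 0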
Configure nginx (to tell apart the content served by Web1 and Web2)
[root@nfs ~]# cd /nfs/web/
[root@nfs web]# ls
index.html
[root@nfs web]# mv index.html index1.html
[root@nfs web]# echo "Web1 index.html" > index1.html
[root@nfs web]# ls
index1.html
[root@nfs web]# echo "Web2 index.html" > index2.html
[root@nfs web]# ls
index1.html index2.html
The shared directory now has two files but no index.html (it was renamed), so a plain request fails:
[root@Web1 ~]# curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
[root@Web1 ~]# vim /etc/nginx/conf.d/web1.conf
[root@Web1 ~]# cat /etc/nginx/conf.d/web1.conf
server {
listen 80;
server_name 192.168.98.41;
location / {
root /usr/share/nginx/html;
index index1.html;
}
}
[root@Web1 ~]# systemctl restart nginx
[root@Web1 ~]# curl 192.168.98.41
Web1 index.html
[root@Web1 ~]# curl localhost # localhost does not match the server_name (the nginx config used the IP), so the request falls to the default server, whose index.html no longer exists
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
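Web2 needs the matching configuration pointing at index2.html. A sketch following the same pattern (the file name web2.conf is an assumption):
# /etc/nginx/conf.d/web2.conf -- mirror of web1.conf for Web2
server {
listen 80;
server_name 192.168.98.42;
location / {
root /usr/share/nginx/html;
index index2.html;
}
}
After systemctl restart nginx on Web2, curl 192.168.98.42 should return "Web2 index.html".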
3. LVS hosts (Keepalived + LVS)
- IP:
[root@master ~]# nmcli device show ens160
GENERAL.DEVICE: ens160
GENERAL.TYPE: ethernet
GENERAL.HWADDR: 00:0C:29:2A:3F:65
GENERAL.MTU: 1500
GENERAL.STATE: 100 (connected)
GENERAL.CONNECTION: ens160
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER: on
IP4.ADDRESS[1]: 192.168.98.31/24
IP4.GATEWAY: 192.168.98.2
IP4.ROUTE[1]: dst = 192.168.98.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]: dst = 0.0.0.0/0, nh = 192.168.98.2, mt = 100
IP4.DNS[1]: 223.5.5.5
IP6.ADDRESS[1]: fe80::20c:29ff:fe2a:3f65/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 1024
- Install ipvsadm and keepalived
- Initialize the rules file, then start the service:
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl start ipvsadm
- Add the virtual IP (keepalived adds it automatically via virtual_ipaddress; to add it by hand for testing, on the right interface:
ifconfig <interface> 192.168.98.100 netmask 255.255.255.255 up
ip addr add 192.168.98.100 dev <interface>
then curl the VIP on the LVS host)
- The keepalived configuration file already contains the ipvsadm setup, as sketched below
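For reference, a sketch of the manual ipvsadm commands that the keepalived virtual_server block below sets up automatically (there is no need to run these by hand):
# -A adds the virtual service; -s rr = round robin; -p 50 = 50 s persistence
ipvsadm -A -t 192.168.98.100:80 -s rr -p 50
# -a adds a real server; -r = RS address; -g = DR (gatewaying); -w = weight
ipvsadm -a -t 192.168.98.100:80 -r 192.168.98.41:80 -g -w 1
ipvsadm -a -t 192.168.98.100:80 -r 192.168.98.42:80 -g -w 1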
- LVS-master (192.168.98.31)
# 1. Install ipvsadm and keepalived
[root@master ~]# dnf install keepalived ipvsadm -y
# 2. Initialize the rules file and start the service
[root@master ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm
[root@master ~]# systemctl start ipvsadm
# 3. Configure keepalived
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id lvs-master
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.98.100
}
}
#LVS configuration; the VIP must be specified
virtual_server 192.168.98.100 80 {
delay_loop 6 #health-check interval, in seconds
lb_algo rr #scheduling algorithm: rr = round robin, wrr = weighted round robin
lb_kind DR #forwarding mode: DR here; NAT, DR and TUN are supported
persistence_timeout 50 #persistence time in seconds; keeps a client on the same RS, 0 disables it
# The above is equivalent to: ipvsadm -A -t 192.168.98.100:80 -s rr -p 50 (-t = TCP service)
protocol TCP #protocol of the virtual service
# Real servers, given as IP + port; equivalent to: ipvsadm -a -t 192.168.98.100:80 -r <RS-IP>:80 -g -w 1 (-g = DR)
real_server 192.168.98.41 80 {
weight 1 #weight, default 1
TCP_CHECK { #TCP health check
connect_timeout 3 #connection timeout in seconds; if the RS does not answer within 3 s it is considered down
retry 3 #number of retries
delay_before_retry 3 #delay between retries
}
}
real_server 192.168.98.42 80 {
weight 1
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
}
# Everything configured above amounts to configuring ipvsadm by hand
[root@master ~]# systemctl restart ipvsadm keepalived
# 4. Verify the LVS rules
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.98.100:80 rr persistent 50 # persistence enabled
-> 192.168.98.41:80 Route 1 0 0
-> 192.168.98.42:80 Route 1 0 0
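So that the directors come back correctly after a reboot, both master and backup should also enable the services at boot; a sketch, mirroring the systemctl enable step used on the Web hosts:
systemctl enable keepalived ipvsadm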
- LVS-backup (192.168.98.32)
[root@backup ~]# dnf install keepalived ipvsadm -y
[root@backup ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm
[root@backup ~]# systemctl start ipvsadm
[root@backup ~]# systemctl start keepalived
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id lvs-backup
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.98.100
}
}
virtual_server 192.168.98.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.98.41 80 {
weight 1
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
real_server 192.168.98.42 80 {
weight 1
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
}
}
}
[root@backup ~]# systemctl restart keepalived ipvsadm
[root@backup ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.98.100:80 rr persistent 50
-> 192.168.98.41:80 Route 1 0 0
-> 192.168.98.42:80 Route 1 0 0
4. Back on the Web hosts, make the following changes
- Add the virtual IP on the loopback:
ifconfig lo:1 192.168.98.100 netmask 255.255.255.255 broadcast 192.168.98.100 up
- Configure the kernel parameters in /etc/sysctl.conf
- Add the host route:
route add -host 192.168.98.100 dev lo:1
route -n
[root@Web1 ~]# ifconfig lo:1 192.168.98.100 netmask 255.255.255.255 broadcast 192.168.98.100 up
[root@Web1 ~]# ip a show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 192.168.98.100/32 brd 192.168.98.100 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[root@Web2 ~]# ifconfig lo:1 192.168.98.100 netmask 255.255.255.255 broadcast 192.168.98.100 up
[root@Web2 ~]# ip a show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 192.168.98.100/32 brd 192.168.98.100 scope global lo:1
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
- Kernel parameters (note that the Web1 block below omits net.ipv4.conf.all.arp_ignore = 1, which Web2 sets; for consistent ARP suppression it should be set on both hosts)
[root@Web1 ~]# cat >> /etc/sysctl.conf <<EOF
> net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.ip_forward=0
> EOF
[root@Web1 ~]# sysctl -p
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.ip_forward = 0
[root@Web2 ~]# cat >> /etc/sysctl.conf <<EOF
> net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.ip_forward=0
> EOF
[root@Web2 ~]# sysctl -p
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.ip_forward = 0
- Add the host route
[root@Web1 ~]# route add -host 192.168.98.100 dev lo:1
[root@Web1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.98.2 0.0.0.0 UG 100 0 0 ens160
192.168.98.0 0.0.0.0 255.255.255.0 U 100 0 0 ens160
192.168.98.100 0.0.0.0 255.255.255.255 UH 0 0 0 lo
[root@Web2 ~]# route add -host 192.168.98.100 dev lo:1
[root@Web2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.98.2 0.0.0.0 UG 100 0 0 ens160
192.168.98.0 0.0.0.0 255.255.255.0 U 100 0 0 ens160
192.168.98.100 0.0.0.0 255.255.255.255 UH 0 0 0 lo
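The ifconfig and route commands above do not survive a reboot. One common way to persist them is rc.local; a sketch, assuming the rc-local service is available on this distribution:
# Append the VIP setup to rc.local and make it executable (hypothetical persistence approach)
cat >> /etc/rc.d/rc.local <<'EOF'
ifconfig lo:1 192.168.98.100 netmask 255.255.255.255 broadcast 192.168.98.100 up
route add -host 192.168.98.100 dev lo:1
EOF
chmod +x /etc/rc.d/rc.local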
5. Client test (browse to http://192.168.98.100)
Stopping the nginx service
- Stop Web1:
[root@Web1 ~]# systemctl stop nginx
# The ipvsadm table on the master; the health check has removed Web1:
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.98.100:80 rr persistent 50
-> 192.168.98.42:80 Route 1 0 0
Web1 itself is no longer reachable (requests to it error out); the VIP continues to be answered by Web2.
- Restart nginx on Web1:
[root@Web1 ~]# systemctl start nginx
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.98.100:80 rr persistent 50
-> 192.168.98.41:80 Route 1 0 0
-> 192.168.98.42:80 Route 1 0 0
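From the client, a quick loop shows which RS answers. Note that with persistence_timeout 50 the same client sticks to one RS for 50 seconds, so alternation only shows up across different clients or after the timeout. A sketch:
# Send six requests to the VIP, one per second
for i in $(seq 1 6); do curl -s http://192.168.98.100; sleep 1; done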
Stopping the keepalived service (HA failover)
Current state: all hosts up and all services running.
# The VIP sits on the master host
[root@master ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:2a:3f:65 brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.98.31/24 brd 192.168.98.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.98.100/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2a:3f65/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@backup ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:25:66:fb brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.98.32/24 brd 192.168.98.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe25:66fb/64 scope link noprefixroute
valid_lft forever preferred_lft forever
- Stop keepalived on the master host
[root@master ~]# systemctl stop keepalived
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:2a:3f:65 brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.98.31/24 brd 192.168.98.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2a:3f65/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# The VIP fails over to the backup host
[root@backup ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:25:66:fb brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.98.32/24 brd 192.168.98.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 192.168.98.100/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe25:66fb/64 scope link noprefixroute
valid_lft forever preferred_lft forever
The site remains accessible throughout.
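Finally, restarting keepalived on the master should move the VIP back, since the master's priority (100) is higher than the backup's (90) and keepalived preempts by default. A quick check (a sketch):
[root@master ~]# systemctl start keepalived
[root@master ~]# ip a show ens160 | grep 192.168.98.100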