MySQL High-Availability Deployment

Published: 2025-07-26

I. MHA (single master, multiple slaves)

1. Environment preparation (all nodes)

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0


# Configure hostnames and the hosts file
# Master
hostnamectl set-hostname mysql-master
cat >> /etc/hosts << EOF
192.168.10.71 mha-manager
192.168.10.72 mysql-master
192.168.10.73 mysql-slave1
192.168.10.74 mysql-slave2
EOF

# Run the same steps on slave1 and slave2, substituting the matching hostname in hostnamectl
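Once /etc/hosts has been populated on every machine, a quick loop confirms that each cluster name actually resolves before moving on (a sketch; run it from any node after the edits above):

```shell
# Verify that every cluster hostname resolves (via /etc/hosts or DNS)
for host in mha-manager mysql-master mysql-slave1 mysql-slave2; do
  getent hosts "$host" >/dev/null && echo "$host resolves" || echo "$host NOT FOUND"
done
```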


# Install the Perl dependencies (this is a CentOS/RHEL guide, so use yum;
# perl-Log-Dispatch and perl-Parallel-ForkManager come from EPEL)
yum install -y epel-release
yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
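MHA aborts at startup if any of these Perl modules fails to load, so it is worth verifying them right after the install (a quick check, assuming `perl` is on the PATH):

```shell
# Each module prints OK if it loads; a MISSING line means its package did not install
for mod in DBD::mysql Config::Tiny Log::Dispatch Parallel::ForkManager; do
  perl -M"$mod" -e 1 2>/dev/null && echo "$mod OK" || echo "$mod MISSING"
done
```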

2. Deploy MySQL master-slave replication (master + slaves)

See the MySQL replication deployment guide (CSDN blog: Mysql主从复制部署_mysql replication部署操作).
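For reference, the core steps are: create a replication account on the master, note the binlog coordinates, then point each slave at the master. A minimal sketch using this guide's topology and the `yang`/`admin` credentials from app1.cnf; the binlog file and position below are placeholders, and /tmp/change_master.sql is just an illustrative scratch file:

```shell
# Statements for the slaves. MASTER_LOG_FILE/MASTER_LOG_POS are placeholders;
# substitute the values reported by SHOW MASTER STATUS on the master.
cat > /tmp/change_master.sql <<'EOF'
CHANGE MASTER TO
  MASTER_HOST='192.168.10.72',
  MASTER_USER='yang',
  MASTER_PASSWORD='admin',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
START SLAVE;
EOF

# On the master: create the replication account referenced by repl_user in app1.cnf
#   mysql -uroot -p -e "GRANT REPLICATION SLAVE ON *.* TO 'yang'@'192.168.10.%' IDENTIFIED BY 'admin';"
#   mysql -uroot -p -e "SHOW MASTER STATUS\G"
# On each slave:
#   mysql -uroot -p < /tmp/change_master.sql
#   mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'
```

Both `Slave_IO_Running` and `Slave_SQL_Running` must report `Yes` before MHA is layered on top.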

3. Deploy MHA Manager (management node)

(1) Install MHA Manager

# The manager package depends on the node package, so download and install both here
wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.noarch.rpm
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-manager-0.58-0.el7.noarch.rpm

(2) Configure passwordless SSH login

See the passwordless SSH login guide (CSDN blog: ssh免密登录).
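The linked guide boils down to generating a key pair on mha-manager and pushing the public key to every node. A sketch (note that `masterha_check_ssh` also tests node-to-node SSH, so the same distribution step must be repeated from every MySQL node):

```shell
# On mha-manager: generate a key pair if one does not already exist
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa -q

# Push the public key to every node (you are prompted for each root password once)
for host in mysql-master mysql-slave1 mysql-slave2; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done
```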

(3) Create the MHA configuration file

mkdir -p /etc/mha/masterha
vi /etc/mha/masterha/app1.cnf

Configuration:

[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/var/lib/mysql
user=mha_admin  # MHA admin user
password=admin  # MHA admin user password
ping_interval=1  # health-check interval (seconds)
repl_user=yang  # replication user
repl_password=admin  # replication user password
ssh_user=root  # SSH user
secondary_check_script=masterha_secondary_check -s 192.168.10.73 -s 192.168.10.74    # double-check master failure from the slave hosts
master_ip_failover_script=/usr/local/bin/master_ip_failover
report_script=/usr/local/bin/send_report

[server1]
hostname=192.168.10.72
candidate_master=1
check_repl_delay=0

[server2]
hostname=192.168.10.73
port=3306
candidate_master=1  # candidate master
check_repl_delay=0  # ignore replication delay when electing a new master

[server3]
hostname=192.168.10.74
no_master=1

4. Deploy MHA Node (all MySQL nodes)

# 1. Install MHA Node
# Run on every MySQL node (master, slave1, slave2)
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.noarch.rpm
yum install -y mha4mysql-node-0.58-0.el7.noarch.rpm


# 2. Create the MHA admin user
# Run on the master only: with replication running, CREATE USER propagates to the slaves
CREATE USER 'mha_admin'@'%' IDENTIFIED BY 'admin';
GRANT ALL PRIVILEGES ON *.* TO 'mha_admin'@'%';
FLUSH PRIVILEGES;

# On MySQL 8.0, use mysql_native_password (MHA's Perl DBD::mysql driver may not support caching_sha2_password)
CREATE USER 'mha_admin'@'%' IDENTIFIED WITH mysql_native_password BY 'admin';
GRANT ALL PRIVILEGES ON *.* TO 'mha_admin'@'%';
FLUSH PRIVILEGES;

5. Configure the VIP failover script

Create /usr/local/bin/master_ip_failover on the mha-manager node:

mkdir -p /var/log/masterha   # the script writes its log here
vi /usr/local/bin/master_ip_failover
chmod +x /usr/local/bin/master_ip_failover

Script contents:

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

my $vip = '192.168.10.100';   # VIP address
my $brdc = '192.168.10.255';  # broadcast address
my $ifdev = 'eth0';           # physical interface; the VIP is bound to it as label eth0:1
my $key = '1';
my $ssh_user = 'root';
my $ssh_port = 22;

# MHA passes named options (--command=..., --orig_master_host=...), not positional
# arguments, so parse them with Getopt::Long
use Getopt::Long;
my ( $command, $ssh_host, $new_master_host ) = ( '', '', '' );
GetOptions(
    'command=s'             => \$command,
    'ssh_user=s'            => \$ssh_user,
    'orig_master_host=s'    => \$ssh_host,
    'orig_master_ip=s'      => \my $orig_master_ip,
    'orig_master_port=i'    => \my $orig_master_port,
    'new_master_host=s'     => \$new_master_host,
    'new_master_ip=s'       => \my $new_master_ip,
    'new_master_port=i'     => \my $new_master_port,
    'new_master_user=s'     => \my $new_master_user,
    'new_master_password=s' => \my $new_master_password,
);

# Open the log file (the directory must exist; see mkdir -p above)
open(my $logfh, '>>', '/var/log/masterha/master_ip_failover.log') or die "Can't open log file: $!";

sub log_write {
    my $msg = shift;
    print $logfh "$msg\n";
    print STDERR "$msg\n";
}

log_write "\n\n### Starting master_ip_failover script at " . localtime() . " ###";
log_write "Command: $command";
log_write "SSH Host: $ssh_host";
log_write "New Master Host: $new_master_host";

# Dispatch on the requested command
if ($command eq "stop" || $command eq "stopssh") {
    # On stop, remove the VIP from the old master
    my $exit_code = 0;
    if ($ssh_host) {
        log_write "Removing VIP from old master via SSH...";
        my $output = `ssh -p $ssh_port $ssh_user\@$ssh_host "/sbin/ip addr del $vip/24 dev $ifdev"`;
        log_write "Output: $output";
        $exit_code = $? >> 8;
    } else {
        log_write "Removing VIP directly...";
        my $output = `/sbin/ip addr del $vip/24 dev $ifdev`;
        log_write "Output: $output";
        $exit_code = $? >> 8;
    }
    log_write "Finished removing VIP. Exit code: $exit_code";
    exit $exit_code;
}
elsif ($command eq "start") {
    # On start, bind the VIP on the new master
    my $exit_code = 0;
    log_write "Adding VIP to new master...";
    my $output = `ssh -p $ssh_port $ssh_user\@$new_master_host "/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key"`;
    log_write "Output: $output";
    $exit_code = $? >> 8;
    
    if ($exit_code != 0) {
        log_write "Failed to add VIP. Exiting with code $exit_code";
        exit $exit_code;
    }
    
    # Refresh neighboring ARP caches so clients learn the new MAC for the VIP
    log_write "Sending ARP refresh...";
    $output = `ssh -p $ssh_port $ssh_user\@$new_master_host "/sbin/arping -U -I $ifdev -c 5 $vip"`;
    log_write "Output: $output";
    $exit_code = $? >> 8;
    
    log_write "Finished adding VIP. Exit code: $exit_code";
    exit $exit_code;
}
elsif ($command eq "status") {
    # Check whether the VIP is currently bound on the given host
    log_write "Checking VIP status on $ssh_host...";
    my $output = `ssh -p $ssh_port $ssh_user\@$ssh_host "/sbin/ip addr show dev $ifdev | grep $vip"`;
    if ($? >> 8 == 0) {
        log_write "VIP exists on $ssh_host";
        exit 0;
    } else {
        log_write "VIP does not exist on $ssh_host";
        exit 1;
    }
} else {
    log_write "Unknown command: $command";
    exit 1;
}

6. Verify and start MHA

# Check SSH connectivity
# Run on the mha-manager node
masterha_check_ssh --conf=/etc/mha/masterha/app1.cnf

# Check replication status
masterha_check_repl --conf=/etc/mha/masterha/app1.cnf

# Start MHA Manager
nohup masterha_manager --conf=/etc/mha/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

# Check MHA status
masterha_check_status --conf=/etc/mha/masterha/app1.cnf

Notes:
Binlog retention: keep enough binary logs on the master (set expire-logs-days generously) so a failover never needs binlogs that have already been purged.
VIP management: in production, consider pairing MHA with Keepalived or LVS for VIP failover.
Monitoring and alerting: watch the MHA Manager log (/var/log/masterha/app1/manager.log) and the MySQL replication status.
Tuning: adjust ping_interval (probe frequency) and secondary_check_script (secondary failure checks) to your environment.
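Besides automatic failover, MHA also supports planned (online) switchover for maintenance via `masterha_master_switch`. A hedged sketch using this guide's hosts (`switch_cmd` is just an illustrative variable; stop the running masterha_manager first and run the command interactively on mha-manager):

```shell
# Promote slave1 while the current master is still alive; with
# --orig_master_is_new_slave the old master is re-attached as a slave
switch_cmd="masterha_master_switch --conf=/etc/mha/masterha/app1.cnf \
--master_state=alive --new_master_host=192.168.10.73 \
--orig_master_is_new_slave --running_updates_limit=10000"

echo "$switch_cmd"   # review, then run it on mha-manager
```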

II. PXC (multi-master)

1. Environment preparation

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0


# Configure hostnames and the hosts file
# Node 1
hostnamectl set-hostname pxc-node1
echo "192.168.4.1 pxc-node1" >> /etc/hosts
echo "192.168.4.2 pxc-node2" >> /etc/hosts
echo "192.168.4.3 pxc-node3" >> /etc/hosts

# Run the same steps on nodes 2 and 3, substituting the matching hostname in hostnamectl

2. Install the PXC software (all nodes)

# 1. Install dependency packages
yum install -y epel-release
yum install -y libev lsof perl-Compress-Raw-Bzip2 perl-Compress-Raw-Zlib perl-DBD-MySQL perl-DBI perl-Digest perl-Digest-MD5 perl-IO-Compress perl-Net-Daemon perl-PlRPC qpress socat openssl openssl-devel

# 2. Remove mariadb-libs (if present; it conflicts with the Percona packages)
rpm -e mariadb-libs --nodeps

# 3. Install XtraBackup (download the rpm from the Percona repository first)
yum -y install percona-xtrabackup-24-2.4.18-1.el7.x86_64.rpm

# 4. Create the mysql group and user
groupadd -r mysql
useradd -M -s /bin/false -r -g mysql mysql

# 5. Install Percona XtraDB Cluster
yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
yum install -y Percona-XtraDB-Cluster-57

3. Configure PXC (all nodes)

(1) Create the data directories and set ownership

mkdir -p /data/local/percona-xtradb-cluster/data
chown -R mysql:mysql /data/local/percona-xtradb-cluster/data
mkdir -p /data/local/percona-xtradb-cluster/run
chown -R mysql:mysql /data/local/percona-xtradb-cluster/run
mkdir -p /data/logs/mysql
chown -R mysql:mysql /data/logs/mysql
touch /data/logs/mysql/error.log

(2) Edit the configuration file

The default /etc/my.cnf can be deleted or re-pointed; write the following (node 1 shown; on nodes 2 and 3 change server_id, wsrep_node_name and wsrep_node_address):

[client]
socket = /data/local/percona-xtradb-cluster/run/mysql.sock

[mysqld]
server_id = 1
datadir = /data/local/percona-xtradb-cluster/data
socket = /data/local/percona-xtradb-cluster/run/mysql.sock
pid-file = /data/local/percona-xtradb-cluster/run/mysql.pid
log-error = /data/logs/mysql/error.log
wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address = gcomm://192.168.4.1,192.168.4.2,192.168.4.3
wsrep_cluster_name = test-pxc
wsrep_node_name = pxc-node1
wsrep_node_address = 192.168.4.1
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:password
pxc_strict_mode = ENFORCING
binlog_format = ROW
innodb_autoinc_lock_mode = 2
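Only server_id, wsrep_node_name and wsrep_node_address differ between nodes, so the per-node fragment can be generated instead of hand-edited on each machine. A sketch for node 2 (/tmp/pxc-node.cnf is an illustrative scratch path; merge the lines into that node's my.cnf):

```shell
# Per-node values for pxc-node2 (adjust for each node)
NODE_ID=2
NODE_NAME=pxc-node2
NODE_IP=192.168.4.2

cat > /tmp/pxc-node.cnf <<EOF
server_id=$NODE_ID
wsrep_node_name=$NODE_NAME
wsrep_node_address=$NODE_IP
EOF

cat /tmp/pxc-node.cnf
```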

4. Start the PXC cluster

# Bootstrap the first node (node 1 here)
systemctl start mysql@bootstrap.service

# Start the other nodes (node 2 and node 3)
systemctl start mysql
# Leave autostart disabled so a rebooted node does not rejoin (or re-bootstrap) unattended
systemctl disable mysql

5. Initial configuration and testing

Get the temporary password and change it:
        On any node, read the temporary password with grep password /data/logs/mysql/error.log, then log in to MySQL and change it.
Create the SST account:
        After logging in to MySQL, create the account used for SST transfers (it must match the wsrep_sst_auth setting in my.cnf):

grant all privileges on *.* to 'sst'@'localhost' identified by 'password';

Check the cluster status:
        Log in to MySQL on any node and run show status like 'wsrep%';. Confirm that wsrep_cluster_size shows the expected node count, wsrep_local_state is 4 (Synced), and wsrep_ready is ON.
Test the cluster:
        Create a database and a table on one node, insert some data, and check that the other nodes see it. Then stop MySQL on one node, verify the cluster status and data on the remaining nodes, restart the stopped node, and confirm it resynchronizes.
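The write/read probe above can be scripted. One detail worth building in: with pxc_strict_mode=ENFORCING, writes to tables without a primary key are rejected, so the test table must have one. A sketch (/tmp/pxc_probe.sql is an illustrative scratch file):

```shell
# SQL for the probe; the table has a primary key, as ENFORCING strict mode requires
cat > /tmp/pxc_probe.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS pxc_test;
CREATE TABLE IF NOT EXISTS pxc_test.t1 (id INT PRIMARY KEY, note VARCHAR(50)) ENGINE=InnoDB;
INSERT INTO pxc_test.t1 VALUES (1, 'written on node1');
EOF

# On node 1:  mysql -uroot -p < /tmp/pxc_probe.sql
# On node 2:  mysql -uroot -p -e "SELECT * FROM pxc_test.t1;"   # the row should be visible
```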

