【Linux】ZooKeeper Deployment

Published: 2024-11-02

ZooKeeper Deployment Modes

  • Standalone mode: ZooKeeper runs on a single server; suitable for test environments.
  • Pseudo-cluster mode: multiple ZooKeeper instances run on one physical machine.
  • Cluster mode: ZooKeeper runs on a cluster of machines; suitable for production. This cluster of machines is called an "ensemble".

1. Standalone Mode

1.1 Installation

# Download the release tarball
# If the server has no Internet access, download it manually and upload it; official site: https://zookeeper.apache.org/releases.html
[root@S-CentOS app]#  wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz

# Extract
[root@S-CentOS app]#  tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /app/

# Rename
[root@S-CentOS app]# mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7

# Create the data and log directories
[root@S-CentOS app]# cd zookeeper-3.5.7
[root@S-CentOS zookeeper-3.5.7]# mkdir -p data logs

# Edit the configuration
[root@S-CentOS zookeeper-3.5.7]# cd conf
[root@S-CentOS conf]# cp zoo_sample.cfg zoo.cfg
[root@S-CentOS conf]# vim zoo.cfg

zoo.cfg

dataDir=/app/zookeeper-3.5.7/data

dataLogDir=/app/zookeeper-3.5.7/logs

Although this is optional, it is best to move the data directory out of /tmp so that ZooKeeper cannot fill up the root partition.
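The two dataDir/dataLogDir edits above can also be applied non-interactively. A minimal sketch, using a throwaway file under /tmp as a stand-in for the real conf/zoo.cfg:

```shell
# Sketch: point dataDir away from /tmp and add a dataLogDir with sed/echo.
# The cfg path is a stand-in for this demo; use your real conf/zoo.cfg.
cfg=/tmp/zoo-demo.cfg
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > "$cfg"
sed -i 's|^dataDir=.*|dataDir=/app/zookeeper-3.5.7/data|' "$cfg"
echo 'dataLogDir=/app/zookeeper-3.5.7/logs' >> "$cfg"
grep '^data' "$cfg"
```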

1.2 Starting the ZK Server

# Start ZK
[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# Check the ZK process
[root@S-CentOS zookeeper-3.5.7]# jps -l
143856 org.apache.zookeeper.server.quorum.QuorumPeerMain
143900 sun.tools.jps.Jps

# Check ZK status
[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: standalone

# Stop ZK
[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

To run it in the foreground so you can watch the server's output:

[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh start-foreground 

If ZK fails to start:

[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... FAILED TO START

Check the log:

[root@S-CentOS zookeeper-3.5.7]# tail -f logs/zookeeper-root-server-rlkj-gw-ecsb-04.out
2022-02-10 18:59:55,422 [myid:] - INFO  [main:QuorumPeerConfig@135] - Reading configuration from: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
2022-02-10 18:59:55,624 [myid:] - ERROR [main:ZooKeeperServerMain@79] - Unable to start AdminServer, exiting abnormally
org.apache.zookeeper.server.admin.AdminServer$AdminServerException: Problem starting AdminServer on address 0.0.0.0, port 8080 and command URL /commands
        at org.apache.zookeeper.server.admin.JettyAdminServer.start(JettyAdminServer.java:107)
        at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:138)
        at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
        at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Caused by: java.io.IOException: Failed to bind to /0.0.0.0:8080
        at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
        at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
        at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
        at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.eclipse.jetty.server.Server.doStart(Server.java:385)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.apache.zookeeper.server.admin.JettyAdminServer.start(JettyAdminServer.java:103)
        ... 5 more
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:438)
        at sun.nio.ch.Net.bind(Net.java:430)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:225)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
        ... 12 more
Unable to start AdminServer, exiting abnormally

Analysis: since version 3.5.5, ZooKeeper ships an embedded Jetty container that runs an AdminServer, which listens on port 8080 by default. The AdminServer exposes ZooKeeper status information. If another program on the machine (e.g. Tomcat) already occupies port 8080, startup fails with "Starting zookeeper ... FAILED TO START".

Solutions:

① Edit zoo.cfg and disable the AdminServer

admin.enableServer=false

② Edit zoo.cfg and change the AdminServer port

admin.serverPort=9090
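Either fix can be scripted. A minimal sketch that drops any existing admin.* lines and appends the chosen setting (the cfg path is a stand-in; point it at your real conf/zoo.cfg):

```shell
# Sketch: apply fix ① (or ②) to zoo.cfg non-interactively.
cfg=/tmp/zoo-admin-demo.cfg
printf 'tickTime=2000\nclientPort=2181\n' > "$cfg"
# drop any previous admin.* lines, then append the chosen setting
grep -v '^admin\.' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
echo 'admin.enableServer=false' >> "$cfg"   # fix ①
# echo 'admin.serverPort=9090' >> "$cfg"    # fix ② (alternative)
cat "$cfg"
```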

1.3 ZooKeeper Client

# Start the client
[root@S-CentOS zookeeper-3.5.7]# bin/zkCli.sh
Connecting to localhost:2181
2022-04-01 14:03:02,279 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.7-...
2022-04-01 14:03:02,282 [myid:] - INFO  [main:Environment@109] - Client environment:host.name=rlkj-gw-ecsb-04
2022-04-01 14:03:02,282 [myid:] - INFO  [main:Environment@109] - Client environment:java.version=1.8.0_91
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.home=/app/jdk1.8.0_91/jre
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.class.path=/app/zookeeper...
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.library.path=/usr/...
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:java.compiler=<NA>
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:os.name=Linux
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:os.arch=amd64
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:os.version=4.4.186-1.el7.elrepo.x86_64
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:user.name=appuser
2022-04-01 14:03:02,284 [myid:] - INFO  [main:Environment@109] - Client environment:user.home=/home/appuser
2022-04-01 14:03:02,285 [myid:] - INFO  [main:Environment@109] - Client environment:user.dir=/app/zookeeper-3.5.7
2022-04-01 14:03:02,285 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.free=235MB
2022-04-01 14:03:02,286 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.max=241MB
2022-04-01 14:03:02,286 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.total=241MB
2022-04-01 14:03:02,289 [myid:] - INFO  [main:ZooKeeper@868] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3f8f9dd6
2022-04-01 14:03:02,294 [myid:] - INFO  [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-04-01 14:03:02,300 [myid:] - INFO  [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
2022-04-01 14:03:02,308 [myid:] - INFO  [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
Welcome to ZooKeeper!
2022-04-01 14:03:02,314 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-04-01 14:03:02,365 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /127.0.0.1:48086, server: localhost/127.0.0.1:2181
[zk: localhost:2181(CONNECTING) 0] 2022-04-01 14:03:02,412 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x104616124490000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null


# Quit the client
[zk: localhost:2181(CONNECTED) 5] quit
WATCHER::
WatchedEvent state:Closed type:None path:null
2022-04-01 14:43:43,222 [myid:] - INFO  [main:ZooKeeper@1422] - Session: 0x104616124490000 closed
2022-04-01 14:43:43,222 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@524] - EventThread shut down for session: 0x104616124490000

(1) Log messages:

[main:Environment@109]: reports the various environment settings and which JARs the client is using.

[main:ZooKeeper@868] - Initiating client connection: the message itself says what is happening, and the extra details show that the client is attempting to connect to one of the servers in the connect string it was given, localhost/127.0.0.1:2181.

[myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1394]: confirms that the client has established a TCP connection to the local ZooKeeper server. The log lines that follow confirm that the session was established and report its session ID: 0x104616124490000. Finally, the client library notifies the application through a SyncConnected event; the application must implement a Watcher object to handle this event.

(2) Session-establishment flow:

① The client starts and begins establishing a session.

② The client tries to connect to localhost/127.0.0.1:2181.

③ The connection succeeds, and the server begins initializing the new session.

④ Session initialization completes successfully.

⑤ The server sends the client a SyncConnected event.

1.4 ZK Nodes

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]

At this point the znode tree is empty except for the node /zookeeper, under which ZooKeeper keeps the metadata tree the service itself needs.

(1) Create a node

[zk: localhost:2181(CONNECTED) 1] create /workers ""
Created /workers
[zk: localhost:2181(CONNECTED) 2] ls /
[workers, zookeeper]

(2) Delete a node

[zk: localhost:2181(CONNECTED) 3] delete /workers
[zk: localhost:2181(CONNECTED) 4] ls /
[zookeeper]

2. Pseudo-Cluster Mode

2.1 Create data and log directories for the three servers

Using zoo1 as the example; repeat for the other two instances.

[root@S-CentOS zookeeper-3.5.7]# mkdir -p pseudo/zoo1
[root@S-CentOS zookeeper-3.5.7]# cd pseudo/zoo1
[root@S-CentOS zoo1]# mkdir data logs conf

2.2 Configure the server IDs

The other instances get 2, 3, …

[root@S-CentOS zoo1]# cd data
[root@S-CentOS data]# echo 1 > myid
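The per-instance directory and myid setup can be done in one loop. A minimal sketch; base here is a throwaway path standing in for /app/zookeeper-3.5.7/pseudo used above:

```shell
# Sketch: create data/logs/conf dirs and the myid file for all three instances.
base=/tmp/zookeeper-pseudo
for i in 1 2 3; do
    mkdir -p "$base/zoo$i/data" "$base/zoo$i/logs" "$base/zoo$i/conf"
    echo "$i" > "$base/zoo$i/data/myid"
done
cat "$base"/zoo*/data/myid
```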

2.3 Edit the configuration files

[root@S-CentOS zookeeper-3.5.7]# cp conf/zoo_sample.cfg pseudo/zoo1/conf/zoo1.cfg
[root@S-CentOS zookeeper-3.5.7]# vim pseudo/zoo1/conf/zoo1.cfg

zoo1.cfg

dataDir=/app/zookeeper-3.5.7/pseudo/zoo1/data
dataLogDir=/app/zookeeper-3.5.7/pseudo/zoo1/logs
clientPort=2181
server.1=localhost:2881:3881
server.2=localhost:2882:3882
server.3=localhost:2883:3883

Note: there must be no trailing space after server.1=localhost:2881:3881, otherwise startup fails.

Configuration parameters explained

server.A=B:C:D

A is a number identifying which server this is.

In cluster mode a file named myid is placed in the dataDir directory. It contains a single value: this server's A. On startup ZooKeeper reads this file and compares the value with the entries in zoo.cfg to determine which server it is.

B is the server's address or hostname.

C is the TCP port used for quorum communication, i.e. the port on which this server, as a Follower, exchanges data with the cluster's Leader.

D is the TCP port used for leader election: if the cluster's Leader dies, the servers communicate with each other over this port to elect a new Leader.
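To make the four fields concrete, here is a small sketch that splits a server.A=B:C:D entry with plain shell parameter expansion (the entry value is taken from the config above):

```shell
# Sketch: pull A, B, C and D out of a server.A=B:C:D entry.
entry='server.2=localhost:2882:3882'
id=${entry#server.}; id=${id%%=*}         # A: server id     -> 2
rest=${entry#*=}
host=${rest%%:*}                          # B: host          -> localhost
ports=${rest#*:}
quorum_port=${ports%%:*}                  # C: quorum port   -> 2882
election_port=${ports#*:}                 # D: election port -> 3882
echo "id=$id host=$host quorum=$quorum_port election=$election_port"
```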

2.4 Start the ZooKeeper instances

Start the first server:

[root@S-CentOS data]# cd /app/zookeeper-3.5.7/bin
[root@S-CentOS bin]# ./zkServer.sh start /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/pseudo/zoo1/conf/zoo1.cfg
Starting zookeeper ... STARTED

Check its log:

… [myid:1] - INFO  [QuorumPeer[myid=1]/…:2181:QuorumPeer@670] - LOOKING
… [myid:1] - INFO  [QuorumPeer[myid=1]/…:2181:FastLeaderElection@740] - New election. My id = 1, proposed zxid=0x0
… [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 …, LOOKING (my state)
… [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@368] - Cannot open channel to 2 at election address /127.0.0.1:3334
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)

This server keeps trying to reach the other servers and failing. Now start a second server:

[root@S-CentOS bin]# ./zkServer.sh start /app/zookeeper-3.5.7/pseudo/zoo2/conf/zoo2.cfg
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/pseudo/zoo2/conf/zoo2.cfg
Starting zookeeper ... STARTED

Now a quorum can be formed. The second server's log (zookeeper.out):

… [myid:2] - INFO  [QuorumPeer[myid=2]/…:2182:Leader@345] - LEADING - LEADER ELECTION TOOK - 279
… [myid:2] - INFO  [QuorumPeer[myid=2]/…:2182:FileTxnSnapLog@240] - Snapshotting: 0x0 to ./data/version-2/snapshot.0

The log shows that server 2 has been elected leader.

Meanwhile, server 1's log:

… [myid:1] - INFO  [QuorumPeer[myid=1]/…:2181:QuorumPeer@738] - FOLLOWING
… [myid:1] - INFO  [QuorumPeer[myid=1]/…:2181:ZooKeeperServer@162] - Created server …
… [myid:1] - INFO  [QuorumPeer[myid=1]/…:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 212

Server 1 is now active as a follower of server 2.

With two of the three servers up we have a legal quorum, and from this moment the service is available.
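The reason two of three is enough: a quorum is a strict majority, floor(N/2)+1 servers. A small sketch of the arithmetic:

```shell
# Sketch: majority quorum size for an ensemble of N servers is N/2 + 1
# (integer division), which is why 2 of our 3 instances suffice.
for n in 1 2 3 4 5; do
    echo "ensemble of $n -> quorum of $(( n / 2 + 1 ))"
done
```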

Now we need to configure a client to connect to the service. The connect string must list the host:port pairs of all the servers that make up the service.

For this example the connect string is "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183" (we include the third server even though we never start it, because this illustrates some useful ZooKeeper properties).

Access the cluster with zkCli.sh:

[root@S-CentOS bin]# ./zkCli.sh -server localhost:2181,localhost:2182,localhost:2183

After connecting, we will see a message of this form:

[myid:localhost:2182] - INFO [main-SendThread(localhost:2182):ClientCnxn$SendThread@1394] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2182, sessionid = 0x20461e0bb3a0000, negotiated timeout = 30000

Note the port number in the log message, 2182 in this case.

If we stop the client with Ctrl+C and restart it several times, we will see the port number alternate between 2181 and 2182.

We may also notice failed attempts to connect to 2183, followed by a successful connection to one of the other servers.

The client connects to the servers in the connect string in random order, which lets ZooKeeper act as a simple load balancer. However, a client cannot express a preference for which server it connects to. For example, with an ensemble of five ZooKeeper servers, three on the US west coast and two on the east coast, we can ensure clients only connect to nearby servers by listing only the east-coast servers in the east-coast clients' connect strings, and only the west-coast servers in the west-coast clients' connect strings.
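The random-order behavior can be illustrated in shell: pick a random entry from the connect string, roughly what the client library does on each (re)connect. shuf is from GNU coreutils:

```shell
# Sketch: random server selection from the connect string above.
servers="127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183"
pick=$(echo "$servers" | tr ',' '\n' | shuf -n 1)
echo "connecting to $pick"
```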

2.5 Errors

  • org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Address unresolved: 10.200.202.41:3882

There is a trailing space after a server.N line (e.g. after server.1=localhost:2881:3881); remove it and restart.
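The offending whitespace is invisible in an editor, but easy to find and strip with grep/sed. A sketch against a throwaway file (point cfg at your real config):

```shell
# Sketch: detect and strip trailing whitespace on server.* lines, the usual
# cause of the "Address unresolved" exception.
cfg=/tmp/zoo-ws-demo.cfg
printf 'server.1=localhost:2881:3881 \nserver.2=localhost:2882:3882\n' > "$cfg"
grep -nE '^server\.[0-9]+=.*[[:space:]]$' "$cfg" || true   # show offending lines
sed -i 's/[[:space:]]*$//' "$cfg"                          # strip them in place
```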

3. Cluster Mode

Assume three servers with IPs 192.168.10.11, 192.168.10.12, and 192.168.10.13.

Deploy ZK:

# Download the release tarball
# If the server has no Internet access, download it manually and upload it; official site: https://zookeeper.apache.org/releases.html
[root@S-CentOS app]#  wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz

# Extract
[root@S-CentOS app]#  tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz

# Rename
[root@S-CentOS app]# mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7

# Create the data and log directories
[root@S-CentOS app]# cd zookeeper-3.5.7
[root@S-CentOS zookeeper-3.5.7]# mkdir data logs

# Edit the configuration
[root@S-CentOS zookeeper-3.5.7]# cd conf
[root@S-CentOS conf]# cp zoo_sample.cfg zoo.cfg
[root@S-CentOS conf]# vim zoo.cfg

The zoo.cfg file:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/app/zookeeper-3.5.7/data

dataLogDir=/app/zookeeper-3.5.7/logs

server.1=192.168.10.11:3188:3288
server.2=192.168.10.12:3188:3288
server.3=192.168.10.13:3188:3288

Copy ZooKeeper to the other two machines:

scp -r /app/zookeeper-3.5.7 192.168.10.12:/app/zookeeper-3.5.7/
scp -r /app/zookeeper-3.5.7 192.168.10.13:/app/zookeeper-3.5.7/

Create a myid file in the dataDir directory of every node:

# Each node gets a different ID: 192.168.10.11 -> 1, 192.168.10.12 -> 2, 192.168.10.13 -> 3
echo 1 > /app/zookeeper-3.5.7/data/myid
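The node-to-myid mapping above can be expressed as a loop. In this sketch the ssh commands are only printed; drop the echo (and adjust user/host access) to actually run them from a machine with key-based ssh to all three nodes:

```shell
# Sketch: print the per-node myid assignment commands.
i=1
for node in 192.168.10.11 192.168.10.12 192.168.10.13; do
    echo "ssh root@$node 'echo $i > /app/zookeeper-3.5.7/data/myid'"
    i=$(( i + 1 ))
done
```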

Start the ZK service (on every node):

# Start ZK
[root@S-CentOS zookeeper-3.5.7]# bin/zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /app/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

4. ZK Configuration Parameters

(1) tickTime = 2000: heartbeat interval, in milliseconds

The interval at which ZooKeeper servers (and clients and servers) exchange heartbeats; one heartbeat is sent every tickTime. The minimum session timeout is 2 × tickTime.
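In wall-clock terms, with the defaults above: the 2× lower bound on session timeouts is stated in the text; the 20× upper bound is ZooKeeper's default maxSessionTimeout. A quick sketch of the arithmetic:

```shell
# Sketch: session-timeout window implied by tickTime=2000 ms
# (min = 2*tickTime; max = default maxSessionTimeout = 20*tickTime).
tickTime=2000
echo "min session timeout: $(( 2 * tickTime )) ms"
echo "max session timeout: $(( 20 * tickTime )) ms"
```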

(2) initLimit = 10: leader–follower initial connection limit

The maximum number of heartbeats (ticks) tolerated while a Leader and Follower establish their initial connection.

During startup, a Follower syncs all of the latest data from the Leader and then determines the state from which it can start serving clients.

The Leader allows a Follower initLimit × tickTime to finish this work.

(3) syncLimit = 5: leader–follower sync limit

If the Leader does not hear from a Follower for more than syncLimit × tickTime, it considers the Follower dead and removes it from the server list.
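With the default values above, the two limits work out to concrete deadlines:

```shell
# Sketch: wall-clock meaning of the default initLimit/syncLimit.
tickTime=2000; initLimit=10; syncLimit=5
echo "initial follower sync must finish within $(( initLimit * tickTime )) ms"
echo "a silent follower is dropped after $(( syncLimit * tickTime )) ms"
```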

(4) dataDir: directory where snapshot files are stored

The default is under /tmp, which Linux cleans periodically, so the default should not be used.

By default, the transaction log is also stored here. It is recommended to set dataLogDir as well, since transaction-log write performance directly affects ZooKeeper performance.

(5) clientPort = 2181: the port clients connect on, i.e. the service port; usually left unchanged

(6) maxClientCnxns: limit on the number of connections between a single client and a single server, enforced per IP; default 60

A value of 0 means no limit.

(7) autopurge.snapRetainCount: number of snapshot files to retain; default 3

Used together with the next parameter.

(8) autopurge.purgeInterval: purge interval, in hours

Since 3.4.0, ZK can automatically purge old transaction logs and snapshot files.

Set this to an integer of 1 or greater; 0 disables automatic purging.

(9) globalOutstandingLimit: maximum number of queued requests; default 1000

Even when the server has no spare capacity to process more client requests, clients are still allowed to submit requests, which improves throughput. To keep the server from running out of memory, the size of this backlog must be capped.

5. Scripts

(1) Pseudo-cluster

pseudoCluster.sh:

#!/bin/bash
case $1 in
        "start"){
                # start the three ZooKeeper instances
                cd /app/zookeeper-3.5.7/bin
                ./zkServer.sh start ../pseudo/zoo1/conf/zoo1.cfg
                ./zkServer.sh start ../pseudo/zoo2/conf/zoo2.cfg
                ./zkServer.sh start ../pseudo/zoo3/conf/zoo3.cfg
        };;
        "stop"){
                # stop the ZooKeeper pseudo-cluster
                cd /app/zookeeper-3.5.7/bin
                ./zkServer.sh stop ../pseudo/zoo1/conf/zoo1.cfg
                ./zkServer.sh stop ../pseudo/zoo2/conf/zoo2.cfg
                ./zkServer.sh stop ../pseudo/zoo3/conf/zoo3.cfg

                # clear leftover data, then restore each myid
                cd ../pseudo
                rm -rf zoo1/data/*
                rm -rf zoo1/logs/*
                echo 1 > zoo1/data/myid

                rm -rf zoo2/data/*
                rm -rf zoo2/logs/*
                echo 2 > zoo2/data/myid

                rm -rf zoo3/data/*
                rm -rf zoo3/logs/*
                echo 3 > zoo3/data/myid
        };;
        "status"){
                cd /app/zookeeper-3.5.7/bin
                ./zkServer.sh status ../pseudo/zoo1/conf/zoo1.cfg
                ./zkServer.sh status ../pseudo/zoo2/conf/zoo2.cfg
                ./zkServer.sh status ../pseudo/zoo3/conf/zoo3.cfg
        };;
        "client"){
                cd /app/zookeeper-3.5.7/bin
                ./zkCli.sh -server localhost:2181,localhost:2182,localhost:2183
        };;
        *){
                printf "supported arguments: start, stop, status, client\n"
        };;
esac

(2) Distributed cluster

cluster.sh

#!/bin/bash
case $1 in
	"start"){
		for i in hadoop102 hadoop103 hadoop104
		do
			echo ---------- zookeeper $i start ------------
			ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh start"
		done
	};;
	"stop"){
		for i in hadoop102 hadoop103 hadoop104
		do
			echo ---------- zookeeper $i stop ------------
			ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh stop"
		done
	};;
	"status"){
		for i in hadoop102 hadoop103 hadoop104
		do
			echo ---------- zookeeper $i status ------------
			ssh $i "/app/zookeeper-3.5.7/bin/zkServer.sh status"
		done
	};;
esac

(3) Using the scripts

# Edit the script
[root@S-CentOS zookeeper-3.5.7]# vim cluster.sh

# Make it executable
[root@S-CentOS zookeeper-3.5.7]# chmod u+x cluster.sh

# Start the cluster
[root@S-CentOS zookeeper-3.5.7]# ./cluster.sh start

# Stop the cluster
[root@S-CentOS zookeeper-3.5.7]# ./cluster.sh stop

6. xsync Sync Script

https://blog.csdn.net/nalw2012/article/details/98322637

