DeepSeek deployment: ELK + Filebeat + Zookeeper + Kafka


## 1. Overview

This document describes how to deploy the ELK stack (Elasticsearch, Logstash, Kibana), Filebeat, Zookeeper, and Kafka across seven machines. The setup is intended for log collection, processing, and visualization.

## 2. Environment Preparation

### 2.1 Machine Allocation

| Machine | Hostname | IP Address  | Deployed Components             |
|---------|----------|-------------|---------------------------------|
| 1       | node1    | 192.168.1.1 | Elasticsearch, Zookeeper, Kafka |
| 2       | node2    | 192.168.1.2 | Elasticsearch, Zookeeper, Kafka |
| 3       | node3    | 192.168.1.3 | Elasticsearch, Zookeeper, Kafka |
| 4       | node4    | 192.168.1.4 | Logstash, Kibana                |
| 5       | node5    | 192.168.1.5 | Logstash, Kibana                |
| 6       | node6    | 192.168.1.6 | Filebeat                        |
| 7       | node7    | 192.168.1.7 | Filebeat                        |

### 2.2 System Requirements

- OS: CentOS 7.x or Ubuntu 18.04 LTS
- Java: JDK 11
- Memory: at least 16 GB
- Disk: at least 100 GB
- Network: all machines must be able to reach each other

### 2.3 Software Versions

- Elasticsearch: 7.10.0
- Logstash: 7.10.0
- Kibana: 7.10.0
- Filebeat: 7.10.0
- Zookeeper: 3.6.2
- Kafka: 2.7.0

## 3. Deployment Steps

### 3.1 Install Java

Install JDK 11 on all machines:

```bash
sudo yum install java-11-openjdk-devel   # CentOS
sudo apt-get install openjdk-11-jdk      # Ubuntu
```

Verify the installation:

```bash
java -version
```

### 3.2 Deploy Zookeeper

Deploy Zookeeper on node1, node2, and node3.

1. Download and extract Zookeeper:

```bash
wget https://downloads.apache.org/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
tar -xzf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin /opt/zookeeper
```

2. Configure Zookeeper. Create a `zoo.cfg` file in `/opt/zookeeper/conf`:

```ini
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```

Then create a `myid` file in the `dataDir` directory, containing 1, 2, and 3 on node1, node2, and node3 respectively.
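
For example, on node1 (a minimal sketch; write 2 on node2 and 3 on node3, and adjust the path if you change `dataDir`):

```bash
# Create the data directory and write this node's id (1 on node1, 2 on node2, 3 on node3)
sudo mkdir -p /var/lib/zookeeper
echo "1" | sudo tee /var/lib/zookeeper/myid
```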

3. Start Zookeeper:

```bash
/opt/zookeeper/bin/zkServer.sh start
```
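
Optionally, check that the ensemble has formed; across the three nodes, `zkServer.sh status` should report one leader and two followers:

```bash
# Run on each of node1, node2, node3; expect "Mode: leader" on one node and "Mode: follower" on the others
/opt/zookeeper/bin/zkServer.sh status
```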

### 3.3 Deploy Kafka

Deploy Kafka on node1, node2, and node3.

1. Download and extract Kafka:

```bash
wget https://downloads.apache.org/kafka/2.7.0/kafka_2.13-2.7.0.tgz
tar -xzf kafka_2.13-2.7.0.tgz
mv kafka_2.13-2.7.0 /opt/kafka
```

2. Configure Kafka. Edit `/opt/kafka/config/server.properties`:

```properties
# Use broker.id=2 on node2 and broker.id=3 on node3
broker.id=1
# Use node2/node3 as the listener host on node2 and node3
listeners=PLAINTEXT://node1:9092
zookeeper.connect=node1:2181,node2:2181,node3:2181
```

3. Start Kafka:

```bash
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
```
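
As a quick check, you can pre-create the `logs` topic that Filebeat and Logstash will use later and then list the topics. This is a hedged example using the stock Kafka CLI tools; the partition and replication counts are illustrative choices for a three-broker cluster:

```bash
# Create the "logs" topic (partition/replication counts are illustrative)
/opt/kafka/bin/kafka-topics.sh --create --bootstrap-server node1:9092 \
  --topic logs --partitions 3 --replication-factor 3

# Confirm the topic exists
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server node1:9092
```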

### 3.4 Deploy Elasticsearch

Deploy Elasticsearch on node1, node2, and node3.

1. Download and extract Elasticsearch:

```bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.10.0-linux-x86_64.tar.gz
mv elasticsearch-7.10.0 /opt/elasticsearch
```

2. Configure Elasticsearch. Edit `/opt/elasticsearch/config/elasticsearch.yml`:

```yaml
cluster.name: my-cluster
node.name: node1   # use node2 and node3 on the other two nodes
network.host: 0.0.0.0
discovery.seed_hosts: ["node1", "node2", "node3"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
```

3. Start Elasticsearch:

```bash
/opt/elasticsearch/bin/elasticsearch &
```
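
Once all three nodes are up, the cluster health API should report three nodes and a `green` (or at least `yellow`) status. A simple check:

```bash
# Query cluster health from any node; expect "number_of_nodes" : 3
curl -X GET "http://node1:9200/_cluster/health?pretty"
```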

### 3.5 Deploy Logstash

Deploy Logstash on node4 and node5.

1. Download and extract Logstash:

```bash
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.10.0-linux-x86_64.tar.gz
tar -xzf logstash-7.10.0-linux-x86_64.tar.gz
mv logstash-7.10.0 /opt/logstash
```

2. Configure Logstash. Create `/opt/logstash/config/logstash.conf`:

```conf
input {
  kafka {
    bootstrap_servers => "node1:9092,node2:9092,node3:9092"
    topics => ["logs"]
  }
}

output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

3. Start Logstash:

```bash
/opt/logstash/bin/logstash -f /opt/logstash/config/logstash.conf &
```
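
You can also validate the pipeline file without processing any events; `--config.test_and_exit` parses the configuration and exits:

```bash
# Syntax-check the pipeline configuration, then exit
/opt/logstash/bin/logstash -f /opt/logstash/config/logstash.conf --config.test_and_exit
```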

### 3.6 Deploy Kibana

Deploy Kibana on node4 and node5.

1. Download and extract Kibana:

```bash
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.0-linux-x86_64.tar.gz
tar -xzf kibana-7.10.0-linux-x86_64.tar.gz
mv kibana-7.10.0-linux-x86_64 /opt/kibana
```

2. Configure Kibana. Edit `/opt/kibana/config/kibana.yml`:

```yaml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://node1:9200", "http://node2:9200", "http://node3:9200"]
```

3. Start Kibana:

```bash
/opt/kibana/bin/kibana &
```
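
To confirm Kibana is up and can reach Elasticsearch, query its status API (`/api/status`) once it has finished starting:

```bash
# Expect an HTTP 200 JSON response with overall state "green" once Kibana is ready
curl -s http://node4:5601/api/status
```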

### 3.7 Deploy Filebeat

Deploy Filebeat on node6 and node7.

1. Download and extract Filebeat:

```bash
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.0-linux-x86_64.tar.gz
tar -xzf filebeat-7.10.0-linux-x86_64.tar.gz
mv filebeat-7.10.0-linux-x86_64 /opt/filebeat
```

2. Configure Filebeat. Edit `/opt/filebeat/filebeat.yml`:

```yaml
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log

output.kafka:
  hosts: ["node1:9092", "node2:9092", "node3:9092"]
  topic: "logs"
```

3. Start Filebeat:

```bash
/opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml &
```
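
Filebeat ships with built-in self-tests that are handy here; `test config` validates the YAML and `test output` checks connectivity to the configured Kafka brokers:

```bash
# Validate the configuration file
/opt/filebeat/filebeat test config -c /opt/filebeat/filebeat.yml

# Verify that the Kafka output defined in filebeat.yml is reachable
/opt/filebeat/filebeat test output -c /opt/filebeat/filebeat.yml
```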

## 4. Verify the Deployment

1. Open Kibana at `http://node4:5601` or `http://node5:5601`.
2. In Kibana, create the index pattern `logs-*` and browse the incoming log data.
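
If nothing shows up in Kibana, a quick way to check whether logs are reaching Elasticsearch at all is to list the daily indices created by the Logstash output:

```bash
# List the logs-* indices and their document counts
curl -X GET "http://node1:9200/_cat/indices/logs-*?v"
```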