6-5 Implementing DaemonSet and Sidecar Log Collection

Published: 2022-12-08

Preface

In the previous section, 6-4 Setting Up the ELK and Kafka Log Collection Environment, we prepared the log collection infrastructure.

Log sources can be collected per node, or by a sidecar container. The two approaches differ mainly as follows:

Node-level collection deploys the log-collection process as a DaemonSet and collects json-file style logs (standard output /dev/stdout and standard error /dev/stderr), i.e. the stdout and stderr logs produced by the application.

A sidecar container (one pod, multiple containers) collects the logs of one or more business containers inside the same pod, usually sharing the log directory between the business container and the sidecar via an emptyDir volume.
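To make the json-file format concrete: with the Docker runtime, each stdout/stderr line is wrapped in a small JSON record (containerd uses a similar, plain-text CRI format instead). A minimal parsing sketch, with a made-up sample line:

```python
import json

def parse_json_file_line(line):
    """Parse one line written by Docker's json-file log driver."""
    rec = json.loads(line)
    return {
        "message": rec["log"].rstrip("\n"),
        "stream": rec["stream"],  # "stdout" or "stderr"
        "time": rec["time"],
    }

# hypothetical sample line
sample = '{"log":"hello world\\n","stream":"stdout","time":"2022-12-08T10:00:00.000000000Z"}'
print(parse_json_file_line(sample)["message"])  # hello world
```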



Collecting Logs with a DaemonSet

Building the image

FROM logstash:7.12.1

USER root
WORKDIR /usr/share/logstash
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf

Build and push the image:

nerdctl build -t easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1 .
nerdctl push easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1
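The logstash.conf baked into this image is not shown in this section. Below is a minimal sketch consistent with the KAFKA_SERVER/TOPIC_ID/CODEC environment variables used in the DaemonSet and with the jsonfile-daemonset-syslog-*/-applog-* index patterns added in Kibana later; the file paths and type names are assumptions, not the original file:

```conf
input {
  file {
    path => "/var/log/syslog"              # host system log (mounted from the node)
    type => "jsonfile-daemonset-syslog"
    start_position => "beginning"
  }
  file {
    path => "/var/log/pods/*/*/*.log"      # containerd pod logs (mounted from the node)
    type => "jsonfile-daemonset-applog"
    start_position => "beginning"
  }
}

output {
  kafka {
    bootstrap_servers => "${KAFKA_SERVER}" # injected via the DaemonSet env
    topic_id => "${TOPIC_ID}"
    codec => "${CODEC}"
  }
}
```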

Deploying the DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      # also schedule onto master nodes so their logs are collected
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: easzlab.io.local:5000/myhub/logstash:v7.12.1-json-file-log-v1 
        env:
        # Kafka cluster connection
        - name: "KAFKA_SERVER"
          value: "192.168.100.175:9092,192.168.100.176:9092,192.168.100.177:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: varlog # mount path for host system logs
          mountPath: /var/log # host system log mount point
        - name: varlibdockercontainers # container log mount path; must match the collection path in the logstash config
          mountPath: /var/log/pods # containerd log path; must match logstash's collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log # host system logs
      - name: varlibdockercontainers
        hostPath:
          path: /var/log/pods # containerd log path on the host
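The varlibdockercontainers volume points at /var/log/pods because that is where containerd writes container logs, using a path layout of /var/log/pods/&lt;namespace&gt;_&lt;pod&gt;_&lt;uid&gt;/&lt;container&gt;/&lt;restart&gt;.log. A small sketch of how pod metadata can be recovered from such a path (the sample path is made up):

```python
# containerd log path layout:
# /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart>.log
def parse_pod_log_path(path):
    """Extract pod metadata from a containerd log file path."""
    parts = path.split("/")
    ns_pod_uid, container, fname = parts[-3], parts[-2], parts[-1]
    namespace, pod, uid = ns_pod_uid.split("_", 2)
    return {"namespace": namespace, "pod": pod, "uid": uid,
            "container": container, "restart": fname.split(".")[0]}

p = parse_pod_log_path(
    "/var/log/pods/kube-system_logstash-elasticsearch-28p4n_abc123/logstash-elasticsearch/0.log")
print(p["namespace"], p["container"])  # kube-system logstash-elasticsearch
```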

A logstash pod now runs on every node, including the masters:

kubectl get pod -A
kube-system   logstash-elasticsearch-28p4n               1/1     Running   0              33s
kube-system   logstash-elasticsearch-57vhq               1/1     Running   0              33s
kube-system   logstash-elasticsearch-8hx7l               1/1     Running   0              33s
kube-system   logstash-elasticsearch-kppzk               1/1     Running   0              33s
kube-system   logstash-elasticsearch-mtxzl               1/1     Running   0              33s
kube-system   logstash-elasticsearch-xpc6x               1/1     Running   0              33s

View the system and application logs on the ES cluster:


In Kibana, add the index patterns jsonfile-daemonset-syslog-* and jsonfile-daemonset-applog-*.


View the logs graphically in Kibana's Discover tab:


Collecting Logs with a Sidecar

Building the image

The Dockerfile is the same as for the DaemonSet; only the configuration file differs.

vim logstash.conf

input {
  file {
    # Tomcat catalina log
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    # access log
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      # these environment variables are set when the pod is deployed
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384 # maximum batch size, in bytes, sent to Kafka per request
      codec => "${CODEC}"
    }
  }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }
}

Build and push the sidecar image:

nerdctl build -t easzlab.io.local:5000/myhub/logstash:v7.12.1-sidecar .
nerdctl push easzlab.io.local:5000/myhub/logstash:v7.12.1-sidecar

Deploying the Sidecar

Deploy a Deployment and a Service; the pod contains the Tomcat web container and the sidecar container.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-label
  name: myserver-tomcat-app1-deployment
  namespace: myserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myserver-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-selector
    spec:
      containers:
      - name: sidecar-container
        image: easzlab.io.local:5000/myhub/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.100.175:9092,192.168.100.176:9092,192.168.100.177:9092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applog
      - name: myserver-tomcat-app1-container
        image: easzlab.io.local:5000/myhub/tomcat-app1:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5 # wait 5s before the first probe
          failureThreshold: 3 # consecutive failures before the probe is considered failed
          periodSeconds: 3 # interval between probes
        readinessProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: applogs # emptyDir volume shared by the business container and the sidecar, so the sidecar can collect the business container's logs
        emptyDir: {}

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-label
  name: myserver-tomcat-app1-service
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: myserver-tomcat-app1-selector

On the logstash server 192.168.100.170, add a new configuration file:

vim /etc/logstash/conf.d/sidecar.conf

input {
  kafka {
    bootstrap_servers => "192.168.100.175:9092,192.168.100.176:9092,192.168.100.177:9092"
    topics => ["tomcat-app1-topic"]
    codec => "json"
  }
}

output {
  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["192.168.100.171:9200","192.168.100.172:9200","192.168.100.173:9200"]
      index => "sidecar-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["192.168.100.171:9200","192.168.100.172:9200","192.168.100.173:9200"]
      index => "sidecar-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }
}

Restart logstash so it picks up the new pipeline:

systemctl restart logstash.service
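The conditional output above routes each event by its type field (set by the sidecar and carried through Kafka) into a date-stamped Elasticsearch index. The logic can be sketched as follows; this is a simplified model for illustration, not Logstash's actual implementation:

```python
from datetime import datetime

# index prefix per event type, mirroring the conditionals in sidecar.conf
INDEX_BY_TYPE = {
    "app1-sidecar-access-log": "sidecar-app1-accesslog",
    "app1-sidecar-catalina-log": "sidecar-app1-catalinalog",
}

def index_for(event, now):
    """Return the target ES index for an event, or None if no branch matches."""
    prefix = INDEX_BY_TYPE.get(event.get("type"))
    if prefix is None:
        return None  # events with no matching [type] branch are not indexed
    return "%s-%s" % (prefix, now.strftime("%Y.%m.%d"))

evt = {"type": "app1-sidecar-access-log", "message": "GET /myapp/index.html 200"}
print(index_for(evt, datetime(2022, 12, 8)))  # sidecar-app1-accesslog-2022.12.08
```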

In the Offset Explorer Kafka client, check the tomcat-related messages:


In the ES cluster plugin, check the sidecar-related data:


View the log data graphically in Kibana:


