Advanced logback Log Formatting Techniques

Published: 2024-09-18

Background

In the earlier article on extracting common logback configuration into a shared jar, I mainly covered the overall approach. Applying it as-is in production, however, runs into problems in certain scenarios, mainly the following:

The specific scenario I have in mind here is shipping logs to Logstash.

In the previous setup, writing logs locally and collecting them with Filebeat worked without issue. But once Logstash needs to post-process the logs (multi-line log handling, JSON formatting, nested JSON, special characters, and so on), log format problems appear:

  • Logs containing special characters fail to be collected by Logstash;
  • Java exceptions produce multi-line log entries, which are awkward for Filebeat or Logstash to collect;
  • Embedding nested JSON in instrumentation logs causes problems;
  • Some advanced logback features are needed, such as replacing and extracting parts of a log line.

These issues cause real trouble for log collection, so this article is about solving them.

Practice

Escaping special characters

Use logback's %replace conversion word to escape characters. For example, to escape the double quotes below:

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/tmp/logs/sys/${springApplicationName}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
           ...
        </rollingPolicy>
        <encoder>
            <pattern>{"timestamp": "%d{yyyy-MM-dd HH:mm:ss.SSS}", "thread": "%thread", "class": "%logger{50}", "msg": %replace(%msg){'"','\\"'}, "exception":"%exception{10}" }%n</pattern>
        </encoder>
    </appender>

The output in my Java project:

@Slf4j
public class test {
    public void aa() {
        log.info("this has quotes \"\"");
    }
}

Log output:

// before escaping
{"timestamp": "2024-08-30 16:23:10.634", "thread": "http-nio-8081-exec-1", "class": "c.h.s.controller.ApiController", "msg": this has quotes"", "exception":"" }
// after escaping
{"timestamp": "2024-08-30 16:23:10.634", "thread": "http-nio-8081-exec-1", "class": "c.h.s.controller.ApiController", "msg": this has quotes\"\", "exception":"" }

As you can see, the quotes in the middle have been escaped.

Pros: uses logback's native syntax directly; simple and convenient.

Cons: inflexible, does not handle nested JSON, and applies to too few scenarios.
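%replace applies Java's regex replaceAll semantics, so the escaping above can be reproduced and sanity-checked in plain Java. A minimal sketch (class and method names are illustrative only):

```java
public class EscapeDemo {
    // Mirrors %replace(%msg){'"','\\"'}: prefix every double quote with a backslash
    public static String escapeQuotes(String msg) {
        // In the replacement string, "\\\\\"" collapses to the two characters \"
        return msg.replaceAll("\"", "\\\\\"");
    }

    public static void main(String[] args) {
        System.out.println(escapeQuotes("quoted \"value\""));
        // prints: quoted \"value\"
    }
}
```

Note that the replacement string needs double escaping: once for the Java string literal and once for the replacement-string processing done by replaceAll, where `\\` denotes a literal backslash.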

Extracting substrings

%replace can also extract substrings, since its syntax supports regular expressions. Suppose we want to extract pieces from a string like this:

The string to extract from:

	// the content below comes from logback's %SW_CTX field (SkyWalking context)
	[SW_CTX:[test::xx,test-v2-d4f846dc8-7xhzm,db71a05c59e14357a085a543cc8e4b1a.73.17250066867459995,d5303fb9a39d466aaf06f5481e1608cc.89.17250066867474438,0]]

Extracting from the string above:

// extract test-v2-d4f846dc8-7xhzm
%replace(%SW_CTX){'.*?,(.*?),.*', '$1'}
// output: test-v2-d4f846dc8-7xhzm

// extract d5303fb9a39d466aaf06f5481e1608cc.89.17250066867474438
%replace(%SW_CTX){'.*?,.*,(.*?),.*', '$1'}
// output: d5303fb9a39d466aaf06f5481e1608cc.89.17250066867474438

Pros: uses logback's native syntax directly, no need to resort to MDC; simple.

Cons: inflexible, fits relatively few cases, and demands precise regular expressions.
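Since %replace takes Java regular expressions, the two patterns above can be verified directly with String.replaceAll. A sketch using the sample SW_CTX value from above (the class name is illustrative only):

```java
public class SwCtxExtractDemo {
    public static final String SW_CTX =
        "[SW_CTX:[test::xx,test-v2-d4f846dc8-7xhzm,"
        + "db71a05c59e14357a085a543cc8e4b1a.73.17250066867459995,"
        + "d5303fb9a39d466aaf06f5481e1608cc.89.17250066867474438,0]]";

    public static void main(String[] args) {
        // lazy .*? stops at the first comma, so group 1 captures the second field (the pod name)
        System.out.println(SW_CTX.replaceAll(".*?,(.*?),.*", "$1"));
        // prints: test-v2-d4f846dc8-7xhzm

        // greedy .*, backtracks to the second-to-last comma, so group 1 captures the segment id
        System.out.println(SW_CTX.replaceAll(".*?,.*,(.*?),.*", "$1"));
        // prints: d5303fb9a39d466aaf06f5481e1608cc.89.17250066867474438
    }
}
```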

The logstash-logback-encoder framework

Official repository: logfellow/logstash-logback-encoder

Why use this framework?

It offers the following advantages:

  • When logs are pushed straight to Logstash, it provides sensible default fields you don't have to define yourself, which suits Logstash field parsing well;
  • It can attempt to render the log entries you define as JSON;
  • It handles multi-line exception logs, collapsing them into a single line by default, and supports separators, filtering which exceptions are emitted, limiting exception log size, stack depth, and more;
  • It supports complex JSON, e.g. automatic escaping, and can flatten nested JSON into the root of the event;
  • It ships many operators such as #asLong, #asJson, #tryJson, supports extensive customization, and is easy to extend.

All of these situations are common in production, so the framework removes a lot of pain.

How to use it

Add the Maven dependency

        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>${logstash.logback.version}</version>
        </dependency>

Watch the version here: Spring Boot 2 and Spring Boot 3 need different versions, and there are JDK constraints as well:

logstash-logback-encoder    Minimum Java version supported
8.x                         11
7.x                         8
6.x                         8
5.x                         7
<= 4.x                      6

Next, define the appender in the XML.

LogstashEncoder

If you don't need custom fields, simply use LogstashEncoder (or LogstashAccessEncoder for access logs):

<appender name="JSON_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<File>${catalina.base}/logs/stdout-json.log</File>
		<encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
				<!-- include caller data: line number, class, and method -->
				<includeCallerData>true</includeCallerData>
		</encoder>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
				<fileNamePattern>${catalina.base}/logs/stdout-json.log.%d{yyyy-MM-dd}</fileNamePattern>
		</rollingPolicy>
</appender>
<appender name="JSON_ASYNC" class="ch.qos.logback.classic.AsyncAppender">
		<appender-ref ref="JSON_LOG" />
		<discardingThreshold>0</discardingThreshold>
		<queueSize>1000</queueSize>
		<includeCallerData>true</includeCallerData>
</appender>
<root level="info">
		<appender-ref ref="JSON_ASYNC"/>
</root>

Logs are then printed as JSON with the following default fields:

{
    "@timestamp": "2018-11-19T18:13:05.167+08:00",
    "@version": 1,
    "message": "your log message",
    "logger_name": "com.package.TestController",
    "thread_name": "http-bio-8181-exec-2",
    "level": "INFO",
    "level_value": 20000,
    "HOSTNAME": "xxx",
    "caller_class_name": "com.package.TestController",
    "caller_method_name": "testMethod",
    "caller_file_name": "TestController.java",
    "caller_line_number": 239
}
LoggingEventCompositeJsonEncoder

In production, though, you almost never stop there; you will want custom fields.

The following shows the custom-field approach, using the LoggingEventCompositeJsonEncoder encoder:

<appender name="bizlogAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/home/logs/biz/${springApplicationName}-biz.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- log file path -->
            <fileNamePattern>/home/logs/biz/${springApplicationName}.%d{yyyy-MM-dd}.%i-biz.log</fileNamePattern>
            <!-- number of rolled files to keep -->
            <maxHistory>7</maxHistory>
            <!-- maximum size per log file -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <!-- timestamp -->
                    <fieldName>#time</fieldName>
                    <pattern>[UNIX_TIMESTAMP_AS_NUMBER]</pattern>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "#datetime": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
                        "#level": "%level",
                        "#thread": "%thread",
                        "#class": "%logger{50}",
                        "#app_pod_ip": "%X{app_pod_ip}",
                        "#ip": "%X{#ip}",
                        "#forwarded_prefix": "%X{forwarded_prefix}",
                        "#user_agent": "%X{user_agent}",
                        "#request_id": "%X{request_id}",
                        "#fb_msg": "%msg"
                        }
                    </pattern>
                </pattern>
                <stackTrace>
                    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                        <maxDepthPerThrowable>30</maxDepthPerThrowable>
                        <maxLength>2048</maxLength>
                        <rootCauseFirst>true</rootCauseFirst>
                    </throwableConverter>
                </stackTrace>
            </providers>
        </encoder>
    </appender>
Flattening JSON

The configuration above covers most needs, but some special cases remain. Say I want my own custom (two-level nested) JSON to be searchable and aggregatable in Elasticsearch. The traditional approach is to parse every business JSON individually in the Logstash layer, which is very inflexible; this framework solves that problem.

The framework currently offers two mechanisms:

  • Markers: printed only in the outer JSON output, not appended to the message field;
  • StructuredArguments: printed in the outer JSON output and also appended to the message field.

To use them, you must add the following providers to the XML:

<!-- required for StructuredArguments; without it nothing shows up in the output -->
<arguments/>
<!-- required for Markers; without it nothing shows up in the output -->
<logstashMarkers/>



<!-- full example -->
<appender name="bizlogAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/home/logs/biz/${springApplicationName}-biz.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- log file path -->
            <fileNamePattern>/home/logs/biz/${springApplicationName}.%d{yyyy-MM-dd}.%i-biz.log</fileNamePattern>
            <!-- number of rolled files to keep -->
            <maxHistory>7</maxHistory>
            <!-- maximum size per log file -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <arguments/>
                <logstashMarkers/>
                <timestamp>
                    <fieldName>#time</fieldName>
                    <pattern>[UNIX_TIMESTAMP_AS_NUMBER]</pattern>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "#datetime": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
                        "#level": "%level",
                        "#thread": "%thread",
                        "#class": "%logger{50}",
                        "#app_pod_ip": "%X{app_pod_ip}",
                        "#ip": "%X{#ip}",
                        "#forwarded_prefix": "%X{forwarded_prefix}",
                        "#user_agent": "%X{user_agent}",
                        "#request_id": "%X{request_id}",
                        "#fb_msg": "%msg"
                        }
                    </pattern>
                </pattern>
                <stackTrace>
                    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                        <maxDepthPerThrowable>30</maxDepthPerThrowable>
                        <maxLength>2048</maxLength>
                        <rootCauseFirst>true</rootCauseFirst>
                    </throwableConverter>
                </stackTrace>
            </providers>
        </encoder>
    </appender>

There are plenty of usage examples for both on the official site; here are a few.

Using StructuredArguments

Java code:

import static net.logstash.logback.argument.StructuredArguments.*;

@Slf4j
public class test {
    public void aa() {
        log.info("fb_msg message", value("ss", "aaaa"), value("cccc", "xxx"));
    }
    // actual log output:
    // {"ss":"aaaa","cccc":"xxx","#time":1724049241255,"#datetime":"2024-08-19 14:34:01.255","#log_uuid":" N/A","#trace_id":"[TID: N/A]","#fb_collect_app":"test-sys","#system_env":"dev","#profiles_active":"test","#trace_ctx":"[SW_CTX: N/A]","#app_pod_name":"SW_CTX: N/A","#trace_segment_id":"SW_CTX: N/A","#level":"INFO","#thread":"http-nio-8081-exec-1","#class":"c.h.s.controller.ApiController","#forwarded_prefix":"N/A","#user_agent":"Apache-HttpClient/4.5.14 (Java/17.0.10)","#request_id":"N/A","#fb_msg":"fb_msg message"}

    // note the extra {} placeholders: the values are now also appended to the message
    public void bb() {
        log.info("fb_msg message{}{}", value("ss", "aaaa"), value("cccc", "xxx"));
    }
    // actual log output:
    // {"ss":"aaaa","cccc":"xxx","#time":1725013594821,"#datetime":"2024-08-30 18:26:34.821","#log_uuid":" N/A","#trace_id":"[TID: N/A]","#fb_collect_app":"test-sys","#system_env":"dev","#profiles_active":"test","#trace_ctx":"[SW_CTX: N/A]","#app_pod_name":"SW_CTX: N/A","#trace_segment_id":"SW_CTX: N/A","#level":"INFO","#thread":"http-nio-8081-exec-1","#class":"c.h.s.controller.ApiController","#forwarded_prefix":"N/A","#user_agent":"Apache-HttpClient/4.5.14 (Java/17.0.10)","#request_id":"N/A","#fb_msg":"fb_msg messageaaaaxxx"}

    // fields() flattens an object's properties into the root of the JSON event
    public void cc() {
        QueryParamForm param = (QueryParamForm) ContextManager.get(ConstantUtil.QUERY_PARAM);
        JSONObject tmp = new JSONObject();
        tmp.put("vvv", "nnn");
        param.setParams(tmp);
        log.info("log message {}", fields(param));
    }
    // actual log output:
    // {"code":"xxx","userId":xxx,"roles":["xx"],"envType":"SERVER","params":{"vvv":"nnn"},"page":1,"pageSize":10,"#time":1725013594822,"#datetime":"2024-08-30 18:26:34.822","#log_uuid":" N/A","#trace_id":"[TID: N/A]","#fb_collect_app":"test-sys","#system_env":"dev","#profiles_active":"test","#trace_ctx":"[SW_CTX: N/A]","#app_pod_name":"SW_CTX: N/A","#trace_segment_id":"SW_CTX: N/A","#level":"INFO","#thread":"http-nio-8081-exec-1","#class":"c.h.s.controller.ApiController","#forwarded_prefix":"N/A","#user_agent":"Apache-HttpClient/4.5.14 (Java/17.0.10)","#request_id":"N/A","#fb_msg":"log message QueryParamForm(code=xxx, userId=xxx, roles=[xxx], envType=SERVER, params={\"vvv\":\"nnn\"}, page=1, pageSize=10)"}
}

@Data // Lombok @Data supplies the getters/toString seen in the output above
public class QueryParamForm implements Serializable {
    private String code;
    private Long userId;
    private List<String> roles;
    private String envType;
    private JSONObject params = new JSONObject();
    private Integer page = 1;
    private Integer pageSize = 10;
}

Using Markers

Java code:

import static net.logstash.logback.marker.Markers.*; // appendFields lives in Markers, not StructuredArguments

@Slf4j
public class test {
    // param is the same QueryParamForm instance as in the previous example
    public void aa() {
        log.info(appendFields(param), "log message test marker");
    }
    // actual log output:
    // {"code":"xxx","userId":xxx,"roles":["xxx"],"envType":"SERVER","params":{"vvv":"nnn"},"page":1,"pageSize":10,"#time":1725013594823,"#datetime":"2024-08-30 18:26:34.823","#log_uuid":" N/A","#trace_id":"[TID: N/A]","#fb_collect_app":"test-sys","#system_env":"dev","#profiles_active":"test","#trace_ctx":"[SW_CTX: N/A]","#app_pod_name":"SW_CTX: N/A","#trace_segment_id":"SW_CTX: N/A","#level":"INFO","#thread":"http-nio-8081-exec-1","#class":"c.h.s.controller.ApiController","#forwarded_prefix":"N/A","#user_agent":"Apache-HttpClient/4.5.14 (Java/17.0.10)","#request_id":"N/A","#fb_msg":"log message test marker"}
}

For more examples, see logfellow/logstash-logback-encoder: Logback JSON encoder and appenders (github.com)

Using MDC with logback

Scenario: you want your own custom fields and content in the log output, e.g. user-agent, requestId, userId, order numbers, and other business identifiers. MDC is a good fit here.

Core mechanism: MDC uses thread-local storage. You set parameters at the thread's entry point, and later log statements read the values back from thread-local storage; under the hood it maintains a ThreadLocal.

Therefore, MDC parameters should be cleaned up promptly once the business processing finishes, to avoid leaking previously set values into unrelated later processing.
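Conceptually, MDC is just a ThreadLocal-backed map. A simplified sketch of the mechanism described above (this is an illustration, not logback's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.slf4j.MDC, to show the ThreadLocal mechanism
public class SimpleMdc {
    // each thread gets its own map, so values never leak between requests on different threads
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key)             { return CTX.get().get(key); }
    public static void clear()                       { CTX.remove(); } // avoids leaks in pooled threads

    public static void main(String[] args) throws InterruptedException {
        put("userId", "1000");
        // another thread sees its own (empty) copy, not this thread's value
        Thread t = new Thread(() -> System.out.println("other thread: " + get("userId")));
        t.start();
        t.join();
        System.out.println("this thread: " + get("userId"));
        clear(); // equivalent of MDC.clear() at the end of a request
    }
}
```

This is also why clearing matters: with thread pools, the same thread serves many requests, and a stale map would attach one request's identifiers to another request's logs.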

Usage in code:

MDC.put("userId", "1000"); // MDC values must be Strings

Usage in logback.xml:

%X{userId}

A typical day-to-day example:

We want to capture a series of header values:

/**
 * @description: adds MDC parameters to the logback log output.
 * Note: place this Filter before other Filters where possible.
 * The MDC variables can then be used in the layout section of logback.xml via %X{key}.
 */
@Slf4j // needed for the log.error call below
public class HttpRequestMdcFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        try{
            buildMdc(request);
        }catch(Exception e){
            log.error("buildMdc error",e);
        }

        try {
            chain.doFilter(request,response);
        } finally {
            MDC.clear(); // must clear the ThreadLocal-backed MDC
        }

    }

    private void buildMdc(ServletRequest request) {
        HttpServletRequest req = (HttpServletRequest)request;

        String userAgent = StringUtils.defaultIfBlank(req.getHeader(MdcConstants.REQUEST_USER_AGENT_ORI), MdcConstants.REQUEST_NULL);
        String forwardedPrefix = StringUtils.defaultIfBlank(req.getHeader(MdcConstants.REQUEST_FORWARDED_PREFIX_ORI), MdcConstants.REQUEST_NULL);
        String requestId = StringUtils.defaultIfBlank(req.getHeader(MdcConstants.REQUEST_ID_ORI), MdcConstants.REQUEST_NULL);
        MDC.put(MdcConstants.REQUEST_USER_AGENT, userAgent);
        MDC.put(MdcConstants.REQUEST_FORWARDED_PREFIX, forwardedPrefix);
        MDC.put(MdcConstants.REQUEST_ID, requestId);
    }

    @Override
    public void destroy() {
        Filter.super.destroy();
    }
}

MdcConstants:

public class MdcConstants {
    // upstream service name
    public static final String REQUEST_FORWARDED_PREFIX_ORI = "x-forwarded-prefix";
    public static final String REQUEST_FORWARDED_PREFIX = "forwarded_prefix";
    // browser/client details
    public static final String REQUEST_USER_AGENT_ORI = "User-Agent";
    public static final String REQUEST_USER_AGENT = "user_agent";
    // x-request-id
    public static final String REQUEST_ID_ORI = "x-request-id";
    public static final String REQUEST_ID = "request_id";

    public static final String REQUEST_NULL = "N/A";
}

logback-spring.xml:

Note the use of %X{…}:

<appender name="bizlogAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/home/logs/biz/${springApplicationName}-biz.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- log file path -->
            <fileNamePattern>/home/logs/biz/${springApplicationName}.%d{yyyy-MM-dd}.%i-biz.log</fileNamePattern>
            <!-- number of rolled files to keep -->
            <maxHistory>7</maxHistory>
            <!-- maximum size per log file -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <!-- timestamp -->
                    <fieldName>#time</fieldName>
                    <pattern>[UNIX_TIMESTAMP_AS_NUMBER]</pattern>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "#datetime": "%d{yyyy-MM-dd HH:mm:ss.SSS}",
                        "#level": "%level",
                        "#thread": "%thread",
                        "#class": "%logger{50}",
                        "#app_pod_ip": "%X{app_pod_ip}",
                        "#ip": "%X{#ip}",
                        "#forwarded_prefix": "%X{forwarded_prefix}",
                        "#user_agent": "%X{user_agent}",
                        "#request_id": "%X{request_id}",
                        "#fb_msg": "%msg"
                        }
                    </pattern>
                </pattern>
                <stackTrace>
                    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                        <maxDepthPerThrowable>30</maxDepthPerThrowable>
                        <maxLength>2048</maxLength>
                        <rootCauseFirst>true</rootCauseFirst>
                    </throwableConverter>
                </stackTrace>
            </providers>
        </encoder>
    </appender>

References

Adding log attributes via logback's MDC mechanism

logfellow/logstash-logback-encoder

