1. Distributed Log Trace Architecture
For cluster-wide log collection we use the ELFK stack (Elasticsearch, Logstash, Filebeat, Kibana). The lightweight shipper Filebeat tails the application log files and forwards entries to Logstash, which parses and formats them and persists the result to Elasticsearch; Kibana then visualizes the data, making it fast to search logs, troubleshoot, and pinpoint production issues.
(Figure: distributed log trace architecture)
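The first hop in this pipeline is Filebeat tailing the application's log files. Below is a minimal sketch of that input, assuming the log directory used in the Logback configuration later in this post and a Logstash Beats endpoint on the conventional port 5044 (both values are illustrative, not prescriptive):

# filebeat.yml -- minimal sketch; paths and the Logstash host are illustrative
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /Users/zhouxinlei/logs/*.log
output.logstash:
  hosts: ["localhost:5044"]   # assumed Logstash Beats endpoint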
2. Spring Boot Logstash Integration
2.1 Add the Maven dependency
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
2.2 Create a logback-spring.xml file under src/main/resources
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- Directory where log files are stored -->
    <property name="LOG_HOME" value="/Users/zhouxinlei/logs" />
    <!-- Pull the service name from Spring's configuration -->
    <springProperty name="springAppName" scope="context" source="spring.application.name" />
    <!-- Full path of the log file -->
    <property name="LOG_FILE" value="${LOG_HOME}/${springAppName}.log"/>
    <contextName>${springAppName}</contextName>
    <!-- File appender: rolls daily and by size, keeps 7 days of history -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
            <maxFileSize>10MB</maxFileSize>
        </rollingPolicy>
        <!-- Render each log event as a single JSON document -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <!-- MyBatis and JDBC SQL logging -->
    <logger name="org.apache.ibatis" level="TRACE"/>
    <logger name="java.sql.Connection" level="DEBUG"/>
    <logger name="java.sql.Statement" level="DEBUG"/>
    <logger name="java.sql.PreparedStatement" level="DEBUG"/>
    <!-- Keep Kafka client noise out of the logs -->
    <logger name="org.springframework.kafka" level="ERROR"/>
    <logger name="org.apache.kafka" level="ERROR"/>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
The encoder is net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder, which writes each log event to the file as one JSON document, so the ELFK pipeline ingests already-structured logs instead of having to parse free-form text.
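On the application side nothing special is required to produce these JSON documents: plain SLF4J logging is enough, and the trace/span fields are read from the MDC (Spring Cloud Sleuth populates the X-B3-* keys automatically when it is on the classpath). A minimal sketch, with a hypothetical OrderService:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        // The message ends up in the "rest" field of the JSON document;
        // "severity", "thread" and "class" are filled from the event's
        // level, thread and logger name. With Spring Cloud Sleuth on the
        // classpath the X-B3-* MDC keys are populated automatically, so
        // "trace" and "span" come for free.
        log.info("creating order {}", orderId);
    }
}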
At this point the Spring Boot application's logs are being written to files, one per service, in a structured format.
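For reference, one event rendered by the pattern above looks roughly like this (all field values are illustrative):

{"@timestamp":"2019-06-12T03:45:22.515Z","severity":"INFO","service":"order-service","trace":"d7bcd30f6f7b7d3e","span":"d7bcd30f6f7b7d3e","exportable":"true","pid":"12345","thread":"http-nio-8080-exec-1","class":"c.e.order.OrderService","rest":"creating order 1001"}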
3. Collection with ELFK
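As a preview, a minimal Logstash pipeline for this setup could look like the sketch below: it accepts events from Filebeat, parses the JSON body produced by the encoder, and indexes the result into Elasticsearch (the port, hosts, and index name are assumptions to adapt to your environment):

# logstash.conf -- minimal sketch; port, hosts and index name are illustrative
input {
  beats {
    port => 5044
  }
}
filter {
  # each line shipped by Filebeat is already a JSON document
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-log-%{+YYYY.MM.dd}"
  }
}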
This completes the log collection pipeline end to end. A follow-up post will cover shipping application logs into the ELK stack through Kafka.