ELK Stack Integration with Spring Boot Applications


The ELK Stack is made up of three open-source products: Elasticsearch, Logstash, and Kibana.

Elasticsearch: a NoSQL datastore built on the open-source search library Lucene, which makes it a full-text search and analytics engine.

Logstash: a data processing pipeline tool that accepts input from one or more sources, performs different transformations, and exports the data to targets such as Elasticsearch.

Kibana: a visualization layer that helps users explore the data in Elasticsearch with graphs and charts.

You can use the combination of these three, i.e., the ELK stack, for log aggregation.


In a microservices architecture, log aggregation is an important concern: it allows you to understand, analyze, and diagnose malfunctions, and it helps you follow and trace the different actions carried out by the actors of the system. Log details from multiple services can be collected by an aggregation system (for example, the ELK stack), where they can be stored and monitored in one place.

One of the problems most developers face is the difficulty of tracing logs as a microservices application grows and requests propagate from one microservice to another. It can be quite hard to figure out how a particular request travels through the application when you have no idea about the implementation of the microservices you are calling. The ELK stack can help you find issues in your applications by analyzing the logs.

In this article, I will show you how to configure the ELK stack and monitor a Spring Boot application's logs. I will set up the ELK servers and then aggregate the log details generated by the Spring Boot application.


Prerequisites

Elasticsearch 7.12.1, Logstash 7.12.1, Kibana 7.12.1, Java at least 8, Spring Boot 2.4.5, Maven 3.6.3

Elasticsearch Configuration

Download the Elasticsearch zip archive and extract or unzip it. Navigate to the /bin directory and execute the file elasticsearch.bat from the Windows command line tool. The server will start at the address http://localhost:9200/. Accessing this URL in the browser, you will see a JSON response confirming the server is up.
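For Elasticsearch 7.12.1, the root endpoint returns a small JSON document; an abridged sketch is shown below (the node name and cluster UUID are machine-specific placeholders):

```json
{
  "name" : "MY-NODE",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "placeholder-uuid",
  "version" : {
    "number" : "7.12.1"
  },
  "tagline" : "You Know, for Search"
}
```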


Kibana Configuration

Download Kibana and unzip the file. Navigate to the /bin directory and run the kibana.bat file from the command prompt.

Kibana server will start on port 5601, and you can access the server at http://localhost:5601/.


Logstash Configuration

Download Logstash and unzip the file. You need to configure the Logstash server to pass the Spring boot application logs. Create a file logstash-config.conf under /config folder with the following content:

input {
    file {
        type => "java"
        path => "C:/eclipse_2021_03_R/spring-boot-elk-integration/elk-spring.log"
        codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601} "
            negate => true
            what => "previous"
        }
    }
}

filter {
    date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => ["http://localhost:9200"]
    }
}
A Logstash configuration file consists mainly of three sections: input, filter, and output.


Input

The input section in the configuration file defines which input plugin to use. Each plugin has its own configuration options, which you should know before using it. In this example, the input is a log file from the Spring Boot application; its location is specified using the path option of the file plugin.

A codec plugin changes the data representation of an event. Codecs are essentially stream filters that can operate as part of an input or output.

The pattern should match what you believe to be an indicator that the field is part of a multi-line event.

The what must be previous or next and indicates the relation to the multi-line event.

The negate can be true or false (defaults to false). If true, a message not matching the pattern constitutes a match of the multiline filter and the what action is applied (and vice versa).

For example, Java stack traces are multiline and usually have the message starting at the far-left, with each subsequent line indented.
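As an illustration, here is a hypothetical log excerpt (the class names and message are made up). Only the first line starts with an ISO 8601 timestamp, so the multiline codec folds the indented lines into the previous event:

```text
2021-05-10 10:15:30.123 ERROR 1234 --- [nio-8080-exec-1] c.r.spring.boot.elk.ELKRestController : Error Message: / by zero
java.lang.ArithmeticException: / by zero
	at com.roytuts.spring.boot.elk.ELKRestController.error(ELKRestController.java:30)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
```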


Filter

The filter section in the configuration file defines what filter plugins you want to use, or in other words, what processing you want to apply to the logs. A filter plugin performs intermediary processing on an event. Filters are often applied conditionally depending on the characteristics of the event.

In this example, it parses dates from fields to use as the Logstash timestamp for an event.
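To see what kind of string that match pattern accepts, here is a small Java sketch; the sample timestamp is hypothetical, and java.time happens to interpret the pattern letters dd/MMM/yyyy:HH:mm:ss Z compatibly with the Joda-style pattern Logstash uses:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DateFilterDemo {

    // Mirrors the Logstash date filter pattern: dd/MMM/yyyy:HH:mm:ss Z
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH);

    static OffsetDateTime parseLogTimestamp(String raw) {
        return OffsetDateTime.parse(raw, FMT);
    }

    public static void main(String[] args) {
        // A hypothetical timestamp in the format the filter expects
        System.out.println(parseLogTimestamp("10/May/2021:10:15:30 +0000"));
    }
}
```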


Output

The output section in the configuration file defines the destination to which you want to send the logs (for example, Elasticsearch).

Spring Boot Application

Now you can create a Spring Boot application with the following class:

package com.roytuts.spring.boot.elk;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class SpringELKApp {

	public static void main(String[] args) {
		SpringApplication.run(SpringELKApp.class, args);
	}

}

@RestController
class ELKRestController {

	private static final Logger LOG = LoggerFactory.getLogger(ELKRestController.class);

	@GetMapping("/hello")
	public ResponseEntity<String> greet() {
		LOG.info("Hello World!");

		return new ResponseEntity<String>("Hello World!", HttpStatus.OK);
	}

}

Create an application.properties file under the classpath folder (src/main/resources) with the following log file location:
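Assuming Spring Boot 2.4.x, the standard property for writing logs to a file is logging.file.name; pointing it at elk-spring.log matches the file name in the Logstash path configured earlier:

```properties
# Write application logs to elk-spring.log in the project's root folder
logging.file.name=elk-spring.log
```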


The above log file elk-spring.log will be created under the root folder of the project. This file is created automatically once your Spring Boot application starts up.

Testing the Application

Now start your Spring Boot application by running the main class – SpringELKApp.

Start your Logstash server by executing the following command on your command prompt:

<logstash installation root directory>\bin\logstash.bat -f config\logstash-config.conf

The Logstash server starts its monitoring API on port 9600, which can be accessed at http://localhost:9600/.


Kibana Dashboard Configuration

If you have not already opened Kibana, open it at http://localhost:5601/. Now click on Discover under Kibana, or type Discover into the top search box. Alternatively, search for Index Patterns in the search box and select it under Kibana. Here you need to click on Create index pattern.


Enter logstash* as the index pattern name and click on the Next step button.


On the next screen, select the @timestamp option from the dropdown and click on the Create Index Pattern button.


Now search for Discover in the search box, or click on Discover, to see the logs from the Spring Boot application.

Make sure you hit the GET REST endpoint /hello (http://localhost:8080/hello) so that logs appear in the Kibana dashboard for visualization.


That’s all about how to configure the ELK stack for log aggregation from Spring Boot applications. This is a basic example, and you can explore the Kibana dashboard or the Logstash configuration further for what you need.

Another example I can show you here: if you need to tag exception stack traces as stacktrace, you can put the following configuration into your logstash-config.conf file inside the filter { ... } section:

if [message] =~ "\tat" {
	grok {
		match => ["message", "^(\tat)"]
		add_tag => ["stacktrace"]
	}
}

You can later download the entire logstash-config.conf file from the Source Code section below.
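As a quick sanity check of the regex used in that grok match, here is a small Java sketch (the sample lines are hypothetical) showing which log lines it would treat as stack frames:

```java
import java.util.regex.Pattern;

public class StacktraceTagDemo {

    // Same regex the grok filter uses to spot stack-frame lines
    private static final Pattern FRAME = Pattern.compile("^(\tat)");

    static boolean isStackFrame(String line) {
        return FRAME.matcher(line).find();
    }

    public static void main(String[] args) {
        // A tab-indented stack-frame line matches; a normal log line does not
        System.out.println(isStackFrame("\tat com.roytuts.spring.boot.elk.ELKRestController.error(ELKRestController.java:30)"));
        System.out.println(isStackFrame("2021-05-10 10:15:30.123 ERROR Error Message: / by zero"));
    }
}
```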

Add another endpoint to the existing REST controller class with the following code:

@GetMapping("/err")
public ResponseEntity<String> error() {
	try {
		int i = 1 / 0; // exception occurs here
	} catch (Exception e) {
		String msg = e.getMessage();

		LOG.error("Error Message: " + msg);

		return new ResponseEntity<String>(msg, HttpStatus.INTERNAL_SERVER_ERROR);
	}

	return new ResponseEntity<String>("Ok", HttpStatus.OK);
}

The above code snippet always raises an exception. You can visualize the log in Kibana once you hit the /err endpoint.


By expanding the log entry you can see the full error message details.


A few things to note:

  • You can even send logs to a remote Logstash instance via TCP protocol instead of listening to the local log file.
  • You can use multiple log files as a source of input and you can configure more filters in the Logstash config file.
  • You can configure different index names in the Logstash elasticsearch output and create matching index patterns in Kibana.
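For the first point, a minimal sketch of a Logstash TCP input is shown below; the port number 5000 and the json_lines codec are arbitrary choices here, not values from this tutorial:

```
input {
  tcp {
    port => 5000
    codec => json_lines
  }
}
```

Your application would then ship logs over TCP (for example, via a Logback TCP appender) instead of Logstash tailing a local file.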

Source Code

