How To Deploy Logstash and Filebeat On Kubernetes With ECK and SSL

Raphael De Lio
5 min read · May 25, 2020

Twitter | LinkedIn | YouTube | Instagram

With more than 200 plugins, Logstash makes it easy to ingest, transform and ship all your data to Elasticsearch! It’s one of the main applications of the Elastic Stack and the subject of our story today!

Filebeat, on the other hand, is part of the Beats family and will be responsible for collecting all the logs generated by the containers in your Kubernetes cluster and shipping them to Logstash.

If you read my previous story, you already know that Kibana and Elasticsearch are part of Elastic Cloud on Kubernetes (ECK). Logstash is not, but that doesn’t mean we can’t make them all work together with a little configuration!

In this guide we will learn how to easily deploy Logstash and make it work with your previously deployed Elasticsearch cluster on ECK.

We will be going through:
• Configure Logstash
• Deploy Logstash on Kubernetes
• Create the Logstash Service
• Create Your First Grok Filter
• Deploy Filebeat

CONFIGURE LOGSTASH

Before we can actually deploy our Logstash instance, we must set up its properties and build our first pipeline. To do so, we will create a ConfigMap with two files: logstash.conf and logstash.yml.

logstash.yml will hold our Logstash configuration properties, while logstash.conf will define how our pipeline works: its inputs, filters and outputs.

Below you can find our example ConfigMap. Our yaml file holds two properties: the host, which will be 0.0.0.0, and the path where our pipeline will live. Our conf file has an input configured to receive events from the Beats family (Filebeat, Heartbeat…), a filter section that is blank for now, and an output pointing to our previously deployed Elasticsearch cluster.
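
A minimal sketch of what such a ConfigMap could look like. The name logstash-configmap, the service name quickstart-es-http (ECK’s default HTTP service for a cluster named quickstart), port 5044 and the certificate path /etc/logstash/certificates are assumptions; adapt them to your deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap      # assumed name; referenced by the pod manifest later
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044            # Filebeat, Heartbeat, etc. will ship events here
      }
    }
    filter {
      # blank for now; our Grok filter will go here later
    }
    output {
      elasticsearch {
        hosts => ["https://quickstart-es-http:9200"]
        user => "elastic"
        password => "${ELASTICSEARCH_PASSWORD}"        # injected from the ECK secret
        cacert => "/etc/logstash/certificates/ca.crt"  # CA mounted from the ECK secret
      }
    }
```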

Now that we understand our ConfigMap, we can deploy it by running kubectl apply -f logstash-configmap.yaml in the directory where our file is located.

DEPLOY LOGSTASH

Now that we have our configuration set, we can deploy our Logstash pod on Kubernetes. To make our life easier, ECK stores our Elasticsearch password and SSL certificate in Kubernetes as secrets.

In our example file below we can see that we are mounting three different volumes. One holds our SSL certificate, retrieved from its secret; another holds our yaml file; and the last one holds our conf file.

The password for the Elasticsearch cluster is also retrieved from its secret, and if you deployed Elasticsearch under a different name you will need to rename the secrets in the yaml file accordingly.
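
A sketch of such a pod manifest, assuming the ECK cluster is named quickstart (so the password lives in the secret quickstart-es-elastic-user and the CA certificate in quickstart-es-http-certs-public) and assuming Logstash 7.7.0:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logstash
  labels:
    app: logstash
spec:
  containers:
    - name: logstash
      image: docker.elastic.co/logstash/logstash:7.7.0
      ports:
        - containerPort: 5044        # the port our beats input listens on
      env:
        - name: ELASTICSEARCH_PASSWORD   # referenced as ${ELASTICSEARCH_PASSWORD} in logstash.conf
          valueFrom:
            secretKeyRef:
              name: quickstart-es-elastic-user
              key: elastic
      volumeMounts:
        - name: config-volume            # logstash.yml
          mountPath: /usr/share/logstash/config
        - name: pipeline-volume          # logstash.conf
          mountPath: /usr/share/logstash/pipeline
        - name: ca-certs                 # CA certificate from the ECK secret
          mountPath: /etc/logstash/certificates
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: pipeline-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.conf
            path: logstash.conf
    - name: ca-certs
      secret:
        secretName: quickstart-es-http-certs-public
```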

We can deploy our Logstash pod by running kubectl apply -f logstash.yaml in the same directory where the file is located.

CREATING LOGSTASH SERVICE

To allow Filebeat to communicate with our Logstash instance, we must create a Service using the file below. Be aware that this Service won’t expose Logstash externally through a LoadBalancer or a NodePort; it will only be reachable from inside the cluster.
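
A minimal sketch, assuming the pod carries the label app: logstash as in the pod manifest above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  selector:
    app: logstash        # matches the label on the Logstash pod
  ports:
    - name: beats
      port: 5044
      targetPort: 5044
  # no explicit type: defaults to ClusterIP, i.e. reachable only inside the cluster
```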

Just run kubectl apply -f logstash-service.yaml in the same directory where the file is located. And that’s it! You can check that everything succeeded and that Logstash successfully connected to Elasticsearch by inspecting its logs with kubectl logs logstash and looking for: Successfully started Logstash API endpoint {:port=>9600}

CREATE YOUR FIRST GROK FILTER

Grok gives us the possibility of parsing a String message into multiple fields to be stored in Elasticsearch. In this example we will be using Spring Boot’s default logging layout, which prints something like:

2020-05-25 22:08:58.212  INFO 6435 --- [main] com.example.demo.DemoApplication : Started DemoApplication in 0.958 seconds (JVM running for 1.535)

In Elasticsearch, we would see this whole message as only one field. Instead, we will be applying a Grok filter to parse it and have multiple fields.

To create our filter we will be using an online tool called Grok Debugger that allows us to test our patterns and see if they actually match our log message.

Grok comes with a few patterns pre-loaded, and you can see all of them here.

For the message above we will apply:
• %{TIMESTAMP_ISO8601:Date} for the date
• %{LOGLEVEL:Level} for the logging level
• %{INT:ProcessID} for the process ID
• %{DATA:ThreadName} for the thread name
• %{JAVACLASS:Class} for the Java Class
• %{GREEDYDATA:LogMessage} for the logged message

Add the spaces (there are two spaces before the level) and the other characters between all the properties, and you should have something like:

%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : %{GREEDYDATA:LogMessage}

Test it in the Grok Debugger and you should see each property extracted into its own field.
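
For the sample line above, the extracted fields would look roughly like this (shown here as JSON; the debugger renders them in its own layout):

```
{
  "Date": "2020-05-25 22:08:58.212",
  "Level": "INFO",
  "ProcessID": "6435",
  "ThreadName": "main",
  "Class": "com.example.demo.DemoApplication",
  "LogMessage": "Started DemoApplication in 0.958 seconds (JVM running for 1.535)"
}
```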

Now you can apply your filter to the logstash.conf file by updating the ConfigMap with the new info, as you can see in the example below:
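
The relevant change is the filter section of logstash.conf inside the ConfigMap, which could look like this:

```
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : %{GREEDYDATA:LogMessage}" }
  }
}
```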

And then apply it by running kubectl apply -f logstash-configmap.yaml and restart the pod so it picks up the new pipeline (for example, by running kubectl delete pod logstash followed by kubectl apply -f logstash.yaml).

To make sure everything succeeded, check the Logstash logs with: kubectl logs logstash.

You should see: Successfully started Logstash API endpoint {:port=>9600}

DEPLOY FILEBEAT

Now that our Grok Filter is working, we need Filebeat to collect the logs from our containers and ship them to Logstash to be processed.

Let’s start by applying the ConfigMap with the settings for Filebeat by running kubectl apply -f filebeat-configmap.yaml in the same directory where the file below is located.

You can see that Filebeat will be collecting the logs from /var/log/containers/ and shipping them to Logstash.
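
A sketch of what the file could contain for Filebeat 7.x; the names filebeat-config and logstash:5044 (the Service we created earlier) are assumptions to adapt:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:       # enrich events with pod/namespace metadata
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
    output.logstash:
      hosts: ["logstash:5044"]             # the ClusterIP Service created earlier
```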

To allow Filebeat to see the logs from containers in other namespaces, we must define a Service Account and a Cluster Role, and then bind them together. You can do it by running kubectl apply -f filebeat-authorization.yaml in the same directory where the file below is located.
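
A sketch of the three objects, modeled on the standard Filebeat Kubernetes manifests (the default namespace is an assumption):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods", "nodes"]   # needed by add_kubernetes_metadata
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
```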

Now that we have everything set, we can deploy Filebeat by running kubectl apply -f filebeat-daemonset.yaml in the same directory where the file below is located.
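
Again a sketch modeled on the reference Filebeat DaemonSet rather than the original file; adjust the image version to match your stack:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat        # the Service Account we created above
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.7.0
          args: ["-c", "/etc/filebeat.yml", "-e"]
          env:
            - name: NODE_NAME             # used by add_kubernetes_metadata
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0                  # needs root to read the host's log files
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: filebeat-config
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```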

And voilà! You now have a fresh instance of Logstash connected to Elasticsearch with SSL and ingesting logs from Filebeat, processing them with Grok and shipping them to Elasticsearch! Congratulations!

Contribute

Writing takes time and effort. I love writing and sharing knowledge, but I also have bills to pay. If you like my work, please, consider donating through Buy Me a Coffee: https://www.buymeacoffee.com/RaphaelDeLio

Or by sending me BitCoin: 1HjG7pmghg3Z8RATH4aiUWr156BGafJ6Zw

Follow Me on Social Media

Stay connected and dive deeper into the world of Elasticsearch with me! Follow my journey across all major social platforms for exclusive content, tips, and discussions.

Twitter | LinkedIn | YouTube | Instagram

