How To Use Custom Patterns On Grok Filter For Logstash Running On Kubernetes

Raphael De Lio
5 min read · Aug 12, 2020



Everything is working fine: your Elasticsearch instance is running, fed by your Logstash instance with all those nice filters you have set up, and all the stakeholders can see this precious information on Kibana. Furthermore, everything is running on Kubernetes. This is the perfect world.

And then this happens… you are asked to filter logs with a pattern you haven’t seen before. You go through the library of predefined Grok patterns and you cannot find the one you need. Oh boy, you will have to create a new one. Is that even possible?

You do some research and come across the official documentation, where the Elastic team teaches you how to do it. But your instance is running on Kubernetes, so you will have to change its deployment, and although that is easy to do, it might not be obvious how.

Follow me on this story today and you will learn how to implement custom Grok patterns for your Logstash running on Kubernetes.

If you followed my previous tutorials on how to Deploy the Elastic Stack with the Elastic Cloud On Kubernetes (ECK) and how to Deploy Logstash and Filebeat On Kubernetes With ECK and SSL, you already have everything we need running on Kubernetes. If you still don’t have everything running, follow the tutorials above.

For this tutorial, we will build on the base of those previous tutorials.

Let’s get started!

Summary
1. Building The Pattern
2. Update logstash-configmap.yml
3. Update logstash-pod.yaml
4. Alternative Method: Oniguruma
5. Conclusion

1. Building The Pattern

First of all, we need to identify the pattern we want to match. In this example, we will use a regex that matches any HTML tag: <[^>]*>. It matches anything that starts with <, runs through any characters other than >, and ends with >, such as <div>, </body>, or <a href="https://example.com">.

2. Update logstash-configmap.yml

In order to add new patterns, we will need to create a new file and add it to the config map.

As you remember, we had two files in our config map: logstash.yml and logstash.conf.

We will add a new one that we will call custom_patterns.txt. The name doesn’t matter; you can name it whatever you want.

We will add this file to the config map and add the new pattern to it. The name of the pattern must be separated from the pattern itself by a space, as in the example below:
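For our HTML tag, the whole file is a single line: the pattern name HTML_TAG (which we will reference from the filter), a space, and the regex:

HTML_TAG <[^>]*>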

Now we need to update the filter to use this new pattern. Our previous filter looked like this:

%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : %{GREEDYDATA:LogMessage}

For this example, we will change it and pretend our application is logging an HTML tag, as in the example below:

%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : Most used HTML tag: %{HTML_TAG:HtmlTag}

Good. Before we continue, let’s test it in the Grok Debugger that you can find in the Dev Tools section of Kibana (Don’t forget to add the Custom Pattern in the debugger):

Grok Debugger
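To reproduce the test, the debugger takes three inputs. The sample data below is a made-up log line for illustration; the other two come straight from this tutorial:

Sample Data:
2020-08-12 10:15:30.123  INFO 1 --- [main] com.example.DemoApplication : Most used HTML tag: <div>

Grok Pattern:
%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : Most used HTML tag: %{HTML_TAG:HtmlTag}

Custom Patterns:
HTML_TAG <[^>]*>

If everything is right, the structured output contains an HtmlTag field with the value <div>.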

Great! It’s working! Let’s update our logstash-configmap.yml now. Inside the grok block of our filter, we also need to tell Logstash where to find our custom patterns. We do that by adding a new key before the match line, e.g.:

patterns_dir => ["/opt/logstash/custom_patterns"]

Your file should look like the one below:
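If you followed the previous tutorials, it will be close to this minimal sketch. The beats input, the elasticsearch output and the logstash.yml settings below are assumptions carried over from that setup (adjust the hosts and SSL settings to your own); the patterns_dir line and the custom_patterns.txt entry are the new parts:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        patterns_dir => ["/opt/logstash/custom_patterns"]
        match => { "message" => "%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : Most used HTML tag: %{HTML_TAG:HtmlTag}" }
      }
    }
    output {
      elasticsearch {
        hosts => ["https://elasticsearch-es-http:9200"]
      }
    }
  custom_patterns.txt: |
    HTML_TAG <[^>]*>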

And now let’s apply this new configuration on Kubernetes by running:

kubectl apply -f logstash-configmap.yml

Good job! 🕺

3. Update logstash-pod.yaml

It’s time to update our logstash-pod.yaml. Remember that directory we defined earlier? We need to copy the file we created to that path in the container inside our pod. You’ve done it before with logstash.conf and logstash.yml; now we are going to do the same thing with custom_patterns.txt.

In order to do so, we will need to add a new volume and a new volume mount to our logstash-pod.yaml manifest, as in the examples below:

volumes:
  - name: custom-patterns-volume
    configMap:
      name: logstash-configmap
      items:
        - key: custom_patterns.txt
          path: custom_patterns.txt

And:

volumeMounts:
  - name: custom-patterns-volume
    mountPath: /opt/logstash/custom_patterns

Note that the name under volumeMounts must match the name of the volume defined above. Your manifest will look like this:
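Here is a minimal sketch. The pod name, the image tag and the pre-existing volumes for logstash.yml and logstash.conf are assumptions based on the previous tutorials; the custom-patterns entries are the new parts:

apiVersion: v1
kind: Pod
metadata:
  name: logstash
spec:
  containers:
    - name: logstash
      image: docker.elastic.co/logstash/logstash:7.9.0
      volumeMounts:
        - name: logstash-config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: custom-patterns-volume
          mountPath: /opt/logstash/custom_patterns
  volumes:
    - name: logstash-config-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: logstash-pipeline-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.conf
            path: logstash.conf
    - name: custom-patterns-volume
      configMap:
        name: logstash-configmap
        items:
          - key: custom_patterns.txt
            path: custom_patterns.txt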

Delete the old pod by running, in the same folder where your manifest is located:

kubectl delete -f logstash-pod.yaml

And then recreate it by running:

kubectl create -f logstash-pod.yaml

And that’s it! You should now be able to see your custom pattern being matched by Grok on your logs.
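To double-check, you can confirm the pattern file was mounted where patterns_dir expects it and watch the Logstash logs for configuration errors (assuming your pod is named logstash, as in the previous tutorials):

kubectl exec logstash -- cat /opt/logstash/custom_patterns/custom_patterns.txt
kubectl logs -f logstash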

Congratulations! 🔥

4. Alternative Method: Oniguruma

If you want to avoid the work of the previous method, you can use the Oniguruma syntax instead.

Instead of adding a new file, you can update the filter you already have using the Oniguruma syntax, which defines a named capture inline. The syntax is the following:

(?<field_name>the pattern here)

In our example, it would be:

(?<HtmlTag><[^>]*>)

Therefore, our Grok Matcher would be:

%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : Most used HTML tag: (?<HtmlTag><[^>]*>)

Then all you need to do is update your config map manifest with your new matcher. You will not need a new file or volumes. Much easier, right? 😉
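For completeness, here is a sketch of the resulting grok block in logstash.conf; no patterns_dir is needed this time:

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:Date}  %{LOGLEVEL:Level} %{INT:ProcessID} --- \[%{DATA:ThreadName}\] %{JAVACLASS:Class} : Most used HTML tag: (?<HtmlTag><[^>]*>)" }
  }
}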

I’m very happy you got to this part of the story; I’m truly thankful for it.
Support my work: follow me and clap for this story.

5. Conclusion

Although the second method looks much simpler, you might still want to use the first one.

When a regex is too complex, it can hurt the readability and maintainability of your filters. Rewriting the same complex regex multiple times is not a good practice either; in those cases, keeping your patterns in a separate file is the best approach.

Besides that, you might come across a situation where you need a very specific pattern that will never be reused, or you might need only one filter in total. In that case, maintainability and readability are not an issue, and you can opt for the simpler approach.

Contribute

Writing takes time and effort. I love writing and sharing knowledge, but I also have bills to pay. If you like my work, please, consider donating through Buy Me a Coffee: https://www.buymeacoffee.com/RaphaelDeLio

Or by sending me BitCoin: 1HjG7pmghg3Z8RATH4aiUWr156BGafJ6Zw

Follow Me on Social Media

Stay connected and dive deeper into the world of Elasticsearch with me! Follow my journey across all major social platforms for exclusive content, tips, and discussions.

Twitter | LinkedIn | YouTube | Instagram
