Cézar Pauxis defied logic and poverty to become a prodigy in software development. At only 17, he was hired by Brazil's largest electronic payments company.
“I had a device that would overheat so quickly I had to put it in the freezer. There were others where I could only use part of the screen.”
Cézar Pauxis describes his routine of “problematic relationships” with the many second-hand mobile phones he had during his teenage years, the only ones that fit his family's budget.
But it was through workarounds and perseverance that…
In this tutorial you will learn how to connect your Spring Boot application to an Elasticsearch instance using Spring Data and SSL.
If you don’t have an Elasticsearch instance running yet, you can check one of my previous stories to find out how to deploy a secure one on Kubernetes with SSL:
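To give an idea of where we are heading, here is a minimal sketch of what the SSL connection setup can look like with Spring Data Elasticsearch 4.x. The host name and credentials below are placeholders, not values from this tutorial; replace them with your own:

```kotlin
import org.elasticsearch.client.RestHighLevelClient
import org.springframework.context.annotation.Configuration
import org.springframework.data.elasticsearch.client.ClientConfiguration
import org.springframework.data.elasticsearch.client.RestClients
import org.springframework.data.elasticsearch.config.AbstractElasticsearchConfiguration

@Configuration
class ElasticsearchConfig : AbstractElasticsearchConfiguration() {

    override fun elasticsearchClient(): RestHighLevelClient {
        // Placeholder host and credentials; replace with your own values.
        val clientConfiguration = ClientConfiguration.builder()
            .connectedTo("elasticsearch.example.com:9200")
            .usingSsl() // connect over HTTPS
            .withBasicAuth("elastic", "changeme")
            .build()
        return RestClients.create(clientConfiguration).rest()
    }
}
```

With a bean like this in place, Spring Data repositories talk to the cluster over HTTPS.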
Python is one of the most popular programming languages, and even though it is easy to learn, it has all the power and resources necessary to get advanced work done. Ready to get started?
Installing Python is easy. Just go to www.python.org/downloads and download the latest version for your operating system. Then, regardless of which one you are using, run the downloaded file and go through the installation steps, leaving all options at their defaults.
In my previous stories I covered how we can Deploy Kibana and Elasticsearch using Elastic Cloud On Kubernetes (ECK) and how we can Deploy Logstash and Filebeat using Elastic Cloud On Kubernetes.
In today’s story I’m going to cover how we can deploy Heartbeat, one of the key tools of the Elastic Stack and the one that makes sure all your hosts are still up by asking them a simple question: are you alive? Heartbeat then ships this information, along with other details, to Elasticsearch for further analysis.
If you still haven’t got Elasticsearch, Kibana and Logstash deployed, make sure you…
In this tutorial we are going to see how we can download the result of a BigQuery query into a CSV file on our computer using Kotlin, the Java BigQuery API Client Library provided by Google, and the OpenCSV library.
Let’s get started!
You should start by setting up authentication to run the client library. You can do so by following the official documentation.
For the purposes of this example, I will start a new Kotlin project using the Gradle build tool and add the BigQuery and OpenCSV dependencies:
We will use the Shakespeare sample data table…
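As a rough sketch of the whole flow, the code below runs a query and writes the rows out with OpenCSV. It assumes the two dependencies above are on the classpath and that authentication is configured as described; the query and the output file name are just illustrations:

```kotlin
import com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.bigquery.QueryJobConfiguration
import com.opencsv.CSVWriter
import java.io.FileWriter

fun main() {
    // Uses the credentials configured in the authentication step above.
    val bigquery = BigQueryOptions.getDefaultInstance().service

    // Illustrative query against the public Shakespeare sample table.
    val queryConfig = QueryJobConfiguration.newBuilder(
        "SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` LIMIT 10"
    ).build()

    // Stream each result row into a CSV file.
    CSVWriter(FileWriter("result.csv")).use { writer ->
        writer.writeNext(arrayOf("word", "word_count"))
        for (row in bigquery.query(queryConfig).iterateAll()) {
            writer.writeNext(
                arrayOf(
                    row.get("word").stringValue,
                    row.get("word_count").longValue.toString()
                )
            )
        }
    }
}
```

Running this requires valid Google Cloud credentials and network access, so treat it as a starting point rather than finished code.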
The easiest pattern to implement in Kotlin (only one line needed) is also a very controversial one. Although many people claim that the singleton pattern is actually an anti-pattern and is frequently misused, in this story we are going to see how to implement it in Kotlin and how it can be very useful when used appropriately.
The singleton pattern specifies that only one instance of a given class can exist, and that this instance will be used by the whole application. Thus, we have a single, global point of access to this object.
It should be used when you need to control concurrent…
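That one line is Kotlin's `object` declaration. As a minimal sketch (the `Counter` name and behavior here are just for illustration), an `object` is created lazily, exactly once, and is thread-safe by default:

```kotlin
// Kotlin's `object` declaration is the whole singleton: a single,
// lazily initialized, thread-safe instance with a global access point.
object Counter {
    var count = 0
        private set

    fun increment() {
        count++
    }
}

fun main() {
    // Every reference to Counter resolves to the same instance.
    Counter.increment()
    Counter.increment()
    println(Counter.count) // prints 2
}
```

There is no constructor to call and no factory to write; the language guarantees the single instance for you.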
Everything is working fine: your Elasticsearch instance is running, fed by your Logstash instance with all those nice filters you have set up, and all the stakeholders can see this precious information in Kibana. Furthermore, everything is running on Kubernetes. This is the perfect world.
And then it happens… You are asked to filter logs with a pattern you haven’t seen before. You go through the library of pre-defined Grok patterns and cannot find the one you need. Oh boy, you will have to create a new one. Is that even possible?
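It is, and the key insight is that a Grok pattern is essentially a named regular expression: the `%{WORD:level}`-style syntax compiles down to capture groups. Here is a minimal Kotlin sketch of that idea, using a made-up log format (the pattern and field names are hypothetical):

```kotlin
// A custom Grok "pattern" boils down to a regular expression with
// capture groups; the log format below is made up for illustration.
val customPattern = Regex("""([A-Z]+)\s+(\d{3})\s+(.*)""")

// Extracts (level, code, message) from a line, or null if it doesn't match.
fun parse(line: String): Triple<String, String, String>? {
    val match = customPattern.find(line) ?: return null
    val (level, code, message) = match.destructured
    return Triple(level, code, message)
}

fun main() {
    println(parse("ERROR 503 upstream timed out"))
    // prints (ERROR, 503, upstream timed out)
}
```

In Logstash you would give such an expression a name in a patterns file and reference it from your Grok filter, which is exactly what we will do next.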
The time has come. If you have followed my previous stories on how to Deploy Elasticsearch and Kibana On Kubernetes and how to Deploy Logstash and Filebeat On Kubernetes, you have probably deployed version 7.7.0 of the stack.
In June 2020, version 7.8.0 was released: Kibana has a new user interface, Elasticsearch comes with new features, and so on. You might want to update, yet you are concerned that doing so might cause the loss of all the data already ingested into Elasticsearch, or maybe you are afraid the stack will be down for a long time…
Everyone is susceptible to mistakes, but those who can foresee them are able to prevent them long before they occur.
There are many ways you can lose all your data, and you probably don’t want that to happen. But if it eventually does, not having a backup might be your worst nightmare.
Today we will go through how to back up our Elasticsearch cluster's data (running on Kubernetes) to Amazon S3 and protect it from unexpected data loss.
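Under the hood, Elasticsearch backups use the snapshot API: you register a snapshot repository pointing at your bucket (which, on 7.x, requires the `repository-s3` plugin) and then take snapshots into it. As a rough sketch, the registration request body looks like this; the bucket, base path, and repository names are placeholders:

```kotlin
// Builds the JSON body for registering an S3 snapshot repository.
// It would be sent as: PUT https://<elasticsearch-host>:9200/_snapshot/<repository-name>
fun s3RepositoryBody(bucket: String, basePath: String): String =
    """{"type": "s3", "settings": {"bucket": "$bucket", "base_path": "$basePath"}}"""

fun main() {
    // Placeholder bucket and path for illustration only.
    println(s3RepositoryBody("my-backup-bucket", "elasticsearch/snapshots"))
}
```

Once the repository is registered, snapshots are incremental, so regular backups stay cheap.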
We all make mistakes, and we make them quite often. Deleting a pod, a persistent volume, or even a whole namespace is an easy thing to do in Kubernetes, and if you delete the wrong one you can say goodbye to your Elasticsearch cluster.
Today we will go through how to back up our Elasticsearch cluster's data (running on Kubernetes) to Google Cloud Storage and protect it from unexpected data loss.