Let's Embark! Shipping Kubernetes Logs with fluentd, Loggly, and Kustomize (Part 5)

Today we’re going to cover a straightforward topic, but one that makes troubleshooting Kubernetes clusters a lot easier than relying on kubectl logs or the Kubernetes Dashboard. We’ll be using Loggly as our provider, quite simply because it’s the easiest to set up and has a very generous free tier.

First, let’s set up our monitoring namespace. If you already have this set up, skip this step.

# Creates Namespace
kubectl create namespace monitoring

Now we need to create our monitoring-secrets Secret, which holds the Loggly API token that allows our fluentd DaemonSet to ship logs to Loggly. For instructions on how to create a Loggly API Token, please see the Loggly Documentation.

Update the following file with your token, then apply it against your cluster.
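
If you need a starting point, a minimal monitoring_secrets.yaml could look something like this. It's a sketch rather than the exact file from this series: the Secret name and the LOGGLY_TOKEN key are simply chosen to match the references elsewhere in this post, and the placeholder is where your own token goes.

# monitoring_secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: monitoring-secrets
  namespace: monitoring
type: Opaque
stringData:
  LOGGLY_TOKEN: "<your-loggly-customer-token>"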

kubectl apply -f monitoring_secrets.yaml

The next thing we need to do is download the fluentd DaemonSet definition from the official fluentd-kubernetes-daemonset repository.

wget https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml

Now that we have our DaemonSet, we’re going to use Kustomize to overlay our configuration and apply the customisations to our cluster. Our changes are quite simple and include:

  • Updating the namespace to monitoring.
  • Adding tags.
  • Using our LOGGLY_TOKEN from monitoring_secrets.yaml.

Create the following two files:
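
The exact contents will depend on your setup, but as a rough sketch the two files could look something like the following. The patch file name (fluentd-patch.yaml) is arbitrary, the DaemonSet and container names are assumed to match the manifest we just downloaded, and the patch only shows wiring LOGGLY_TOKEN in from our Secret; how the tags and the token are actually picked up depends on the fluentd image and configuration you run, so adjust accordingly.

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
resources:
  - fluentd-daemonset-elasticsearch-rbac.yaml
patchesStrategicMerge:
  - fluentd-patch.yaml

# fluentd-patch.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  template:
    spec:
      containers:
        - name: fluentd
          env:
            - name: LOGGLY_TOKEN
              valueFrom:
                secretKeyRef:
                  name: monitoring-secrets
                  key: LOGGLY_TOKEN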

Once they're created, we just need to generate the customised YAML and apply it against our cluster.

kubectl kustomize . | kubectl apply -f -
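
If you'd like to confirm the DaemonSet is happy before heading over to Loggly, a couple of quick (and entirely optional) checks against the monitoring namespace will do:

# Check the DaemonSet rolled out and its pods are running on each node
kubectl get daemonset -n monitoring
kubectl get pods -n monitoring -o wide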

That's it! We're up and running and our logs are being shipped out to Loggly. Next time you need to do a bit of troubleshooting, just head over to the console and check out your logs streaming in real time!