Today we’re covering a straightforward topic, but one that makes troubleshooting Kubernetes clusters and errors a lot easier than relying on kubectl logs or the Kubernetes Dashboard. We’re going to use Loggly as our provider, quite simply because it’s the easiest to set up and has a very generous free tier.
First, let’s set up our monitoring namespace. If you already have this set up, skip this step.

```shell
# Create the monitoring namespace
kubectl create namespace monitoring
```
Now, we need to create our monitoring-secrets Secret, which will provide the API token that allows our fluentd DaemonSet to ship logs to Loggly. For instructions on how to create a Loggly API token, please see the Loggly documentation.
Update the following file with your token, then apply it against your cluster.
```yaml
# monitoring_secrets.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: monitoring-secrets
  namespace: monitoring
data:
  # Values under `data` must be base64-encoded
  loggly_token: <LOGGLY_TOKEN>
```
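Note that Kubernetes expects values under a Secret’s `data` field to be base64-encoded, so encode your token before pasting it in. A quick sketch (the token here is a made-up placeholder, not a real Loggly token):

```shell
# Base64-encode the token before placing it under `data` in the Secret.
# "abc123" is a placeholder value only.
echo -n 'abc123' | base64
# → YWJjMTIz
```

The `-n` flag matters: without it, echo appends a newline that gets encoded into the value and will break authentication.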
```shell
kubectl apply -f monitoring_secrets.yaml
```
The next thing we need to do is download the fluentd DaemonSet definition from the official repositories.
Now that we have our DaemonSet, we’re going to use Kustomize to apply our customisations to the cluster. Our changes are quite simple and include:

- Updating the namespace to monitoring.
- Adding tags.
- Using our monitoring-secrets Secret to supply the Loggly token.
Create the following two files.
```yaml
# kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
resources:
  - fluentd-daemonset-elasticsearch-rbac.yaml
patchesStrategicMerge:
  - custom.yaml
```
```yaml
# custom.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: monitoring
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    spec:
      containers:
        - name: fluentd
          env:
            - name: LOGGLY_TOKEN
              valueFrom:
                secretKeyRef:
                  name: monitoring-secrets
                  key: loggly_token
            - name: LOGGLY_TAGS
              value: "k8s, fluentd"
```
Once they're created, we just need to generate the customised YAML and apply it against our cluster.

```shell
# Build the kustomization and apply the result
kubectl kustomize . | kubectl apply -f -
```
That's it! We're up and running and our logs are being shipped out to Loggly. Next time you need to do a bit of troubleshooting, just head over to the console and check out your logs streaming in real time!