Docker Swarm log collection

Docker swarm services log collection using Logspout

posted 2018-01-27 by Thomas Kooi


There are many solutions for log collection and log management. Personally, I’ve got experience with the ELK stack, so this post will focus on ELK with Logspout for log collection.

We will discuss some practices that worked best for me, some things to watch out for, and some examples of using all of this in a Swarm mode cluster.

Log collection using Logspout

Logspout by Gliderlabs is a tool for routing Docker container logs. Using it is super easy: all you need to do is run a Docker container with a little bit of configuration.

Logspout works with adapters, making it easy to ship your logs to different destinations.

For instance, here is the example from their GitHub repository readme:

$ docker run --name="logspout" \
	--volume=/var/run/docker.sock:/var/run/docker.sock \
	gliderlabs/logspout \
	syslog+tls://logs.papertrailapp.com:55555

This would send all Docker container logs to a syslog daemon running at logs.papertrailapp.com:55555.

Since I am using Logstash to process all incoming logs, I run Logspout in combination with the logspout-logstash adapter. This requires a custom-built image; the instructions on how to build one are nicely described in the readme file on GitHub.
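If you would rather build it yourself, the process comes down to two small files (check the adapter’s readme for the current import paths; the ones below are what it documented at the time). The Dockerfile only needs to extend the upstream image:

FROM gliderlabs/logspout:master

And a modules.go next to it pulls in the adapter plus the transports it uses:

package main

import (
	// transports used to reach Logstash
	_ "github.com/gliderlabs/logspout/transports/tcp"
	_ "github.com/gliderlabs/logspout/transports/udp"
	// the Logstash adapter itself
	_ "github.com/looplab/logspout-logstash"
)

Build it with docker build -t yourname/logspout-logstash . (the tag is up to you).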

Alternatively, you can use thojkooi/logspout-logstash, a Docker image I prepared and maintain for my personal usage.

This logspout-logstash adapter has some awesome features that make it easier to search and analyse your logs.

For instance, it will send the container’s labels to Logstash as part of the log message. Each task / container that is part of a Docker service running in your cluster has a bunch of com.docker.swarm.* labels. These help you identify exactly which service a given log message belongs to in tools like Kibana.
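To give an idea of why that is useful: a log entry shipped by the adapter arrives in Logstash roughly like the JSON below. The field names follow the adapter’s readme and may differ between versions; all values here are made up for illustration.

{
    "message": "GET / HTTP/1.1 200",
    "stream": "stdout",
    "docker": {
        "name": "/my-app.1.abc123",
        "cid": "a9efd0aeb470",
        "image": "nginx:alpine",
        "hostname": "a9efd0aeb470",
        "labels": {
            "com.docker.swarm.service.name": "my-app",
            "com.docker.swarm.task.name": "my-app.1.abc123",
            "com.docker.swarm.node.id": "wv9nw4hvff"
        }
    }
}

In Kibana you can then filter on the service name label to see the logs of a single service.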

Another thing I love is that it lets you add additional fields to your log entries using the environment variable LOGSTASH_FIELDS. They have some great examples in the readme. At work I use it to separate the various containers by their environment (LOGSTASH_FIELDS=environment=tst).

Deploying Logspout as a Docker service

I’ve deployed Logspout as a global Docker service in my clusters, as I need it to run on all nodes and forward the logs to the same endpoint.

(Diagram: Logspout and Logstash)

Here is a sample docker-compose file to deploy it as a stack:

version: '3.2'
services:
    logspout:
        image: thojkooi/logspout-logstash:1.0.0
        environment:
            ROUTE_URIS: 'logstash+tcp://logs.ams3.containerinfra.com:5000'
            LOGSTASH_FIELDS: 'environment=production'
        volumes:
            - type: 'bind'
              target: '/var/run/docker.sock'
              source: '/var/run/docker.sock'
        deploy:
            mode: global

And I deploy it as such:

$ docker stack deploy -f docker-compose.yml log-collection
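Since it runs in global mode, you can verify that a task was scheduled on every node in the cluster:

$ docker stack services log-collection
$ docker service ps log-collection_logspout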

Using the right log driver

Docker has a bunch of logging driver plugins to choose from. You can set a default for an entire node, or use a specific driver for a specific container. For example, you could send your logs directly to a syslog daemon using the right log plugin, or write to journald or awslogs.
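A node-wide default goes into /etc/docker/daemon.json; a minimal sketch (the daemon needs a restart, and only newly created containers pick up the change):

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}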

Using those drivers saves you from having to run an agent to collect the desired logs. You can also modify their output by using log tags, as sketched below.
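As an example of both, the compose snippet below sends a service’s logs straight to a syslog daemon and tags each message with the container name and ID. The syslog address is a placeholder you would replace with your own endpoint.

version: '3.2'
services:
    app:
        image: nginx:alpine
        logging:
            driver: 'syslog'
            options:
                syslog-address: 'udp://syslog.example.com:514'
                tag: '{{.Name}}/{{.ID}}'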

For simplicity, I tend to stick with the json-file log driver in combination with logspout-logstash. I’ve not yet run into any issues with this setup.

I’ve yet to properly test drive some of the other log drivers and see how they work for me. When I do, I will update this post with my findings.