Configure Docker to send logs to AWS CloudWatch

In this post, I will show how to configure Docker to send logs from containers to AWS CloudWatch Logs.

At the end of the tutorial, there will be several useful commands for debugging and testing.

Environment:

  • OS: Ubuntu 18.04.
ubuntu@ip-10-0-5-139:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic
  • docker version.
ubuntu@ip-10-0-5-139:~/$ docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Step 1: Obtain AWS credentials.

  • Open your AWS console and navigate to the IAM console.
  • Select Policies. Create a new policy that allows the following actions for the CloudWatch Logs service: DescribeLogStreams, GetLogEvents, CreateLogGroup, CreateLogStream, PutLogEvents. In the end, the policy should look something like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
  • In Groups, create a new group and attach the created policy to it.
  • In Users, create a new user and add that user to the created group.
  • In the settings of the created user, navigate to Security Credentials to obtain an Access Key ID and Secret Access Key.
    You have to store the Secret Access Key somewhere safe, because it is shown only once in the AWS console, at creation time. The Access Key ID, however, can be checked at any time.
    Assume that the AWS Access Key ID and AWS Secret Access Key are my-aws-access-key and my-secret-access-key, respectively.
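
Before moving on, you can sanity-check the new key pair with the AWS CLI (this assumes the aws command is installed; sts get-caller-identity needs no extra permissions and simply reports whose credentials were used):

env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws sts get-caller-identity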

Step 2: Set up the awslogs driver for the Docker daemon.

  • Execute sudo systemctl edit docker. An editor will open. Add the following content to the opened file.
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"

The above command actually opens the file /etc/systemd/system/docker.service.d/override.conf. The name override.conf is not special; you can use any name, for example if you keep multiple drop-in configurations. The file has to be placed directly in that directory; it must not be placed in any subdirectory.
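
If you prefer not to go through the interactive editor, here is a sketch of the equivalent manual steps, creating the drop-in directory and file by hand:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"
EOF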

Check that the content is set up correctly with cat:

# cat /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"
  • Apply the change with sudo systemctl daemon-reload
  • Restart the docker daemon sudo systemctl restart docker
  • (Optional step) To watch for output from the service triggered by sudo systemctl, use journalctl -fu docker
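
(Also optional) To confirm that systemd actually picked up the drop-in after the reload, you can print the environment it will pass to dockerd:

systemctl show docker --property=Environment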

All the above steps are equivalent to stopping the service and running dockerd directly with the environment variables set:

sudo systemctl stop docker
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key /usr/bin/dockerd

These commands may be useful for debugging if anything does not work.

Step 3: Start a container with the awslogs log driver.

docker run --rm --log-driver awslogs --log-opt awslogs-region=ap-northeast-1 --log-opt awslogs-group=test --log-opt awslogs-create-group=true busybox /bin/echo hello-world

Docker should now create a new log group named test (if it does not exist) and a new log stream in that group.
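
By default, the log stream is named after the container ID. If you want a predictable stream name instead, the driver also accepts an awslogs-stream option; for example (my-stream-name is just an illustrative name):

docker run --rm --log-driver awslogs --log-opt awslogs-region=ap-northeast-1 --log-opt awslogs-group=test --log-opt awslogs-stream=my-stream-name busybox /bin/echo hello-world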

You can check the result of this step using journalctl (or by running dockerd in the foreground, as shown above). Both are very verbose and useful for debugging problems such as invalid AWS keys or an incorrect AWS policy configuration.
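
If you want awslogs to be the default driver for all containers instead of passing --log-driver to every docker run, it can also be set daemon-wide in /etc/docker/daemon.json (a sketch; note that log-opts values must be strings, and the daemon must be restarted afterwards with sudo systemctl restart docker):

{
    "log-driver": "awslogs",
    "log-opts": {
        "awslogs-region": "ap-northeast-1",
        "awslogs-group": "test",
        "awslogs-create-group": "true"
    }
}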

Step 4: Test whether the log has been written.

  • Go to AWS console -> CloudWatch -> Logs -> test -> <stream name>

Or from the command line:

  • List all streams from a log group.
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs describe-log-streams --log-group-name test
  • Pick the most recently created stream name from the output; suppose the stream name is my-stream-name.
  • Show the last output log of the log stream.
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs get-log-events --log-group-name test --log-stream-name my-stream-name --limit 1 --no-start-from-head

The output will be something like:

{
    "events": [
        {
            "timestamp": 1556096908637,
            "message": "hello-world",
            "ingestionTime": 1556096908780
        }
    ],
    "nextForwardToken": "f/34702120663734923507202190247632410213615935025352933376",
    "nextBackwardToken": "b/34702120663734923507202190247632410213615935025352933376"
}
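
Note that describe-log-streams orders streams by name by default. If the group accumulates many streams, you can ask explicitly for the one with the most recent event:

env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs describe-log-streams --log-group-name test --order-by LastEventTime --descending --max-items 1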

(Optional) Step 5: Configure your nginx server container to write its logs to standard output and standard error.

In your nginx.conf, direct the access and error logs as follows:

access_log /dev/stdout main;
error_log stderr warn;

Note that the entry for error is stderr, not /dev/stderr.

Currently, it makes no difference whether the error logs are directed to stdout or stderr; I simply suggest this configuration as good practice.
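
Putting it all together, a run like the following should ship nginx's access and error logs to CloudWatch (the group name nginx-test and the port mapping are just examples; note that the official nginx image already symlinks its log files to stdout/stderr, so the configuration above matters mainly for custom images):

docker run --rm -p 8080:80 --log-driver awslogs --log-opt awslogs-region=ap-northeast-1 --log-opt awslogs-group=nginx-test --log-opt awslogs-create-group=true nginx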
