In this post I will show how to configure Docker to send logs from a container to AWS CloudWatch Logs.

This tutorial will also show how to debug when something goes wrong.

Environment:

  • OS Ubuntu 18.04
ubuntu@ip-10-0-5-139:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic
  • docker version
ubuntu@ip-10-0-5-139:~/$ docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Step 1: Obtain AWS credentials

  • Open your AWS console and navigate to IAM console
  • Select Policies. Then create a new policy that grants the following actions for the CloudWatchLogs service: DescribeLogStreams, GetLogEvents, CreateLogGroup, CreateLogStream, PutLogEvents. In other words, the policy should look something like
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
  • In Groups, create a new group that includes that policy
  • In Users, create a new user and add this user to the group that has just been created
  • In the settings of the newly created user, navigate to Security Credentials to obtain the Access Key ID and Secret Access Key.
    You have to store the Secret Access Key, because it is shown only once in the AWS console, when it is created. The Access Key ID, however, can be checked at any time.
    Suppose that the AWS access key ID and AWS secret access key are my-aws-access-key and my-secret-access-key, respectively. (If you prefer the command line, see the sketch after this list.)
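
The same setup can be sketched with the AWS CLI instead of the console. The names docker-awslogs-policy, docker-awslogs-group, docker-awslogs-user and the file policy.json are my own placeholders, and the aws command is assumed to be installed and configured with credentials that can manage IAM:

# Save the JSON policy above to policy.json first
aws iam create-policy --policy-name docker-awslogs-policy --policy-document file://policy.json
aws iam create-group --group-name docker-awslogs-group
aws iam attach-group-policy --group-name docker-awslogs-group --policy-arn arn:aws:iam::<your-account-id>:policy/docker-awslogs-policy
aws iam create-user --user-name docker-awslogs-user
aws iam add-user-to-group --group-name docker-awslogs-group --user-name docker-awslogs-user
# create-access-key prints AccessKeyId and SecretAccessKey; store the secret now
aws iam create-access-key --user-name docker-awslogs-user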

Step 2: set up the awslogs driver for the Docker daemon

  • Execute sudo systemctl edit docker. An editor will open. Add the following content to the opened file
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"

The above command actually opens a file named /etc/systemd/system/docker.service.d/override.conf. override.conf can be any name; rename it if you keep multiple configuration files. The file has to be placed directly in that directory, not in any subdirectory.

Check if the content is set up correctly via cat

cat /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"
  • Apply the change with sudo systemctl daemon-reload
  • Restart the Docker daemon: sudo systemctl restart docker
  • (Optional) To watch the output of the service while it restarts, use journalctl -fu docker
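
To verify that the daemon actually picked up the variables from the drop-in file, you can ask systemd for the resolved unit configuration (a quick sanity check, not strictly required):

# Shows the Environment= entries that systemd passes to dockerd
systemctl show docker --property=Environment
# Shows the merged unit file, including all drop-in files
systemctl cat docker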

All of the above commands in this step are equivalent to executing dockerd directly with the environment variables set:

sudo systemctl stop docker
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key /usr/bin/dockerd

This command is useful for debugging purposes if the following steps in this post do not work.

Step 3: start a container with the awslogs log driver

docker run --rm --log-driver awslogs --log-opt awslogs-region=ap-northeast-1 --log-opt awslogs-group=test --log-opt awslogs-create-group=true busybox /bin/echo hello-world

Now Docker should create a new log group named test (if it does not exist) and a new log stream in group test.

In this step, you should also check the journalctl (or dockerd) output. It is very useful for spotting anything wrong, especially with the AWS policy configuration.
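
If you want every container to use awslogs without repeating the --log-opt flags, you can also make it the default log driver in /etc/docker/daemon.json. A sketch with the same region and group as above (restart the daemon afterwards with sudo systemctl restart docker):

{
    "log-driver": "awslogs",
    "log-opts": {
        "awslogs-region": "ap-northeast-1",
        "awslogs-group": "test",
        "awslogs-create-group": "true"
    }
}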

Step 4: test if the log has been written

  • Go to AWS console -> CloudWatch -> Logs -> test -> <stream name>

Or in command line

  • List all streams in the log group
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs describe-log-streams --log-group-name test
  • Pick the first stream name (the most recently created one) from the output; suppose the stream name is my-stream-name
  • Show the last log event of the log stream
env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs get-log-events --log-group-name test --log-stream-name my-stream-name --limit 1 --no-start-from-head

The output will be something like

{
    "events": [
        {
            "timestamp": 1556096908637,
            "message": "hello-world",
            "ingestionTime": 1556096908780
        }
    ],
    "nextForwardToken": "f/34702120663734923507202190247632410213615935025352933376",
    "nextBackwardToken": "b/34702120663734923507202190247632410213615935025352933376"
}
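
As a convenience, the AWS CLI can strip the JSON envelope with a JMESPath query so only the message is printed (same group and stream names as above):

env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key aws --region ap-northeast-1 logs get-log-events --log-group-name test --log-stream-name my-stream-name --limit 1 --no-start-from-head --query 'events[].message' --output text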

(Optional) Step 5: configure your nginx server container to write logs to standard output

Add the following directives to your nginx configuration:

access_log /dev/stdout main;
error_log stderr warn;

Note that the entry for the error log is stderr, not /dev/stderr.

Currently there is no difference whether error logs are directed to stdout or stderr; I simply suggest this configuration as a good practice.
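
To see the whole pipeline end to end, here is a minimal sketch using the official nginx image (which already routes its default access and error logs to stdout/stderr) with the same log group as above:

docker run --rm -d -p 8080:80 --log-driver awslogs --log-opt awslogs-region=ap-northeast-1 --log-opt awslogs-group=test --log-opt awslogs-create-group=true nginx
# Each request should now appear as an event in the container's CloudWatch stream
curl http://localhost:8080/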
