How to add a user to an AWS EKS cluster

When you create a Kubernetes cluster in AWS EKS (Elastic Kubernetes Service), initially only the creator has full access to the cluster.

Other AWS users, even those with Administrator access to the AWS account, can see the cluster name and some general configuration, but cannot access internal information such as cluster resources without additional configuration.

The reason is that a Kubernetes cluster has its own internal RBAC (role-based access control) system, independent of IAM. The typical errors when a user without RBAC permissions tries to access the cluster from the console are:

Your current user or role does not have access to Kubernetes objects on this EKS cluster
This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.

Or

No Nodes
This cluster does not have any Nodes, or you don't have permission to view them.

Or

Error loading namespaces
namespaces is forbidden: User "" cannot list resource "namespaces" in API group "" at the cluster scope

Why does only the cluster creator have special access to the cluster?

According to the AWS support page, AWS assigns the cluster creator the system:masters permission in the core of the cluster:

Initially, only the creator of the Amazon EKS cluster has system:masters permissions to configure the cluster. To extend system:masters permissions to other users and roles, you must add the aws-auth ConfigMap to the configuration of the Amazon EKS cluster. The ConfigMap allows other IAM entities, such as users and roles, to access the Amazon EKS cluster.

How to add a non-creator user to an EKS cluster?

Suppose that your AWS account ID is 123456789011 and your non-creator user name is my-account.

Access your cluster and edit the aws-auth ConfigMap in the kube-system namespace. If this ConfigMap does not exist, create it. EKS reads this ConfigMap to decide which IAM users and roles may view cluster information in the AWS console.
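
You can view and edit this ConfigMap with kubectl, assuming your kubeconfig already points at the cluster (for example, as the cluster creator):

kubectl get configmap aws-auth -n kube-system -o yaml   # show the current content
kubectl edit configmap aws-auth -n kube-system          # open it in your editor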

Here is the default content of this ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: eaa040ed-198a-4643-ffff-07d6da7ac115
  resourceVersion: '208618345'
  creationTimestamp: '2020-03-22T05:09:15Z'
data:
  mapRoles: |-
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123456789011:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}

Modify the ConfigMap as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: eaa040ed-198a-4643-ffff-07d6da7ac115
  resourceVersion: '208618345'
  creationTimestamp: '2020-03-22T05:09:15Z'
data:
  mapRoles: |-
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123456789011:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |-
    - userarn: arn:aws:iam::123456789011:user/my-account
      username: my-account
      groups:
        - system:masters

Add a new entry under mapUsers to allow access from my-account to the cluster.

Note that the content under mapRoles and mapUsers is NOT a YAML object; it is a single string. That is why the |- block scalar indicator is required.
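
Alternatively, eksctl can add the same mapUsers entry without editing the YAML by hand. This is a sketch, assuming the cluster is named my-cluster and your current credentials already have access to it:

eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::123456789011:user/my-account \
  --username my-account \
  --group system:masters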

After modifying the ConfigMap, if you access the console and still see the following error:

Error loading namespaces
namespaces is forbidden: User "" cannot list resource "namespaces" in API group "" at the cluster scope

It is probably because of a typo in the group name; for example, system:master (missing the final s) is not a valid group, and system:nodes would not grant the permission you expect.
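
You can double-check what is actually applied in the cluster (assuming you still have access as the cluster creator):

kubectl describe configmap aws-auth -n kube-system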

Add permission to an account from outside the organization

In the last post, I introduced how to manage multiple AWS organizations from a single AWS account.

To add permission to an account outside the organization, you need to add the following entry to the aws-auth ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: eaa040ed-198a-4643-ffff-07d6da7ac115
  resourceVersion: '208618345'
  creationTimestamp: '2020-03-22T05:09:15Z'
data:
  mapRoles: |-
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123456789011:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
    - rolearn: arn:aws:iam::123456789011:role/role-for-my-org
      groups:
      - system:masters

Here, 123456789011 is the account ID of the cluster owner, and role-for-my-org is the role you created to allow access from the external organization (the my-org organization).
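
On the external organization's side, the local credentials might look like the following in ~/.aws/config. This is a minimal sketch: the profile name my-client matches the command below, while the source profile my-org-user and the region are assumptions; the full setup is covered in the mentioned post.

[profile my-client]
role_arn = arn:aws:iam::123456789011:role/role-for-my-org
source_profile = my-org-user
region = us-east-1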

If you have finished setting up the local credentials (check the mentioned post), you can update your kubeconfig with:

aws eks update-kubeconfig --name my-cluster --profile my-client
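
After that, a quick check like the following should succeed if the mapping is correct (run with the kubeconfig context created by the command above):

kubectl get namespaces
kubectl get nodes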