
Provision a Kubernetes Cluster in Amazon EKS with Weaveworks eksctl and AWS CDK

The Reaction Commerce infrastructure team uses Infrastructure as Code (IaC) tooling to describe the resources that need to be provisioned (especially cloud computing resources such as those offered by Amazon Web Services) in code. We do so because manual provisioning is error prone. What is not automated will come back and bite you, as experience has shown over and over again.

IaC tools can be declarative (“what to do”) or procedural (“how to do it”). We prefer tools that are declarative, such as CloudFormation and Terraform, over tools that are procedural, such as Ansible. This is because declarative tools describe the desired state of the system, and also keep track of that state, thus also serving as up-to-date documentation.

We also prefer IaC tools that use a real programming language. One such tool is the AWS Cloud Development Kit (CDK), which allows you to describe the AWS resources to be provisioned using either TypeScript or Python. The CDK code can be created and edited in a real IDE such as VS Code or IntelliJ IDEA. The infrastructure team at Reaction currently uses CDK with TypeScript (but we have some Python sample code as well).

For comparison purposes, we will show in this blog post two approaches for provisioning an AWS EKS cluster. The first approach uses a command-line tool from Weaveworks called eksctl. The second approach uses the AWS CDK. In both cases, the IaC tools create and deploy CloudFormation stacks behind the scenes. If you prefer to work with the command line and shell scripts, then you may like eksctl. If you prefer to write code in a programming language such as TypeScript or Python, then you may like the AWS CDK more.

If you want to experiment with the Reaction Commerce services in a local environment, we recommend using the reaction-platform tooling which is based on Docker Compose. However, in non-local environments composed of multiple machines for fault tolerance and scalability, you need to take advantage of a container orchestration platform. At Reaction Commerce, we recommend using Kubernetes.

The rest of this blog post was written by Wiston Mwinzi, Senior DevOps Engineer at Reaction Commerce. We will follow this post with others describing some interesting (and hopefully useful to others as well) work we have been doing in the engineering team at Reaction Commerce. Enjoy this article and stay tuned for more!

Here is Wiston:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It has grown considerably and is by far the most widely used orchestration platform, disrupting the market as more and more organizations make it their tool of choice.

A CNCF survey confirmed that over 60% of Kubernetes workloads ran on AWS.

To support this vast ecosystem, Amazon released AWS EKS, with the primary objective of running the open-source Kubernetes container management framework, at scale, on AWS.

Why Amazon EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability and also automatically detects and replaces unhealthy control plane instances.

  • Fully managed service, meaning it operates and maintains your Kubernetes control plane.
  • Runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
  • Certified Kubernetes conformant so you can use existing tooling and plugins from partners and the Kubernetes community.
  • Applications running on any standard Kubernetes environment are fully compatible and can be easily migrated to Amazon EKS.
  • Amazon EKS can also apply security patches to your cluster's control plane automatically.

An Overview of EKS

With Amazon EKS, you can take advantage of all the performance, scale, reliability, and availability of the AWS platform, as well as integrations with AWS networking and security services, such as Application Load Balancers for load distribution, IAM for role-based access control, and VPC for pod networking.

EKS usage algorithm. Source: Amazon AWS

An AWS EKS cluster consists of two primary components:

1) The EKS control plane

The EKS control plane consists of control plane nodes that run the Kubernetes software, such as etcd and the Kubernetes API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster. Each AWS EKS cluster control plane is single-tenant and unique, and runs on its own set of AWS EC2 instances.

2) EKS worker nodes that are registered with the control plane

EKS worker nodes run in your AWS account and connect to your cluster's control plane via the API server endpoint and a certificate file that is created for your cluster.

EKS architecture schemas. Source: Amazon AWS

Getting Started

We are going to provision an EKS cluster first by using eksctl from Weaveworks, a simple CLI tool for creating clusters on EKS, and second by using the AWS CDK, a software development framework for defining cloud infrastructure in code.

1. Provisioning an EKS cluster with eksctl

eksctl is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, and uses CloudFormation.


This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.1.37. You can check your version with the following command:

eksctl version
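If you don't have eksctl installed yet, Homebrew is one option on macOS (the eksctl README lists alternatives for other platforms):

brew tap weaveworks/tap
brew install weaveworks/tap/eksctl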

Create an Amazon EKS cluster and worker nodes with the following command:

eksctl create cluster \
--name devEKSCluster \
--version 1.14 \
--nodegroup-name devWorkers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 3 \
--node-ami auto

Output:

[ℹ]  using region us-east-2
[ℹ]  setting availability zones to [us-east-2b us-east-2c us-east-2d]
[ℹ]  subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-east-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "devWorkers" will use "ami-03992bafc801b7520f" [AmazonLinux2/1.14]
[ℹ]  creating EKS cluster "devEKSCluster" in "us-east-2" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --name=devEKSCluster'
[ℹ]  building cluster stack "eksctl-devEKSCluster-cluster"
[ℹ]  creating nodegroup stack "eksctl-devEKSCluster-nodegroup-devWorkers"
[✔]  all EKS cluster resource for "devEKSCluster" had been created
[✔]  saved kubeconfig as "/Users/myuser/.kube/config"
[ℹ]  adding role "arn:aws:iam::111122223333:role/eksctl-devEKSCluster-nodegroup-devWorkers-NodeInstanceRole-IJP4S12W3020" to auth ConfigMap
[ℹ]  nodegroup "devWorkers" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "devWorkers"
[ℹ]  nodegroup "devWorkers" has 3 node(s)
[ℹ]  node "ip-192-168-22-17.us-east-2.compute.internal" is not ready
[ℹ]  node "ip-192-168-32-184.us-east-2.compute.internal" is ready
[ℹ]  node "ip-192-168-51-220.us-east-2.compute.internal" is not ready
[ℹ]  kubectl command should work with "/Users/myuser/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "devEKSCluster" in "us-east-2" region is ready

Cluster provisioning usually takes between 10 and 15 minutes.
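Once provisioning finishes, you can verify the cluster and its nodegroup with eksctl itself:

eksctl get cluster --region us-east-2
eksctl get nodegroup --cluster devEKSCluster --region us-east-2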

You can customize your cluster by using a config file. Just run

eksctl create cluster -f cluster.yaml

to apply a cluster.yaml file like the following:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: basic-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 10
  - name: ng-2
    instanceType: m5.xlarge
    desiredCapacity: 2

Once you have created a cluster, you will find that the cluster credentials were added to ~/.kube/config. If you have the kubectl v1.10.x and aws-iam-authenticator commands in your PATH, you should be able to use kubectl. Make sure to use the same AWS API credentials for this as well. Check the EKS docs for instructions. If you installed eksctl via Homebrew, you should have all of these dependencies installed already.

You can now test that your kubectl configuration is correct:

kubectl get svc

Output:

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m

Manage cluster authentication

Amazon EKS uses IAM to provide authentication to your Kubernetes cluster (through the aws eks get-token command, available in version 1.16.156 or greater of the AWS CLI, or the AWS IAM Authenticator for Kubernetes), but it still relies on native Kubernetes Role-Based Access Control (RBAC) for authorization. This means that IAM is only used for authentication of valid IAM entities. All permissions for interacting with your Amazon EKS cluster's Kubernetes API are managed through the native Kubernetes RBAC system.
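For example, you can see this token-based authentication in action by asking the AWS CLI for a token directly; this is what kubectl does for you behind the scenes:

aws eks get-token --cluster-name devEKSCluster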

Managing Users and IAM roles for your cluster

When you create an AWS EKS cluster, the IAM entity (user or role) that creates the cluster, such as a federated user, is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.

Let us implement this.

Step 1: Create IAM roles

We will create two IAM roles: KubernetesAdmin and KubernetesDeveloper.

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')

aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn'

aws iam create-role \
  --role-name KubernetesDeveloper \
  --description "Kubernetes developer role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn'
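Both commands print the new role ARNs (that is what the --query 'Role.Arn' filter does). You can verify that a role is assumable before wiring it into the cluster, for example:

aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/KubernetesAdmin \
  --role-session-name test-admin-session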

Step 2: Create IAM policies

We will create two IAM policies allowing other IAM users to assume these roles.

Policy ARN: arn:aws:iam::111122223333:policy/assume-KubernetesAdmin-role

JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAssumeOrganizationAccountRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/KubernetesAdmin"
        }
    ]
}

Policy ARN: arn:aws:iam::111122223333:policy/assume-KubernetesDeveloper-role

JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAssumeOrganizationAccountRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/KubernetesDeveloper"
        }
    ]
}
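Assuming you save the two JSON documents above as assume-admin-policy.json and assume-developer-policy.json (the file names are our choice here), you can create the policies with:

aws iam create-policy \
  --policy-name assume-KubernetesAdmin-role \
  --policy-document file://assume-admin-policy.json

aws iam create-policy \
  --policy-name assume-KubernetesDeveloper-role \
  --policy-document file://assume-developer-policy.json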

Step 3: Create IAM groups

We will create two IAM groups: eks-administrators and eks-developers.

Associate the IAM policy assume-KubernetesAdmin-role with the group eks-administrators, and the IAM policy assume-KubernetesDeveloper-role with the group eks-developers, as sketched below.
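For example, with the AWS CLI (the IAM console works just as well):

aws iam create-group --group-name eks-administrators
aws iam create-group --group-name eks-developers

aws iam attach-group-policy --group-name eks-administrators \
  --policy-arn arn:aws:iam::111122223333:policy/assume-KubernetesAdmin-role
aws iam attach-group-policy --group-name eks-developers \
  --policy-arn arn:aws:iam::111122223333:policy/assume-KubernetesDeveloper-role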

Step 4: Update the aws-auth ConfigMap

We will modify the aws-auth ConfigMap and add two entries, one for the KubernetesAdmin IAM role and one for the KubernetesDeveloper IAM role:

  • The KubernetesAdmin role is mapped to the system:masters Kubernetes group.
  • The KubernetesDeveloper role is mapped to a Kubernetes user called k8s-developer-user.
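Rather than writing the ConfigMap from scratch, a convenient starting point is to dump the one eksctl already created and edit it:

kubectl get configmap aws-auth -n kube-system -o json > aws-auth.json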

Here is the JSON we used:

$ cat aws-auth.json
{
    "apiVersion": "v1",
    "data": {
        "mapAccounts": "[]",
        "mapRoles":
        "[{\"rolearn\":\"arn:aws:iam::111122223333:role/EKSClusterStack-AdminRole38563C57-46U4O5W4VLED\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"arn:aws:iam::111122223333:role/EKSClusterStack-devEKSClusterDefa-1S2QPL8TGWXJD\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::111122223333:role/EKSClusterStack-EKSClusterStor-88M3J8VX6J2E\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::111122223333:role/KubernetesAdmin\",\"groups\":[\"system:masters\"]}{\"rolearn\":\"arn:aws:iam::111122223333:role/KubernetesDeveloper\",\"username\":\"k8s-developer-user\",\"groups\":[]},{\"rolearn\":\"arn:aws:iam::111122223333:role/developer\",\"username\":\"develop\",\"groups\":[]},{\"rolearn\":\"arn:aws:iam::111122223333:role/admin\",\"username\":\"admin\",\"groups\":[\"system:masters\"]}]",
        "mapUsers": "[{\"userarn\":\"arn:aws:iam::111122223333:user/dev1\", \"username\":\"dev1\"}]"
    },
    "kind": "ConfigMap",
    "metadata": {
        "name": "aws-auth",
        "namespace": "kube-system"
    }
}

Apply the above JSON file via:

$ kubectl apply -f aws-auth.json

Step 5: Create Role and RoleBinding Kubernetes objects

We will create Role and RoleBinding objects in the staging namespace in Kubernetes, mapping the desired actions to the user k8s-developer-user.

Here is the YAML file we used:

$ cat developer-role-rolebinding.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer
  namespace: staging
rules:
  - apiGroups:
      - ""
      - "apps"
      - "batch"
      - "extensions"
    resources:
      - "configmaps"
      - "cronjobs"
      - "deployments"
      - "events"
      - "ingresses"
      - "jobs"
      - "pods"
      - "pods/attach"
      - "pods/exec"
      - "pods/log"
      - "pods/portforward"
      - "secrets"
      - "services"
    verbs:
      - "create"
      - "delete"
      - "describe"
      - "get"
      - "list"
      - "patch"
      - "update"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer
  namespace: staging
subjects:
- kind: User
  name: k8s-developer-user
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Apply this YAML with:

$ kubectl apply -f developer-role-rolebinding.yaml
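You can sanity-check the binding by impersonating the Kubernetes user (no IAM is involved in this check):

kubectl auth can-i list pods --namespace staging --as k8s-developer-user
kubectl auth can-i list pods --namespace default --as k8s-developer-user

The first command should answer yes and the second no, confirming that access is scoped to the staging namespace.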

Step 6: Create two test IAM users

At this stage, let us create two test IAM users: eks-developer in the eks-developers group and eks-admin in the eks-administrators group.
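As a sketch, creating and wiring up the test users with the AWS CLI looks like this (the generated access keys are what go into ~/.aws/credentials):

aws iam create-user --user-name eks-developer
aws iam add-user-to-group --group-name eks-developers --user-name eks-developer
aws iam create-access-key --user-name eks-developer

aws iam create-user --user-name eks-admin
aws iam add-user-to-group --group-name eks-administrators --user-name eks-admin
aws iam create-access-key --user-name eks-admin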

To test them, set AWS profiles for them in ~/.aws/credentials, then perform the next step.

Step 7: Update kubeconfig

We will then add a generic admin-type or developer-type user to the kubectl config file. The important part is that it is mapped to the right role ARN.

For developers:

$ cat eks-kubeconfig-developer

For admins:

$ cat eks-kubeconfig-admin
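The contents of these files are not reproduced here. As a rough sketch, the users section of the developer variant could look like the following, assuming aws-iam-authenticator is on your PATH and the clusters/contexts sections point at your cluster endpoint:

users:
- name: k8s-developer-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - "-i"
        - devEKSCluster
        - "-r"
        - arn:aws:iam::111122223333:role/KubernetesDeveloper
      env:
        - name: AWS_PROFILE
          value: eks-developer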

Any IAM user added to the eks-developers group gets the assume-KubernetesDeveloper-role policy, which allows assuming the KubernetesDeveloper IAM role. That role is mapped to the Kubernetes user k8s-developer-user, which in turn is bound to the Kubernetes role developer via the developer RoleBinding.

Note

Since this is a RoleBinding and not a ClusterRoleBinding, users only have access within the namespace the binding lives in. In this example, k8s-developer-user has access limited to the staging namespace.

2. Provisioning an EKS cluster with the AWS Cloud Development Kit (CDK)

The AWS Cloud Development Kit (CDK) is a software development framework for defining your cloud infrastructure in code and provisioning it through AWS CloudFormation.

Why AWS CDK?

The CDK integrates fully with AWS services and offers a higher-level object-oriented abstraction to define AWS resources.

You can think of the CDK as a cloud infrastructure “compiler”. It provides a set of high-level class libraries, called Constructs, that abstract AWS cloud resources and encapsulate AWS best practices.

When you run your CDK application, it is compiled into a CloudFormation Template, the "assembly language" for AWS cloud infrastructure. The template is then ready for processing by the CloudFormation provisioning engine for deployment into your AWS account.

Provisioning an EKS cluster

Create a directory for your app with an empty Git repository:

mkdir demo-cdk && cd demo-cdk
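This walkthrough assumes you already have the AWS CDK CLI installed. If not, it is available as an npm package:

npm install -g aws-cdk
cdk --version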

Step 1: Initialize new app

To initialize your new AWS CDK app, you use the cdk init command.

cdk init --language typescript

This will create an app with the following directory structure:

|- README.md
|- bin
|- cdk.json
|- jest.config.js
|- lib
|- package.json
|- test
|- tsconfig.json
|- node_modules
|- package-lock.json

Step 2: Define the stack

Replace the contents of bin/demo-cdk.ts with the code snippet below (the VPCStack construct is assumed to be defined separately under lib; adjust its import path to match your project):

#!/usr/bin/env node
import 'source-map-support/register';
import cdk = require('@aws-cdk/core');

import { DemoCdkStack } from "../lib/eksCluster";
// NOTE: the VPC stack is defined separately; "../lib/vpcStack" is an
// assumed location, so adjust the import path to wherever yours lives.
import { VPCStack } from "../lib/vpcStack";

const app = new cdk.App();

//*********************************************************************//
//************************* env variables *****************************//
//*********************************************************************//
const app_env = {
    region: "us-east-1",
    account: "111122223333"
};

//*********************************************************************//
//************************* your vpc stack ****************************//
//*********************************************************************//
const vpcStack = new VPCStack(app, 'VPCStack', {env: app_env});

//*********************************************************************//
//************************* eKSCluster Stack **************************//
//*********************************************************************//
new DemoCdkStack(app, 'EKSClusterStack', {env: app_env, vpc: vpcStack.rcVpc});

In the lib directory, create eksCluster.ts with the code snippet below:


import cdk = require("@aws-cdk/core");
import ec2 = require("@aws-cdk/aws-ec2");
import eks = require("@aws-cdk/aws-eks");
import asg = require("@aws-cdk/aws-autoscaling");
import iam = require("@aws-cdk/aws-iam");
import certmgr = require('@aws-cdk/aws-certificatemanager');
import elbv2 = require('@aws-cdk/aws-elasticloadbalancingv2');


interface EKSStackProps extends cdk.StackProps {
    vpc: ec2.IVpc;
}

export class DemoCdkStack extends cdk.Stack {
    private vpc: ec2.IVpc;

    constructor(scope: cdk.Construct, id: string, props?: EKSStackProps) {
        super(scope, id, props);
        
   
//*********************************************************************//
//************************ Context variables **************************//
//*********************************************************************//

        const current_env = this.node.tryGetContext("env.type");
        const eksClusterName = this.node.tryGetContext("eks.clusterName");
        const nodeGroupKeyName = this.node.tryGetContext("eks.nodeGroupKeyName");
        const nodeGroupMaxCapacity = this.node.tryGetContext("eks.nodeGroupMaxCapacity");
        const nodeGroupMinCapacity = this.node.tryGetContext("eks.nodeGroupMinCapacity");
        const nodeGroupDesiredCapacity = this.node.tryGetContext("eks.nodeGroupDesiredCapacity");
        const nodeGroupInstanceType = this.node.tryGetContext("eks.nodeGroupInstanceType");
        const nodeGroupInstancePort = this.node.tryGetContext("eks.nodeGroupInstancePort");


//*********************************************************************//
//******************************  VPC *********************************//
//*********************************************************************//
        if (props)
            this.vpc = props.vpc;
        else
            this.vpc = ec2.Vpc.fromLookup(this, current_env+"Vpc", {
                vpcName: "VPCStack/"+current_env+"Vpc"
            });
            
//*********************************************************************//
//****************************  ROLES *********************************//
//*********************************************************************//
        const eksRole = new iam.Role(this, 'eksRole', {
            roleName: 'eksRole',
            assumedBy: new iam.ServicePrincipal('eks.amazonaws.com')
        });
        eksRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSServicePolicy'));
        eksRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSClusterPolicy'));

        const eksClusterAdmin = new iam.Role(this, 'eksClusterAdmin', {
            assumedBy: new iam.AccountRootPrincipal()
        });
       
//*********************************************************************//
//******************************  SGs *********************************//
//*********************************************************************//
        const rcALBSecurityGroup = new ec2.SecurityGroup(
            this,
            current_env + 'LoadBalancerSG',
            {
                vpc: this.vpc,
                securityGroupName: current_env + 'LoadBalancerSG',
                description: "Access to " + current_env + 'LoadBalancer',
                allowAllOutbound: true
            }
        );
        rcALBSecurityGroup.addIngressRule(
            ec2.Peer.anyIpv4(),
            ec2.Port.tcp(80)
        );

        rcALBSecurityGroup.addIngressRule(
            ec2.Peer.anyIpv4(),
            ec2.Port.tcp(443)
        );
        
//*********************************************************************//
//*********************** Cluster Props *******************************//
//*********************************************************************//
        const clusterProps = {
            vpc: this.vpc,
            role: eksRole,
            mastersRole: eksClusterAdmin,
            clusterName: current_env+eksClusterName,
            defaultCapacity: 0
        };
       
//*********************************************************************//
//************************* EKS Cluster *******************************//
//*********************************************************************//
        const cluster = new eks.Cluster(this, eksClusterName, clusterProps);
       
//*********************************************************************//
//********************************* ACM *******************************//
//*********************************************************************//
        const acmCert = this.node.tryGetContext("acm.cert");
        const rcCert = certmgr.Certificate.fromCertificateArn(this, current_env+'Cert', acmCert);

     
//*********************************************************************//
//************************ EksOptimizedImage  *************************//
//*********************************************************************//
        const eksOptimizedImage = {
            // standard or GPU-optimized
            nodeType: eks.NodeType.STANDARD
        };

        const nodeGroupMachineImage = new eks.EksOptimizedImage(eksOptimizedImage);

//*********************************************************************//
//******************************* ASG *********************************//
//*********************************************************************//


        const rcAsg = new asg.AutoScalingGroup(this, current_env+'ASG', {
            vpc: this.vpc,
            // context values come back as strings, so wrap the instance type
            instanceType: new ec2.InstanceType(nodeGroupInstanceType),
            machineImage: nodeGroupMachineImage,
            keyName: nodeGroupKeyName,
            minCapacity: nodeGroupMinCapacity,
            maxCapacity: nodeGroupMaxCapacity,
            desiredCapacity: nodeGroupDesiredCapacity,
            updateType: asg.UpdateType.ROLLING_UPDATE,
            vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE}
        });
       
//*********************************************************************//
//*************************** ALB *************************************//
//*********************************************************************//

        const rcAlb = new elbv2.ApplicationLoadBalancer(this, current_env + 'ALB', {
            loadBalancerName: current_env + 'ALB',
            vpc: this.vpc,
            securityGroup: rcALBSecurityGroup,
            internetFacing: true,
            vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC}
        });


//*********************************************************************//
//************************** Listeners ********************************//
//*********************************************************************//

        const rcHttpListener = rcAlb.addListener(current_env + "HttpListener", {
            protocol: elbv2.ApplicationProtocol.HTTP,
            port: 80,
            open: true
        });
        rcHttpListener.addFixedResponse(current_env + "FixedResponse", {
            statusCode: "404"
        });


        const cfnHttpListener = rcHttpListener.node.defaultChild as elbv2.CfnListener;
        cfnHttpListener.defaultActions = [{
            type: "redirect",
            redirectConfig: {
                protocol: "HTTPS",
                host: "#{host}",
                path: "/#{path}",
                query: "#{query}",
                port: "443",
                statusCode: "HTTP_301"
            }
        }];

        
        const rcHttpsListener = rcAlb.addListener(current_env + "HttpsListener", {
            protocol: elbv2.ApplicationProtocol.HTTPS,
            port: 443,
            open: true,
            certificateArns: [rcCert.certificateArn],
            sslPolicy: elbv2.SslPolicy.RECOMMENDED
        });


        rcHttpsListener.addTargets(current_env + 'HttpsTargetGroup', {
            targetGroupName: current_env + 'TargetGroup',
            healthCheck: {
                path: '/healthz',
                interval: cdk.Duration.minutes(1)
            },
            port: nodeGroupInstancePort,
            protocol: elbv2.ApplicationProtocol.HTTP,
            targets: [rcAsg]
        });
       
//*********************************************************************//
//*********************** Add worker nodes ****************************//
//*********************************************************************//
        cluster.addAutoScalingGroup(rcAsg, {
            mapRole: true
        });

//*********************************************************************//
//*********************** IAM Roles ***********************************//
//*********************************************************************//
        const clusterAdminRole = new iam.Role(this, 'clusterAdmin', {
            roleName: 'KubernetesAdmin',
            assumedBy: new iam.AccountRootPrincipal()
        });

        const developerRole = new iam.Role(this, 'developer', {
            roleName: 'KubernetesDeveloper',
            assumedBy: new iam.AccountRootPrincipal()
        });
     
//*********************************************************************//
//********************** IAM Groups ***********************************//
//*********************************************************************//
        const eksAdminGroup = new iam.Group(this, 'eks-administrators', {
            groupName: 'eks-administrators',
        });

        const eksDeveloperGroup = new iam.Group(this, 'eks-developers', {
            groupName: 'eks-developers',
        });
   
   
//*********************************************************************//
//******************** IAM Managed Policies ***************************//
//*********************************************************************//
        const adminPolicyStatement = new iam.PolicyStatement({
            resources: [clusterAdminRole.roleArn],
            actions: ['sts:AssumeRole'],
            effect: iam.Effect.ALLOW
        });

        const developerPolicyStatement = new iam.PolicyStatement({
            resources: [developerRole.roleArn],
            actions: ['sts:AssumeRole'],
            effect: iam.Effect.ALLOW
        });

        const assumeEKSAdminRole = new iam.ManagedPolicy(this, 'assumeEKSAdminRole', {
            managedPolicyName: 'assume-KubernetesAdmin-role'
        });
        assumeEKSAdminRole.addStatements(adminPolicyStatement);
        assumeEKSAdminRole.attachToGroup(eksAdminGroup);

        const assumeEKSDeveloperRole = new iam.ManagedPolicy(this, 'assumeEKSDeveloperRole', {
            managedPolicyName: 'assume-KubernetesDeveloper-role'
        });
        assumeEKSDeveloperRole.addStatements(developerPolicyStatement);
        assumeEKSDeveloperRole.attachToGroup(eksDeveloperGroup);

//*********************************************************************//
//***************** Map Roles in K8s ConfigMap ************************//
//*********************************************************************//
        cluster.awsAuth.addRoleMapping(developerRole, {
            groups: ['reaction:devs'],
            username: 'k8s-developer-user'
        });

        cluster.awsAuth.addMastersRole(clusterAdminRole, 'k8s-cluster-admin-user');

    }
}

Use npm to install the packages imported above:

npm i @aws-cdk/core @aws-cdk/aws-ec2 @aws-cdk/aws-eks @aws-cdk/aws-autoscaling @aws-cdk/aws-iam @aws-cdk/aws-certificatemanager @aws-cdk/aws-elasticloadbalancingv2

To load information into your stack dynamically, you can update the context, a set of key-value entries in cdk.json. Open cdk.json in your favourite editor and add the information below:

{
  "app": "npx ts-node bin/demo-cdk.ts",
  "context": {
    "env.type": "dev",
    "eks.clusterName": "EKSCluster",
    "eks.nodeGroupKeyName": "reaction",
    "eks.nodeGroupDesiredCapacity": 3,
    "eks.nodeGroupInstanceType": "m5.xlarge",
    "eks.nodeGroupMaxCapacity": 10,
    "eks.nodeGroupMinCapacity": 1,
    "eks.nodeGroupInstancePort": 30080,
    "acm.cert": "<your cert arn>"
  }
}
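The values in the context block are what the tryGetContext() calls in the stack read at synthesis time. You can also override any of them from the command line without editing cdk.json, for example:

cdk synth EKSClusterStack -c env.type=prod -c eks.nodeGroupDesiredCapacity=5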

Let us now walk through the main resources in the EKSClusterStack.

Define a VPC

The VPC is where the EKS control plane ENIs will be placed. Use the VPC passed in via props, or look up an existing one:

 if (props)
            this.vpc = props.vpc;
        else
            this.vpc = ec2.Vpc.fromLookup(this, current_env+"Vpc", {
                vpcName: "VPCStack/"+current_env+"Vpc"
            });

Define a role

You need a role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf. Let's call it eksRole:

 const eksRole = new iam.Role(this, 'eksRole', {
            roleName: 'eksRole',
            assumedBy: new iam.ServicePrincipal('eks.amazonaws.com')
        });

The above role will have the following EKS managed policies attached:

eksRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSServicePolicy'));
eksRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSClusterPolicy'));

Define a MastersRole

The Amazon EKS construct library allows you to specify an IAM role that will be granted system:masters privileges on your cluster. Without specifying a mastersRole, you won't be able to interact with the cluster manually. Define the role like this:

const eksClusterAdmin = new iam.Role(this, 'eksClusterAdmin', {
            assumedBy: new iam.AccountRootPrincipal()
        });

Define a SecurityGroup

Define a security group for the application load balancer, with ingress rules on ports 80 and 443:

 const rcALBSecurityGroup = new ec2.SecurityGroup(
            this,
            current_env + 'LoadBalancerSG',
            {
                vpc: this.vpc,
                securityGroupName: current_env + 'LoadBalancerSG',
                description: "Access to " + current_env + 'LoadBalancer',
                allowAllOutbound: true
            }
        );
        rcALBSecurityGroup.addIngressRule(
            ec2.Peer.anyIpv4(),
            ec2.Port.tcp(80)
        );

        rcALBSecurityGroup.addIngressRule(
            ec2.Peer.anyIpv4(),
            ec2.Port.tcp(443)
        );

Define a Certificate

Import an existing certificate for the HTTPS listener. You can do so from its ARN:

const acmCert = this.node.tryGetContext("acm.cert");
        const rcCert = certmgr.Certificate.fromCertificateArn(this, current_env+'Cert', acmCert);

Define an eksOptimizedImage

You can use an Amazon Linux 2 image from the latest EKS Optimized AMI published in AWS Systems Manager:

 const eksOptimizedImage = {
            //standard or GPU-optimized
            nodeType: eks.NodeType.STANDARD
        };

 const nodeGroupMachineImage = new eks.EksOptimizedImage(eksOptimizedImage);

Define an application Load Balancer

Define an application load balancer by creating an instance of ApplicationLoadBalancer, adding listeners to the load balancer, and adding targets to the listeners:

 const rcAlb = new elbv2.ApplicationLoadBalancer(this, current_env + 'ALB', {
            loadBalancerName: current_env + 'ALB',
            vpc: this.vpc,
            securityGroup: rcALBSecurityGroup,
            internetFacing: true,
            vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC}
        });
        
const rcHttpListener = rcAlb.addListener(current_env + "HttpListener", {
            protocol: elbv2.ApplicationProtocol.HTTP,
            port: 80,
            open: true
        });
        rcHttpListener.addFixedResponse(current_env + "FixedResponse", {
            statusCode: "404"
        });

        const cfnHttpListener = rcHttpListener.node.defaultChild as elbv2.CfnListener;
        cfnHttpListener.defaultActions = [{
            type: "redirect",
            redirectConfig: {
                protocol: "HTTPS",
                host: "#{host}",
                path: "/#{path}",
                query: "#{query}",
                port: "443",
                statusCode: "HTTP_301"
            }
        }];

        // Add a listener on a particular port.
        const rcHttpsListener = rcAlb.addListener(current_env + "HttpsListener", {
            protocol: elbv2.ApplicationProtocol.HTTPS,
            port: 443,
            open: true,
            certificateArns: [rcCert.certificateArn],
            sslPolicy: elbv2.SslPolicy.RECOMMENDED
        });

        rcHttpsListener.addTargets(current_env + 'HttpsTargetGroup', {
            targetGroupName: current_env + 'TargetGroup',
            healthCheck: {
                path: '/healthz',
                interval: cdk.Duration.minutes(1)
            },
            port: nodeGroupInstancePort,
            protocol: elbv2.ApplicationProtocol.HTTP,
            targets: [rcAsg]
        });

       

Define an EKS cluster

The EKS cluster will have the following additional ClusterProps:

  • clusterName: Name for the cluster.
  • defaultCapacity: Number of instances to allocate as initial capacity for this cluster. The instance type can be configured through defaultCapacityInstanceType, which defaults to m5.large.
  • vpcSubnets: Where to place the EKS Control Plane ENIs.

 const clusterProps = {
            vpc: this.vpc,
            role: eksRole,
            mastersRole: eksClusterAdmin,
            clusterName: current_env+eksClusterName,
            defaultCapacity: 0
        };
        
        const cluster = new eks.Cluster(this, eksClusterName, clusterProps);

Scale the default capacity

By default, an EKS cluster is created with two m5.large instances. The cluster.defaultCapacity property references the AutoScalingGroup resource for the default capacity; it will be undefined if defaultCapacity is set to 0:

cluster.defaultCapacity!.scaleOnCpuUtilization('up', {
            targetUtilizationPercent: 80
        });

Define Workers

You can add workers as an AutoScalingGroup (ASG) through the addAutoScalingGroup() method. First, create the ASG:

    const rcAsg = new asg.AutoScalingGroup(this, current_env+'ASG', {
            vpc: this.vpc,
            instanceType: new ec2.InstanceType(nodeGroupInstanceType),
            machineImage: nodeGroupMachineImage,
            keyName: nodeGroupKeyName,
            minCapacity: nodeGroupMinCapacity,
            maxCapacity: nodeGroupMaxCapacity,
            desiredCapacity: nodeGroupDesiredCapacity,
            updateType: asg.UpdateType.ROLLING_UPDATE,
            vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE}
        });

Then add the ASG to the EKS cluster:


 cluster.addAutoScalingGroup(rcAsg, {
            mapRole: true
        });

Step 3: Interact with the EKS cluster

Define a clusterAdminRole and a developerRole that can be assumed by all users in the account:

 const clusterAdminRole = new iam.Role(this, 'clusterAdmin', {
            roleName: 'KubernetesAdmin',
            assumedBy: new iam.AccountRootPrincipal()
        });

 const developerRole = new iam.Role(this, 'developer', {
            roleName: 'KubernetesDeveloper',
            assumedBy: new iam.AccountRootPrincipal()
        });

Define two IAM groups, eksAdminGroup and eksDeveloperGroup, where you can assign your users:

const eksAdminGroup = new iam.Group(this, 'eks-administrators', {
            groupName: 'eks-administrators',
        });


const eksDeveloperGroup = new iam.Group(this, 'eks-developers', {
            groupName: 'eks-developers',
        });

Create adminPolicyStatement and developerPolicyStatement policy statements for the assume-KubernetesAdmin-role and assume-KubernetesDeveloper-role managed policies, respectively. These managed policies will be attached to the IAM groups defined above:

 const adminPolicyStatement = new iam.PolicyStatement({
            resources: [clusterAdminRole.roleArn],
            actions: ['sts:AssumeRole'],
            effect: iam.Effect.ALLOW
        });
 const developerPolicyStatement = new iam.PolicyStatement({
            resources: [developerRole.roleArn],
            actions: ['sts:AssumeRole'],
            effect: iam.Effect.ALLOW
        });
        
const assumeEKSAdminRole = new iam.ManagedPolicy(this, 'assumeEKSAdminRole', {
            managedPolicyName: 'assume-KubernetesAdmin-role'
        });
        assumeEKSAdminRole.addStatements(adminPolicyStatement);
        assumeEKSAdminRole.attachToGroup(eksAdminGroup);


        const assumeEKSDeveloperRole = new iam.ManagedPolicy(this, 'assumeEKSDeveloperRole', {
            managedPolicyName: 'assume-KubernetesDeveloper-role'
        });
        assumeEKSDeveloperRole.addStatements(developerPolicyStatement);
        assumeEKSDeveloperRole.attachToGroup(eksDeveloperGroup);

Finally, map the IAM role developerRole to the Kubernetes user k8s-developer-user and the IAM role clusterAdminRole to the Kubernetes user k8s-cluster-admin-user. These users are then referenced by Kubernetes RBAC resources, i.e. RoleBindings and ClusterRoleBindings:

 cluster.awsAuth.addRoleMapping(developerRole, {
            groups: ['reaction:devs'],
            username: 'k8s-developer-user'
        });

        cluster.awsAuth.addMastersRole(clusterAdminRole, 'k8s-cluster-admin-user');

To view all your CDK stacks, run:

cdk ls
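Given the two stacks instantiated in bin/demo-cdk.ts, the output should list VPCStack and EKSClusterStack. To inspect the CloudFormation template the CDK will generate, without deploying anything, you can also run:

cdk synth EKSClusterStack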

Step 4: Deploy the Cluster

cdk deploy EKSClusterStack --profile <Your profile>

Deploying the stack typically takes around 10 minutes. You will see the AWS resources as they are created:


Stack ARN:
arn:aws:cloudformation:us-east-2:111122223333:stack/VPCStack/270e5fa0-e518-11e9-8593-062398501aa6
EKSClusterStack
EKSClusterStack: deploying...
EKSClusterStack: creating CloudFormation changeset...
  0/15 | 19:19:34 | UPDATE_IN_PROGRESS   | AWS::CloudFormation::Stack                | EKSClusterStack User Initiated
  0/15 | 19:20:00 | UPDATE_IN_PROGRESS   | AWS::CloudFormation::Stack                | kubectl-layer-8C2542BC-BF2B-4DFE-B765-E181FD30A9A0 (kubectllayer8C2542BCBF2B4DFEB765E181FD30A9A0617C4ADA)
  1/15 | 19:20:11 | UPDATE_COMPLETE      | AWS::CloudFormation::Stack                | kubectl-layer-8C2542BC-BF2B-4DFE-B765-E181FD30A9A0 (kubectllayer8C2542BCBF2B4DFEB765E181FD30A9A0617C4ADA)
  1/15 | 19:20:21 | UPDATE_COMPLETE_CLEA | AWS::CloudFormation::Stack                | EKSClusterStack
  1/15 | 19:20:22 | UPDATE_IN_PROGRESS   | AWS::CloudFormation::Stack                | kubectl-layer-8C2542BC-BF2B-4DFE-B765-E181FD30A9A0 (kubectllayer8C2542BCBF2B4DFEB765E181FD30A9A0617C4ADA)
  2/15 | 19:20:32 | UPDATE_COMPLETE      | AWS::CloudFormation::Stack                | kubectl-layer-8C2542BC-BF2B-4DFE-B765-E181FD30A9A0 (kubectllayer8C2542BCBF2B4DFEB765E181FD30A9A0617C4ADA)
  3/15 | 19:20:32 | UPDATE_COMPLETE      | AWS::CloudFormation::Stack                | EKSClusterStack

 ✅  EKSClusterStack

Outputs:
EKSClusterStack.EKSClusterGetTokenCommand10DBF41A = aws eks get-token --cluster-name devEKSCluster --region us-east-2 --role-arn arn:aws:iam::111122223333:role/EKSClusterStack-eksClusterAdminE955DB57-QYI4WH3BN4
EKSClusterStack.EKSClusterConfigCommand3809C9C9 = aws eks update-kubeconfig --name devEKSCluster --region us-east-2 --role-arn arn:aws:iam::111122223333:role/EKSClusterStack-eksClusterAdmin55DB57-QYI4WHLC3BN4

Stack ARN:
arn:aws:cloudformation:us-east-2:111122223333:stack/EKSClusterStack/d0f12430-ea6d-11e9-82f4-0a7bd72fc8

Once deployed successfully, export your profile and run the following update-kubeconfig command:

aws eks update-kubeconfig --name devEKSCluster --role-arn arn:aws:iam::111122223333:role/EKSClusterStack-eksClusterAdmin55DB57-7ETXSLGYDO0C

Updated context arn:aws:eks:us-east-2:111122223333:cluster/devEKSCluster in /Users/myuser/.kube/config

You should see that kubeconfig has been updated. You can now use kubectl to inspect resources in the cluster, such as services and nodes:

$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   5d7h

$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-120-248.us-east-2.compute.internal   Ready    <none>   5d7h   v1.14.7-eks-1861c5
ip-10-0-159-232.us-east-2.compute.internal   Ready    <none>   5d7h   v1.14.7-eks-1861c5
ip-10-0-179-87.us-east-2.compute.internal    Ready    <none>   5d7h   v1.14.7-eks-1861c5

Step 5: Create Role and RoleBinding objects

Create Role and RoleBinding objects in a staging namespace in Kubernetes, mapping the desired actions to the user k8s-developer-user.

Here is a YAML manifest you can use:

$ cat dev-rbac-application.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: application
  namespace: staging
rules:
  - apiGroups:
      - ""
      - "apps"
      - "batch"
      - "extensions"
    resources:
      - "configmaps"
      - "cronjobs"
      - "deployments"
      - "events"
      - "ingresses"
      - "jobs"
      - "pods"
      - "pods/attach"
      - "pods/exec"
      - "pods/log"
      - "pods/portforward"
      - "secrets"
      - "services"
    verbs:
      - "create"
      - "delete"
      - "describe"
      - "get"
      - "list"
      - "patch"
      - "update"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-application
  namespace: staging
subjects:
- kind: User
  name: k8s-developer-user
roleRef:
  kind: Role
  name: application
  apiGroup: rbac.authorization.k8s.io

Apply this YAML with:

$ kubectl apply -f dev-rbac-application.yaml

Step 6: Create two test IAM users

At this stage, you can create two test IAM users: eks-developer in the eks-developers group and eks-admin in the eks-administrators group.

To test them, set AWS profiles for them in ~/.aws/credentials, then perform the next step.

Step 7: Update kubeconfig

Add a generic admin-type or developer-type user to the kubectl config file. The important part is that it is mapped to the right role ARN.

For developers:

$ cat eks-kubeconfig-developer

For admins:

$ cat eks-kubeconfig-admin

At this stage, you have a fully running EKS cluster provisioned with the AWS CDK:

  • Active EKS cluster control plane (managed by AWS)
  • Worker nodes deployed and joined to the cluster
  • ConfigMap deployed and updated with necessary roles
  • Kubernetes users created
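When you are done experimenting, remember that a running EKS cluster incurs charges. Tearing everything down is one command per approach:

cdk destroy EKSClusterStack
eksctl delete cluster --name devEKSCluster --region us-east-2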

That's all!
