Sunday, December 30, 2018

How to Set Up an AWS EKS Cluster with Worker Nodes and a Management Server

AWS EKS: Amazon Elastic Container Service for Kubernetes (EKS) is a managed service that lets us run a Kubernetes cluster without needing to stand up or maintain our own Kubernetes control plane.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Amazon EKS integrates with many AWS services:
  • ELB for Load Balancing
  • IAM for authentication
  • Amazon VPC for isolation


Note: It's recommended to use the same IAM user to provision the EKS cluster and to connect from the kubectl management server.

Step 1: Set Up New User and IAM Role:
Before provisioning the EKS cluster, we need to set up a user (programmatic access, not console) and an IAM role that allows EKS to provision the cluster and required resources on your behalf. Let's follow the steps below to create the user and IAM role.
A- Create Amazon EKS Service Role:
  • Log on to AWS Console
  • Go to IAM 
  • Select Roles and choose Create Role
  • Select AWS service as the type of trusted entity
  • Choose EKS


  • Click Next: Permissions
  • Click Next again
  • Add tags (optional)
  • Click Next
  • Enter the role name: EKS-Service


  • Select Create to finish creating the role.
The EKS service role has been created successfully.
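If you prefer the CLI, a roughly equivalent sequence is sketched below (the file name eks-trust-policy.json is my own choice; the two managed policies are the standard ones attached to the EKS service role). First save this trust policy as eks-trust-policy.json:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "eks.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
Then create the role and attach the policies:
# aws iam create-role --role-name EKS-Service --assume-role-policy-document file://eks-trust-policy.json
# aws iam attach-role-policy --role-name EKS-Service --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
# aws iam attach-role-policy --role-name EKS-Service --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy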

B- Create a policy to grant EKS full access:
  • Log on to AWS Console
  • Go to IAM 
  • Choose Policies
  • Create Policy
  • Choose the JSON tab and paste these lines:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}


  • Click on Review Policy
  • Enter the policy name: EKS_Full_Access


  • Click on Create Policy
The EKS_Full_Access policy has been created.
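The same policy can also be created from the CLI. A minimal sketch, assuming the JSON above was saved as eks-full-access.json:
# aws iam create-policy --policy-name EKS_Full_Access --policy-document file://eks-full-access.json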

C- Create a user:
  • Log on to AWS Console
  • Go to IAM 
  • Select Users and choose Add user
  • On the next page, enter the user name (EKS-User)
  • Choose the access type: Programmatic access


  • Click Next: Permissions
  • Choose Attach existing policies directly
  • Select the EKS_Full_Access policy created above, and AdministratorAccess as well
  • Click Next
  • Add tags (optional)
  • Click Next
  • Review the final selection and click Create user to finish.
The user has been set up successfully. We will use this user to create the EKS cluster from the AWS CLI and to manage it.
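The same user can also be created from the CLI (a sketch; replace <account-id> with your AWS account ID):
# aws iam create-user --user-name EKS-User
# aws iam attach-user-policy --user-name EKS-User --policy-arn arn:aws:iam::<account-id>:policy/EKS_Full_Access
# aws iam attach-user-policy --user-name EKS-User --policy-arn arn:aws:iam::aws:policy/AdministratorAccess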
D- Generate Access and Secret Keys:
In order to create and manage the EKS cluster, we need to set up the AWS CLI so that aws-iam-authenticator can communicate with our cluster using the AWS CLI credentials profile.
  • Log on to AWS Console
  • Go to IAM 
  • Select EKS-User 
  • Select Security credentials TAB
  • Click on Create access key



Download the access and secret keys and keep them somewhere safe; we will use them in the AWS CLI configuration in the next step.
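The keys can also be generated from the CLI:
# aws iam create-access-key --user-name EKS-User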

Step 2: Create Amazon EKS Cluster VPC:
We need a VPC to provision the EKS cluster and nodes. You can use an existing VPC, but it's best practice to create a new one with subnets and security groups. Let's follow the steps below to create a new EKS cluster VPC.

  • Go to the CloudFormation console and choose Create Stack
  • Specify the Amazon EKS VPC sample template as the template source
  • Click Next
  • Enter the Stack name, VPCBlock, Subnet01Block, Subnet02Block, and Subnet03Block.

  • click Next
  • Optionally, specify tags, a rollback trigger, and CloudWatch monitoring, or click Next to skip this section.
  • Review the defined configuration
  • Click Create to finish and create the EKS cluster VPC
It may take 5 to 10 minutes for the VPC to become available. Note down the subnet IDs and security group ID from the stack outputs; we will use them in the AWS CLI to create the EKS cluster.
For example, see the picture below.
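If you would rather script this step, the stack can be created and its outputs read back with the CLI. The template URL below follows the same 2018-12-10 path as the other downloads in this post but is an assumption; verify it before use:
# aws cloudformation create-stack --stack-name EKS-VPC-Stack \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-vpc-sample.yaml
# aws cloudformation describe-stacks --stack-name EKS-VPC-Stack --query "Stacks[0].Outputs"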


Step 3: Setup Management System:
We need a system to create and manage the EKS cluster. The following are needed on the management system:
  • AWS CLI
  • aws-iam-authenticator 
  • kubectl
  • access and secret keys
A- Install AWS CLI:
In my case, I am using Ubuntu 18.04. Follow the commands below to complete the installation.
  • Check python version
# python --version
Python 3.6.5
If you don't have Python installed, install it first.
  • Install python-pip
# apt install python-pip
  • Install awscli  
# pip install awscli --upgrade
  • Make aws command available 
# ln -s /usr/local/bin/aws /usr/bin/aws
  • Verify installation.
# aws --version

aws-cli/1.16.81 Python/2.7.15rc1 Linux/4.15.0-39-generic botocore/1.12.71
The AWS CLI installation completed successfully.

B- Set up aws-iam-authenticator:
  • Download aws-iam-authenticator
# wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator
  • Make aws-iam-authenticator executable
# chmod +x aws-iam-authenticator
  • Set path for aws-iam-authenticator 
#  cp aws-iam-authenticator /usr/bin/
  • Verify aws-iam-authenticator installation
# aws-iam-authenticator help
C- Setup kubectl:
kubectl is an essential tool for managing an Amazon EKS cluster. Let's set up kubectl.
  • Download kubectl
# curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/kubectl
  • Make kubectl executable
# chmod +x kubectl
  • Set kubectl path 
# cp kubectl /usr/bin
  • Verify kubectl installation
# kubectl version --short --client
Client Version: v1.11.5
All three tools have been set up successfully.

D- Setup AWS CLI access and secret keys:
We need to set up AWS CLI access to our AWS account in order to provision the Amazon EKS cluster and manage its resources.
  • Run the following command on your system.
#  aws configure

AWS Access Key ID [****************CRBA]: KTxttt65DFDFDDFDNR6G88
AWS Secret Access Key [****************uiJj]: DkfjdkfjDFDFDFDFJwqpUdR+awgPaPPdkjkijkjkij
Default region name [us-east-1]: us-east-1
Default output format [table]: table

All done. The management system is now ready to provision the Amazon EKS cluster.
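As a quick optional sanity check, confirm the credentials work before moving on:
# aws sts get-caller-identity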
Step 4: Create an Amazon EKS Cluster:
After setting up all the prerequisites, it's time to create the EKS cluster.
  • Get the EKS service role ARN from the IAM section of the AWS console:


  • Get the subnet and security group IDs from the CloudFormation stack outputs

  • Run the following command to create the EKS cluster, substituting the role ARN from the IAM console and the three subnet IDs and security group ID from the CloudFormation VPC stack outputs:
# aws eks create-cluster --name <cluster-name> \
    --role-arn <eks-service-role-arn> \
    --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,<subnet-3>,securityGroupIds=<security-group-id>
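For illustration, a filled-in command might look like this (the IDs are the placeholder ones from the sample output below; substitute your own):
# aws eks create-cluster --name EKS-Cluster02 \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F \
    --resources-vpc-config subnetIds=subnet-a9189fe2,subnet-50432629,securityGroupIds=sg-f5c54184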
  • Once done, your output would be like below:
{
    "cluster": {
        "name": "devel",
        "arn": "arn:aws:eks:us-west-2:111122223333:cluster/EKS-Cluster02",
        "createdAt": 1527785885.159,
        "version": "1.10",
        "roleArn": "arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-a9189fe2",
                "subnet-50432629"
            ],
            "securityGroupIds": [
                "sg-f5c54184"
            ],
            "vpcId": "vpc-a54041dc"
        },
        "status": "CREATING",
        "certificateAuthority": {}
    }
}
After the EKS cluster has been created successfully, we can connect to it.
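The cluster starts in CREATING status. You can poll it from the CLI and wait for ACTIVE before proceeding (an optional check, not part of the original walkthrough):
# aws eks describe-cluster --name EKS-Cluster02 --query cluster.status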


Step 5: Configure kubectl for the Amazon EKS Cluster:
We need to update the kubeconfig file on the management system.
  • Run the following command to update your EKS cluster details in the kubectl config file
# aws eks update-kubeconfig --name EKS-Cluster02
Added new context arn:aws:eks:us-east-1:55568115433:cluster/EKS-Cluster02 to /home/amar/.kube/config
EKS Cluster added to the kubectl config file.
  • Verify Cluster Access
# kubectl get svc
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP   23h
Successfully connected to the Amazon EKS cluster.

Step 6: Launch and Configure EKS Worker Nodes:
Now that the EKS VPC and control plane are ready, we can launch and configure the worker nodes.

  • Go to the CloudFormation console and create a new stack using the Amazon EKS node group template
  • Fill in all the other details. Make sure you enter the correct EKS cluster name; otherwise, the worker nodes can't communicate with the cluster. See the details in the picture below.





  • Click Next
  • The next page is optional; you can leave it for now or choose your desired settings
  • Click Next
  • Review your configuration and check the acknowledgement box
  • Click Create to finish and set up the worker nodes.

Our worker nodes were created successfully.
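If you prefer scripting this step, a roughly equivalent CLI call is sketched below. The template URL matches the 2018-12-10 path used elsewhere in this post, and the parameter keys (ClusterName, ClusterControlPlaneSecurityGroup, NodeGroupName, and so on) are assumptions based on the node group template of that era; verify them against the template you actually use, and substitute your own IDs and key pair:
# aws cloudformation create-stack --stack-name EKS-Worker-Nodes \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ClusterName,ParameterValue=EKS-Cluster02 \
                 ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-f5c54184 \
                 ParameterKey=NodeGroupName,ParameterValue=EKS-Workers \
                 ParameterKey=NodeInstanceType,ParameterValue=t3.medium \
                 ParameterKey=NodeImageId,ParameterValue=<eks-optimized-ami-id> \
                 ParameterKey=KeyName,ParameterValue=<your-key-pair> \
                 ParameterKey=VpcId,ParameterValue=vpc-a54041dc \
                 ParameterKey=Subnets,ParameterValue='subnet-a9189fe2\,subnet-50432629'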

Step 7: Join Worker Nodes to the Cluster:
Once the worker node stack is ready, we can join the nodes to the EKS cluster.
  • SSH to management System
  • Get the worker node instance role ARN from the CloudFormation stack outputs in the AWS console (see the picture below), or fetch it from the CLI as shown below
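A quick way to pull that ARN from the CLI, assuming the stack is named EKS-Worker-Nodes and the template exposes the role under the NodeInstanceRole output key (both assumptions; adjust to your stack):
# aws cloudformation describe-stacks --stack-name EKS-Worker-Nodes \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" --output text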

  • Download Config Map
# curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-12-10/aws-auth-cm.yaml
  • Update your worker node instance role ARN in the aws-auth-cm.yaml file.
  • Edit the file in your favourite editor and replace <ARN of instance role (not instance profile)> with your ARN.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  • Save and exit from the file.
  • Apply the configuration
# kubectl apply -f aws-auth-cm.yaml
The configuration was applied successfully.
  • Verify the added nodes:
# kubectl get node
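If the nodes show NotReady at first, give them a minute or two and watch them register:
# kubectl get nodes --watch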

Both nodes were added successfully.

Finally, we have successfully provisioned an EKS cluster, worker nodes, and a management system to manage the EKS cluster.

