Build Your First Kubernetes Application with AWS EKS

Deep · AWS in Plain English · Sep 24, 2021
Introduction to EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that runs Kubernetes on AWS without requiring us to manage the control plane. Installation, maintenance, patching, and high availability are all handled by AWS automatically. This means we don’t have to spend time managing the control plane and can focus entirely on running our workloads (i.e. applications/services) on Kubernetes.

What you will learn from this blog:

As part of this blog, we are going to build a production-ready three-tier application, with a web tier, an app tier, and a DB tier, on EKS. We will create all the related Kubernetes manifests, spin them up in EKS, and check connectivity end to end.

EKS Objects Used:

Kubernetes Pod, ReplicaSet, Deployment, ConfigMaps, Secrets, Services (ExternalName/NodePort/ClusterIP), Ingress, etc.

AWS Services Used:

AWS EKS, AWS Application Load Balancer (ALB), AWS Certificate Manager, AWS Route 53, AWS ECR.

Since we are not covering everything from the basics, you should have at least a working knowledge of Kubernetes and the AWS services mentioned above to get the most out of this blog.

Application Architecture

EKS Application Architecture

Let us quickly review the application architecture. We have the AWS EKS cluster present in the us-east-1a and us-east-1b Availability Zones, in public subnets. We also have 2 EC2 nodes (as a node group) that handle the workload and are placed in private subnets in us-east-1a and us-east-1b. The EC2 nodes reach the EKS cluster’s Kubernetes API server through the NAT Gateways in the public subnets.

From the workload perspective, we have an AWS Application Load Balancer (ALB) in the public subnets, and all user requests come in through the ALB. We have context-based path routing set up on the ALB using a Kubernetes Ingress, so any HTTP request with /app1/* goes to the app1 NodePort service and is load-balanced across the app1 pods. Similarly, any request with /app2/* lands on the app2 pods via the app2 NodePort service. App1 can also communicate with AWS RDS through an ExternalName service in Kubernetes.

Provisioning Kubernetes

I will be using eksctl to provision the EKS managed cluster and node group in the us-east-1 region.

Create ControlPlane

$ eksctl create cluster --name=ekscluster --region=us-east-1 --zones=us-east-1a,us-east-1b --without-nodegroup
2021-09-23 16:18:46 [✔]  all EKS cluster resources for "ekscluster" have been created
2021-09-23 16:20:55 [ℹ]  kubectl command should work with "/Users/deepabi/.kube/config", try 'kubectl get nodes'
2021-09-23 16:20:55 [✔]  EKS cluster "ekscluster" in "us-east-1" region is ready

$ eksctl get cluster --region=us-east-1
NAME        REGION     EKSCTL CREATED
ekscluster  us-east-1  True

$ eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster ekscluster --approve
2021-09-23 16:25:38 [ℹ]  will create IAM Open ID Connect provider for cluster "ekscluster" in "us-east-1"
2021-09-23 16:25:40 [✔]  created IAM Open ID Connect provider for cluster "ekscluster" in "us-east-1"

This automatically creates the EKS control plane components for you in the public subnets. It takes around 20-30 minutes to create the control plane.

Create NodeGroup

$ eksctl create nodegroup --cluster=ekscluster --region=us-east-1 --name=eksng --node-type=t2.medium --nodes=2 --nodes-min=2 --nodes-max=4 --node-volume-size=20 --ssh-access --ssh-public-key=kube --managed --asg-access --external-dns-access --full-ecr-access --appmesh-access --alb-ingress-access --node-private-networking
2021-09-23 16:34:17 [✔]  created 1 managed nodegroup(s) in cluster "ekscluster"
2021-09-23 16:34:19 [ℹ]  checking security group configuration for all nodegroups
2021-09-23 16:34:19 [ℹ]  all nodegroups have up-to-date configuration

The above command creates 4 subnets (2 private and 2 public), launches the 2 nodes in the private subnets, creates an IAM role with the necessary permissions, and attaches it to the 2 EC2 instances. Now our managed EKS control plane and a node group with 2 nodes (to take our workloads) are ready. Verify the nodes using kubectl.

$ kubectl get nodes -o wide
NAME                              STATUS   ROLES    AGE   VERSION
ip-192-168-104-179.ec2.internal   Ready    <none>   38m   v1.20.7-eks-135321
ip-192-168-88-107.ec2.internal    Ready    <none>   37m   v1.20.7-eks-135321

Create and Containerize our Application

Here we are going to create a simple web application using Python 3 and containerize it using Docker.

App1, named user-service-app, adds/lists users in a MySQL DB. App2 is a simple nginx web app. Based on the URL context, requests are routed to App1 or App2 by the AWS ALB.

App1 file

server.py
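The embedded server.py gist is not reproduced here. Purely as a sketch, a service like this could look as follows, assuming Flask and PyMySQL and a hypothetical /app1/users endpoint; the DB connection details are read from environment variables (DB_HOST, DB_USER, DB_PASS, DB_NAME), with DB_HOST pointing at the mysql-service ExternalName service described later. The real gist may differ in routes and details.

# server.py - minimal sketch (assumptions: Flask + PyMySQL, env-based DB config)
import os
import pymysql
from flask import Flask, jsonify, request

app = Flask(__name__)

def get_connection():
    # DB_HOST is injected by the Kubernetes Deployment and points at the
    # mysql-service ExternalName service, which resolves to the RDS endpoint.
    return pymysql.connect(
        host=os.environ.get("DB_HOST", "mysql-service"),
        user=os.environ.get("DB_USER", "admin"),
        password=os.environ.get("DB_PASS", ""),
        database=os.environ.get("DB_NAME", "newdb"),
    )

@app.route("/app1/users", methods=["GET"])
def list_users():
    # Return all rows from the USER_DETAILS table created in the DB setup step
    conn = get_connection()
    with conn.cursor() as cur:
        cur.execute("SELECT username, gender FROM USER_DETAILS")
        rows = cur.fetchall()
    conn.close()
    return jsonify([{"username": r[0], "gender": r[1]} for r in rows])

@app.route("/app1/users", methods=["POST"])
def add_user():
    # Insert a new user posted as JSON
    body = request.get_json()
    conn = get_connection()
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO USER_DETAILS (username, password, gender) VALUES (%s, %s, %s)",
            (body["username"], body["password"], body["gender"]),
        )
    conn.commit()
    conn.close()
    return jsonify({"status": "created"}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)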

Dockerfile for App1

Dockerfile
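The Dockerfile gist is likewise not embedded; a minimal sketch, assuming a python:3.9-slim base image and the server.py/requirements.txt sketched in this post:

# Dockerfile for App1 - sketch
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
# App port assumed to match the server.py sketch
EXPOSE 5000
CMD ["python", "server.py"]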

requirements.txt for App1

requirements.txt
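Again as an assumption based on the server.py sketch above, requirements.txt would simply list the two libraries:

flask
pymysql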

Now create Docker image using the Dockerfile and push it to DockerHub.

docker build -t deepanmurugan/user-service-app:1.0.0 .
docker push deepanmurugan/user-service-app:1.0.0

We have now containerized App1, so let’s containerize App2 as well. App2 is a simple nginx web app with a custom index.html file.

Dockerfile for App2

Dockerfile
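The App2 Dockerfile gist is not embedded either; a minimal sketch, assuming the official nginx image and the custom index.html shown below:

# Dockerfile for App2 - sketch
FROM nginx:latest
# Serve the custom page; if the ALB forwards the /app2/* prefix unchanged,
# the file may instead need to live under /usr/share/nginx/html/app2/
COPY index.html /usr/share/nginx/html/index.html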

custom index.html for App2

<h1>App2 is up and running</h1>

Create docker image and push to DockerHub.

docker build -t deepanmurugan/nginxapp:1.0.0 .
docker push deepanmurugan/nginxapp:1.0.0

Now both of our applications are ready. Create a MySQL DB on AWS RDS for App1 to connect to, and launch the MySQL instance in the private subnets where our node group resides, for example as sketched below.
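A hedged example of launching such an instance with the AWS CLI; the instance class, subnet group, and security group values are placeholders you would replace with your own:

$ aws rds create-db-instance \
    --db-instance-identifier user-service-db \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password '<your-password>' \
    --allocated-storage 20 \
    --db-subnet-group-name <private-db-subnet-group> \
    --vpc-security-group-ids <sg-allowing-3306-from-nodes> \
    --no-publicly-accessible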

DB Initial Setup

mysql> CREATE DATABASE newdb;
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE `newdb`.`USER_DETAILS` (`username` varchar(255),`password` varchar(255),`gender` varchar(255), PRIMARY KEY (username));
Query OK, 0 rows affected (0.02 sec)

Things we have in place now,

  1. Kubernetes Control plane.
  2. Kubernetes nodegroup consists of 2 nodes.
  3. Docker image for App1 in DockerHub.
  4. Docker image for App2 in DockerHub.
  5. Mysql DB instance in AWS RDS.

Let’s move on to Kubernetes to create relevant objects and configure ingress and load balancer.

Create app1 deployment and its service.
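The user-service-app1.yml manifest itself is not embedded in this post; below is a sketch of what it could contain, based on the names, image, and NodePort (32275) visible in the kubectl output later. The container port 5000 is an assumption tied to the server.py sketch above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
        - name: user-service-app
          image: deepanmurugan/user-service-app:1.0.0
          ports:
            - containerPort: 5000   # assumed app port
          env:
            - name: DB_HOST
              value: mysql-service   # resolves via the ExternalName service below
---
apiVersion: v1
kind: Service
metadata:
  name: user-service-app-nodeport-service
spec:
  type: NodePort
  selector:
    app: user-service-app
  ports:
    - port: 80
      targetPort: 5000   # assumed app port
      nodePort: 32275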

$ kubectl apply -f user-service-app1.yml
deployment.apps/user-service-app-deployment created
service/user-service-app-nodeport-service created

The value of DB_HOST in the above yml refers to the ExternalName service mysql-service that we create below.

Create mysql-service.
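mysql-service.yml is an ExternalName service pointing at the RDS endpoint; the endpoint below is the one visible in the later kubectl get svc output:

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: ExternalName
  externalName: user-service-db.cvfb9w4kqhvm.us-east-1.rds.amazonaws.com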

$ kubectl apply -f mysql-service.yml
service/mysql-service created

Create app2 (nginx app).
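nginx-app2.yml is also not embedded; a sketch based on the deployment/service names, image, and NodePort (32262) seen in the later output:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-app
          image: deepanmurugan/nginxapp:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32262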

$ kubectl apply -f nginx-app2.yml
deployment.apps/nginx-app created
service/nginx-app-service created

Create service account alb-ingress-controller and provision the ingress controller pod.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/rbac-role.yaml
clusterrole.rbac.authorization.k8s.io/alb-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/alb-ingress-controller created
serviceaccount/alb-ingress-controller created

The above command creates the ClusterRole, ClusterRoleBinding, and ServiceAccount the ingress controller needs in order to create Kubernetes objects. But the ingress controller has to create AWS resources as well, so we have to annotate the service account with an IAM role that has permission to provision/describe ELBs, instances, certificates, etc.

$ aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://ALBIngressController.json
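The full ALBIngressController.json policy (shipped with the aws-alb-ingress-controller project) is not reproduced here. Purely as an illustration, it grants permissions along these lines; use the real policy file from the project rather than this trimmed fragment:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeInstances",
        "elasticloadbalancing:*",
        "acm:DescribeCertificate",
        "acm:ListCertificates"
      ],
      "Resource": "*"
    }
  ]
}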

Once the policy is created, annotate the service account alb-ingress-controller with an IAM role that has the newly created IAM policy attached. eksctl can create the role and annotate the service account in a single step:

$ eksctl create iamserviceaccount --region us-east-1 --name alb-ingress-controller --namespace kube-system --cluster ekscluster --attach-policy-arn arn:aws:iam::account_id:policy/ALBIngressControllerIAMPolicy --override-existing-serviceaccounts --approve
2021-09-23 17:31:36 [ℹ]  serviceaccount "kube-system/alb-ingress-controller" already exists
2021-09-23 17:31:37 [ℹ]  updated serviceaccount "kube-system/alb-ingress-controller"

Create Ingress Controller.
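The alb-ingress-controller.yaml manifest is not shown in this post; a minimal sketch modelled on the example deployment from the aws-alb-ingress-controller docs, with the controller image version as an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  labels:
    app.kubernetes.io/name: alb-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      serviceAccountName: alb-ingress-controller   # the IAM-annotated service account
      containers:
        - name: alb-ingress-controller
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4   # assumed version
          args:
            - --ingress-class=alb
            - --cluster-name=ekscluster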

$ kubectl apply -f alb-ingress-controller.yaml
deployment.apps/alb-ingress-controller created

Create ALB and ALB Resources.
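alb.yaml is the Ingress resource itself; a sketch based on the path rules described earlier and on the apply output below. The annotations shown are the usual ones for the v1 ALB ingress controller, so treat the exact contents as an assumption:

apiVersion: extensions/v1beta1   # as used in the original post; use networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: ingress-apps
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /app1/*
            backend:
              serviceName: user-service-app-nodeport-service
              servicePort: 80
          - path: /app2/*
            backend:
              serviceName: nginx-app-service
              servicePort: 80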

$ kubectl apply -f alb.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/ingress-apps created

$ kubectl get ingress -o wide
NAME           CLASS    HOSTS   ADDRESS                                                                 PORTS   AGE
ingress-apps   <none>   *       500498a2-default-ingressap-b39d-11077563.us-east-1.elb.amazonaws.com   80      62s

After applying the above alb.yaml file, I saw the errors below in the ingress controller pod logs.

E0921 11:58:14.204326 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to failed to resolve 2 qualified subnet with at least 8 free IP Addresses for ALB. Subnets must contains these tags: 'kubernetes.io/cluster/ekscluster': ['shared' or 'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"ingress-apps"}

I0921 11:58:14.149627 1 request_pagination.go:107] Request: ec2/DescribeSubnets, Payload: { Filters: [{ Name: "tag:kubernetes.io/cluster/ekscluster", Values: ["owned","shared"] },{ Name: "tag:kubernetes.io/role/elb", Values: ["","1"] }]}

I added a tag named kubernetes.io/cluster/ekscluster with the value shared to the 2 public subnets; after that, the ingress controller was able to discover the subnets and provisioned the ALB.

This happened because we created the cluster without a node group and added the node group separately. If you face the same error, tag the subnets accordingly, for example as shown below.
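A hedged example with the AWS CLI; the subnet IDs are placeholders for your public subnets, and the error message above suggests the kubernetes.io/role/elb tag should be present as well:

$ aws ec2 create-tags \
    --resources subnet-aaaa1111 subnet-bbbb2222 \
    --tags Key=kubernetes.io/cluster/ekscluster,Value=shared
# add Key=kubernetes.io/role/elb,Value=1 as well if that tag is missing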

Once you apply the Ingress resource, it automatically creates the ALB in AWS and adds the context-path rules. Navigate to the target groups and check whether the health checks are passing. Once the health checks pass, you will be able to access the apps using the ALB DNS name (the ADDRESS shown in the kubectl get ingress output).

Validating the resources

Let’s check all the Kubernetes components we have created.

$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
nginx-app-7b944d44db-ldb94                     1/1     Running   0          7m44s
nginx-app-7b944d44db-nnfrc                     1/1     Running   0          7m44s
user-service-app-deployment-7dd8574d69-97ctk   1/1     Running   0          40m
user-service-app-deployment-7dd8574d69-w56hj   1/1     Running   0          40m

$ kubectl get ingress -o wide
NAME                                      CLASS    HOSTS   ADDRESS                                                                   PORTS   AGE
ingress.networking.k8s.io/ingress-apps    <none>   *       500498a2-default-ingressap-b39d-1107719563.us-east-1.elb.amazonaws.com   80      12m

$ kubectl get svc
NAME                                        TYPE           CLUSTER-IP      EXTERNAL-IP                                                 PORT(S)        AGE
service/kubernetes                          ClusterIP      10.100.0.1      <none>                                                      443/TCP        104m
service/mysql-service                       ExternalName   <none>          user-service-db.cvfb9w4kqhvm.us-east-1.rds.amazonaws.com   <none>         28m
service/nginx-app-service                   NodePort       10.100.233.84   <none>                                                      80:32262/TCP   50s
service/user-service-app-nodeport-service   NodePort       10.100.169.93   <none>                                                      80:32275/TCP   28m

Everything is now in place. Let’s look at the user-service-app NodePort service.

The user-service-app NodePort service exposes the backend on port 32275, and the target group has targets serving traffic on port 32275. This means the ALB receives traffic on port 80 and, based on the URL context, sends it to the node on port 32275; the NodePort service listening on 32275 receives the traffic and load-balances it across the available backend pods. The same applies to the nginx-app service.

Let’s access the apps and check that they respond.

Let’s try some DB operations from App1 and see whether App1 can reach MySQL through the ExternalName service.
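The original screenshots of these checks are not reproduced here. As a rough substitute, assuming the hypothetical /app1/users endpoint from the server.py sketch above, the checks could look like this from the command line (the ALB hostname is the ADDRESS from the Ingress output):

$ curl http://500498a2-default-ingressap-b39d-1107719563.us-east-1.elb.amazonaws.com/app2/index.html
<h1>App2 is up and running</h1>

$ curl -X POST -H 'Content-Type: application/json' \
    -d '{"username": "alice", "password": "secret", "gender": "F"}' \
    http://500498a2-default-ingressap-b39d-1107719563.us-east-1.elb.amazonaws.com/app1/users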

Both the apps are accessible based on URL context and App1 can reach the MySQL DB service.

In the next part, we will make this app more reliable using init containers and liveness and readiness probes, and integrate it with ACM.

Reference to DockerHub and GitHub

user-service-app — https://github.com/deepanmurugan/user-service-app

user-service-app image — https://hub.docker.com/repository/docker/deepanmurugan/user-service-app

nginxapp image — https://hub.docker.com/repository/docker/deepanmurugan/nginxapp
