Navigate AWS EKS with Fargate: A Hands-On Project Guide to Seamless Kubernetes Deployment and Ingress Configuration

Victor Okoli
AWS in Plain English
9 min read · Mar 20, 2024

In today’s rapidly evolving tech landscape, mastering cloud-native technologies is essential for businesses striving to stay competitive and agile. Among these technologies, Kubernetes stands out as a powerhouse for container orchestration, while Amazon Elastic Kubernetes Service (EKS) offers a robust platform for managing Kubernetes clusters on the AWS cloud.

In this thorough guide, we will work through the complexities of AWS EKS together. However, we will not end there. We’ll take it a notch further by combining EKS with Fargate, a serverless compute engine for containers, and also dive into the details of Kubernetes deployment and Ingress configuration.

Whether you’re a seasoned DevOps engineer or solutions architect looking to enhance your skills or a newcomer eager to dive into the world of Kubernetes, this hands-on project guide provides the insights and practical knowledge you need to succeed. So, let’s embark on this journey together and unlock the power of AWS EKS with Fargate!

A Kubernetes cluster has two components: the control plane and the data plane, often referred to as the master node(s) and worker node(s). The number of nodes differs depending on organizational requirements; there is no hard limit. That said, three master nodes are a common starting point for small to medium-sized clusters, as this provides redundancy and fault tolerance while keeping management overhead manageable.

Key points covered in the guide include:

  1. Kubernetes Components: A Kubernetes cluster comprises a control plane (the master node) and a data plane (the worker node). While there’s no limit to the number of nodes, starting with three master nodes provides redundancy and fault tolerance.
  2. Benefits of AWS EKS: AWS EKS streamlines the Kubernetes cluster creation and management process, reducing complexity and providing additional features as a managed service. It handles the management of the master node, including the control plane components.
  3. Manual Cluster Setup: Without AWS EKS, you would manually set up Kubernetes clusters on AWS instances, which involves installing and configuring various components on the master and worker nodes. This process can be error-prone (API server not working, ETCD crashes, expired certificates, scheduler not working) and time-consuming.
  4. AWS EKS Options: AWS offers two options for running the worker nodes that join the EKS-managed control plane:
  • EC2 Instances: Create and maintain EC2 instances yourself, then connect them as worker nodes to the control plane under AWS EKS management.
  • Fargate: A serverless compute engine for containers that manages worker nodes automatically, allowing you to focus on application development while AWS handles infrastructure management.

As stated earlier, Fargate will be used for this project.

Before we begin creating clusters on EKS, there are three prerequisites we need in order to interact with EKS.

kubectl: A command-line tool for working with Kubernetes clusters. For more information, see the official guide, Installing or updating kubectl.

To confirm availability, you can run the command:

kubectl version --client

eksctl: A command-line tool for working with EKS clusters that automates many individual tasks. For more information, see the eksctl documentation.

To confirm availability, you can run the command:

eksctl version

AWS CLI: A command-line tool for working with AWS services, including Amazon EKS. For more information, see the AWS CLI documentation.

To confirm availability, you can run the command:

aws --version

After installing the AWS CLI, we recommend that you also configure it with your AWS credentials using the command below:

aws configure
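For reference, aws configure prompts for four values interactively. A sample session looks like this (the key values below are AWS documentation placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json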

Now let’s create our cluster. You could search for EKS in the AWS Console search bar and click on Clusters, but creating a cluster there means filling in a lot of parameters, as you can see below. There should be something better, and there is.

One of the most preferred ways to create a cluster in an organization is with eksctl, a command-line utility for managing EKS clusters.

eksctl create cluster --name demo-cluster-1 --region us-east-1 --fargate

This usually takes 15–20 minutes, since eksctl provisions the control plane, a VPC, and the default Fargate profiles through CloudFormation behind the scenes, so exercise a bit of patience here as the cluster is being built.

As seen above, the cluster has been set up. You can also confirm this by checking the clusters on the GUI.
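Or, without leaving the terminal, you can list your clusters with eksctl:

eksctl get cluster --region us-east-1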

Now, as we are more accustomed to using the CLI, let us update the kubeconfig file, which lets us check pods, deployments, and services via the CLI instead of the GUI. Run the command below:

aws eks update-kubeconfig --name demo-cluster-1 --region us-east-1

It worked; see the image below.
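As an extra sanity check, you can confirm that kubectl is now pointed at the new cluster with two standard commands:

kubectl config current-context   # should print the ARN of demo-cluster-1
kubectl get svc                  # lists the default kubernetes ClusterIP service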

Now, before we begin the deployment, let’s create a Fargate profile for our application’s namespace. Fargate only schedules pods in namespaces that match a profile, so while this could be done in the default namespace, we will use a dedicated one. Run the command below with your details.

eksctl create fargateprofile --cluster demo-cluster-1 --region us-east-1 --name alb-sample-app --namespace game-2048

Profile created; see image below.
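You can also list the profile from the CLI (assuming the cluster name and region used above):

eksctl get fargateprofile --cluster demo-cluster-1 --region us-east-1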

Now let’s deploy our configurations with the command below:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml

This file has all our YAML configurations for the namespace, deployment, service, and ingress, created in that order.

All resources were created as seen below.
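The same check works from the CLI (the namespace name comes from the manifest we just applied):

kubectl get pods -n game-2048     # the 2048 pods, scheduled onto Fargate
kubectl get svc -n game-2048      # the service fronting them
kubectl get ingress -n game-2048  # the Ingress (note: no address yet)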

All resources were created properly, as seen above. However, for the Ingress to work and function properly, it needs an ingress controller; without one, the Ingress will just sit in a dead state. So basically, all 4 resources were created, but only 3 are actually doing anything. All but the Ingress.

Still not clear?

The image above shows that the pods are running, and the same is true for the service, but not for the Ingress. No doubt, it shows the Ingress has been created, but it has no address, so how can external users access our application? Simple: we need to create an ingress controller. Once created, it reads the existing ingress-2048 resource and creates and configures a load balancer through which our application can be reached via an address.

OIDC CONNECTOR

The ALB controller that will be running needs to manage the Application Load Balancer, so it needs IAM integrated. To do this, we can use the IAM OIDC provider. The IAM OIDC provider is crucial when setting up IAM roles for service accounts in AWS EKS, as it defines the OIDC provider that AWS will use for authentication and token validation within your Kubernetes cluster. Run the command:

eksctl utils associate-iam-oidc-provider --cluster demo-cluster-1 --approve

It has been connected, as seen below.
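You can also confirm the association from the CLI (both are standard AWS CLI commands):

# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name demo-cluster-1 --region us-east-1 --query "cluster.identity.oidc.issuer" --output text

# The provider matching that issuer should appear in this list
aws iam list-open-id-connect-providers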

Returning to our ALB controller: we want to grant a pod in Kubernetes access to several AWS services, including the Application Load Balancer. This requires IAM roles and policies, because the controller needs to communicate with AWS APIs. Let’s download the IAM policy using the command below:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Now create the IAM policy from the file we just downloaded:

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

The IAM Policy has been created, as seen below.
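The next command needs this policy’s ARN, which embeds your AWS account ID. If you don’t have the account ID handy, you can look the ARN up (a standard AWS CLI query, assuming the policy name used above):

# --scope Local restricts the listing to customer-managed policies
aws iam list-policies --scope Local --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn" --output text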

Next, create an IAM role and a Kubernetes service account, and attach the policy we just created to that role. Run the command below, substituting your cluster name and AWS account ID.

eksctl create iamserviceaccount \
--cluster=<your-cluster-name> \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve

The IAM role has been created and attached to the policy, as seen below.
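As a quick check, the new service account should now carry the role’s ARN in an eks.amazonaws.com/role-arn annotation:

kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml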

Now, going back to the ALB controller we spoke about earlier, we are going to use Helm charts to create the ALB controller, using the service account we just made to run its pods. To add the Helm chart repository, run the command:

helm repo add eks https://aws.github.io/eks-charts

After adding, we then update the repository using the command below:

helm repo update eks

Now let’s install the Helm chart using the command below. Please make the appropriate modifications, such as vpcId, region, and clusterName.
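If you don’t have your cluster’s VPC ID handy, you can look it up first (an optional helper; it assumes the cluster name and region used throughout this guide):

aws eks describe-cluster --name demo-cluster-1 --region us-east-1 --query "cluster.resourcesVpcConfig.vpcId" --output text

With that value in hand, run the install: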

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=demo-cluster-1 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=0a1e98bf27f98a279

The AWS load balancer has been installed.

Now that the load balancer controller has been installed and is operational, make sure that at least two replicas are up and running; anything less means we will have to troubleshoot. Use the following command to verify that two replicas are operating, at minimum, across two different AZs:

kubectl get deploy -n kube-system

As you can see below, at least 2 pods are running in the aws-load-balancer-controller we just created.
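For a closer look at the individual pods (the Helm chart labels them app.kubernetes.io/name=aws-load-balancer-controller, and -o wide adds the node each one landed on; on Fargate, every pod runs on its own node):

kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller -o wide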

We can now verify through the GUI (AWS Console) whether the load balancer has been built by the aws-load-balancer-controller. The load balancer has been created, as can be seen below.

When we check the Ingress again, we will discover that it is now exposed through an address that this load balancer has created. Use the following command to look at the ingress:

kubectl get ingress -n game-2048

It was successful. The following image shows that outside users can now visit our application using that address.

So let us check whether this works by entering the address into our browser to verify it.
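If you prefer the terminal, a quick curl works too (replace the placeholder with the ADDRESS value from the kubectl get ingress output; it is not a real hostname, and the ALB may take a minute or two to finish provisioning):

curl -I http://<your-alb-address>   # expect an HTTP 200 once the ALB is active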

Yay! Users are able to access our application externally now that it is deployed.

Thank you for joining us on this journey to explore the power of AWS EKS with Fargate. We hope this guide has provided valuable insights and practical knowledge to help you succeed in your cloud-native endeavors.

PS: Remember to remove the cluster you made, or you will continue to accrue charges. This is very important.
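If you followed the names used in this guide, the teardown is a single command (eksctl also removes the Fargate profiles and CloudFormation stacks it created):

eksctl delete cluster --name demo-cluster-1 --region us-east-1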

If you found this useful, please let me know in the comment section. Also, if you encounter any roadblocks when trying this, let me know as well; I’m sure we can work something out. Until next time, happy cloud computing!
