Simplify Rolling Updates on AWS EKS with a Developer Control Plane: perform reliable upgrades with zero downtime
Kubernetes offers many configuration and formatting options, and only a few developers understand them all well. As more configuration goes into your application, the application becomes more susceptible to bugs. In the long run, these errors can degrade the performance of your entire DevOps toolchain and result in failed upgrades when you introduce changes.
CTO.ai's Developer Control Plane simplifies the process of creating and managing applications composed of multiple containers. If you run a complex system made up of multiple Kubernetes applications, our Developer Control Plane will help you achieve higher availability, greater resiliency, and simpler updates.
This tutorial will teach you how to use Kubernetes rolling updates to deploy your applications to a Kubernetes cluster with zero downtime.
Prerequisites
- kubectl installed
- Docker installed on your machine
- An AWS Account
- CTO.ai's CLI installed
Kubernetes provides two native deployment strategies that you can take advantage of, depending on the needs of your system:
Recreate Strategy
In the recreate strategy, all old pods are terminated at once and replaced at once with new ones. The recreate strategy lets you update all your resources quickly, at the cost of a brief period of downtime.
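If you ever want the recreate behavior, it is a one-line change in the Deployment spec. A minimal fragment, using the same strategy field as the manifests later in this guide, looks like this:
strategy:
  type: Recreate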
Rolling Update
The rolling update strategy is the default: it incrementally replaces pods running the previous version of your application with pods running the new version, without downtime.
How does Kubernetes perform a rolling update Deployment?
Deployment → ReplicaSet → Pods
Say you are deploying a web app packaged in an image (version 1): you create a deployment manifest file for your application and apply the manifest with the kubectl apply command. Kubernetes will create a ReplicaSet, and the ReplicaSet will schedule the required number of pods for the deployment.
When planning a new release, we implement the new features, make them available in a new image (version 2), and update the deployment file so the image field points to the new version. This modification triggers Kubernetes to roll out a new deployment: the Kubernetes API creates a new ReplicaSet and transitions pods over to the new version, and once the health checks pass, the Service updates its endpoints with the new pods' IP addresses and starts forwarding traffic to them.
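You can actually watch this handover happen: during a rollout, Kubernetes keeps the old and new ReplicaSets side by side while it scales one down and the other up. Using the express-app labels introduced later in this tutorial, a couple of read-only commands show the transition:
kubectl get replicaset -l app=express-app   # old and new ReplicaSets, with their pod counts
kubectl get pods -l app=express-app -w      # watch pods start and terminate as the rollout proceeds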
Let's get started.
For this guide, we'll be using CTO.ai's EKS-EC2 Workflow (which is open source on GitHub) to create and set up your EKS infrastructure.
Before you do that, you need to create your secrets in the CTO.ai dashboard. We'll create four secrets: your AWS access key, your AWS account number, your AWS secret key, and your GitHub token.
1. First, sign up or log in to your CTO.ai account and access the dashboard. Navigate to Settings, select Secrets, and click New Secret; there, paste your secret keys and secret values.
2. Clone the EKS Workflow from GitHub to set up your Kubernetes cluster on EKS. Our EKS Workflow is open source, and you can run it and set up the infrastructure directly from your terminal.
3. Next, build your pipeline locally with the CTO.ai CLI using the ops build . command. The ops build command will build your workflow for sharing: your Docker image from your Dockerfile, plus the set of files located in the specified path in your source code.
4. Run your pipeline locally with the CTO.ai CLI. The ops run . command will start the workflow you built, from your team or registry.
5. Run and set up your entire EKS infrastructure on AWS using the ops run -b . command. The process will build your Docker image and start bringing up your EKS stack; the CLI steps are recapped below.
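To recap, the whole local workflow boils down to three commands; the -b flag on ops run combines the build and run steps from above:
ops build .     # build the workflow's Docker image from its Dockerfile
ops run .       # start the workflow you built
ops run -b .    # or build and run in a single step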
6. Your AWS EKS infrastructure stack will start deploying.
7. Confirm that you have running resources using the kubectl get all command, and check that you can also see your EKS cluster in the AWS console.
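If kubectl get all comes back empty or points at the wrong cluster, your kubeconfig is probably not targeting the new EKS cluster yet. Assuming a placeholder cluster name and region, you can point kubectl at it with the AWS CLI:
aws eks update-kubeconfig --name <your-cluster-name> --region <your-region>   # write the EKS context into your kubeconfig
kubectl get all                                                               # should now list the cluster's resources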
8. Next, update your YAML file to include the details of your deployment, such as the name, replica count, and spec configuration. The spec of your Deployment object carries the rolling update and rollout behavior. You can edit the deployment file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: express-app
  template:
    metadata:
      labels:
        app: express-app
    spec:
      containers:
      - image: tolatemitope/express-app:v1
        name: express-app
        ports:
        - containerPort: 8080
In our deployment file, we have a deployment strategy of type RollingUpdate. Inside the rolling update we set two values: maxUnavailable: 25% and maxSurge: 25%. This means that while Kubernetes performs the incremental upgrade, at least 75% of the desired pod count must remain available at any time (the exact pod counts depend on rounding, as worked out below). These values can be adjusted according to your requirements.
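To make the percentages concrete: Kubernetes rounds maxSurge up and maxUnavailable down, so with our two replicas the rollout never actually drops below the full replica count:
# replicas: 2, maxSurge: 25%, maxUnavailable: 25%
# maxSurge:        ceil(2 * 0.25)  = 1  ->  at most 3 pods exist during the rollout
# maxUnavailable:  floor(2 * 0.25) = 0  ->  at least 2 pods stay available throughout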
9. Create the Deployment object in your cluster using the kubectl apply -f deployment.yml --record=true command. You will see that your deployment has been created.
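Before moving on, you can confirm the Deployment and the ReplicaSet it created with two read-only commands:
kubectl get deployment express-app   # READY should report 2/2 once both pods are up
kubectl get rs -l app=express-app    # the ReplicaSet scheduling those pods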
10. Next, check the status of your deployment using kubectl rollout status deployment express-app
11. Create a Service manifest so you can expose and access your application within your cluster. The Service object has a selector with a label, and the pods we created in the Deployment object carry the same label.
kind: Service
apiVersion: v1
metadata:
  name: express-app
spec:
  selector:
    app: express-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30000
  type: LoadBalancer
12. Apply your service manifest configuration using kubectl apply -f service.yml. This creates the load balancer in the cloud, and you can then retrieve your service's external IP.
13. Describe your Service object using kubectl describe svc express-app. In the output you can see the endpoints that the Service object will load-balance across.
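To reach the application from outside the cluster, pull the load balancer address off the Service and curl it. The address below is a placeholder; on EKS the external address is typically an ELB DNS name rather than a bare IP:
kubectl get svc express-app        # the EXTERNAL-IP column shows the load balancer address
curl http://<external-address>     # substitute the address from the previous command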
We created the Deployment and Service objects above; our pods are running in the cluster, and we can access the Service object through the load balancer. Currently, we are running version 1 of the deployment config, so I'll show you how to upgrade your application from version 1 to version 2 with zero downtime.
14. Back in your deployment file, change the image tag from v1 to v2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: express-app
  template:
    metadata:
      labels:
        app: express-app
    spec:
      containers:
      - image: tolatemitope/express-app:v2
        name: express-app
        ports:
        - containerPort: 8080
15. Apply the new changes you made using kubectl apply -f deployment.yml
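As an aside, editing the manifest is not the only way to trigger this rollout; the same image bump can be applied imperatively. Note that the manifest on disk then no longer matches the live object, so prefer the declarative edit outside of quick experiments:
kubectl set image deployment/express-app express-app=tolatemitope/express-app:v2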
16. Watch the rollout status of your express-app deployment using the kubectl rollout status deployment express-app command. The log output gives clear insight into how your express-app application is upgraded to v2 in incremental steps and without any issues.
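Because we created the deployment with --record=true, the rollout history records the command behind each revision, which makes rolling back a one-liner if v2 misbehaves:
kubectl rollout history deployment express-app                 # list the recorded revisions
kubectl rollout undo deployment express-app                    # roll back to the previous revision
kubectl rollout undo deployment express-app --to-revision=1    # or target a specific revision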
Final Thoughts
And just like that, you can see how fast and easy it is to roll out deployment changes from one version to another. With CTO.ai, the complete development and upgrade cycle, from initial design to production deployment, becomes shorter. Using our Developer Control Plane, you can catch defects before they reach production and recover your application faster.
Start your stress-free deployment journey on AWS EKS with a free sign-up.