Introduction

Transitioning from Virtual Machines (VMs) to Kubernetes is a pivotal move for organizations that want to take advantage of container orchestration. With CTO.ai, a robust continuous integration and delivery platform, the transition can be streamlined and efficient. This blog delves into the practical steps and considerations for making this significant shift.

Assessing the Current State of Your Virtual Machines

Before embarking on the transition, assess the current VM-based applications, dependencies, and architecture. Evaluate the scalability, resource utilization, and deployment needs to define the scope of migration.
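
For example, a quick baseline of each VM’s resource usage and running services helps size the target cluster and confirm what needs to be containerized. The commands below are illustrative and assume a typical Linux VM with systemd:

# Capture a simple inventory on each VM (illustrative)
hostname && uptime                                      # identify the host and its current load
nproc && free -h && df -h /                             # CPU cores, memory, and root-disk usage
systemctl list-units --type=service --state=running     # services that must be containerized or replaced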

Containerizing Applications

  • Dockerize Applications: Begin by packaging applications and their dependencies into Docker containers, which will be orchestrated by Kubernetes (see the example Dockerfile below).
  • CTO.ai Configuration: Utilize CTO.ai’s ops.yml to define workflows and automate the containerization process, ensuring consistent and reproducible builds.
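
For example, a minimal Dockerfile for an Express.js app might look like the following. This is a sketch; the Node.js base image, port, and entry point are assumptions that should be adjusted to match your application.

# Illustrative Dockerfile for an Express.js app
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so Docker layer caching can be reused between builds
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source
COPY . .
# The sample service listens on port 8080
EXPOSE 8080
CMD ["node", "index.js"]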

Below is an example of an Express.js app containerized and deployed using our ECS Fargate workflow, defined in an ops.yml configuration.

version: "1"
pipelines:
  - name: sample-expressjs-pipeline-aws-ecs-fargate:0.1.2
    description: build a release for deployment
    env:
      static:
        - DEBIAN_FRONTEND=noninteractive
        - ORG=workflows-sh
        - REPO=sample-expressjs-aws-ecs-fargate
        - AWS_REGION=us-west-1
        - STACK_TYPE=aws-ecs-fargate
      secrets:
        - GITHUB_TOKEN
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_ACCOUNT_NUMBER
    events:
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.merged"
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.opened"
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.synchronize"
    jobs:
      - name: sample-expressjs-build-job-aws-ecs-fargate
        description: sample-expressjs build step
        packages:
          - git
          - unzip
          - python
        steps:
          - curl https://s3.amazonaws.com/aws-cli/awscli-bundle-1.18.200.zip -o awscli-bundle.zip
          - unzip awscli-bundle.zip && ./awscli-bundle/install -b ~/bin/aws
          - export PATH=~/bin:$PATH
          - aws --version
          - git clone https://oauth2:[email protected]/$ORG/$REPO
          - cd $REPO && ls -asl
          - git fetch && git checkout $REF
          - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO
          - docker build -f Dockerfile -t $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO-$STACK_TYPE:$REF .
          - docker push $AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO-$STACK_TYPE:$REF
services:
  - name: sample-expressjs-service-aws-ecs-fargate:0.1.1
    description: A sample expressjs service
    run: node /ops/index.js
    port: [ '8080:8080' ]
    sdk: off
    domain: ""
    env:
      static:
        - PORT=8080
    events:
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.merged"
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.opened"
      - "github:workflows-sh/sample-expressjs-aws-ecs-fargate:pull_request.synchronize"
    trigger:
      - build
      - publish
      - start

Setting Up a Kubernetes Cluster

  • Choose a Hosting Provider: Decide whether to host the Kubernetes cluster on-premise or with a cloud provider like AWS, GCP, or Azure.
  • Cluster Configuration: Use CTO.ai to automate the configuration and deployment of Kubernetes clusters, with CTO.ai Jobs providing reusable configuration snippets.

We provide open-source Kubernetes workflows you can use to get started with your Kubernetes cluster. Begin by connecting your GitHub account, installing the EKS EC2 ASG workflow we support, and choosing the repository where your Kubernetes configurations are stored.

git clone "https://github.com/workflows-sh/aws-eks-ec2-asg-cdk.git"

cd aws-eks-ec2-asg-cdk

Next, add your secret keys, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ACCOUNT_NUMBER, and a GITHUB_TOKEN with write permissions, to your project’s secret settings in CTO.ai.

After cloning the repo from GitHub, build your workflow using ops build -b . and deploy your infrastructure to AWS.
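
A minimal sketch of that sequence, assuming the CTO.ai ops CLI is installed and you are signed in to your team (the exact run target for deploying the stack is described in the workflow’s README):

# Build the workflow images from the repository root
ops build -b .

# Run the workflow locally and follow its prompts to deploy the EKS stack
# (illustrative; consult the aws-eks-ec2-asg-cdk README for the exact command)
ops run -b .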

Deploying Applications to Kubernetes

  • Kubernetes Manifests: Create Kubernetes manifests for each application, defining how the application should run and interact with the cluster.
  • CTO.ai Deployment: Use CTO.ai’s deployment capabilities to automate the deployment of applications to the Kubernetes cluster, ensuring seamless, error-free releases.

Below is the pipeline from our ops.yml that builds the sample Express.js app for the EKS workflow; an example set of Kubernetes manifests for the app follows it.

version: "1"
pipelines:
  - name: sample-expressjs-pipeline-aws-eks-ec2-asg-cdk:0.1.3
    description: build a release for deployment
    env:
      static:
        - DEBIAN_FRONTEND=noninteractive
        - ORG=workflows-sh
        - REPO=sample-expressjs-aws-eks-ec2-asg-cdk
        - AWS_REGION=us-west-1
        - STACK_TYPE=aws-eks-ec2-asg-cdk
      secrets:
        - GITHUB_TOKEN
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_ACCOUNT_NUMBER
    events:
      - "github:workflows-sh/sample-expressjs-aws-eks-ec2-asg-cdk:pull_request.merged"
      - "github:workflows-sh/sample-expressjs-aws-eks-ec2-asg-cdk:pull_request.opened"
      - "github:workflows-sh/sample-expressjs-aws-eks-ec2-asg-cdk:pull_request.synchronize"
    jobs:
      - name: sample-expressjs-build-job-aws-eks-ec2-asg-cdk
        description: sample-expressjs build step
        packages:
          - git
          - unzip
          - python
        steps:
          - curl https://s3.amazonaws.com/aws-cli/awscli-bundle-1.18.200.zip -o awscli-bundle.zip
          - unzip awscli-bundle.zip && ./awscli-bundle/install -b ~/bin/aws
          - export PATH=~/bin:$PATH
          - aws --version
          - git clone https://oauth2:[email protected]/$ORG/$REPO
          - cd $REPO && ls -asl
          - git fetch && git checkout $REF
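
For reference, a minimal set of Kubernetes manifests for the sample Express.js app might look like the following. This is a hypothetical sketch; the image URI, names, and replica count are placeholders to replace with your own values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-expressjs
  labels:
    app: sample-expressjs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-expressjs
  template:
    metadata:
      labels:
        app: sample-expressjs
    spec:
      containers:
        - name: sample-expressjs
          # Placeholder image URI; point this at the image your pipeline pushes to ECR
          image: <AWS_ACCOUNT_NUMBER>.dkr.ecr.us-west-1.amazonaws.com/sample-expressjs-aws-eks-ec2-asg-cdk:latest
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: sample-expressjs
spec:
  selector:
    app: sample-expressjs
  ports:
    - port: 80
      targetPort: 8080

Applying these manifests with kubectl apply -f creates the Deployment and exposes it inside the cluster through the Service.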

Managing Stateful Applications

  • Persistent Storage: For stateful applications, define persistent storage solutions compatible with Kubernetes, such as Persistent Volumes (PV) and Persistent Volume Claims (PVC); a sample claim is sketched after this list.
  • Database Migration: Utilize CTO.ai to automate database migration tasks, ensuring data consistency and integrity during the transition.
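
For example, a minimal PersistentVolumeClaim might look like the following. This is a sketch; the storage class and size are assumptions that depend on your cluster’s provisioner (for example, EBS-backed storage classes on EKS).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-data
spec:
  accessModes:
    - ReadWriteOnce
  # Placeholder storage class; use one provided by your cluster (e.g. gp2/gp3 on EKS)
  storageClassName: gp3
  resources:
    requests:
      storage: 10Gi

Mounting this claim into a pod as a volume keeps the data available across pod restarts and rescheduling.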

Scaling and Load Balancing

  • Horizontal Pod Autoscaling: Implement autoscaling in Kubernetes to dynamically adjust the number of running pods based on CPU utilization or other selected metrics; an example autoscaler is sketched after this list.
  • CTO.ai Optimization: Optimize CTO.ai workflows to handle scaling efficiently, ensuring resource availability and optimal performance.
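
A minimal HorizontalPodAutoscaler targeting the sample Deployment might look like this. The target name, replica bounds, and CPU threshold are illustrative, and the autoscaler assumes the metrics-server (or another metrics provider) is installed in the cluster.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-expressjs
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-expressjs
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70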

Continuous Optimization

  • Optimization Strategies: Regularly assess the performance and resource utilization of Kubernetes clusters and applications, optimizing configurations as needed.
  • CTO.ai Insights: Utilize CTO.ai Insights for continuous feedback and analytics on performance, helping in refining and optimizing the deployment pipelines.

Security Considerations

  • Kubernetes Security: Implement role-based access control (RBAC), network policies, and security contexts to enhance the security of the Kubernetes cluster; an example RBAC policy is sketched after this list.
  • CTO.ai Security Features: Use CTO.ai’s security features, such as environment variables, secrets, and configs, to safeguard sensitive information.
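
As an illustration, a namespace-scoped Role and RoleBinding that let a CI/CD identity manage Deployments might look like the following. The namespace, subject name, and verbs are assumptions to adapt to your own access model.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: apps
subjects:
  - kind: User
    # Placeholder identity; map this to your CI/CD or team identity
    name: ci-deployer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io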

Conclusion

Transitioning from VMs to Kubernetes using CTO.ai can significantly boost an organization’s deployment efficiency, scalability, and resource utilization. By carefully planning the migration, automating key processes with CTO.ai, and using the extensive capabilities of Kubernetes, organizations can successfully navigate this transition and realize the manifold benefits of container orchestration in their application deployment landscape.