How to set up a CI/CD Pipeline with AWS and CTO.ai

The world of software development has shifted gears, with an increasing focus on CI/CD methodologies and delivering high-quality applications faster. To achieve these objectives, implementing a robust Continuous Integration (CI) and Continuous Deployment (CD) pipeline is vital. Continuous Integration (CI) is the process of integrating code changes from multiple developers into a shared repository, while Continuous Deployment (CD) automates the deployment of these changes to production. CI/CD streamlines the software development lifecycle, enabling teams to build, test, and deploy applications more efficiently.

CTO.ai CI/CD Pipelines facilitate rapid integration and deployment of code changes, significantly reducing the time to deliver new features or fixes and enabling teams to iterate faster. With our Pipeline configurations, developers can configure high-level processes and build deployment workflows to suit their project’s specific requirements.

In this blog post, we’ll discuss how to set up a CI/CD Pipeline with the AWS ECS Fargate workflow and CTO.ai.


Prerequisites

  • CTO.ai Account and CTO.ai CLI installed
  • Docker installed on your local machine
  • Access to an AWS account, with the AWS CLI installed on your machine
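If you don’t have the Ops CLI yet, it is distributed as an npm package; the command below assumes the package name @cto.ai/ops and a working Node.js installation:

# Install the CTO.ai Ops CLI globally (package name assumed to be @cto.ai/ops)
npm install -g @cto.ai/ops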

Set up AWS Infrastructure

Before you start building your AWS ECS Fargate workflow, you have to create your keys and tokens and attach them to the CTO.ai Secrets vault. Secrets are encrypted environment variables that CTO.ai utilizes within your workflow to build and run your application and deployments. In this tutorial, you’ll create your AWS ACCESS KEY and AWS SECRET ACCESS KEY from the IAM console on AWS. Also, get your AWS ACCOUNT NUMBER from your AWS console and create a PERSONAL_ACCESS_TOKEN on GitHub.
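If you prefer the command line, you can generate the access key pair and look up your account number with the AWS CLI; the IAM user name below is a placeholder:

# Create an access key pair (access key ID and secret access key) for an IAM user
aws iam create-access-key --user-name <your-iam-user>

# Print the account number of the currently authenticated AWS account
aws sts get-caller-identity --query Account --output text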

After creating your tokens, add their values to your Secrets settings in CTO.ai.

Next, get the AWS ECS Fargate workflow from GitHub by cloning the repository:

git clone https://github.com/workflows-sh/aws-ecs-fargate.git

cd aws-ecs-fargate

Log in to your CTO.ai account with the Ops CLI by running the ops account:signin command.

Next, you can set up your AWS ECS Fargate infrastructure using the ops run -b . command. This command will set up your entire ECS Fargate stack using CloudFormation.
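Recapping the CLI steps so far, run the following from the root of the cloned repository:

# Authenticate the Ops CLI against your CTO.ai account
ops account:signin

# Build and run the workflow; the -b flag builds the Docker image locally before running it
ops run -b .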

In your terminal, select the option to set up an Environment.

This step takes a while; it builds your Docker image and then brings up your ECS Fargate stack.

Configure the Sample Application with the ops.yml File

To build, run, and deploy your CI/CD Pipelines on the AWS ECS Fargate workflow, you must write and configure an ops.yml file in your code repository on GitHub. Your code repository can be any sample application written in Node or any language of your choice. The ops.yml file defines a series of configuration blocks, starting with the setup commands and the environment variables.

Using the ops.yml file at https://github.com/workflows-sh/aws-ecs-fargate/blob/main/ops.yml as a reference:

  • In the events section, replace the GitHub org and GitHub repo with your own.
  • In the jobs section, we specify the packages we will install for the pipeline. Jobs are fully automated steps that define what to do in your application. Jobs also let you install and manage the packages used by your CTO.ai AWS ECS Fargate workflow.
  • In the steps section, we define the commands the Pipeline runs. You can export the required paths, install the AWS ECS dependencies, clone your application repo, and build your application from its Dockerfile and push the image to your Elastic Container Registry (see the sketch after this list).
  • After configuring your ops.yml file, save it and push the changes to your repository.
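As a rough illustration, the skeleton below shows the shape a pipeline definition can take. It is a minimal sketch, not the workflow’s actual configuration: the exact keys, secret names, packages, and step commands should be copied from the ops.yml linked above, and the org, repo, region, and registry names are placeholders.

version: "1"
pipelines:
  - name: sample-app-pipeline:0.1.0
    description: Build the sample app and push its image to ECR
    env:
      static:
        - STACK_TYPE=aws-ecs-fargate
      secrets:
        # Secret names are assumptions; use the names you stored in your Secrets vault
        - GITHUB_TOKEN
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_ACCOUNT_NUMBER
    events:
      # Replace the org and repo with your own
      - "github:your-org/your-repo:pull_request.merged"
      - "github:your-org/your-repo:create.tag"
    jobs:
      - name: sample-app-build-job
        description: Clone the repo, build the Docker image, and push it to ECR
        packages:
          - git
        steps:
          - export ECR_REPO=$AWS_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/sample-app
          # Log in to ECR (assumes the AWS CLI is available in the job image)
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com
          - git clone https://oauth2:$GITHUB_TOKEN@github.com/your-org/your-repo.git
          - cd your-repo && docker build -t $ECR_REPO:latest . && docker push $ECR_REPO:latest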

Trigger Pipelines and Services with Event Triggers

When you’re done configuring your ops.yml file, you can trigger your Pipelines and Services using events configured in your ops.yml file in your GitHub repository. In our GitHub repository, we configured our CI/CD Pipelines to run when we merge a pull request or create a tag.
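Once the Pipeline is published (covered in the next section), pushing a new tag from your local clone of the application repository is enough to fire the tag trigger; the tag name below is only an example:

# Create an annotated tag and push it to GitHub to trigger the Pipeline
git tag -a v1.0.0 -m "Release v1.0.0"
git push origin v1.0.0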

Build, Run, and Publish your CI/CD Pipelines

Build your Pipelines using the ops build . command.

After building your Pipelines, run and publish them using ops run . and ops publish . respectively. When you publish your Pipeline, you are executing a series of processes for your application using your defined steps, dependencies, and Pipeline settings.
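Putting the Pipeline commands together, the sequence from the root of your application repository looks like this:

# Build the Pipeline defined in ops.yml
ops build .

# Run the Pipeline to verify it behaves as expected
ops run .

# Publish the Pipeline to CTO.ai so the configured GitHub events can trigger it
ops publish .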

Build, Run, and Publish Services

Services in CTO.ai let you streamline development processes and improve collaboration in your team. With CTO.ai Services, you can preview your application before deploying it to production. When you push new changes and trigger your application using GitHub event triggers, CTO.ai generates a unique preview URL for every application deployment.

Build your Services using the ops build . command, and select the preview of Sample-App when prompted.

When you are done building your Service, run it using ops run . and then publish it using ops publish .

In your GitHub repository, merge a pull request or create a tag to fire the GitHub triggers we configured earlier. Once the event fires, visit your dashboard in CTO.ai; you will see your Sample-App Pipeline running.

Open the Services console to get your preview URLs; you will see the service running with its RUN ID and version.

Enable faster development and deployment with AWS and CTO.ai

If you’d like to learn how to configure and build CI/CD Pipelines on your AWS ECS Fargate resources, we have a starter project pack for you on GitHub to get you started.

By automatically running tests and checks on every commit, CTO.ai Pipelines help catch bugs and other issues early in the development process, leading to better overall code quality. With our AWS ECS Fargate workflow, you can scale as your project grows, allowing you to easily add more resources, reduce the likelihood of unexpected errors, and make deployments more reliable.