Automation is key to streamlining your work processes, and CTO.ai Commands are a powerful way to supercharge your application workflows. They let you create custom software development and deployment configurations directly in your CTO.ai workflows. CTO.ai Commands are fully integrated into your environments, so you can create, parameterize, and deploy your infrastructure in a completely customized workflow.

CTO.ai Commands are collections of configurations that allow you to build custom, executable cloud-native workflows for your applications. The configurations in an ops.yaml file can be reused across different environments by changing your resources' parameters and environment variables. With CTO.ai Commands, we’ve rethought the CI/CD build, deployment, and release process from the ground up for the world of automation and deployment.

In this post, I will introduce Commands configurations and explain how to adopt them in your environments and application configurations.

What Are Commands Workflows?

Commands workflows are generic configurations for your application infrastructure. If you have a large-scale or complex application, you can end up creating one build configuration per application and duplicating the same configuration logic across your workflows. Commands workflows let you create generic, reusable pipelines to deploy and use across your workflows on CTO.ai. CTO.ai command parameters let DevOps and platform engineers extend the capabilities of jobs, secrets, and configs by providing a command configuration syntax that is easy to build, deploy, and reuse.

Commands in CTO.ai have specific properties that can interact with your application workflows. These properties and parameters are configured in the ops.yaml file attached to your application; here is what a CTO.ai command looks like:

version: "1"
commands:
 - name: setup-do-k8s-cdktf:0.1.0
   run: ./node_modules/.bin/ts-node /ops/src/setup.ts
   description: "Setup Kubernetes infrastructure on DigitalOcean"
   env:
     static:
       - STACK_TYPE=do-k8s-cdktf
       - STACK_ENTROPY=20220921
       - TFC_ORG=cto-ai
       - REGION=nyc3
     secrets:
       - DO_TOKEN
       - DO_SPACES_ACCESS_KEY_ID
       - DO_SPACES_SECRET_ACCESS_KEY
       - TFC_TOKEN
     configs:
       - DEV_DO_K8S_CDKTF_STATE
       - STG_DO_K8S_CDKTF_STATE
       - PRD_DO_K8S_CDKTF_STATE
       - DO_DEV_K8S_CONFIG
       - DO_STG_K8S_CONFIG
       - DO_PRD_K8S_CONFIG
       - DO_DEV_REDIS_CONFIG
       - DO_DEV_POSTGRES_CONFIG
       - DO_DEV_MYSQL_CONFIG
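Once an ops.yaml like this is in place, the command is invoked through the CTO.ai ops CLI. A minimal sketch of the invocation (this assumes the op has already been built or published to your team; the exact interactive prompts depend on which secrets and configs are already stored in your team's vault):

```shell
# Run the setup command defined above; the CLI prompts for any
# secrets (DO_TOKEN, TFC_TOKEN, ...) not already stored for your team.
ops run setup-do-k8s-cdktf
```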

Creating Reusable Commands

In this section, I will describe how to use CTO.ai commands to deploy a service, set up Kubernetes infrastructure, and deploy your workflow to an environment. As we discussed earlier, commands let you configure and build reusable configurations for your microservices and application resources. Commands define the service, runtime, or environment used to build your infrastructure and execute your application jobs and steps.

Let me give you an example of defining steps for your application workflow within commands:

version: "1"
commands:

# setup aws ecs fargate infrastructure
 - name: setup:0.2.0
   run: ./node_modules/.bin/ts-node /ops/src/setup.ts
   description: "setup an environment"
   # environment variables
   env:
   # add static env vars
     static:
       - STACK_TYPE=aws-ecs-fargate
       - AWS_REGION=us-west-1
     # add and store aws secrets
     secrets:
       - AWS_ACCESS_KEY_ID
       - AWS_SECRET_ACCESS_KEY
       - AWS_ACCOUNT_NUMBER

      # pass environment host and database configurations
      configs:
       - DEV_AWS_ECS_FARGATE_STATE
       - STG_AWS_ECS_FARGATE_STATE
       - PRD_AWS_ECS_FARGATE_STATE
       - DEV_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
       - STG_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
       - PRD_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
       - DEV_AWS_ECS_FARGATE_SERVICE_VAULT_ARN
       - STG_AWS_ECS_FARGATE_SERVICE_VAULT_ARN
       - PRD_AWS_ECS_FARGATE_SERVICE_VAULT_ARN


 # deploy environment on aws ecs fargate workflow
 - name: deploy:0.2.0
   run: ./node_modules/.bin/ts-node /ops/src/deploy.ts
   description: "deploy to an environment"
   env:
      # add static env vars
      static:
        - STACK_TYPE=aws-ecs-fargate
        - AWS_REGION=us-west-1
      # add and store aws secrets
      secrets:
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_ACCOUNT_NUMBER
      # pass environment host, resource, and database connections
      configs:
        - DEV_AWS_ECS_FARGATE_STATE
        - STG_AWS_ECS_FARGATE_STATE
        - PRD_AWS_ECS_FARGATE_STATE
        - DEV_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
        - STG_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
        - PRD_AWS_ECS_FARGATE_CLUSTER_VAULT_ARN
        - DEV_AWS_ECS_FARGATE_SERVICE_VAULT_ARN
        - STG_AWS_ECS_FARGATE_SERVICE_VAULT_ARN
        - PRD_AWS_ECS_FARGATE_SERVICE_VAULT_ARN

 # destroy aws ecs fargate infrastructure
 - name: destroy:0.1.0
   run: ./node_modules/.bin/ts-node /ops/src/destroy.ts
   description: "destroy an environment"
   env:
   # add static env vars
     static:
       - STACK_TYPE=aws-ecs-fargate
       - AWS_REGION=us-west-1
   # add and store aws secrets
     secrets:
       - AWS_ACCESS_KEY_ID
       - AWS_SECRET_ACCESS_KEY
       - AWS_ACCOUNT_NUMBER
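With setup, deploy, and destroy defined, a typical lifecycle for one environment might look like the following (a sketch: `ops run` is the standard CLI entry point, and any interactive prompts for environment or credentials depend on how each command's setup.ts, deploy.ts, and destroy.ts scripts are written):

```shell
ops run setup     # provision the ECS Fargate stack for an environment
ops run deploy    # deploy your service to that environment
ops run destroy   # tear the environment back down
```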

In this example, I’m setting up AWS ECS Fargate infrastructure in the ops.yaml file. In the commands section, we define the ECS Fargate setup step with its name, run path, and description. Next, we define our environment variables and secrets so our commands can execute against our ECS Fargate infrastructure. After we’ve defined our secrets and environment variables, we can now pass and store our development, staging, and production configs. Configs in CTO.ai let you store the relational and non-relational data required for your automation, such as your application database details, resource name variables, infrastructure identifiers, and any other config values your workflow automation needs.

Following the configuration above, you can define your own CTO.ai commands for your workflows: you can write reusable configurations for both existing and new infrastructure by passing in the parameters and command objects your configuration requires.
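For example, reusing the same setup logic in a second region only requires changing the static environment variables. A hypothetical variant (the `setup-eu` command name and eu-west-1 region are illustrative, not part of the example above):

```yaml
 - name: setup-eu:0.2.0
   run: ./node_modules/.bin/ts-node /ops/src/setup.ts
   description: "setup an environment in eu-west-1"
   env:
      static:
        - STACK_TYPE=aws-ecs-fargate
        - AWS_REGION=eu-west-1   # only the region changes; the run logic is reused
      secrets:
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
        - AWS_ACCOUNT_NUMBER
```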


Conclusion

As shown in the examples above, commands give DevOps and platform engineers the ability to reuse configuration files and define their steps using command parameters. Reusing CTO.ai command workflows helps your team optimize your infrastructure builds and improves the readability of the syntax in your command configuration files. CTO.ai commands integrate seamlessly with all your favorite cloud vendors, self-hosted Kubernetes clusters, container registries, and Git repositories.