Defining Workflows with ops.yml Configurations
The ops.yml file is the primary configuration file defining Commands, Pipelines, and Services workflows on the CTO.ai platform. Alongside a Dockerfile that defines the runtime environment for executing your workflow’s custom code, your ops.yml configuration controls how workflows are run on our platform.
A single ops.yml config file may be used to define one or more CTO.ai workflows.
This page is intended to give you an understanding of how an ops.yml file is used to configure workflows on the CTO.ai platform, as well as the relationship between the various elements within an ops.yml file and the workflow container image that is built as a result. If you wish to jump right into configuring your workflows, our ops.yml Reference page explains the structure of an ops.yml file and the meaning of the elements within it.
Anatomy of a Workflow
At the simplest level, each workflow defined in an ops.yml file executes its run command in the container built from the workflows’ common Dockerfile.
Dockerfile
A Dockerfile defines the common containerized runtime environment that is built to execute the workflows in a given ops.yml file. Each ops.yml file that defines one or more Commands or Services workflows (or an individual workflow in the .ops/jobs/ directory of a Pipelines workflow) may correspond with exactly one Dockerfile. Thus, a single ops.yml file that defines several Commands workflows will use the single Dockerfile included in that directory to build a common container image for running those workflows.
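As a point of reference, the box below sketches such a minimal Dockerfile; the base image reference is an assumption, and the exact image tag generated by ops init for your chosen runtime may differ.

```dockerfile
# Illustrative sketch only; substitute the base image generated by `ops init`
# for your runtime (the tag shown here is an assumption).
FROM registry.cto.ai/official_images/bash:latest

# Import your application code, owned by the non-privileged ops user (ops:9999)
ADD --chown=ops:9999 . .
```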
A minimal functional container image (as shown above in the default Dockerfile generated for Bash-based Commands workflows) pulls one of our base images, imports your application code (ADD --chown=ops:9999 . .), and that’s it. For all workflows, the entrypoint for your workflow container is defined in your ops.yml file (described below), rather than being directly defined as part of the final container image.
Self-Contained Workflows
The files in the following box are the basic ops.yml configurations that are generated when initializing new Commands or Services workflows that use our Bash runtime environment.
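Both are sketched below; the workflow names, versions, and default values shown are placeholders, and the exact contents generated by ops init may differ.

```yaml
# Commands workflow (illustrative sketch; names and values are placeholders)
version: "1"
commands:
  - name: my-command:0.1.0
    description: A sample Commands workflow
    run: ./main.sh
```

```yaml
# Services workflow (illustrative sketch; names and values are placeholders)
version: "1"
services:
  - name: my-service:0.1.0
    description: A sample Services workflow
    run: ./main.sh
    port:
      - '8080:8080'
    domain: ''
```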
In both of the examples above, the file contains some basic metadata (name, description) and a shell command to run in the workflow’s container. When you build a workflow using the ops build CLI command, the resulting container image will be tagged with the name value and act as the runtime environment for the command specified under the run key.
In the Services workflow example, the port and domain values are additional configuration settings that control the execution of the workflow’s container image on the CTO.ai platform.
Job-Based Workflows
The default ops.yml file for a Pipeline, our third type of workflow, is shown in the box below. Unlike Commands and Services workflows, the application logic for Pipelines resides in one or more jobs defined by the workflow. The steps key for each Job defines an array of strings to be passed to the workflow container; when a Pipeline workflow is built, these steps are added to the final image as lines in a shell script to be run within the resulting container.
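The box below is an illustrative sketch rather than the exact generated file; the workflow name, Job name, and steps are placeholders.

```yaml
# Illustrative sketch of a Pipelines ops.yml; names and steps are placeholders
version: "1"
pipelines:
  - name: my-pipeline:0.1.0
    description: A sample Pipelines workflow
    jobs:
      - name: sample-job
        description: An example Job
        steps:
          - echo "Step one"
          - echo "Step two"
```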
For any given ops.yml file defining one or more Pipeline Jobs, you can run ops init . -j in the directory containing that ops.yml file to generate scaffolding code for each Job. Separate template directories for each Job are created as subdirectories of .ops/jobs/, with each Pipeline Job functioning as a self-contained Commands workflow.
For a deeper explanation of the ops init subcommand and the scaffolding generated by our CLI, our Workflows Overview document has you covered.
Configuring Workflow Behavior with Environment Variables
In line with the best practices laid out for Twelve-Factor Applications, workflows on the CTO.ai platform provide multiple methods to make your application’s configuration available in its execution environment.
The simplest way to pass application configuration to your workflow’s runtime environment is through static environment variables. The static key within the env mapping accepts an array of strings defining environment variables in KEY=val format. For example, the ops.yml excerpt below shows how the environment variable DOCKER_REGISTRY can be set to the value ghcr.io in the Pipeline workflow’s execution environment:
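(The excerpt below is an illustrative sketch: the workflow and Job definitions are placeholders, and only the DOCKER_REGISTRY=ghcr.io entry is taken from this example.)

```yaml
pipelines:
  - name: my-pipeline:0.1.0
    description: A sample Pipelines workflow
    env:
      static:
        - DOCKER_REGISTRY=ghcr.io
    jobs:
      - name: sample-job
        steps:
          - echo "Pushing images to $DOCKER_REGISTRY"
```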
Of course, static env variables must be set on a per-workflow basis, and if the value of the variable must be changed in multiple workflows, you’d have to manually change the value in the ops.yml file of each workflow. To make it easier for you to manage your application configurations, we offer Configs and Secrets Stores, where you can centrally set these values for all workflows published to your CTO.ai team.
Before a workflow can use a value from either your Secrets Store or Configs Store, you’ll need to make that value accessible as an environment variable by specifying its key in the secrets or configs array under the workflow’s env mapping. Unlike static variables, however, you only need to specify the keys of the values you want to make available in the workflow’s runtime environment.
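The excerpt below sketches an env mapping that draws on all three sources at once; the surrounding workflow definition is a placeholder, while the variable names match those discussed in the list that follows.

```yaml
pipelines:
  - name: my-pipeline:0.1.0
    description: A sample Pipelines workflow
    env:
      static:
        - DOCKER_REGISTRY=ghcr.io
      configs:
        - BASE_API_URL
      secrets:
        - GITHUB_TOKEN
    jobs:
      - name: sample-job
        steps:
          - echo "Deploying images from $DOCKER_REGISTRY"
```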
In the example above, all three types of environment variables are set under the env mapping: static, configs, and secrets. However, each value comes from a different place:
- The static variable DOCKER_REGISTRY is set as ghcr.io directly in the ops.yml file.
- The configs variable BASE_API_URL retrieves the value of that key from the Configs Store of the Team that owns the workflow.
- The secrets variable GITHUB_TOKEN retrieves the value of that key from the Secrets Store of the Team that owns the workflow.
You can define configs or secrets values for your CTO.ai Team from our Dashboard or via our CLI.
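For reference, the subcommands typically used for this are sketched below; the exact names and any flags are assumptions, so run ops help to confirm them for your installed CLI version.

```sh
# Assumed subcommand names; each prompts for a key and value when run
# without arguments. Confirm with `ops help` for your CLI version.
ops configs:set
ops secrets:set
```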
Fetching Variables via SDK
In addition to the features described above which allow you to pass Configs and Secrets to your workflow as environment variables, it’s possible to access your Secrets and Configs from your application code without needing to specify their keys in your ops.yml file. This can be done using methods such as sdk.getSecret() and sdk.getConfig(), two examples from our Node SDK.
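A minimal sketch of this pattern is shown below; the package name (@cto.ai/sdk) and the assumption that both methods resolve to plain string values may differ in your SDK version, while the method names themselves come from the text above.

```javascript
// Minimal sketch using the CTO.ai Node SDK; the package name and return
// types are assumptions, while the method names come from the text above.
const { sdk } = require('@cto.ai/sdk')

async function main () {
  // Fetch values from your Team's Secrets and Configs Stores at runtime,
  // without declaring them under `env` in ops.yml
  const githubToken = await sdk.getSecret('GITHUB_TOKEN')
  const baseApiUrl = await sdk.getConfig('BASE_API_URL')

  console.log(`Calling the API at ${baseApiUrl}`)
}

main()
```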
Container Filesystem Restrictions
In keeping with standard security practices, workflow containers are run as a non-privileged user to minimize the damage that could be caused if a malicious process were to gain access to your Commands, Pipelines, or Services workflows.
Workflow containers are run with their user:group set to ops:9999. Because of this runtime restriction, if your application code requires elevated privileges, you will need to ensure the necessary permissions are configured in your Dockerfile.
For writing files within a running container, the /home/ops and /tmp directories are writeable by default.
If your workflow requires write access to a directory beyond these default writeable directories, you should set the ownership of that location to the ops:9999 user. This can be accomplished by adding the --chown flag to an ADD or COPY instruction in the Dockerfile, or by using a RUN instruction to change the ownership or permissions of the files or directories that you wish to be writeable:
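The fragment below sketches both approaches; the paths shown are placeholders for your own application’s directories.

```dockerfile
# Grant ownership to the ops user when adding files to the image
COPY --chown=ops:9999 ./data /usr/src/app/data

# Make an existing directory writeable by the ops user at runtime
# (assumes this instruction runs while the build user is still root)
RUN chown -R ops:9999 /var/cache/myapp && chmod -R u+w /var/cache/myapp
```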