Best Practices for Commands Workflows
For general guidance about writing the Dockerfiles that define containerized environments for your workflows, be sure to review the Best practices for writing Dockerfiles guide published by Docker.
Avoid Unnecessary ops.yml Features
A number of options available for your Commands workflows via ops.yml configurations are intended for running those Commands locally and may cause problems if they are included in a published workflow on the CTO.ai platform. The ops.yml configuration options which do not work in all execution environments for Commands are the following:
bind
port
mountCwd
mountHome
For maximum compatibility, set both mountCwd and mountHome to false. Those two features, as well as the bind option, are all intended to allow you to run code on your local machine inside your Command container while you are developing it. When you run a Commands workflow from Slack, your workflow runs remotely on our platform; thus, there is no local directory for the container to bind to.
Other features, like the port setting, are available locally for development purposes and available in production for other types of workflows (such as Services workflows), but they should not be used for Commands workflows in production.
Managing Container Privileges
Workflow containers running on the CTO.ai platform are run by an unprivileged user (e.g. ops), enforcing the security best practice of not running containers as root. Because of this, the logic of your Commands workflow shouldn't rely on a specific action being performed by the root user.
To perform any action that requires system-level privileges (e.g. installing a package), you should place that action in your workflow’s Dockerfile, e.g.:
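A sketch of what this might look like is below. The base image name is illustrative rather than an exact CTO.ai image reference, and the installed package is just an example; Dockerfile instructions run as root by default, so privileged actions belong there rather than in your workflow's runtime logic:

```dockerfile
# Base image name is a placeholder; use the official CTO.ai base
# image appropriate for your workflow's runtime.
FROM registry.cto.ai/official_images/node:latest

# Instructions in the Dockerfile run as root by default, so
# system-level actions like package installation happen here,
# at build time.
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

# The workflow's own code then runs later as the unprivileged
# ops user, with no need for root privileges.
WORKDIR /ops
COPY . .
```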
Avoid Image Bloat
Remove unneeded files
Any given Docker container typically runs on an image composed of multiple immutable layers in a stack, where each layer roughly corresponds to one instruction in an image’s Dockerfile.
As your container image is built, all files that are added to a layer will persist as part of that layer’s read-only filesystem. Even if you add a subsequent instruction to your Dockerfile to delete a file from a previous layer, the current layer’s filesystem will include a “whiteout” property that makes the “deleted” file from a lower layer unreachable—it won’t actually be removed from the lower layer’s filesystem.
Because of this, it's important that each layer cleans up any files that were used to build the layer but which aren't needed when the image is run. For example, when installing software with a package manager (apt on Debian), many cache files are created to speed up the next usage of the package manager. Ensure that these cache files are removed as part of the same Dockerfile instruction that created them, e.g.:
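A minimal instruction of this form (the package installed here is git, as a representative example) updates the package index, installs the package, and deletes the apt cache files, all within a single layer:

```dockerfile
# Install git and remove the apt cache files in the same RUN
# instruction, so the cache never persists in any image layer.
RUN apt-get update \
    && apt-get install -y git \
    && rm -rf /var/lib/apt/lists/*
```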
The final shell command (rm) included in the above example Dockerfile instruction removes the cache files that were added when we updated the package manager and installed git. They aren't needed when our image is run, so we ensure they've been deleted before moving on to the next layer in the Dockerfile.
Remove intermediate files from layers
When you need to install a tool in your container image for the purpose of installing or configuring another tool, you should ensure that the intermediate tool is removed from that layer. Within a single Dockerfile instruction, you should install the tool you require, then remove the intermediate package—this ensures unnecessary files aren’t included in the final image.
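As an illustrative sketch (the tool being fetched and its URL are hypothetical placeholders; substitute whatever your image actually needs), curl is installed only to download a binary and is purged within the same RUN instruction, so it never persists in a layer:

```dockerfile
# curl is an intermediate tool here: it is only needed to fetch a
# binary, so it is installed and purged within a single layer.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    # Placeholder download; replace with the artifact you need.
    && curl -fsSL -o /usr/local/bin/some-tool \
        https://example.com/releases/some-tool \
    && chmod +x /usr/local/bin/some-tool \
    && apt-get purge -y curl \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
```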
Only install necessary packages
By default, Debian's package manager apt installs both the mandatory dependencies of a package and the optional recommended packages that may support it. The recommended packages are often not necessary in a containerized environment, so you can forgo their installation by passing the --no-install-recommends flag to the apt-get install command.
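For example, combining this flag with the cache cleanup described above (git again stands in for whatever package you need):

```dockerfile
# --no-install-recommends skips apt's optional recommended
# packages, keeping the installed footprint small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
```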
Please note that, in some cases, failing to install the recommended packages may cause unexpected behavior, so it may take some experimentation to confirm which packages don’t need their optional dependencies.
Use Multi-Stage Builds
Multi-stage builds are a feature provided by Docker to help you avoid the issues described above. By building your application in one container and copying the build artifacts to a lightweight runtime container, you can avoid packaging development and build dependencies in your final image.
The following heavily-commented Dockerfile demonstrates how you might build your application in your own custom container image, then copy the resulting build artifacts from the builder image to an official CTO.ai base image for use with our platform:
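The Dockerfile below is a sketch of that pattern for a Node.js application. Both image names are assumptions for illustration: the builder stage uses a public Node image, and the runtime stage's registry path stands in for an official CTO.ai base image, whose exact name you should confirm in the CTO.ai documentation.

```dockerfile
# ---- Build stage ----
# A full-featured image with the toolchain needed to build the
# application. Nothing from this stage reaches the final image
# unless it is explicitly copied out.
FROM node:18 AS builder
WORKDIR /build
COPY package*.json ./
RUN npm ci
COPY . .
# Produce the build artifacts (e.g. a compiled bundle in ./dist).
RUN npm run build

# ---- Runtime stage ----
# Placeholder path for an official CTO.ai base image; check the
# CTO.ai docs for the exact image name for your runtime.
FROM registry.cto.ai/official_images/node:latest
WORKDIR /ops
# Copy only the build artifacts from the builder stage; the
# development and build dependencies stay behind in that stage.
COPY --from=builder /build/dist ./dist
COPY --from=builder /build/package.json ./
# Install only the production dependencies in the final image.
RUN npm install --production
```

Because the final image is based only on the runtime stage, compilers, dev dependencies, and any intermediate files from the builder stage are excluded automatically, without the manual cleanup described in the earlier sections.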