Introduction

Container runtimes are the programs that create and run containers from container images.

Where Do Runtimes Live?

Container runtimes live in userspace, but the Linux kernel provides the facilities they rely on to instantiate and run containers: namespaces, cgroups, and security modules such as SELinux and AppArmor.
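To make that concrete, here is a minimal Go sketch of the kind of thing a runtime asks the kernel for: a process started in its own UTS, PID, and mount namespaces. It is purely illustrative (Linux-only, needs root or user namespaces) and not how any particular runtime is implemented; a real runtime would also set up cgroups, a root filesystem, and SELinux/AppArmor profiles.

    // Start a shell in fresh namespaces using clone(2) flags.
    // Illustrative only; not a real container runtime.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // New UTS, PID, and mount namespaces: the shell becomes PID 1 in its
        // own PID namespace and can change its hostname without affecting the
        // host. The isolation itself is done entirely by the kernel.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }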

History

In 2002, kernel namespaces became available in the Linux kernel; that was the first container-related feature. Then, in 2006, OpenVZ was created so you could run containers, but it required a kernel-level patch: you had to build your server with that specific kernel image to run containers. In 2007, another container-related feature, cgroups, landed in the kernel. In 2009, Mesos started as a research project and became the first userspace implementation of cgroups. In 2011, LXC became the first stand-alone container runtime. Eventually, in 2013, Docker shipped its first release, which was based on LXC, and quickly became hugely popular.

In 2014, LXC didn’t provide all the fine-grained isolation that Docker’s developers wanted from the kernel, so Docker released libcontainer and built on it instead of LXC. In the same year, rkt became available as an alternative to Docker.

In 2016, the Kubernetes community came up with the Container Runtime Interface (CRI), and with it shims such as CRI-O, rktlet, and cri-containerd, because they had needed a lot of modifications to make rkt work with Kubernetes. The goal was that any new container runtime would implement CRI, so Kubernetes would not have to care which runtime it calls out to in the background to run containers.

Some of the CRI shims available are:

  • CRI-O
  • rktlet
  • cri-containerd
  • Frakti
  • Kata Containers

  1. OpenVZ Containers: OpenVZ was first released in 2005 with a kernel-level patch that lets you run containers. The interesting thing about OpenVZ is that it supports live migration, a feature from the virtualization world: a container running on one physical machine can be migrated to another machine in real time with potentially no downtime.

    Cons:
  • Requires building a custom kernel
  • There is currently no stable version (OpenVZ is unstable)
  • OpenVZ 7 is not as stable as OpenVZ 6
  • No CRI or Kubernetes support yet

2. Mesos: Mesos has the first userspace implementation of cgroups, is written in C++, has very good performance, and supports Docker and appc images. The best use for Mesos is with Spark and Flink frameworks for big data applications. The con is that you can’t run these containers outside Mesos; you need the Mesos framework to make them run.

3. LXC: LXC’s nickname is “chroot on steroids” and it has an active community. Its main components are LXC (the actual runtime, written in C), LXD (a daemon that manages your containers and images, written in Go), and LXCFS (a FUSE filesystem that gives containers their own view of parts of the host filesystem, such as /proc). It currently has no Docker/OCI image support and little adoption (no Kubernetes support yet).

4. rkt: rkt supports two types of images (appc and Docker). It’s a single-process runtime that uses its own command, rkt. OCI compliance is still in development, and they plan to achieve it through something similar to runc. It has some unique features, such as support for Trusted Platform Modules (TPM), the ability to run some workloads in isolated VMs, and SELinux support.

Pros

  • Works with Kubernetes
  • Pod-based semantics
  • Single process
  • Runs Docker, appc, OCI (in dev) images.
  • KVM hypervisor for containers.
  • The process can be managed directly by systemd.

Cons

  • CoreOS, the company behind rkt, was acquired by Red Hat, which is not that focused on runtimes anymore; it’s more focused on its Kubernetes distributions.
  • The rktlet (CRI) project is still under development
  • Minimal adoption in production.

rkt has since been discontinued.

5. Docker/containerd: What started out simply as Docker has been split into the Docker daemon, runC, containerd, and CRI-containerd. CRI-containerd became a CRI plugin inside containerd some years ago, and the containerd daemon handles storage, images, and networking and takes care of runC.

In previous years, Docker managed your storage, networking, and images; with the new containerd, those functions have moved into the containerd daemon, and containerd instantiates the containers. With Docker as the runtime, there were also concerns over the mismatch and lack of release sync between Docker releases and Kubernetes releases.
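To make “containerd handles images and takes care of runC” a bit more concrete, here is a rough sketch using containerd’s Go client (the github.com/containerd/containerd package). Treat it as illustrative: it assumes containerd is listening on its default socket, and the namespace, container ID, and image are arbitrary examples.

    // Pull an image and start a container by talking to containerd directly.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Connect to the containerd daemon (default socket path assumed).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "example")

        // containerd pulls and stores the image...
        image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // ...creates the container from it...
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // ...and hands actual execution to runC via a task.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }

The kubelet never calls this client API directly; it talks to containerd through CRI, which is covered below.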

Kubernetes is deprecating Docker as a container runtime after v1.20.


CRI - Container Runtime Interface

CRI is just an interface; it is not containerd or the CRI plugin. It was introduced in 2016 so that Kubernetes can talk to multiple runtimes. It’s a gRPC interface, and the shims available are CRI-O, rktlet (in development), Frakti (to run VMs), and the CRI plugin (formerly cri-containerd).

To implement a CRI integration with Kubernetes for running containers, a container runtime must be compliant with the Open Container Initiative (OCI) specifications.
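To show what “a gRPC interface” looks like in practice, here is a minimal Go sketch of a CRI client making the same kind of call the kubelet makes. The socket path is containerd’s default (CRI-O listens on /var/run/crio/crio.sock instead); the rest is an illustrative assumption, not how the kubelet is actually wired up.

    // Connect to a runtime's CRI socket over gRPC and ask for its version.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // RuntimeService covers pod/container lifecycle; ImageService covers images.
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion)
    }

Any runtime that serves these gRPC services on a socket can sit underneath Kubernetes, which is exactly the point of CRI.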

  • rkt: Covered above. It supports appc and Docker images and has pod-based semantics, but its rktlet CRI shim never made it out of development, and rkt has since been discontinued.

  • containerd/CRI-containerd: In the original CRI-containerd architecture, there was an extra daemon, CRI-containerd, that the kubelet talked to over gRPC. That daemon exposed an image service and a runtime service and talked to another daemon, containerd, over gRPC, and containerd instantiated all the Pods and containers.

containerd is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage and network attachments.

Now there is a single daemon: the kubelet talks CRI to the built-in CRI plugin, which exposes the image service and runtime service, and the same containerd daemon takes care of instantiating the Pods and your containers.

Pros

  • It’s very widely adopted
  • Very stable
  • Great performance
  • Community and Support
  • Supports Windows: Windows Containers on Windows (WCOW) and Linux Containers on Windows (LCOW).

Cons

  • Runtime coupled with daemon, so when you restart the containerd daemon all the containers get restarted
  • Standard packaging has too many daemons
  • Naming confusion

CRI-containerd doesn’t exist as a separate daemon anymore; CRI support is now built into containerd.

  • CRI-O: CRI-O is a shim, not an actual runtime; it allows Kubernetes to talk to another container runtime. Its nickname is “CRY-O”. The main components are:

    - The CRI-O daemon, which is very similar to containerd: it handles storage, images, and networking, and the kubelet talks CRI to it. Alongside the CRI-O daemon, conmon (a container monitor very similar in role to the Docker shim) keeps each container up and running and monitors it. With CRI-O you can run any OCI-compliant runtime. CRI-O is used in Red Hat OpenShift.
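As a sketch of what “the kubelet talks CRI to it” means for images, here is the same kind of CRI client as above, this time asking CRI-O’s image service to pull an image. The socket path is CRI-O’s usual default and the image name is just an example; this is illustrative, not how OpenShift actually drives CRI-O.

    // Ask CRI-O, via its CRI ImageService, to pull an image.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "docker.io/library/alpine:latest"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled image ref:", resp.ImageRef)
    }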

Pros

  • OCI & Kubernetes ready.
  • Great performance.
  • Three daemons by default are packed for Kubernetes.
  • Containers don’t get restarted when the daemon restarts, unlike with containerd and Docker.
  • Extendable to other runtimes.


Cons

  • Cannot run containers outside Kubernetes.
  • No image management by default.
  • Use/installation on non-RH distros can be complicated

  • crun: Started in 2017 by Red Hat and implemented mainly in C. It has the best performance and is OCI compliant, but there isn’t a lot of adoption yet, so it’s still a work in progress.

  • Kata Containers: Kata Containers was started in 2017 under the OpenStack Foundation as a merger of two projects: runV (from Hyper) and Clear Containers (from Intel). The aim of the project is to run containers inside lightweight VMs. The security is very good and it’s OCI compliant. Kata Containers works with Docker, CRI-O, and containerd, and it supports ARM, x86_64/AMD64, and IBM Power and Z series.

Pros

  • Great isolation
  • Great security
  • Stability of VMs.
  • Stateful Apps (Data security, VM attached to storage)

Cons

  • AWS, GCP, and Azure VMs cannot be used yet.
  • Slower than runc.
  • Requires access to KVM.

  • Frakti: Frakti lets Kubernetes run pods and containers directly inside hypervisors via runV. It is lightweight and portable, and because each pod gets an independent kernel it provides much stronger isolation than Linux-namespace-based container runtimes.

The Container Runtime Interface has enabled a pluggable model for container runtimes underneath Kubernetes.

Frakti has also been discontinued.


Other notable runtimes

  • NVIDIA runtime: A GPU-specific implementation; it’s a modified version of runc that integrates libnvidia-container.
  • Railcar: An OCI runtime implementation in Rust.
  • Pouch: Alibaba’s runtime; it’s a shim that uses runc in the background.
  • systemd-nspawn: Based on systemd management, it’s a lightweight isolation mechanism rather than a full runtime; it just provides some namespace isolation.
  • lmctfy: lmctfy was open-sourced in 2015, but it’s no longer being developed because its developers are focusing more on OCI runtimes.

Best Usage Practices.

  • Stability → Containerd
  • Best Kubernetes Integration → CRI-O or cri-containerd (now the CRI plugin)
  • Good Performance → Kata
  • Better Performance → Any with runC or rkt
  • Best Performance → crun
  • Kubernetes & Mesos Env → Containerd
  • Spark & Flink → Containerd
  • Best Security & Isolation → Kata

Using the CTO.ai Infrastructure toolchain, you will be able to customize your CI/CD for build, test, and release directly from your different runtime environments. By integrating our Delivery workflows, your organization will be able to create and manage on-demand Kubernetes environments, securely manage developer access credentials across all cloud environments, and easily track delivery across all your applications and Kubernetes resources.

To get started with CTO.ai, sign up on our platform, or request a demo here.

If you have specific questions or issues configuring the file, I’d love to hear about them. Contact us here or ask a question in the CTO.ai Community!

You can also follow us on Twitter and on our blog. And if you’ve enjoyed this post, please, take a second to share it on Twitter.