Scale Kubernetes Workloads and Remove Limits Using CRDs on a Developer Control Plane (Part 1)

One significant Kubernetes limitation is that, out of the box, the API server only serves a fixed set of built-in resources and objects, which constrains how flexibly you can manage resources and model your workloads. You can extend and amplify Kubernetes resources and objects, and make them more flexible, using custom resource definitions (CRDs) hosted on a Developer Control Plane.

CRDs, together with a Developer Control Plane, give you almost unlimited possibilities. You can adapt your Kubernetes workloads so that they work across your entire cluster and handle resource lifecycle, replication, and other parts of your infrastructure. Custom resources are also ideal when you want to abstract a collection of resources behind a single object. Beyond that, custom resources follow standard Kubernetes API conventions, such as the .spec and .metadata fields, and give you a fully declarative API.

Use cases for Custom Resources

A resource in the Kubernetes API (like Pods, Deployments, and Services) is a collection of API objects of a particular kind, exposed via an endpoint. Kubernetes lets you extend this set of objects using Custom Resource Definitions, so you can add more advanced functionality and introduce new components to your cluster and Developer Control Plane based on your needs. Using CRDs, Kubernetes lets you define, create, and persist any custom object.


What is a Custom Resource?

What is a Resource?
A Resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind. For example, built-in workload resources such as Deployments live in the `apps/v1` API group and are served from its endpoint.
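You can see this in any Deployment manifest: the `apiVersion` field names the `apps/v1` group and version that serves the object (the app name and image below are illustrative):

```yaml
apiVersion: apps/v1        # API group "apps", version "v1"
kind: Deployment           # served from the /apis/apps/v1/.../deployments endpoint
metadata:
  name: demo-app           # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.25
```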

What is a Custom Resource?
Custom resources are extensions of the Kubernetes API: user-defined resource types that are not available in a default installation. Once a custom resource type is registered, you can access its objects with kubectl just like built-in resources. Creating a custom resource also gives you a declarative API: you describe the desired state of an object in a YAML file, and Kubernetes works to keep the object in that state.

How CRDs work in Kubernetes

Whenever you run get or describe commands with kubectl, the request goes to the API server, which routes it against the aggregated list of registered API resources (you can inspect this list with kubectl api-resources). By default, that list contains only the built-in resources that ship with the installation; registering a CRD adds your custom type to it.

What is a Custom Resource Definition

The CustomResourceDefinition API extends the set of existing API resources and lets you define custom resources in YAML format. Once you create a CRD, the API server starts serving your new resource type; you then typically deploy a custom controller (an operator) that watches for create, update, and delete events on your custom resources and reconciles them.
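As a sketch (the group `stable.example.com` and kind `CronTab` are hypothetical names used here only for illustration), a minimal CRD and a matching custom object look like this:

```yaml
# Registers a new "CronTab" resource type with the API server
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames: [ct]
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec: {type: string}
                replicas: {type: integer}
---
# A custom object of the new type, managed like any built-in resource
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab
spec:
  cronSpec: "* * * * */5"
  replicas: 3
```

After applying the CRD, kubectl get crontabs (or kubectl get ct) lists objects of the new type, just like any built-in resource.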

In this tutorial series, you’ll learn how to scale your Kubernetes workloads and extend the API with the Prometheus-Grafana CRDs on the Developer Control Plane.

Prerequisites

  • Kubectl installed on your local machine
  • Kubernetes and Docker installed on your machine
  • AWS account configured with access key and secret key credentials
  • CTO.ai Account and CTO.ai CLI installed

Guide

In this demo, we’ll use the CTO.ai EKS-EC2-ASG Workflow, which is open source on GitHub, to create and set up your EKS infrastructure.

Before you do that, you need to create your secrets in the CTO.ai dashboard. We’ll create four secrets: AWS ACCESS KEY, AWS ACCOUNT NUMBER, AWS SECRET KEY, and your GITHUB TOKEN.

1. Sign up or log in to your CTO.ai account.

2. In your CTO.ai dashboard, click Settings, select Secrets, and enter each secret’s key and value.

3. Clone the EKS Workflow on GitHub to set up your Kubernetes cluster on EKS. Our EKS workflow is open source on GitHub, and you can run it to set up the infrastructure directly from your terminal.

4. Next, build your pipelines locally with the CTO.ai CLI using the ops build . command. The ops build command builds a shareable Docker image for your workflow from the Dockerfile and the set of files located in the path you specify in your source code.

5. Run your pipelines locally with the CTO.ai CLI. The ops run . command starts a workflow you built or one from your team’s registry.

  • Run and set up your entire EKS infrastructure on AWS using ops run -b .

  • The process will build your Docker image and start loading up your EKS stack.

6. Confirm that your resources are running using the kubectl get all command, and check that your EKS cluster is visible in the AWS console.


We’ll be deploying the custom resource definitions and components of the Prometheus stack directly on the Developer Control Plane. With the custom resource definitions, you can manage all your Prometheus components and configure values beyond the Kubernetes defaults. The Prometheus operator manages the components defined by the custom resources.

The Prometheus stack is open source on GitHub and deploys all the Kubernetes components: Secrets, Services, ConfigMaps, Deployments, and the Custom Resource Definitions.

7. Add the Prometheus community Helm chart using helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

  • Update the repo using the helm repo update command

8. Install the Prometheus Helm chart, which contains all the CRDs you’ll use, with the helm install prometheus prometheus-community/kube-prometheus-stack command.
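If you want to tune the stack at install time, you can pass a values file to Helm. A small sketch of common overrides (key names follow the chart’s documented values; verify them against your chart version, and the password below is a placeholder):

```yaml
# values.yaml — example overrides for the kube-prometheus-stack chart
grafana:
  adminPassword: "change-me"   # placeholder initial Grafana admin password
prometheus:
  prometheusSpec:
    retention: 10d             # keep metrics for 10 days
    replicas: 2                # run two Prometheus replicas
```

Apply it with helm install prometheus prometheus-community/kube-prometheus-stack -f values.yaml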

9. Next, get your pods and confirm all components were created using kubectl get pods

  • Get all your components on CLI using kubectl get all

The Prometheus StatefulSet is the core Prometheus server itself. Its pods are created and managed by the Prometheus operator.
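Under the hood, the operator generates that StatefulSet from a Prometheus custom resource, one of the CRDs the chart installed. A trimmed-down sketch of such an object (field names come from the Prometheus Operator API; the values are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus               # custom resource type registered by the chart's CRDs
metadata:
  name: prometheus
spec:
  replicas: 1                  # operator scales the StatefulSet to match
  serviceMonitorSelector: {}   # empty selector: pick up all ServiceMonitors
  resources:
    requests:
      memory: 400Mi
```

Editing this object (for example, raising replicas) is all it takes; the operator reconciles the underlying StatefulSet for you.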

Running kubectl get deployments lists your deployment configurations, which include:

  • The Grafana and Prometheus operator deployments.
  • kube-state-metrics, which monitors the health of Deployments and StatefulSets inside the cluster.
  • ReplicaSets, created by the Deployments, which specify the number of pods each deployment should run.
  • The node-exporter DaemonSet, which runs on every worker node and exposes node-level data as Prometheus metrics so they can be scraped.
  • A Service for each component, through which the pieces of the monitoring stack interact.
  • ConfigMaps, retrieved with the kubectl get configmap command, which hold the configuration files for dashboards, resource clusters, workloads, and the API server.
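To have this stack scrape your own application, you create a ServiceMonitor, another custom resource type installed by the chart. A sketch (the labels and port name below are hypothetical and must match your own Service):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  labels:
    release: prometheus   # kube-prometheus-stack typically selects monitors by the Helm release label
spec:
  selector:
    matchLabels:
      app: demo           # must match your Service's labels
  endpoints:
    - port: metrics       # named port on the Service exposing /metrics
      interval: 30s
```

This is the CRD model in action: instead of editing Prometheus config files, you declare a custom object and the operator wires up the scrape configuration.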

In this article, you’ve seen how quickly and easily you can set up a Developer Control Plane and deploy your Custom Resource Definitions on it.


Watch out for Part 2 of this series, where we’ll look at removing limits from the Kubernetes API to enable advanced insight, scaling your Kubernetes services, and gathering key metrics from your Developer Control Plane using Grafana and Prometheus.