In part 1 of this series, we deployed our EKS infrastructure using the Developer Control Plane and extended the Kubernetes API by installing the Prometheus and Grafana kube-prometheus stack.
In this second part, we will configure our Custom Resource Definitions (CRDs) and set up Grafana and Prometheus to gather insights and key metrics from our Developer Control Plane.
1. Back in your terminal, check your Secrets using:

```shell
kubectl get secrets
```

The `kubectl get secrets` command lists the certificates, usernames, and passwords for your different UIs. A Kubernetes Secret is an object that contains a small amount of sensitive data, such as a password, a token, or a key.
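Secret values are stored base64-encoded, so they need decoding before you can use them. A quick sketch (the secret name `prometeus-grafana` and the `admin-password` key are assumed from this stack's defaults; check `kubectl get secrets` for yours):

```shell
# Pull the Grafana admin password out of its Secret (requires cluster access):
#   kubectl get secret prometeus-grafana -o jsonpath='{.data.admin-password}'
# The value comes back base64-encoded; pipe it through base64 -d to decode it:
echo 'cHJvbS1vcGVyYXRvcg==' | base64 -d   # decodes to prom-operator
```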
- The Prometheus stack also creates CRDs. These provide a more advanced way of extending the Kubernetes API beyond its default resource types.
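To make the CRD idea concrete, here is a minimal ServiceMonitor, one of the custom resources the Prometheus operator installs. This is an illustrative sketch, not part of the tutorial's deployment: the app name, namespace, and port are hypothetical, and the `release` label must match your own Helm release name.

```yaml
# Illustrative only: tells Prometheus to scrape any Service labeled
# app: my-app on its "metrics" port every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
  namespace: default
  labels:
    release: prometeus   # must match the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s
```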
2. Next, describe your Prometheus StatefulSet and Alertmanager so you can make edits and changes in your code editor.
- Describe the Prometheus deployment so you can inspect the Prometheus operator:

```shell
kubectl describe deployment prometeus-kube-prometheus-operator > oper.yaml
```
The describe command gives you information about the container and image. Back in your terminal, you can open each component in your code editor and change any parameter that suits your workflow, though it's recommended you leave everything as default. Note that describe output is human-readable rather than an editable manifest; to pull a manifest you can modify and re-apply, use `kubectl get deployment prometeus-kube-prometheus-operator -o yaml` instead.
3. After configuring and ensuring all components are running, we will access Grafana using port forwarding. Before you can port forward, you need to find the port on which your Grafana pod is listening.
- Get your pods using:

```shell
kubectl get pod
```

- Get the specific port from the pod's logs using the `kubectl logs` command (replace the pod name with the one from your own cluster):

```shell
kubectl logs prometeus-grafana-67899bdcd9-z9f8d -c grafana
```

- You'll see that your pod is listening on port 3000, Grafana's default.
4. Port forward your deployment using:

```shell
kubectl port-forward deployment/prometeus-grafana 3000
```
`kubectl port-forward` allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without needing to expose them beforehand.
5. In your browser, go to http://localhost:3000 to see your Grafana welcome page.
- Enter the username, which is admin, and the password, which is prom-operator. Click on Log in, and you'll see your Grafana dashboard.
- If you click on the Dashboards icon, you'll be able to see all the dashboards for your Kubernetes resources.
6. Next, in the Alertmanager dashboard, you can see a general overview of your stack.
- You can add more dashboards and get the full metrics of your Kubernetes resources. Back in Grafana, click on Dashboards and select Kubernetes API Server.
From the dashboard, you can see your:
- Error budget
- Write SLI (Requests)
- Write Availability
- You can also see your CPU usage, memory quota, and pod usage.
- You can also view your Scheduler monitoring and your Namespace usage.
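To make the error budget panel concrete: an availability SLO implies a fixed allowance of failed time per window. A quick back-of-the-envelope calculation (the 99.9% SLO and 30-day window are illustrative numbers, not values from the dashboard):

```shell
# For a hypothetical 99.9% availability SLO over a 30-day window,
# the error budget is the 0.1% of the window you are allowed to fail.
awk 'BEGIN {
  slo = 0.999
  window_min = 30 * 24 * 60              # 43200 minutes in 30 days
  printf "error budget: %.1f minutes\n", (1 - slo) * window_min
}'
```

When the dashboard shows the error budget shrinking, it means failed requests are consuming that allowance faster than the window is rolling forward.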
7. View your Prometheus graphs and status by enabling port forwarding on your Prometheus pod using the `kubectl port-forward` command:

```shell
kubectl port-forward prometheus-prometeus-kube-prometheus-prometheus-0 9090
```
With the Prometheus UI, you'll see which metrics endpoints are being scraped and can get deeper insights using the PromQL query language.
- Go to http://localhost:9090 in your browser. You will see your Prometheus UI.
- Next, click on Alerts. You'll be able to see every alert configured for you.
- In the Status menu, click on Targets.
- You'll be able to see the targets of your monitors and whether they are healthy.
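Back in the Graph tab, you can experiment with PromQL. As an illustrative example (not a query from the tutorial), the following uses the API server's `apiserver_request_total` metric to chart the per-second request rate over the last five minutes, broken down by HTTP response code:

```promql
sum(rate(apiserver_request_total[5m])) by (code)
```

A sudden rise in the 5xx series here is the kind of signal that would eat into the error budget shown on the Kubernetes API Server dashboard.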
Do more with the Developer Control Plane today
In this tutorial series, you learned how to extend the Kubernetes API using CRDs. Configuring CRDs makes it easy to handle the entire lifecycle of each custom object of your Kubernetes resources deployed on the Developer Control Plane. You can now customize your own resources and extend the Kubernetes API to achieve other functionality, like monitoring your nodes and specifying operational tasks on your Developer Control Plane.
Our Developer Control Plane supports more functionality and core features for your Kubernetes resources, like enabling auto-deployment, managing containers, and solving a wide range of deployment issues for your Kubernetes applications. You can also extend our Kubernetes operator pattern to automate specific complex tasks, handle and remove limits from your Kubernetes objects, and auto-scale your processing capacities to match user demands.