The recommended way to access your Kubernetes dashboard is to use kubectl proxy, which exposes the Kubernetes REST API over an HTTP connection between localhost (your local machine) and the proxy server.
However, there is an easier way to access the dashboard for your cluster: expose the dashboard service as a NodePort, figure out which node of your cluster is running the dashboard pods, and then access the dashboard through that node.
In this tutorial, I will show you how to install the Kubernetes dashboard on your Kubernetes cluster. This will allow you to manage your cluster using the web UI dashboard.
You can use the dashboard to get an overview of the applications running on your cluster, as well as create or modify individual Kubernetes resources. You can also execute commands, subject to your permissions. The dashboard is served over HTTPS, you log in with a Bearer Token, and you can use RBAC to define access to UI components.
Using the Kubernetes dashboard:
- You can display information about workloads (Deployments, Pods, ReplicaSets), Namespaces, Nodes, and Storage.
- You can deploy your applications.
- You can scale, modify, and delete your applications, Pods, and Services.
- You can manage any Kubernetes resource, just as you can with the kubectl command.
After we deploy the dashboard, it creates a dedicated namespace along with the other resources it requires.
Steps
1. Visit the Kubernetes dashboard docs and copy the deployment command into your terminal, or clone the GitHub repository and apply the YAML file directly with the kubectl command (a sketch of the clone approach follows the command below).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
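If you prefer the clone approach mentioned above, here is a minimal sketch, assuming the v2.4.0 tag matches the manifest URL used here and that the recommended.yaml path has not moved in the repository:
git clone --branch v2.4.0 https://github.com/kubernetes/dashboard.git
kubectl apply -f dashboard/aio/deploy/recommended.yaml
Both approaches create the same resources, so pick whichever fits your workflow.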
2. Check the Kubernetes dashboard GitHub repository, which contains the manifest files for all the resources required by the dashboard, such as the namespace, services, deployments, service accounts, and role-based access control (RBAC) bindings.
3. Once the command runs, the service account, cluster role, deployment, and the other required resources are created.
4. You can get an overview of the Pod, Service, Deployment, and ReplicaSet information using kubectl get all -n kubernetes-dashboard.
5. Check and confirm that the kubernetes-dashboard namespace was created.
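For example, this should return the namespace if it exists:
kubectl get namespace kubernetes-dashboard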
6. Next, check that the Pods are running in this namespace. Two Pods should be running: the dashboard-metrics-scraper Pod for metrics collection and the kubernetes-dashboard Pod itself.
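You can list them with:
kubectl get pods -n kubernetes-dashboard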
In the meantime, you can use the kubectl proxy command whenever you want to access the Kubernetes dashboard. After applying the dashboard manifest, run kubectl proxy in your terminal.
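With the proxy running, the dashboard is served through your local machine; for dashboard v2.x it is typically reachable at the following URL (the proxy listens on port 8001 by default):
kubectl proxy
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/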
7. Change the service type from ClusterIP to NodePort so that you can access your Kubernetes dashboard from your web browser. Edit the service using kubectl edit service/kubernetes-dashboard -n kubernetes-dashboard
- Go to the bottom of the file, change the type from ClusterIP to NodePort, and add a nodePort to the port entry under spec. When you are done, save your changes (a sketch of the resulting spec is shown below).
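This is roughly what the edited spec should look like, assuming the dashboard v2.x defaults of port 443 and targetPort 8443, and using the nodePort 32321 that appears later in this tutorial (any free port in the 30000-32767 NodePort range works):
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
    nodePort: 32321
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort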
8. You can confirm the change using kubectl get svc -n kubernetes-dashboard. From there, you can see that the type has changed to NodePort with an external port; the Kubernetes dashboard service is now running on NodePort 32321.
9. Opening port 32321 as a NodePort lets you access your dashboard from outside the cluster. Get the IP address of your node using the kubectl get nodes -o wide command. In this example the node address is 192.168.65.4, so the Kubernetes dashboard will be accessible at https://192.168.65.4:32321
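If you prefer to script this step, the following one-liner should print the first node's internal IP, assuming your cluster reports an InternalIP address for its nodes:
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'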
10. To access your Kubernetes dashboard, you will use a Bearer Token.
11. Describe the service account for the Kubernetes dashboard using kubectl -n kubernetes-dashboard describe sa kubernetes-dashboard
12. Describe the secret that holds your token using kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-token-vrzxm (the random suffix of the secret name will differ on your cluster).
- Use the generated token to log in to your Kubernetes dashboard.
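If you want to extract the token value directly instead of copying it from the describe output, a minimal sketch, assuming the same secret name as above (get returns the token base64-encoded, so it needs decoding):
kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-vrzxm -o jsonpath='{.data.token}' | base64 --decode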
13. Visit your NodePort URL in your browser, enter your token, and sign in.
14. When you sign in, you will see your Workload Status and a detailed overview of your Deployments and Pods.
15. Check your Services, along with details such as Namespace, Labels, and Internal and External Endpoints.
16. Get the CPU requests and limits and the total memory requests of your Nodes.
17. View your active Namespaces.
18. See when your ClusterRoleBindings were created, and get the specific name of each role.
19. View logs from your Pods directly in your Kubernetes dashboard.
20. You can add and deploy a new resource by clicking the + icon in your dashboard.
21. From here, I will upload a YAML file specifying the resources to deploy. Select your file and click Upload.
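As an illustration only (this is not the exact file used in this walkthrough), here is a minimal Deployment manifest you could upload; the demo-nginx name and nginx image are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx
  labels:
    app: demo-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80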
22. You will see your Deployment created, along with its Workload Status. You can also see the images, Pods, Labels, Namespaces, and ReplicaSets in use.
23. After a short wait, you will see that your Deployments, Pods, and ReplicaSets have been created.
Looking for a Developer Control Plane for Kubernetes?
By integrating CTO.ai CI/CD and command-based delivery workflows, you can set up a powerful control plane with our rich set of resources and continuous delivery toolset. With our delivery workflows, you can amplify your Kubernetes workloads and manage your workflow around Kubernetes without having to build all of your workflow tooling from scratch.
CTO.ai is more than just a deployment tool; it's an extensive Kubernetes dashboard and management plane.
Help your entire team manage and scale your cloud-native infrastructure without needing to hire more hard-to-find DevOps / Platform Engineers. Our delivery workflows keep your workloads in sync with your CI pipeline and run your continuous deployments automatically. Setting up triggers and previewing your deployments with our friendly user interface is a simple process.
Using our command-based workflow, you can script out and input commands for what you want to do. It continuously checks your workloads for updates, and when any change occurs, it instantly pulls your workflow from your repository and runs it against your cluster.
To get started with CTO.ai, sign up on our platform, or request a demo here.
If you have specific questions or issues configuring the file, I’d love to hear about them. Contact us here or ask a question in the CTO.ai Community!
You can also follow us on Twitter and on our blog. And if you’ve enjoyed this post, please, take a second to share it on Twitter.