In the previous article, we looked at the introduction to Service Mesh, Istio, and the Istio control plane. In this article, we’ll have a hands-on session on Istio and how to configure it for our Kubernetes Cluster.
Service Mesh is a solution for managing and enabling communication between the individual microservices in an application or microservice stack. A service mesh lets you define which services are allowed to talk to each other and adds the configuration each application needs to enable and secure service-to-service communication inside your cluster.
In this tutorial, we’ll learn how to install Istio Service Mesh in a Kubernetes cluster.
Here is a quick list of what we'll accomplish in this post:
- Install Istio Core on your Kubernetes Cluster.
- Set up and Configure Istio Addons for Tracing & Visualization.
- Configure traffic management and the Envoy proxy.
- Deploy a microservice application in the cluster.
Getting Started
- If you don’t have a running Kubernetes Cluster, you can create one by following our Elastic Kubernetes Service guide here.
- Next, download the latest Istio release for your local machine's operating system using
curl -L https://istio.io/downloadIstio | sh -
and move the extracted installation to a directory of your choice.
- In the bin folder of your Istio installation, you'll find the Istio command line tool,
istioctl
- To manage the control plane with istioctl, we have to make the command available everywhere by adding its location to your PATH variable using the bash command
export PATH=$PATH:/Users/<user_name>/<folder>/istio-1.16.2/bin
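If it helps to see the whole setup in one place, here is a minimal sketch of the download-and-PATH steps, assuming the archive is extracted into your current directory and the downloaded version is 1.16.2 (adjust the folder name to whichever version you get):
# Download and extract Istio; the script creates a versioned folder such as istio-1.16.2
curl -L https://istio.io/downloadIstio | sh -
# Move into the extracted folder and put istioctl on your PATH for this shell session
cd istio-1.16.2
export PATH=$PWD/bin:$PATH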
- Next, run
istioctl
in your terminal to see that your istioctl CLI is working.
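As an extra sanity check, you can also print the client version; a small sketch (before Istio is installed in the cluster, expect it to report only the client version):
istioctl version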
- Get the running namespaces in your Kubernetes cluster.
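Listing the namespaces also confirms that Istio isn't installed yet; a quick check (the exact namespaces depend on your cluster, but a fresh one typically shows only the Kubernetes defaults):
kubectl get ns
# Expect namespaces like default, kube-system, kube-public, and kube-node-lease, with no istio-system yet.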
- Install Istio by running the
istioctl install
command against your cluster.
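If you want to be explicit about which components get installed, istioctl supports configuration profiles; for example, a sketch using the demo profile (handy for evaluation, not recommended for production) with the confirmation prompt skipped:
istioctl install --set profile=demo -y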
- Next, confirm that your Istio installation was successful using
kubectl get ns
You should now see the istio-system namespace in the list.
- Get the Istio-specific pods from the istio-system namespace using
kubectl get pod -n istio-system
Among these pods, you can see the Istiod component, which is the control plane, alongside the ingress gateway, which runs the Envoy data-plane proxy at the edge of the mesh.
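To double-check that the control plane came up cleanly, you can also look at the deployments and ask istioctl for the versions it sees; a light sanity check (component names vary slightly by installation profile):
kubectl get deploy -n istio-system
istioctl version
# istioctl version should now report both the client version and the control plane version.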
Deploy a Sample Application
We’ll be using the Google Cloud microservices demo application. Clone the repository, and get the kubernetes-manifests.yaml file in the release folder.
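A minimal sketch of fetching the demo, assuming you are using the upstream GoogleCloudPlatform/microservices-demo repository, where the manifest lives under the release folder:
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
ls release/kubernetes-manifests.yaml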
- Next, after cloning your repo, apply the Kubernetes manifests file in the default namespace using
kubectl apply -f kubernetes-manifests.yaml
- You can see that all your microservices are created.
- Confirm and get your pods; you can see all of them running using
kubectl get pods
At this point each pod runs only its application container, while the Istio core components run separately in the istio-system namespace.
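If you want to see exactly which containers each pod is running (useful for comparing before and after sidecar injection), a custom-columns query does the job; a sketch, nothing Istio-specific:
kubectl get pods -o custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
# Before injection is enabled, each pod lists only its own application container.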
Configure Envoy Proxy
To configure your Envoy proxy, you'll have to label your namespace with the Istio sidecar-injection label.
- To add the label to your default namespace, run
kubectl label namespace default istio-injection=enabled
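You can confirm the label took effect by printing the namespace's labels; a quick check:
kubectl get namespace default --show-labels
# The output should include istio-injection=enabled.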
- Next, delete your existing pods and recreate them so that the sidecar proxies get injected into each pod (one way to do this is sketched below).
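One straightforward way to remove the workloads is to delete them with the same manifest you applied earlier (the next step reapplies it); a sketch:
kubectl delete -f kubernetes-manifests.yaml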
- Next, apply the manifests file configuration using
kubectl apply -f kubernetes-manifests.yaml
Working with the Istio service mesh, you don't need to change your application containers for the proxies to be injected. Confirm that your deployment was successful using
kubectl get pods
You can see that each pod is now running two containers: the application container and the Envoy sidecar proxy.
- View the details and ports of the pod using
kubectl describe pod currencyservice-5877b8dbcc-4jwbc
(your pod's name suffix will differ). In the annotations and the Init Containers section of the output, you can see the injected init container image. From this configuration, you can see that Istio injected the init container into the pod, and your microservice's application container is created alongside it.
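If you only want the container names rather than the full describe output, a jsonpath query works too; a sketch reusing the currencyservice pod from above (again, your pod's hash suffix will differ):
kubectl get pod currencyservice-5877b8dbcc-4jwbc -o jsonpath='{.spec.initContainers[*].name}{"\n"}{.spec.containers[*].name}{"\n"}'
# Expect istio-init among the init containers, and istio-proxy alongside the application container.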
- There's no proxy definition in the manifest file and no init container in your deployment spec; the proxy definition and injection are handled by Istio automatically. With this configuration, we now have the Istio components running in the cluster, with an Envoy proxy container injected into every pod created in the default namespace.
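For completeness, automatic injection isn't the only option; if you ever need to inject the sidecar into manifests by hand (for example, in a namespace that isn't labeled), istioctl can render the injected YAML for you. A sketch, not something this tutorial relies on:
istioctl kube-inject -f kubernetes-manifests.yaml | kubectl apply -f -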
Set up Istio Addons for Tracing & Visualization
Istio collects metrics from your containers so you can see how each microservice is performing and what requests it is receiving.
- Back in your Istio directory, open the samples folder; you'll find the addons directory there, containing all the supported integrations (Prometheus, Grafana, Jaeger, and Kiali). To apply an addon to your cluster, run
kubectl apply -f <file_name>
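If you want all of the bundled addons at once, you can point kubectl at the whole directory instead of individual files; a sketch, assuming you run it from the root of your Istio installation directory:
kubectl apply -f samples/addons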
- Confirm that the installation was successful using
kubectl get pod -n istio-system
You will see all the addon pods running.
- To access the addons, you have to go through their services. To view your running services, use the
kubectl get svc -n istio-system
command.
- Enable port-forwarding to access a service, for example the Kiali dashboard, using
kubectl port-forward svc/kiali -n istio-system 20001
Open http://localhost:20001 in your browser; you'll see the Kiali dashboard, which visualizes your mesh using the metrics that Prometheus collects.
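If you'd rather look at the raw metrics in Prometheus itself, the same pattern applies; a sketch, assuming you applied the Prometheus addon from samples/addons (its service listens on port 9090 by default):
kubectl port-forward svc/prometheus -n istio-system 9090
# Then open http://localhost:9090 to explore the Prometheus graph and query the collected metrics.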
Increase Kubernetes deployment and Management Efficiency using CTO.ai
With CTO.ai Kubernetes workflows, you can change your configurations and fire up your deployment changes on any host and cloud provider with just a click. With our open-source templates, you can configure routing rules and enable service-to-service communication in your Kubernetes cluster with TLS, so you can easily control the flow of traffic and API calls between your resources and services directly on our Developer Control Plane.
Now go forth, and try this out! If you’d like to see the example we made in action, sign up for free on our Platform.