Move Your Kubernetes Stack to Istio: A Simple Example


With the emergence of containers and container orchestration systems such as Kubernetes, microservices architectures have become increasingly prevalent. This shift brings a new set of challenges: each microservice is simpler on its own, but the system as a whole is more complex and harder to manage. Service mesh solutions were designed to tackle these issues, and one in particular stands out in the market: Istio.

In this blog we will explore how to take a sample application, deploy it to Kubernetes, and then make the changes necessary for it to work with Istio. For this purpose, we selected as our example the WordPress application provided in the Kubernetes community repo.

WHAT ARE WE DEPLOYING?

This test application is simply composed of the WordPress frontend and a data store based on MySQL, each deployed separately in its own Kubernetes deployment and service objects. While this is not a perfect example of a microservice application, the changes we will introduce here are representative of the transformation needed for any other application to work with Istio.

[Diagram: Kubernetes to Istio]

PREREQUISITES

To follow along with this example, you will need access to a Kubernetes cluster and kubectl installed. If you don’t have one, hop on to Google Cloud to create a Google Kubernetes Engine (GKE) cluster.

Deployment Process With/Without Istio

To make things clear, this part compares the deployment process with and without Istio enabled:

In the first stage, we will go through how we would normally deploy the WordPress application to a Kubernetes cluster.

In the second stage, we will go through the steps needed to make the same application work with Istio. Specifically, we will use Helm to install Istio in the cluster, review the original Kubernetes manifests and make any changes needed to satisfy Istio requirements and best practices, and finally create the resources that enable external access to the application through Istio.

1 – INSTALLATION STEPS WITHOUT ISTIO

The steps below create a separate namespace for WordPress, create a Secret containing the MySQL database password, and then deploy MySQL and WordPress.

For your convenience, we have copied the WordPress manifests from the Kubernetes repo in GitHub to a separate repo to have everything in a central place.

kubectl create ns wp
kubectl create secret generic mysql-pass --from-literal=password=s2cr*et -n wp
git clone git@github.com:raddaoui/blogs_presentations.git
cd blogs_presentations/blogs/move_k8s_stack_to_istio
pushd wordpress
kubectl apply -f mysql-deployment.yaml -n wp
kubectl apply -f wordpress-deployment.yaml -n wp
popd

If we check the services created after a few minutes:

kubectl get svc -n wp

NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)
wordpress         LoadBalancer   10.114.6.4   35.239.201.90   80:32051/TCP
wordpress-mysql   ClusterIP      None                         3306/TCP

You can see there's a service called wordpress of type LoadBalancer listening on port 80. You can use its external IP to access the WordPress dashboard: http://35.239.201.90:80
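
As a quick sanity check from the command line (using the external IP from the example output above; substitute your own):

# fetch the response headers from the WordPress frontend
curl -I http://35.239.201.90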

2 – INSTALLATION STEPS WITH ISTIO ENABLED

a- Installation

We will use Helm to install the Istio chart while using the values from the demo profile since it installs most of the components. For more information about the components installed in the different profiles, hop on to the official website (https://istio.io/docs/setup/additional-setup/config-profiles/).

# move to the helm directory and install helm client locally and helm tiller in the cluster
pushd helm
./install_helm.sh

# add ISTIO helm repo
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.2/charts/

# check which charts are available for installation:
helm search istio

NAME                  CHART VERSION   APP VERSION   DESCRIPTION
istio.io/istio        1.3.2           1.3.2         Helm chart for all istio components
istio.io/istio-cni    1.3.2           1.3.2         Helm chart for istio-cni components
istio.io/istio-init   1.3.2           1.3.2         Helm chart to initialize Istio CRDs

# create a separate namespace for istio
kubectl create namespace istio-system
# Install the istio-init chart to bootstrap all the Istio’s CRDs:
helm install istio.io/istio-init --name istio-init --namespace istio-system
 
# Verify that all 23 Istio CRDs were added
kubectl get crds | grep 'istio.io' | wc -l
23

helm install istio.io/istio --name istio --namespace istio-system -f values-istio-demo.yaml 
popd

b- Prepare Manifests

Istio’s philosophy is to integrate with your application without requiring you to modify anything. However, there are a few requirements to satisfy before deploying in the presence of Istio. A summary of the requirements is available in the official Istio documentation.

There are basically two main rules to follow: 

  • First, we should name each service port and prefix it with the protocol it’s communicating with in the following format: <protocol>-<suffix>. This is to take advantage of Istio’s L7 routing features.
  • Second, we have to ensure each deployment object has two labels: “app” and “version.” These represent the component and the version of the application installed by that deployment object. If you only have one version, you can label it as version: v1. These labels help Istio recognize the different versions available in the cluster to enable traffic management rules and better visualize tracing and monitoring for your application.

Let’s implement that in our original manifests:

Looking at the services object, we have a couple: one for MySQL and one for the WordPress frontend.

The WordPress service object has the following port spec:

metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80

and the MySQL service object has the following port spec:

metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306

To satisfy the Istio requirements, we need to name each of these ports. One simple solution might look like this:

For the WordPress service port spec:

metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - name: http
      port: 80

For the MySQL service port spec:

metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - name: mysql
      port: 3306

Moving to the deployment object:

We have the following labels for the WordPress deployment:

metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend

and these labels for the MySQL deployment:

metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql

As mentioned before, we need to assign the app and version labels to each deployment. One solution might look like this:

For the WordPress deployment:

metadata:
  name: wordpress
  labels:
    app: wordpress-frontend
    version: v1
spec:
  selector:
    matchLabels:
      app: wordpress-frontend
      version: v1
  template:
    metadata:
      labels:
        app: wordpress-frontend
        version: v1

and for MySQL deployment:

metadata:
  name: wordpress-mysql
  labels:
    app: wordpress-mysql
    version: v1
spec:
  selector:
    matchLabels:
      app: wordpress-mysql
      version: v1
  template:
    metadata:
      labels:
        app: wordpress-mysql
        version: v1

Since we changed the labels on the deployments, we need to change the selectors on the services accordingly so that they can find the pods that belong to them. One thing to note here: the selector on a service should only use the app label so that, when more than one version of a component is deployed, the service still load balances across all of them.
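
As an illustration, a sketch of what the updated WordPress service might look like after this change (the actual manifests in the wordpress-istio folder are the reference):

metadata:
  name: wordpress
  labels:
    app: wordpress-frontend
spec:
  ports:
    - name: http
      port: 80
  selector:
    app: wordpress-frontend   # only the app label, no version
  type: LoadBalancer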

The resulting manifests are under the wordpress-istio folder. Feel free to give them a more thorough look.

Now, our manifests are ready. Let’s deploy them. But before that, let’s create a separate namespace and enable Istio’s automatic injection of Envoy sidecars alongside each pod. This is done by applying the label istio-injection=enabled to that namespace.

pushd wordpress-istio
kubectl create ns wp-istio
kubectl label namespace wp-istio istio-injection=enabled
kubectl create secret generic mysql-pass --from-literal=password=s2cr*et -n wp-istio
kubectl apply -f mysql-deployment.yaml -n wp-istio
kubectl apply -f wordpress-deployment.yaml -n wp-istio 
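
To confirm that the injection label was applied to the namespace (an optional check; pods created in this namespace will then get the sidecar automatically):

kubectl get namespace wp-istio --show-labels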

Let’s look at the pods created in the new namespace:

kubectl get pods -n wp-istio
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-57bd6f484f-lpz2q         2/2     Running     0       9m4s
wordpress-mysql-7576dd97f7-rw7df   2/2     Running     0       9m10s 

You will notice that each pod now has 2 containers. This is because Istio has injected an Envoy proxy sidecar container that intercepts all network traffic to and from the pod. This is how Istio is able to manage all ingress and egress traffic in the service mesh.
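
You can confirm this by listing the container names inside one of the pods (pod name taken from the example output above); the application container should appear alongside the injected istio-proxy:

# list the containers running inside the WordPress pod
kubectl get pod wordpress-57bd6f484f-lpz2q -n wp-istio -o jsonpath='{.spec.containers[*].name}'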

Now, let’s look at the services again:

kubectl get svc -n wp-istio

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)
wordpress         LoadBalancer   10.114.13.175   104.197.201.142   80:30984/TCP
wordpress-mysql   ClusterIP      None                              3306/TCP

If you try to access the newly created WordPress using the external IP and port (http://104.197.201.142:80) in a browser, you will notice that nothing is served there. This is because Istio blocks all traffic entering the service mesh that does not come through one of its Envoy proxies.

In the Istio world, enabling external access to your application is made possible through an Istio object called Gateway. This is often referred to as north-south traffic, as opposed to east-west traffic between pods communicating through their respective Envoy sidecars.

According to the official website, an ingress Gateway describes a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports, protocols, etc. but, unlike a Kubernetes Ingress resource, does not include any traffic routing configuration.

In other words, a Gateway object allows you to expose ports and protocols on the ingress Gateway load balancer that we created when we installed Istio.

kubectl get svc istio-ingressgateway  -n istio-system

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP
istio-ingressgateway   LoadBalancer   10.4.6.155   35.239.143.244  

An example Gateway configuration that will enable http traffic on port 80 of our ingress Gateway “istio-ingressgateway” is below. 

Note that you can deploy more than one ingress Gateway in your cluster. This is why you have to add the selector field to explicitly specify where to create the Gateway.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: wordpress-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*" 

We created the configuration to open a Gateway on the edge of the service mesh. What’s missing now is a route that will forward the traffic coming through that Gateway to the appropriate service. This is done through an object called VirtualService.

According to the official website, each VirtualService consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the VirtualService to a specific real destination within the mesh.

For HTTP traffic coming from the Gateway, you can create routing rules based, among other things, on the hostname, the URL path, or even the HTTP headers of the incoming requests. In our case, since we are sending all traffic to a single service (wordpress), we won’t try to match any pattern and will only create a default route under spec.http.route.
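
Before we get to the actual definition, for illustration only, a match-based rule could look like the sketch below (hypothetical, not used in our example; the /blog prefix is made up):

  http:
  - match:
    - uri:
        prefix: /blog
    route:
    - destination:
        host: wordpress
        port:
          number: 80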

The object definition will look as follows:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wordpress
spec:
  hosts:
  - "*"
  gateways:
  - wordpress-gateway
  http:
  - route:
    - destination:
        host: wordpress
        port:
          number: 80

This routes the traffic coming in through the “wordpress-gateway” we described in the last step to the wordpress service on port 80.

# let’s apply those and test everything
kubectl apply -f wordpress-gateway.yaml -n wp-istio
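
To verify that the Gateway and VirtualService objects were created (an optional check):

kubectl get gateways.networking.istio.io,virtualservices.networking.istio.io -n wp-istio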

Finally, to access the newly deployed WordPress with Istio, let’s get the IP of the Istio ingress gateway load balancer again and put it in a variable:

kubectl get svc istio-ingressgateway -n istio-system

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP      
istio-ingressgateway   LoadBalancer   10.4.6.155   35.239.143.244   

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
 

As we mentioned before, our WordPress Gateway opens port 80 on this ingress Gateway, so our application should now be available at http://$INGRESS_HOST:80
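
A quick way to verify from the command line (WordPress normally answers with a 200 or a redirect to its install page):

# request the front page through the Istio ingress gateway
curl -I http://$INGRESS_HOST:80/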

Congratulations! You have deployed your first application with Istio.

SUMMARY

This blog takes you through a simple example to show the steps necessary to move your k8s stack to work with Istio. While it’s not the purpose here, there are many powerful Istio features that we have yet to discuss (for example, request routing based on different versions, fault injection, circuit breaking, traffic shifting, mutual TLS, and many others).
