Istio is an open-source service mesh platform that helps developers manage, secure, and understand the interactions between microservices in a Kubernetes environment. It creates a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.
Learn how to set up Istio to enable container communications across multiple Kubernetes clusters. This is an advanced topic and is best suited for readers who already have practical experience with Exoscale SKS and a strong foundational understanding of Kubernetes.
Using Istio to connect two SKS Clusters
While its wide array of functionality caters to various use cases, our focus in this blog post is on Istio’s multi-cluster feature. We will set up Istio across two clusters (which can also be in different zones) and use a small application to demonstrate the seamless connectivity.
Istio’s secure inter-service communication relies on Envoy proxy sidecars, which can be injected automatically by labeling a namespace. By intercepting all network communication, these sidecars let Istio transparently implement security policies and data encryption, with no need for code modifications or extra Kubernetes configuration.
Prerequisites
You should already have two SKS clusters up and running on Exoscale. Use at least medium instances as the node flavor (the number of instances is arbitrary) and make sure that your security groups are set up correctly according to the SKS Quick Start Guide.
Installing Istio and establishing Connectivity between two SKS Clusters
A few steps are required to install Istio and establish connectivity:
- Get cluster access
- Install Istioctl
- Generate certificates
- Set up Istio in multi-primary mode
Getting cluster access
Multicluster Istio deployments can be configured in several ways; in this tutorial, we use the multi-primary model on different networks, which delivers cross-cluster load balancing.
Initially, let’s assign the IDs of your two Exoscale SKS clusters within your shell environment. Be sure to replace the placeholders with the actual IDs of your clusters:
export CTX_CLUSTER1=IDOFYOURFirstCLUSTER
export CTX_CLUSTER2=IDOFYOURSecondCLUSTER
Next, you need to generate a unique kubeconfig for each cluster. It’s crucial to assign distinct kubeconfig usernames for each cluster to facilitate the merging process later on.
Here’s an example:
# Generate a kubeconfig for each cluster, replace the zone(s)
exo compute sks kubeconfig $CTX_CLUSTER1 istio1 -z de-fra-1 > ~/.kube/istio1
exo compute sks kubeconfig $CTX_CLUSTER2 istio2 -z at-vie-2 > ~/.kube/istio2
Now, let’s merge the configuration files from both clusters into ~/.kube/config:
export KUBECONFIG=~/.kube/istio1:~/.kube/istio2
kubectl config view --flatten > ~/.kube/config
By following these steps, we can now easily access both clusters from our local environment.
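As a quick sanity check (assuming, as in the rest of this post, that the context names correspond to the cluster IDs), you can list the merged contexts and query each cluster:

```shell
# Both contexts should appear in the merged kubeconfig
kubectl config get-contexts -o name

# Each cluster should respond with its node list
kubectl --context="${CTX_CLUSTER1}" get nodes
kubectl --context="${CTX_CLUSTER2}" get nodes
```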
Installing Istioctl locally
In the next steps, we will install istioctl, a command-line tool used to control the Istio service mesh. However, we won’t install Istio into the clusters just yet. You can follow the steps outlined in Istio’s official installation guide for more details.
Choose a suitable directory where you’d like the istioctl files to be installed on your local system. Once chosen, navigate to the directory and install istioctl as follows:
# Choose any folder here
cd ~/Documents/dev
# Download the latest version of Istio
curl -L https://istio.io/downloadIstio | sh -
# Navigate to the downloaded Istio directory
cd istio*
# Temporarily add the Istio binaries to your PATH for the current shell session
export PATH=$PWD/bin:$PATH
Ensure you remain in this directory within your shell, as we’ll be working with the files located here for the remainder of our process.
Generating certificates
Generating a Root Certificate Authority (CA) and individual certificates for each cluster is crucial as Istio leverages these components to automatically facilitate mutual TLS (mTLS) encryption. mTLS encryption ensures secure, authenticated data transfers not only within a single cluster between containers but also across multiple clusters. For accomplishing this task, we will follow the steps outlined in Istio’s plugin CA guide.
It’s worth noting that when operating in a production environment, it’s recommended to employ a secure method for key protection, such as HashiCorp’s Vault.
These are the commands we use in this case:
# From the istio folder we entered above
# Create a certs folder and go into that
mkdir -p certs
pushd certs
# Generate root certificate and key
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
# For each cluster generate an intermediate certificate and key
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
# Insert them into each cluster using secrets
kubectl --context="${CTX_CLUSTER1}" create namespace istio-system
kubectl --context="${CTX_CLUSTER1}" create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
kubectl --context="${CTX_CLUSTER2}" create namespace istio-system
kubectl --context="${CTX_CLUSTER2}" create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
# Return to the Istio folder
popd
Setting Up Istio in Multi-Primary Mode
Having taken care of the prerequisites, you can now proceed to install Istio by following the guide provided by Istio for a Multi-Primary setup on different networks.
When generating the IstioOperator configuration for each cluster, you have the option to enable smart DNS proxying right away. Here’s an example configuration for cluster1:
# For example for cluster1
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF
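The configuration for cluster2 follows the same pattern; per the Istio guide, only the cluster and network names change, while meshID stays the same across the whole mesh:

```shell
# Configuration for the second cluster, mirroring cluster1.yaml
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF
```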
Note
Follow each guide step carefully and, if interested in the DNS option, simply switch out the configuration file with the one provided above.
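Assuming you created a matching cluster2.yaml (same structure, with cluster2/network2 as the names), the installation itself is one command per cluster:

```shell
# Install Istio into each cluster with its respective configuration
istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml -y
istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml -y
```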
DNS Proxying becomes crucial when you have a Kubernetes service exclusively on Cluster 1, but you want it to be accessible from Cluster 2. Without DNS Proxying, you would need to specify the IP address directly. Alternatively, you can enable DNS Proxying for each individual Deployment.
During installation, you’ll need to set up an east-west gateway. This gateway is integral for facilitating cross-cluster communication, effectively setting up a load balancer for each cluster that acts as an incoming gateway.
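As a rough sketch, the relevant commands from the Istio multi-primary guide look like this (paths are relative to the Istio download directory, and the script flags vary slightly between Istio versions, so check the guide for yours):

```shell
# Deploy an east-west gateway in each cluster
samples/multicluster/gen-eastwest-gateway.sh --network network1 | \
    istioctl --context="${CTX_CLUSTER1}" install -y -f -
samples/multicluster/gen-eastwest-gateway.sh --network network2 | \
    istioctl --context="${CTX_CLUSTER2}" install -y -f -

# Expose all services on each east-west gateway
kubectl --context="${CTX_CLUSTER1}" apply -n istio-system \
    -f samples/multicluster/expose-services.yaml
kubectl --context="${CTX_CLUSTER2}" apply -n istio-system \
    -f samples/multicluster/expose-services.yaml

# Enable endpoint discovery by exchanging remote secrets
istioctl create-remote-secret --context="${CTX_CLUSTER1}" --name=cluster1 | \
    kubectl apply -f - --context="${CTX_CLUSTER2}"
istioctl create-remote-secret --context="${CTX_CLUSTER2}" --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
```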
To verify your installation, refer to the Istio verification guide. It’s worth noting that the verification process works even without DNS proxying, since it creates identical deployments and services in each cluster.
Deploying a Simple Demo Application and accessing it Cross-Cluster
For our demonstration, we’ll deploy a simple “Hello World” application in Cluster 2 and then access it directly from Cluster 1, leveraging the secure service mesh.
To begin, let’s create a new namespace in each cluster and enable sidecar injection. The sidecar injector is used to automatically establish secure communication within the service mesh.
kubectl --context="${CTX_CLUSTER1}" create namespace exodemo
kubectl --context="${CTX_CLUSTER2}" create namespace exodemo
kubectl label --context="${CTX_CLUSTER1}" namespace exodemo \
istio-injection=enabled
kubectl label --context="${CTX_CLUSTER2}" namespace exodemo \
istio-injection=enabled
Next, we’ll deploy the “Hello World” application on Cluster 2:
kubectl --context="${CTX_CLUSTER2}" -n exodemo run exo-webtest --image=exo.container-registry.com/exoscale-images/exo-webtest:v2 --port=3000
We’ll also create an internal ClusterIP service on Cluster 2 which exposes the pod:
kubectl --context="${CTX_CLUSTER2}" -n exodemo expose pod exo-webtest --port=3000
Moving over to Cluster 1, we’ll launch a simple Alpine container with curl to test accessing our service. In this example, the pod carries an annotation that enables DNS proxying for this container, in case you didn’t enable it cluster-wide:
cat <<EOF | kubectl --context="${CTX_CLUSTER1}" -n exodemo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine-curl
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
spec:
  containers:
  - name: shell
    image: alpine:latest
    command:
    - sh
    - -c
    - |
      apk update && apk add curl && sleep 3600
EOF
Let’s now open a shell in the Alpine container:
kubectl exec --context="${CTX_CLUSTER1}" -n exodemo -it alpine-curl -- /bin/sh
Lastly, let’s call the service from the other cluster:
curl exo-webtest:3000
If successful, you should see something like the following output, which signifies we can now access services of the other cluster:
# curl exo-webtest:3000
<html><head><title>Hello from exo-webtest</title></head><body><img width=250px src=https://www.exoscale.com/static/img/exoscale-logo-full-201711.svg alt=ExoscaleLogo><br><p>Hello World from host exo-webtest!</p><p>VERSION: v2</p></body></html>
This simple demonstration showcases the power of Istio’s service mesh in enabling secure and efficient cross-cluster communication.
Further ideas for the sample application
The demonstration provided in this tutorial can be further enhanced by integrating Istio’s Gateway and VirtualService components.
A Gateway handles incoming and outgoing traffic from the internet, effectively load balancing it across the clusters. Note that this Gateway differs from the east-west gateway you deployed earlier for inter-cluster communication.
Meanwhile, a VirtualService defines the rules for routing requests within an Istio service mesh. It allows for complex and flexible traffic management strategies. For instance, you could distribute requests to different versions of a service based on certain criteria or percentages, or perform A/B testing.
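As a small sketch of this idea (the v1/v2 subsets are hypothetical and not part of this tutorial’s deployment), a weighted VirtualService could look like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: exo-webtest
  namespace: exodemo
spec:
  hosts:
  - exo-webtest
  http:
  - route:
    # Send 80% of traffic to v1, 20% to v2
    - destination:
        host: exo-webtest
        subset: v1
      weight: 80
    - destination:
        host: exo-webtest
        subset: v2
      weight: 20
```

The v1 and v2 subsets would additionally need to be defined in a matching DestinationRule.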
Wrapping Up
This tutorial provided a hands-on exploration of using Istio within Exoscale SKS clusters. We showcased how Istio facilitates efficient cross-cluster communication — a critical aspect in ensuring high availability in a distributed system. We brought these concepts to life with a practical application, deploying it across two clusters and establishing secure communication between them.
However, it’s important to note that we have only scratched the surface of Istio’s capabilities in this tutorial. Istio is a powerful and versatile tool that offers a wide range of features beyond those covered here. From sophisticated traffic management and robust network policies to telemetry and reporting, Istio has a lot to offer when it comes to managing and securing microservices in complex, distributed systems.
We hope this tutorial serves as a useful starting point for leveraging the potential of service mesh and advanced setups within your Exoscale SKS clusters.