# Single Consul Datacenter in Multiple Kubernetes Clusters
This page describes how to deploy a single Consul datacenter in multiple Kubernetes clusters,
with both servers and clients running in one cluster, and only clients running in the rest of the clusters.
In this example, we will use two Kubernetes clusters, but this approach could be extended to using more than two.
## Requirements
* Consul-Helm version `v0.32.1` or higher
* This deployment topology requires that the Kubernetes clusters have a flat network
for both pods and nodes so that pods or nodes from one cluster can connect
to pods or nodes in another. In many hosted Kubernetes environments, this may have to be explicitly configured based on the hosting provider's network. Refer to the following documentation for instructions:
  * [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)
If a flat network is unavailable across all Kubernetes clusters, follow the instructions for using [Admin Partitions](/docs/enterprise/admin-partitions), which is a Consul Enterprise feature.
## Prepare Helm release name ahead of installs
The Helm chart uses the Helm release name as a prefix for the
ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
are identical, subsequent Consul on Kubernetes clusters overwrite existing ACL resources and cause the clusters to fail.
Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client installs to use.
```shell-session
$ export HELM_RELEASE_SERVER=server
$ export HELM_RELEASE_CLIENT=client
```
## Deploying Consul servers and clients in the first cluster
First, deploy the first cluster with Consul servers and clients using the example Helm configuration below.
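The values below are a minimal sketch using standard consul-helm options; the gossip encryption secret name and key are assumptions and must match the Kubernetes secret created before the install: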
<CodeBlockConfig filename="cluster1-config.yaml">
</CodeBlockConfig>
Note that this will deploy a secure configuration with gossip encryption,
TLS for all components, and ACLs. In addition, this will enable the Consul Service Mesh and the controller for CRDs,
which are used later to verify the connectivity of services across clusters.
The UI's service type is set to `NodePort`.
This is needed to connect to the servers from another cluster without using the pod IPs of the servers,
which are likely to change.
To deploy, first generate the Gossip encryption key and save it as a Kubernetes secret.
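A sketch of these steps, assuming the secret name and key referenced in `cluster1-config.yaml` and the `hashicorp/consul` chart from the HashiCorp Helm repository:

```shell-session
$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)
$ helm install ${HELM_RELEASE_SERVER} hashicorp/consul --values cluster1-config.yaml
```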
Once the installation finishes and all components are running and ready, extract the following information using the command below and apply it to the second Kubernetes cluster:
* The Gossip encryption key created
* The CA certificate generated during installation
* The ACL bootstrap token generated during installation
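The exact command depends on the secret names in your installation; the following sketch assumes the chart's default secret names and an illustrative output file named `cluster1-credentials.yaml`:

```shell-session
$ kubectl get secret consul-gossip-encryption-key \
    ${HELM_RELEASE_SERVER}-consul-ca-cert \
    ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token \
    --output yaml > cluster1-credentials.yaml
```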
## Deploying Consul clients in the second cluster
~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.
Switch to the second Kubernetes cluster, where only the Consul clients will be deployed; these clients will join the Consul servers running in the first cluster.
```shell-session
$ kubectl config use-context <K8S_CONTEXT_NAME>
```
First, apply the credentials extracted from the first cluster to the second cluster:
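For example, if the secrets were saved to the illustrative `cluster1-credentials.yaml` file shown earlier:

```shell-session
$ kubectl apply --filename cluster1-credentials.yaml
```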
~> When Transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation.
Now that the Consul datacenter spanning multiple Kubernetes clusters is up and running, deploy two services in separate Kubernetes clusters and verify that they can connect to each other.
First, deploy the `static-server` service in the first cluster.
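The manifest below is a minimal sketch, assuming an `http-echo` test server annotated for sidecar injection and a `ServiceIntentions` resource that allows traffic from `static-client`; the image and ports are illustrative: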
<CodeBlockConfig filename="static-server.yaml">
</CodeBlockConfig>
Note that defining a service intention is required so that the services are allowed to talk to each other.
Next, deploy `static-client` in the second cluster with the following configuration.
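The manifest below is a minimal sketch, assuming a `curl`-capable test client; the `consul.hashicorp.com/connect-service-upstreams` annotation binds `static-server` to local port `1234`, and the image is illustrative: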
<CodeBlockConfig filename="static-client.yaml">
</CodeBlockConfig>
Once both services are up and running, try connecting to the `static-server` from the `static-client`.
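For example, assuming the upstream was bound to local port `1234` as in the sketch above:

```shell-session
$ kubectl exec deploy/static-client -c static-client -- curl --silent http://localhost:1234
```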