Update single-dc-multi-k8s.mdx

pull/15455/head
Iryna Shustava 2022-11-17 16:49:17 -07:00
parent ca414959df
commit bb4f27a87f
1 changed file with 12 additions and 48 deletions

@@ -10,18 +10,12 @@ description: >-
~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network.
This page describes deploying a single Consul datacenter in multiple Kubernetes clusters,
-with servers and clients running in one cluster and only clients in the rest of the clusters.
+with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters.
This example uses two Kubernetes clusters, but this approach could be extended to using more than two.
## Requirements
-* Consul-Helm version `v0.32.1` or higher
+* Consul-Helm version `v1.0.0` or higher
-* This deployment topology requires that the Kubernetes clusters have a flat network
-for both pods and nodes so that pods or nodes from one cluster can connect
-to pods or nodes in another. In many hosted Kubernetes environments, this may have to be explicitly configured based on the hosting provider's network. Refer to the following documentation for instructions:
-  * [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)
-  * [AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html)
-  * [GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix.
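For example, a minimal sketch of keeping both values unique across the two installs (the release names and `global.name` values here are hypothetical; the installs later in this guide pass values files instead):

```shell-session
$ # Run against the first Kubernetes cluster's kubectl context.
$ helm install cluster1 hashicorp/consul --set global.name=consul-cluster1
$ # Run against the second Kubernetes cluster's kubectl context.
$ helm install cluster2 hashicorp/consul --set global.name=consul-cluster2
```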
## Prepare Helm release name ahead of installs
@@ -34,14 +28,14 @@ Before proceeding with installation, prepare the Helm release names as environme
```shell-session
$ export HELM_RELEASE_SERVER=server
-$ export HELM_RELEASE_CLIENT=client
+$ export HELM_RELEASE_CONSUL=consul
...
-$ export HELM_RELEASE_CLIENT2=client2
+$ export HELM_RELEASE_CONSUL2=consul2
```
-## Deploying Consul servers and clients in the first cluster
+## Deploying Consul servers in the first cluster
-First, deploy the first cluster with Consul Servers and Clients with the example Helm configuration below.
+First, deploy the first cluster with Consul servers with the example Helm configuration below.
<CodeBlockConfig filename="cluster1-values.yaml"> <CodeBlockConfig filename="cluster1-values.yaml">
@@ -56,10 +50,6 @@ global:
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
-connectInject:
-  enabled: true
-controller:
-  enabled: true
ui:
  service:
    type: NodePort
@@ -86,17 +76,15 @@ Now install Consul cluster with Helm:
$ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul
```
Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the below command) and applied to the second Kubernetes cluster.
-* The Gossip encryption key created
* The CA certificate generated during installation
* The ACL bootstrap token generated during installation
```shell-session
-$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
+$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
```
-## Deploying Consul clients in the second cluster
+## Deploying Consul Kubernetes in the second cluster
~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.
Switch to the second Kubernetes cluster where Consul clients will be deployed
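The credentials extracted from the first cluster need to be created in the second cluster before installing there. A minimal sketch, assuming `cluster2` is a placeholder name for the second cluster's kubectl context:

```shell-session
$ # "cluster2" is a placeholder for the second cluster's kubectl context name.
$ kubectl config use-context cluster2
$ kubectl apply --filename cluster1-credentials.yaml
```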
@@ -124,38 +112,27 @@ global:
    bootstrapToken:
      secretName: cluster1-consul-bootstrap-acl-token
      secretKey: token
-  gossipEncryption:
-    secretName: consul-gossip-encryption-key
-    secretKey: key
  tls:
    enabled: true
-    enableAutoEncrypt: true
    caCert:
      secretName: cluster1-consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
-  # This should be any node IP of the first k8s cluster
+  # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI.
  hosts: ["10.0.0.4"]
-  # The node port of the UI's NodePort service
+  # The node port of the UI's NodePort service or the load balancer port.
  httpsPort: 31557
  tlsServerName: server.dc1.consul
  # The address of the kube API server of this Kubernetes cluster
  k8sAuthMethodHost: https://kubernetes.example.com:443
-client:
-  enabled: true
-  join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
-  extraVolumes:
-    - type: secret
-      name: cluster1-kubeconfig
-      load: false
connectInject:
  enabled: true
```
</CodeBlockConfig>
-Note the references to the secrets extracted and applied from the first cluster in ACL, gossip, and TLS configuration.
+Note the references to the secrets extracted and applied from the first cluster in ACL and TLS configuration.
The `externalServers.hosts` and `externalServers.httpsPort`
refer to the IP and port of the UI's NodePort service deployed in the first cluster.
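For example, on the first cluster, something along these lines can be used to look up a node IP and the UI service's node port (the service name below assumes the default `<release>-consul-ui` naming pattern, reusing the `HELM_RELEASE_SERVER` variable from earlier):

```shell-session
$ kubectl get nodes --output wide
$ kubectl get service ${HELM_RELEASE_SERVER}-consul-ui --output jsonpath='{.spec.ports[*].nodePort}'
```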
@@ -187,23 +164,10 @@ reach the Kubernetes API in that cluster.
The easiest way to get it is from the `kubeconfig` by running `kubectl config view` and grabbing
the value of `cluster.server` for the second cluster.
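A minimal sketch, assuming the second cluster's entry in the kubeconfig is named `cluster2`:

```shell-session
$ kubectl config view --output jsonpath='{.clusters[?(@.name=="cluster2")].cluster.server}'
```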
-Lastly, set up the clients so that they can discover the servers in the first cluster.
-For this, Consul's cloud auto-join feature
-for the [Kubernetes provider](/docs/install/cloud-auto-join#kubernetes-k8s) can be used.
-This can be configured by saving the `kubeconfig` for the first cluster as a Kubernetes secret in the second cluster
-and referencing it in the `clients.join` value. Note that the secret is made available to the client pods
-by setting it in `client.extraVolumes`.
-~> **Note:** The kubeconfig provided to the client should have minimal permissions.
-The cloud auto-join provider will only need permission to read pods.
-Please see [Kubernetes Cloud auto-join](/docs/install/cloud-auto-join#kubernetes-k8s)
-for more details.
Now, proceed with the installation of the second cluster.
```shell-session
-$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-values.yaml hashicorp/consul
+$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul
```
## Verifying the Consul Service Mesh works