diff --git a/website/content/docs/k8s/connect/index.mdx b/website/content/docs/k8s/connect/index.mdx index fcbfa9254f..df907ff538 100644 --- a/website/content/docs/k8s/connect/index.mdx +++ b/website/content/docs/k8s/connect/index.mdx @@ -178,7 +178,7 @@ spec: spec: containers: - name: static-client - image: tutum/curl:latest + image: curlimages/curl:latest # Just spin & wait forever, we'll use `kubectl exec` to demo command: ['/bin/sh', '-c', '--'] args: ['while true; do sleep 30; done;'] diff --git a/website/content/docs/k8s/connect/terminating-gateways.mdx b/website/content/docs/k8s/connect/terminating-gateways.mdx index 95bfbf2515..249e6806f6 100644 --- a/website/content/docs/k8s/connect/terminating-gateways.mdx +++ b/website/content/docs/k8s/connect/terminating-gateways.mdx @@ -268,7 +268,7 @@ spec: spec: containers: - name: static-client - image: tutum/curl:latest + image: curlimages/curl:latest command: ['/bin/sh', '-c', '--'] args: ['while true; do sleep 30; done;'] serviceAccountName: static-client diff --git a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx new file mode 100644 index 0000000000..58d56b2a66 --- /dev/null +++ b/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx @@ -0,0 +1,301 @@ +--- +layout: docs +page_title: Single Consul Datacenter in Multiple Kubernetes Clusters - Kubernetes +description: Single Consul Datacenter deployed in multiple Kubernetes clusters +--- + +# Single Consul Datacenter in Multiple Kubernetes Clusters + +-> Requires consul-helm v0.32.1 or higher. + +This page describes how to deploy a single Consul datacenter in multiple Kubernetes clusters, +with both servers and clients running in one cluster, and only clients running in the rest of the clusters. +In this example, we will use two Kubernetes clusters, but this approach could be extended to using more than two. + +~> **Note:** This deployment topology requires that your Kubernetes clusters have a flat network +for both pods and nodes, so that pods or nodes from one cluster can connect +to pods or nodes in another. + +## Deploying Consul servers and clients in the first cluster + +First, we will deploy the Consul servers with Consul clients in the first cluster. +For that, we will use the following Helm configuration: + +```yaml +# cluster1-config.yaml +global: + datacenter: dc1 + tls: + enabled: true + enableAutoEncrypt: true + acls: + manageSystemACLs: true + gossipEncryption: + secretName: consul-gossip-encryption-key + secretKey: key +connectInject: + enabled: true +controller: + enabled: true +ui: + service: + type: NodePort +``` + +Note that we are deploying in a secure configuration, with gossip encryption, +TLS for all components, and ACLs. We are enabling the Consul Service Mesh and the controller for CRDs +so that we can use them to later verify that our services can connect with each other across clusters. + +We're also setting UI's service type to be `NodePort`. +This is needed so that we can connect to servers from another cluster without using the pod IPs of the servers, +which are likely going to change. + +To deploy, first we need to generate the Gossip encryption key and save it as a Kubernetes secret. 
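+
+If the `consul` CLI isn't installed on your machine, any 32-byte, base64-encoded random
+value will also work as a gossip encryption key (a sketch using `openssl`; this assumes
+Consul 1.7 or later, which uses 32-byte keys):
+
+```shell
+# Hypothetical alternative to `consul keygen`.
+$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(openssl rand -base64 32)
+```
+
+With the `consul` CLI available, generate the key directly: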
+
+```shell
+$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)
+```
+
+Now we can install our Consul cluster with Helm:
+
+```shell
+$ helm install cluster1 -f cluster1-config.yaml hashicorp/consul
+```
+
+~> **Note:** The Helm release name must be unique for each Kubernetes cluster.
+That is because the Helm chart will use the Helm release name as a prefix for the
+ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
+are the same, the Helm installation in subsequent clusters will clobber existing ACL resources.
+
+Once the installation finishes and all components are running and ready,
+we need to extract the gossip encryption key we've created, the CA certificate,
+and the ACL bootstrap token generated during installation,
+so that we can apply them to our second Kubernetes cluster.
+
+```shell
+$ kubectl get secret consul-gossip-encryption-key cluster1-consul-ca-cert cluster1-consul-bootstrap-acl-token -o yaml > cluster1-credentials.yaml
+```
+
+## Deploying Consul clients in the second cluster
+
+~> **Note:** If you are joining more than two Kubernetes clusters to the Consul datacenter,
+repeat the following instructions for each additional Kubernetes cluster.
+
+Now we can switch to the second Kubernetes cluster, where we will deploy only the Consul clients
+that will join the first Consul cluster.
+
+First, we need to apply the credentials we extracted from the first cluster to the second cluster:
+
+```shell
+$ kubectl apply -f cluster1-credentials.yaml
+```
+
+To deploy in the second cluster, we will use the following Helm configuration:
+
+```yaml
+# cluster2-config.yaml
+global:
+  enabled: false
+  datacenter: dc1
+  acls:
+    manageSystemACLs: true
+    bootstrapToken:
+      secretName: cluster1-consul-bootstrap-acl-token
+      secretKey: token
+  gossipEncryption:
+    secretName: consul-gossip-encryption-key
+    secretKey: key
+  tls:
+    enabled: true
+    enableAutoEncrypt: true
+    caCert:
+      secretName: cluster1-consul-ca-cert
+      secretKey: tls.crt
+externalServers:
+  enabled: true
+  # This should be any node IP of the first k8s cluster
+  hosts: ["10.0.0.4"]
+  # The node port of the UI's NodePort service
+  httpsPort: 31557
+  tlsServerName: server.dc1.consul
+  # The address of the kube API server of this Kubernetes cluster
+  k8sAuthMethodHost: https://kubernetes.example.com:443
+client:
+  enabled: true
+  join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
+  extraVolumes:
+    - type: secret
+      name: cluster1-kubeconfig
+      load: false
+connectInject:
+  enabled: true
+```
+
+Note that we're referencing secrets from the first cluster in the ACL, gossip, and TLS configuration.
+
+Next, we need to set up the `externalServers` configuration.
+
+The `externalServers.hosts` and `externalServers.httpsPort`
+refer to the IP and port of the UI's NodePort service deployed in the first cluster.
+Set `externalServers.hosts` to any node IP of the first cluster,
+which you can see by running `kubectl get nodes -o wide`.
+Set `externalServers.httpsPort` to the `nodePort` of the `cluster1-consul-ui` service.
+In our example, the port is `31557`.
+
+```shell
+$ kubectl get service cluster1-consul-ui --context cluster1
+NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
+cluster1-consul-ui   NodePort   10.0.240.80   <none>        443:31557/TCP   40h
+```
+
+We set `externalServers.tlsServerName` to `server.dc1.consul`. This is the DNS SAN
+(Subject Alternative Name) that is present in the Consul server's certificate.
+We need to set it because we're connecting to the Consul servers over the node IP,
+but that IP isn't present in the server's certificate.
+To make sure that the hostname verification succeeds during the TLS handshake, we need to set the TLS
+server name to a DNS name that *is* present in the certificate.
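+
+To confirm which DNS names the server certificate actually contains, we can inspect
+the certificate served on the node IP and port (a sketch, assuming the example values
+`10.0.0.4` and `31557` from above):
+
+```shell
+# Fetch the certificate presented on the UI NodePort and print its SANs.
+$ echo | openssl s_client -connect 10.0.0.4:31557 -servername server.dc1.consul 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
+```
+
+`server.dc1.consul` should appear among the DNS entries.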
+
+Next, we need to set `externalServers.k8sAuthMethodHost` to the address of the second Kubernetes API server.
+This should be the address that is reachable from the first cluster, so it cannot be the internal DNS
+name available in each Kubernetes cluster. Consul needs it so that `consul login` with the Kubernetes auth method
+will work from the second cluster.
+More specifically, the Consul server needs to verify the Kubernetes service account
+whenever `consul login` is called, and to verify service accounts from the second cluster it needs to
+reach the Kubernetes API in that cluster.
+The easiest way to get this address is to run `kubectl config view` and grab
+the value of `cluster.server` for the second cluster.
+
+Lastly, we need to set up the clients so that they can discover the servers in the first cluster.
+For this, we will use Consul's cloud auto-join feature
+for the [Kubernetes provider](https://www.consul.io/docs/install/cloud-auto-join#kubernetes-k8s).
+To use it, we need to provide a way for the Consul clients to reach the first Kubernetes cluster.
+To do that, we need to save the `kubeconfig` for the first cluster as a Kubernetes secret in the second cluster
+and reference it in the `client.join` value. Note that we're making that secret available to the client pods
+by setting it in `client.extraVolumes`, as shown in the example below.
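+
+For example, if the first cluster's `kubeconfig` is saved locally in a file called
+`cluster1.kubeconfig` (a hypothetical name), the secret referenced above could be created
+in the second cluster like this:
+
+```shell
+# The secret name must match `client.extraVolumes`, and the key must match
+# the file name used in the `client.join` kubeconfig path.
+$ kubectl create secret generic cluster1-kubeconfig --from-file=kubeconfig=cluster1.kubeconfig
+```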
+ +First, we'll deploy `static-server` service in the first cluster: + +```yaml +--- +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: static-server +spec: + destination: + name: static-server + sources: + - name: static-client + action: allow +--- +apiVersion: v1 +kind: Service +metadata: + name: static-server +spec: + type: ClusterIP + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-serve +``` + +Note that we're defining a Service intention so that our services are allowed to talk to each other. + +Then we'll deploy `static-client` in the second cluster with the following configuration: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: static-client +spec: + selector: + app: static-client + ports: + - port: 80 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-client +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-client +spec: + replicas: 1 + selector: + matchLabels: + app: static-client + template: + metadata: + name: static-client + labels: + app: static-client + annotations: + "consul.hashicorp.com/connect-inject": "true" + "consul.hashicorp.com/connect-service-upstreams": "static-server:1234" + spec: + containers: + - name: static-client + image: curlimages/curl:latest + command: [ "/bin/sh", "-c", "--" ] + args: [ "while true; do sleep 30; done;" ] + serviceAccountName: static-client +``` + +Once both services are up and running, we can connect to the `static-server` from `static-client`: + +```shell +$ kubectl exec deploy/static-client -- curl -s localhost:1234 +"hello world" +``` diff --git a/website/data/docs-nav-data.json b/website/data/docs-nav-data.json index f29bda1028..a673c06115 100644 --- a/website/data/docs-nav-data.json +++ b/website/data/docs-nav-data.json @@ -406,6 +406,10 @@ "title": "Consul Servers Outside Kubernetes", "path": "k8s/installation/deployment-configurations/servers-outside-kubernetes" }, + { + "title": "Single Consul Datacenter in Multiple Kubernetes Clusters", + "path": "k8s/installation/deployment-configurations/single-dc-multi-k8s" + }, { "title": "Consul Enterprise", "path": "k8s/installation/deployment-configurations/consul-enterprise"