diff --git a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx b/website/content/docs/connect/cluster-peering/create-manage-peering.mdx deleted file mode 100644 index 71b3f50d2e..0000000000 --- a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx +++ /dev/null @@ -1,537 +0,0 @@ ---- -layout: docs -page_title: Cluster Peering - Create and Manage Connections -description: >- - Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to create, list, read, check, and delete peering connections. ---- - - -# Create and Manage Cluster Peering Connections - -A peering token enables cluster peering between different datacenters. Once you generate a peering token, you can use it to establish a connection between clusters. Then you can export services and create intentions so that peered clusters can call those services. - -## Create a peering connection - -Cluster peering is enabled by default on Consul servers as of v1.14. For additional information, including options to disable cluster peering, refer to [Configuration Files](/docs/agent/config/config-files). - -The process to create a peering connection is a sequence with multiple steps: - -1. Create a peering token -1. Establish a connection between clusters -1. Export services between clusters -1. Authorize services for peers - -You can generate peering tokens and initiate connections on any available agent using either the API, CLI, or the Consul UI. If you use the API or CLI, we recommend performing these operations through a client agent in the partition you want to connect. - -The UI does not currently support exporting services between clusters or authorizing services for peers. - -### Create a peering token - -To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection. 
- -Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. - - - - -In `cluster-01`, use the [`/peering/token` endpoint](/api-docs/peering#generate-a-peering-token) to issue a request for a peering token. - -```shell-session -$ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token -``` - -The CLI outputs the peering token, which is a base64-encoded string containing the token details. - -Create a JSON file that contains the first cluster's name and the peering token. - - - -```json -{ - "Peer": "cluster-01", - "PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8" -} -``` - - - - - - -In `cluster-01`, use the [`consul peering generate-token` command](/commands/peering/generate-token) to issue a request for a peering token. - -```shell-session -$ consul peering generate-token -name cluster-02 -``` - -The CLI outputs the peering token, which is a base64-encoded string containing the token details. -Save this value to a file or clipboard to be used in the next step on `cluster-02`. - - - - - -1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**. -1. Click **Add peer connection**. -1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field. -1. Click the **Generate token** button. -1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one. - - - - -### Establish a connection between clusters - -Next, use the peering token to establish a secure connection between the clusters. 
- - - - -In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error. - -```shell-session -$ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish -``` - -When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/api-docs/peering). - -You can dial the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token. - - - - - -In one of the client agents in "cluster-02," issue the [`consul peering establish` command](/commands/peering/establish) and specify the token generated in the previous step. The command establishes the peering connection. -The commands prints "Successfully established peering connection with cluster-01" after the connection is established. - -```shell-session -$ consul peering establish -name cluster-01 -peering-token token-from-generate -``` - -When you connect server agents through cluster peering, they peer their default partitions. -To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer. -For additional configuration information, refer to [`consul peering establish` command](/commands/peering/establish) . - -You can run the `peering establish` command once per peering token. 
-Peering tokens cannot be reused after being used to establish a connection. -If you need to re-establish a connection, you must generate a new peering token. - - - - - -1. In the Consul UI for the datacenter associated with `cluster 02`, click **Peers** and then **Add peer connection**. -1. Click **Establish peering**. -1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field. -1. Click **Add peer**. - - - - -### Export services between clusters - -After you establish a connection between the clusters, you need to create a configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters. - -First, create a configuration entry and specify the `Kind` as `"exported-services"`. - - - -```hcl -Kind = "exported-services" -Name = "default" -Services = [ - { - ## The name and namespace of the service to export. - Name = "service-name" - Namespace = "default" - - ## The list of peer clusters to export the service to. - Consumers = [ - { - ## The peer name to reference in config is the one set - ## during the peering process. - Peer = "cluster-02" - } - ] - } -] -``` - - - -Then, add the configuration entry to your cluster. - -```shell-session -$ consul config write peering-config.hcl -``` - -Before you proceed, wait for the clusters to sync and make services available to their peers. You can issue an endpoint query to [check the peered cluster status](#check-peered-cluster-status). - -### Authorize services for peers - -Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters. - -First, create a configuration entry and specify the `Kind` as `"service-intentions"`. 
Declare the service on "cluster-02" that can access the service in "cluster-01." The following example sets service intentions so that "frontend-service" can access "backend-service." - - - -```hcl -Kind = "service-intentions" -Name = "backend-service" - -Sources = [ - { - Name = "frontend-service" - Peer = "cluster-02" - Action = "allow" - } -] -``` - - - -If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. - -Then, add the configuration entry to your cluster. - -```shell-session -$ consul config write peering-intentions.hcl -``` - -### Authorize Service Reads with ACLs - -If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access. -Read access to all imported services is granted using either of the following rules associated with an ACL token: -- `service:write` permissions for any service in the sidecar's partition. -- `service:read` and `node:read` for all services and nodes, respectively, in sidecar's namespace and partition. -For Consul Enterprise, access is granted to all imported services in the service's partition. -These permissions are satisfied when using a [service identity](/docs/security/acl/acl-roles#service-identities). - -Example rule files can be found in [Reading Servers](/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` config entry documentation. - -Refer to [ACLs System Overview](/docs/security/acl) for more information on ACLs and their setup. - -## Manage peering connections - -After you establish a peering connection, you can get a list of all active peering connections, read a specific peering connection's information, check peering connection health, and delete peering connections. - -### List all peering connections - -You can list all active peering connections in a cluster. 
- - - - -After you establish a peering connection, [query the `/peerings/` endpoint](/api-docs/peering#list-all-peerings) to get a list of all peering connections. For example, the following command requests a list of all peering connections on `localhost` and returns the information as a series of JSON objects: - -```shell-session -$ curl http://127.0.0.1:8500/v1/peerings - -[ - { - "ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12", - "Name": "cluster-02", - "State": "ACTIVE", - "Partition": "default", - "PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05", - "PeerServerName": "server.dc1.consul", - "PeerServerAddresses": [ - "10.0.0.1:8300" - ], - "CreateIndex": 89, - "ModifyIndex": 89 - }, - { - "ID": "1460ada9-26d2-f30d-3359-2968aa7dc47d", - "Name": "cluster-03", - "State": "INITIAL", - "Partition": "default", - "Meta": { - "env": "production" - }, - "CreateIndex": 109, - "ModifyIndex": 119 - }, -] -``` - - - - -After you establish a peering connection, run the [`consul peering list`](/commands/peering/list) command to get a list of all peering connections. -For example, the following command requests a list of all peering connections and returns the information in a table: - -```shell-session -$ consul peering list - -Name State Imported Svcs Exported Svcs Meta -cluster-02 ACTIVE 0 2 env=production -cluster-03 PENDING 0 0 - ``` - - - - -In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in a datacenter. - -The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection. - - - -### Read a peering connection - -You can get information about individual peering connections between clusters. - - - - -After you establish a peering connection, [query the `/peering/` endpoint](/api-docs/peering#read-a-peering-connection) to get peering information about for a specific cluster. 
For example, the following command requests peering connection information for "cluster-02" and returns the info as a JSON object: - -```shell-session -$ curl http://127.0.0.1:8500/v1/peering/cluster-02 - -{ - "ID": "462c45e8-018e-f19d-85eb-1fc1bcc2ef12", - "Name": "cluster-02", - "State": "INITIAL", - "PeerID": "e83a315c-027e-bcb1-7c0c-a46650904a05", - "PeerServerName": "server.dc1.consul", - "PeerServerAddresses": [ - "10.0.0.1:8300" - ], - "CreateIndex": 89, - "ModifyIndex": 89 -} -``` - - - - -After you establish a peering connection, run the [`consul peering read`](/commands/peering/list) command to get peering information about for a specific cluster. -For example, the following command requests peering connection information for "cluster-02": - -```shell-session -$ consul peering read -name cluster-02 - -Name: cluster-02 -ID: 3b001063-8079-b1a6-764c-738af5a39a97 -State: ACTIVE -Meta: - env=production - -Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05 -Peer Server Name: server.dc1.consul -Peer CA Pems: 0 -Peer Server Addresses: - 10.0.0.1:8300 - -Imported Services: 0 -Exported Services: 2 - -Create Index: 89 -Modify Index: 89 - -``` - - - - -In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. Click the name of a peered cluster to view additional details about the peering connection. - - - -### Check peering connection health - -You can check the status of your peering connection to perform health checks. - -To confirm that the peering connection between your clusters remains healthy, query the [`health/service` endpoint](/api-docs/health) of one cluster from the other cluster. For example, in "cluster-02," query the endpoint and add the `peer=cluster-01` query parameter to the end of the URL. - -```shell-session -$ curl \ - "http://127.0.0.1:8500/v1/health/service/?peer=cluster-01" -``` - -A successful query includes service information in the output. 
- -### Delete peering connections - -You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates. - - - - -In "cluster-01," request the deletion through the [`/peering/ endpoint`](/api-docs/peering#delete-a-peering-connection). - -```shell-session -$ curl --request DELETE http://127.0.0.1:8500/v1/peering/cluster-02 -``` - - - - -In "cluster-01," request the deletion through the [`consul peering delete`](/commands/peering/list) command. - -```shell-session -$ consul peering delete -name cluster-02 - -Successfully submitted peering connection, cluster-02, for deletion -``` - - - - -In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. - -Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. Click **Delete** to confirm and remove the peering connection. - - - - -## L7 traffic management between peers - -The following sections describe how to enable L7 traffic management features between peered clusters. - -### Service resolvers for redirects and failover - -As of Consul v1.14, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically failover and redirect between peers. The following examples update the [`service-resolver` config entry](/docs/connect/config-entries/) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. 
- - - -```hcl -Kind = "service-resolver" -Name = "frontend" -ConnectTimeout = "15s" -Failover = { - "*" = { - Targets = [ - {Peer = "cluster-02"} - ] - } -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: frontend -spec: - connectTimeout: 15s - failover: - '*': - targets: - - peer: 'cluster-02' - service: 'frontend' - namespace: 'default' -``` - -```json -{ - "ConnectTimeout": "15s", - "Kind": "service-resolver", - "Name": "frontend", - "Failover": { - "*": { - "Targets": [ - { - "Peer": "cluster-02" - } - ] - } - }, - "CreateIndex": 250, - "ModifyIndex": 250 -} -``` - - - -### Service splitters and custom routes - -The `service-splitter` and `service-router` configuration entry kinds do not support directly targeting a service instance hosted on a peer. To split or route traffic to a service on a peer, you must combine the definition with a `service-resolver` configuration entry that defines the service hosted on the peer as an upstream service. 
For example, to split traffic evenly between `frontend` services hosted on peers, first define the desired behavior locally: - - - -```hcl -Kind = "service-splitter" -Name = "frontend" -Splits = [ - { - Weight = 50 - ## defaults to service with same name as configuration entry ("frontend") - }, - { - Weight = 50 - Service = "frontend-peer" - }, -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceSplitter -metadata: - name: frontend -spec: - splits: - - weight: 50 - ## defaults to service with same name as configuration entry ("web") - - weight: 50 - service: frontend-peer -``` - -```json -{ - "Kind": "service-splitter", - "Name": "frontend", - "Splits": [ - { - "Weight": 50 - }, - { - "Weight": 50, - "Service": "frontend-peer" - } - ] -} -``` - - - -Then, create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service: - - - -```hcl -Kind = "service-resolver" -Name = "frontend-peer" -Redirect { - Service = frontend - Peer = "cluster-02" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: frontend-peer -spec: - redirect: - peer: 'cluster-02' - service: 'frontend' -``` - -```json -{ - "Kind": "service-resolver", - "Name": "frontend-peer", - "Redirect": { - "Service": "frontend", - "Peer": "cluster-02" - } -} -``` - - - diff --git a/website/content/docs/connect/cluster-peering/index.mdx b/website/content/docs/connect/cluster-peering/index.mdx index 184598c546..5645338550 100644 --- a/website/content/docs/connect/cluster-peering/index.mdx +++ b/website/content/docs/connect/cluster-peering/index.mdx @@ -1,15 +1,15 @@ --- layout: docs -page_title: Service Mesh - What is Cluster Peering? +page_title: Cluster peering overview description: >- Cluster peering establishes communication between independent clusters in Consul, allowing services to interact across datacenters. 
Learn about the cluster peering process, differences with WAN federation for multi-datacenter deployments, and technical constraints. --- -# What is Cluster Peering? +# Overview You can create peering connections between two or more independent clusters so that services deployed to different partitions or datacenters can communicate. -## Overview +## What is Cluster Peering? Cluster peering is a process that allows Consul clusters to communicate with each other. The cluster peering process consists of the following steps: @@ -24,6 +24,16 @@ For detailed instructions on establishing cluster peering connections, refer to > To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](https://learn.hashicorp.com/tutorials/consul/cluster-peering-aws?utm_source=docs). +## Cluster peering architecture + +Mesh gateways are required for you to route service mesh traffic between peered Consul clusters. Clusters can reside in different clouds or runtime environments where general interconnectivity between all services in all clusters is not feasible. + +At a minimum, a peered cluster exporting a service must have a mesh gateway registered. +For Enterprise, this mesh gateway must also be registered in the same partition as the exported services. +To use the `local` mesh gateway mode, there must also be a mesh gateway registered in the importing cluster. + +Unlike mesh gateways for WAN-federated datacenters and partitions, mesh gateways between peers terminate mTLS sessions to decrypt data to HTTP services and then re-encrypt traffic to send to services. Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers. + ### Differences between WAN federation and cluster peering WAN federation and cluster peering are different ways to connect Consul deployments.
WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not. diff --git a/website/content/docs/connect/cluster-peering/usage/create-cluster-peering.mdx b/website/content/docs/connect/cluster-peering/usage/create-cluster-peering.mdx new file mode 100644 index 0000000000..56fbe48559 --- /dev/null +++ b/website/content/docs/connect/cluster-peering/usage/create-cluster-peering.mdx @@ -0,0 +1,207 @@ +--- +layout: docs +page_title: Cluster Peering - Create and Manage Connections +description: >- + Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to create, list, read, check, and delete peering connections. +--- + +# Establish Cluster Peering Connections + +A peering token enables cluster peering between different datacenters. Once you generate a peering token, you can use it to establish a connection between clusters. Then you can export services and create intentions so that peered clusters can call those services. + +The process to create a peering connection is a sequence with multiple steps: + +1. Create a peering token +1. Establish a connection between clusters +1. Export services between clusters +1. Authorize services for peers + +You can generate peering tokens and initiate connections on any available agent using either the API, CLI, or the Consul UI. If you use the API or CLI, we recommend performing these operations through a client agent in the partition you want to connect. + +The UI does not currently support exporting services between clusters or authorizing services for peers. + +## Create a peering token + +To begin the cluster peering process, generate a peering token in one of your clusters. 
The other cluster uses this token to establish the peering connection. + +Every time you generate a peering token, a single-use establishment secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. + + + + +In `cluster-01`, use the [`/peering/token` endpoint](/api-docs/peering#generate-a-peering-token) to issue a request for a peering token. + +```shell-session +$ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token +``` + +The CLI outputs the peering token, which is a base64-encoded string containing the token details. + +Create a JSON file that contains the first cluster's name and the peering token. + + + +```json +{ + "Peer": "cluster-01", + "PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8" +} +``` + + + + + + +In `cluster-01`, use the [`consul peering generate-token` command](/commands/peering/generate-token) to issue a request for a peering token. + +```shell-session +$ consul peering generate-token -name cluster-02 +``` + +The CLI outputs the peering token, which is a base64-encoded string containing the token details. +Save this value to a file or clipboard to be used in the next step on `cluster-02`. + + + + + +1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**. +1. Click **Add peer connection**. +1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field. +1. Click the **Generate token** button. +1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one. + + + + +### Establish a connection between clusters + +Next, use the peering token to establish a secure connection between the clusters. 
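The HTTP API instructions that follow expect the peering token wrapped in a `peering_token.json` file. As a sketch of one way to build that file without hand-editing JSON — assuming `jq` is installed and the shell variable `PEERING_TOKEN` (a placeholder name) holds the token generated in `cluster-01`:

```shell
# Hypothetical helper: wrap the generated token in the JSON payload that
# the /v1/peering/establish endpoint expects. PEERING_TOKEN is a placeholder
# for the real value output by `consul peering generate-token`.
PEERING_TOKEN="${PEERING_TOKEN:-paste-token-here}"
jq -n --arg token "$PEERING_TOKEN" '{"Peer": "cluster-01", "PeeringToken": $token}' > peering_token.json
cat peering_token.json
```

Using `jq -n` guarantees the token is correctly quoted and escaped inside the JSON payload.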
+ + + + +In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error. + +```shell-session +$ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish +``` + +When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/api-docs/peering). + +You can dial the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token. + + + + + +In one of the client agents in "cluster-02," issue the [`consul peering establish` command](/commands/peering/establish) and specify the token generated in the previous step. The command establishes the peering connection and prints "Successfully established peering connection with cluster-01" after the connection is established. + +```shell-session +$ consul peering establish -name cluster-01 -peering-token token-from-generate +``` + +When you connect server agents through cluster peering, they peer their default partitions. +To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer. +For additional configuration information, refer to the [`consul peering establish` command](/commands/peering/establish). + +You can run the `peering establish` command once per peering token.
+Peering tokens cannot be reused after being used to establish a connection. +If you need to re-establish a connection, you must generate a new peering token. + + + + + +1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**. +1. Click **Establish peering**. +1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field. +1. Click **Add peer**. + + + + +### Export services between clusters + +After you establish a connection between the clusters, you need to create a configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters. + +First, create a configuration entry and specify the `Kind` as `"exported-services"`. + + + +```hcl +Kind = "exported-services" +Name = "default" +Services = [ + { + ## The name and namespace of the service to export. + Name = "service-name" + Namespace = "default" + + ## The list of peer clusters to export the service to. + Consumers = [ + { + ## The peer name to reference in config is the one set + ## during the peering process. + Peer = "cluster-02" + } + ] + } +] +``` + + + +Then, add the configuration entry to your cluster. + +```shell-session +$ consul config write peering-config.hcl +``` + +Before you proceed, wait for the clusters to sync and make services available to their peers. You can [read the peering connection](/api-docs/peering#read-a-peering-connection) to check the peered cluster status. + +### Authorize services for peers + +Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters. + +First, create a configuration entry and specify the `Kind` as `"service-intentions"`. 
Declare the service on "cluster-02" that can access the service in "cluster-01." The following example sets service intentions so that "frontend-service" can access "backend-service." + + + +```hcl +Kind = "service-intentions" +Name = "backend-service" + +Sources = [ + { + Name = "frontend-service" + Peer = "cluster-02" + Action = "allow" + } +] +``` + + + +If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. + +Then, add the configuration entry to your cluster. + +```shell-session +$ consul config write peering-intentions.hcl +``` + +### Authorize Service Reads with ACLs + +If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access. +Read access to all imported services is granted using either of the following rules associated with an ACL token: +- `service:write` permissions for any service in the sidecar's partition. +- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition. +For Consul Enterprise, access is granted to all imported services in the service's partition. +These permissions are satisfied when using a [service identity](/docs/security/acl/acl-roles#service-identities). + +Example rule files can be found in [Reading Services](/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` config entry documentation. + +Refer to [ACLs System Overview](/docs/security/acl) for more information on ACLs and their setup. 
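As an illustrative sketch (the policy and file names below are placeholders, not part of the Consul docs), the second rule above — `service:read` and `node:read` for all services and nodes — could be expressed as an ACL policy like this:

```shell
# Write an ACL policy granting read access to all services and nodes,
# which satisfies the read requirement for imported services described above.
cat > imported-services-read.hcl <<'EOF'
service_prefix "" {
  policy = "read"
}
node_prefix "" {
  policy = "read"
}
EOF

# With ACLs bootstrapped, the policy could then be registered with, for example:
#   consul acl policy create -name imported-services-read -rules @imported-services-read.hcl
```

Scope the prefixes more narrowly in production; the empty prefix matches every service and node.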
\ No newline at end of file diff --git a/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx b/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx new file mode 100644 index 0000000000..edb5284c2f --- /dev/null +++ b/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx @@ -0,0 +1,32 @@ +--- +layout: docs +page_title: Manage Cluster Peering Connections +description: >- + Learn how to list, read, and delete cluster peering connections in the Consul UI. +--- + +# Manage Cluster Peering Connections + +After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections. + +This page describes how to use the Consul UI to manage cluster peering connections. To manage cluster peering through the HTTP API, refer to the [`/peering` endpoint reference](/consul/api-docs/peering). If you prefer to use the command line, refer to the [`peering` CLI command reference](/consul/commands/peering). + +## List all peering connections + +You can list all active peering connections in a cluster. + +In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in a datacenter. The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection. + +## Read a peering connection + +You can get information about individual peering connections between clusters. + +In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. Click the name of a peered cluster to view additional details about the peering connection. + +## Delete peering connections + +You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates. + +In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. 
Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. Click **Delete** to confirm and remove the peering connection. \ No newline at end of file diff --git a/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx b/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx new file mode 100644 index 0000000000..4814004376 --- /dev/null +++ b/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx @@ -0,0 +1,158 @@ +--- +layout: docs +page_title: Cluster Peering L7 Traffic Management +description: >- + Learn how to use service resolvers to redirect or failover traffic between services that have an existing cluster peering connection. +--- + +# L7 Traffic Management for Cluster Peering + +This topic describes how to use the [`service-resolver` configuration entry](/docs/connect/config-entries/) to set up redirects and failovers between services that have an existing cluster peering connection. + +## Service resolvers for redirects and failover + +When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically failover and redirect between peers. + +However, the `service-splitter` and `service-router` configuration entry kinds do not support directly targeting a service instance hosted on a peer. To split or route traffic to a service on a peer, you must combine the definition with a `service-resolver` configuration entry that defines the service hosted on the peer as an upstream service. + +## Update service resolver configuration + +The following example updates the [`service-resolver` config entry](/docs/connect/config-entries/) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures.
+ + +```hcl +Kind = "service-resolver" +Name = "frontend" +ConnectTimeout = "15s" +Failover = { + "*" = { + Targets = [ + {Peer = "cluster-02"} + ] + } +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceResolver +metadata: + name: frontend +spec: + connectTimeout: 15s + failover: + '*': + targets: + - peer: 'cluster-02' + service: 'frontend' + namespace: 'default' +``` + +```json +{ + "ConnectTimeout": "15s", + "Kind": "service-resolver", + "Name": "frontend", + "Failover": { + "*": { + "Targets": [ + { + "Peer": "cluster-02" + } + ] + } + }, + "CreateIndex": 250, + "ModifyIndex": 250 +} +``` + +## Update service splitter and router configurations + +To resolve traffic from a peer, define the behavior in `service-splitter` and `service-router` configuration entries and combine them with a `service-resolver` configuration entry. For example, to split traffic evenly between `frontend` services hosted on peers, first define the desired behavior locally: + + + +```hcl +Kind = "service-splitter" +Name = "frontend" +Splits = [ + { + Weight = 50 + ## defaults to service with same name as configuration entry ("frontend") + }, + { + Weight = 50 + Service = "frontend-peer" + }, +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceSplitter +metadata: + name: frontend +spec: + splits: + - weight: 50 + ## defaults to service with same name as configuration entry ("frontend") + - weight: 50 + service: frontend-peer +``` + +```json +{ + "Kind": "service-splitter", + "Name": "frontend", + "Splits": [ + { + "Weight": 50 + }, + { + "Weight": 50, + "Service": "frontend-peer" + } + ] +} +``` + + + +Then, create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service: + + + +```hcl +Kind = "service-resolver" +Name = "frontend-peer" +Redirect { + Service = "frontend" + Peer = "cluster-02" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceResolver
+metadata: + name: frontend-peer +spec: + redirect: + peer: 'cluster-02' + service: 'frontend' +``` + +```json +{ + "Kind": "service-resolver", + "Name": "frontend-peer", + "Redirect": { + "Service": "frontend", + "Peer": "cluster-02" + } +} +``` + + \ No newline at end of file
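One detail worth checking when you edit a `service-splitter`: Consul requires the weights across all splits to total 100. A quick sanity check with `jq` (assumed to be installed) over the JSON form of the splitter shown earlier:

```shell
# Recreate the frontend splitter from this page and verify its weights sum to 100.
cat > frontend-splitter.json <<'EOF'
{
  "Kind": "service-splitter",
  "Name": "frontend",
  "Splits": [
    { "Weight": 50 },
    { "Weight": 50, "Service": "frontend-peer" }
  ]
}
EOF

jq '[.Splits[].Weight] | add' frontend-splitter.json
# prints 100
```

If the sum is anything other than 100, `consul config write` rejects the entry.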