---
layout: docs
page_title: Multiple Datacenters - Basic Federation with the WAN Gossip Pool
sidebar_current: docs-guides-datacenters
description: >-
One of the key features of Consul is its support for multiple datacenters. The
architecture of Consul is designed to promote low coupling of datacenters so
that connectivity issues or failure of any datacenter does not impact the
availability of Consul in other datacenters. This means each datacenter runs
independently, each having a dedicated group of servers and a private LAN
gossip pool.
---
# Multiple Datacenters: Basic Federation with the WAN Gossip Pool

One of the key features of Consul is its support for multiple datacenters.
The [architecture](/docs/internals/architecture) of Consul is designed to
promote low coupling of datacenters so that connectivity issues or
failure of any datacenter does not impact the availability of Consul in other
datacenters. This means each datacenter runs independently, each having a dedicated
group of servers and a private LAN [gossip pool](/docs/internals/gossip).
## The WAN Gossip Pool
This guide covers the basic form of federating Consul clusters using a single
WAN gossip pool, interconnecting all Consul servers.

[Consul Enterprise](https://www.hashicorp.com/products/consul/) version 0.8.0 added support
for an advanced multiple datacenter capability. Please see the
[Advanced Federation Guide](/docs/guides/advanced-federation) for more details.
## Setup Two Datacenters
To get started, follow the
[Deployment guide](https://learn.hashicorp.com/consul/advanced/day-1-operations/deployment-guide/) to
start each datacenter. After bootstrapping, we should have two datacenters, which
we can refer to as `dc1` and `dc2`. Note that datacenter names are opaque to Consul;
they are simply labels that help human operators reason about the Consul clusters.
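
For reference in the later examples, the important setting is each agent's
datacenter name, set with the `datacenter` field in its configuration file or
the `-datacenter` command line flag. As a minimal sketch, a single server
belonging to `dc2` might be started like this (node name, address, and paths
are illustrative):

```shell
# A lone server agent for dc2 (for demonstration only; production
# datacenters should run three or five servers).
$ consul agent -server -bootstrap-expect=1 -datacenter=dc2 \
    -node=dc2-server-1 -bind=10.2.0.10 -data-dir=/opt/consul
```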
To query the known WAN nodes, we use the [`members`](/docs/commands/members)
command with the `-wan` parameter on either datacenter.
```shell
$ consul members -wan
```
This will provide a list of all known members in the WAN gossip pool. In
this case, we have not yet connected the servers, so there will be no output.
`consul members -wan` should only contain server nodes. Client nodes send
requests to a datacenter-local server, so they do not participate in WAN
gossip. Client requests are forwarded by local
servers to a server in the target datacenter as necessary.
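
By contrast, the plain `consul members` command shows the LAN gossip pool of
the local datacenter, which contains both servers and clients:

```shell
$ consul members        # LAN pool: servers and clients in the local datacenter only
$ consul members -wan   # WAN pool: servers from every federated datacenter
```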
## Join the Servers

The next step is to ensure that all the server nodes join the WAN gossip pool.
This includes all the servers in all the datacenters.
```shell
$ consul join -wan <server 1> <server 2> ...
```
The [`join`](/docs/commands/join) command is used with the `-wan` flag to indicate
we are attempting to join a server in the WAN gossip pool. As with LAN gossip, you only
need to join a single existing member, and the gossip protocol will be used to exchange
information about all known members. For the initial setup, however, each server
will only know about itself and must be added to the cluster. Consul 0.8.0 added WAN join
flooding, so if one Consul server in a datacenter joins the WAN, it will automatically
join the other servers in its local datacenter that it knows about via the LAN.
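
For example, from a server in `dc1` you could join the WAN pool by pointing at
any one of the `dc2` servers (the hostname below is illustrative):

```shell
$ consul join -wan dc2-server-1.example.com
```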
### Persist Join with Retry Join

In order to persist the `join` information, the following can be added to each
server's configuration file, in both datacenters. For example, on the server
nodes in `dc1`:

```json
"retry_join_wan": [
  "dc2-server-1",
  "dc2-server-2"
],
```
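
The same effect can be achieved with the repeatable `-retry-join-wan` command
line flag when starting the agent. A sketch, assuming the `dc2` servers are
reachable by these hostnames:

```shell
# Other settings (datacenter, data_dir, ...) are assumed to come from the config directory.
$ consul agent -server -config-dir=/etc/consul.d \
    -retry-join-wan=dc2-server-1 -retry-join-wan=dc2-server-2
```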
## Verify Multi-DC Configuration
Once the join is complete, the [`members`](/docs/commands/members) command can be
used to verify that all server nodes are gossiping over the WAN.
```shell
$ consul members -wan
Node          Address         Status  Type    Build  Protocol  DC   Segment
dc1-server-1  127.0.0.1:8701  alive   server  1.4.3  2         dc1  <all>
dc2-server-1  127.0.0.1:8702  alive   server  1.4.3  2         dc2  <all>
```
We can also verify that both datacenters are known using the
[HTTP Catalog API](/api/catalog#catalog_datacenters):
```shell
$ curl http://localhost:8500/v1/catalog/datacenters
["dc1", "dc2"]
```
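
Recent Consul versions also expose this list through the CLI with the
`consul catalog datacenters` command, which should print the same two names:

```shell
$ consul catalog datacenters
dc1
dc2
```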
As a simple test, you can try to query the nodes in each datacenter:
```shell
$ curl http://localhost:8500/v1/catalog/nodes?dc=dc1
[
  {
    "ID": "ee8b5f7b-9cc1-a382-978c-5ce4b1219a55",
    "Node": "dc1-server-1",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 12,
    "ModifyIndex": 14
  }
]
```
```shell
$ curl http://localhost:8500/v1/catalog/nodes?dc=dc2
[
  {
    "ID": "ee8b5f7b-9cc1-a382-978c-5ce4b1219a55",
    "Node": "dc2-server-1",
    "Address": "127.0.0.1",
    "Datacenter": "dc2",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 11,
    "ModifyIndex": 16
  }
]
```
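
Cross-datacenter lookups also work through the DNS interface by including the
datacenter in the query name. Assuming the default DNS port of 8600, the
built-in `consul` service in the remote datacenter can be resolved like this:

```shell
$ dig @127.0.0.1 -p 8600 consul.service.dc2.consul
```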
## Network Configuration
There are a few networking requirements that must be satisfied for this to
work. Of course, all server nodes must be able to talk to each other. Otherwise,
the gossip protocol as well as RPC forwarding will not work. If service discovery
is to be used across datacenters, the network must be able to route traffic
between IP addresses across regions as well. Usually, this means that all datacenters
must be connected using a VPN or other tunneling mechanism. Consul does not handle
VPN or NAT traversal for you.
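
In practice, with the default port assignments, this means server RPC
(8300/tcp) and Serf WAN gossip (8302/tcp and 8302/udp) must be reachable
between every pair of servers. A rough firewall sketch on a `dc1` server, with
`203.0.113.20` standing in for a `dc2` server address:

```shell
# Allow a dc2 server to reach this server's RPC and Serf WAN ports (default ports assumed).
$ sudo iptables -A INPUT -p tcp -s 203.0.113.20 --dport 8300 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -s 203.0.113.20 --dport 8302 -j ACCEPT
$ sudo iptables -A INPUT -p udp -s 203.0.113.20 --dport 8302 -j ACCEPT
```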

Note that for RPC forwarding to work, the bind address must be accessible from remote nodes.
Configuring `serf_wan`, `advertise_addr_wan` and `translate_wan_addrs` can lead to a
situation where `consul members -wan` lists remote nodes but RPC operations fail with one
of the following errors:

- `No path to datacenter`
- `rpc error getting client: failed to get conn: dial tcp <LOCAL_ADDR>:0-><REMOTE_ADDR>:<REMOTE_RPC_PORT>: i/o timeout`

The most likely cause of these errors is that `bind_addr` is set to a private address, preventing
the RPC server from accepting connections across the WAN. Setting `bind_addr` to a public
address (or one that can be routed across the WAN) will resolve this issue. Be aware that
exposing the RPC server on a public port should only be done **after** firewall rules have
been established.
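
As a minimal sketch of that fix (all addresses here are illustrative), a server
can bind to an address that is routable from the remote datacenter, for example
the address of the VPN or tunnel interface connecting the two sites:

```shell
# dc1 server bound to a WAN-routable (VPN) address so remote gossip and RPC can reach it.
$ consul agent -server -datacenter=dc1 -node=dc1-server-1 \
    -data-dir=/opt/consul \
    -bind=10.255.1.10 \
    -retry-join-wan=10.255.2.10
```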
The [`translate_wan_addrs`](/docs/agent/options#translate_wan_addrs) configuration
provides a basic address rewriting capability.
## Data Replication
In general, data is not replicated between different Consul datacenters. When a
request is made for a resource in another datacenter, the local Consul servers forward
an RPC request to the remote Consul servers for that resource and return the results.
If the remote datacenter is not available, then those resources will also not be
available, but that won't otherwise affect the local datacenter. There are some special
situations where a limited subset of data can be replicated, such as with Consul's built-in
[ACL replication](/docs/guides/acl#outages-and-acl-replication) capability, or
external tools like [consul-replicate](https://github.com/hashicorp/consul-replicate/).
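
A quick way to observe this is a KV write and read against the remote
datacenter; the local servers simply forward both requests (the key name here
is arbitrary):

```shell
$ consul kv put -datacenter=dc2 foo bar
Success! Data written to: foo

$ consul kv get -datacenter=dc2 foo
bar
```

Reading the same key without `-datacenter` fails, because the data lives only
in `dc2` and is not replicated to `dc1`.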
## Summary

In this guide you set up WAN gossip across two datacenters to create
basic federation. You also used the Consul HTTP API to ensure the
datacenters were properly configured.