---
layout: docs
page_title: Architecture
description: >-
  Learn about the Consul-Terraform-Sync architecture and high-level CTS components, such as the Terraform driver and tasks.
---

# Consul-Terraform-Sync Architecture

Consul-Terraform-Sync (CTS) is a service-oriented tool for managing network infrastructure in near real-time. CTS runs as a daemon and integrates the network topology maintained by your Consul cluster with your network infrastructure to dynamically secure and connect services.

## CTS workflow

The following diagram shows the CTS workflow as it monitors the Consul service catalog for updates.

[![Consul-Terraform-Sync Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg)

1. CTS monitors the state of Consul’s service catalog and its KV store. This process is described in [Watcher and views](#watcher-and-views).
1. CTS detects a change.
1. CTS prompts Terraform to update the state of the infrastructure.

## Watcher and views

CTS uses Consul’s [blocking queries](/api-docs/features/blocking) functionality to monitor Consul for updates. If an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as watchers.

The watcher maintains a separate thread for each value monitored and runs any tasks that depend on the watched value whenever it is updated. These threads are referred to as views. For example, a thread may run a task to update a proxy when the watcher detects that an instance has become unhealthy.

## Tasks

A task is the action triggered by the updated data monitored in Consul. The task takes that dynamic service data and translates it into a call to the infrastructure application to configure it with the updates. It uses a driver to push out these updates; the initial driver is a local Terraform run. An example of a task is automating a firewall security policy rule with discovered IP addresses for a set of Consul services.
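
As an illustration of the firewall example above, a task is defined in the CTS configuration file with a block similar to the following minimal sketch. The module path, provider name, and service names are placeholders; refer to the [configuration documentation](/docs/nia/configuration) for all supported options.

```hcl
# Minimal sketch of a task definition. The name, module path, provider,
# and service names are illustrative placeholders, not required values.
task {
  name        = "firewall-policy-example"
  description = "Update firewall rules with IP addresses of monitored services"
  module      = "./example-firewall-module"
  providers   = ["myfirewall"]

  # Trigger the task whenever instances of these services change.
  condition "services" {
    names = ["web", "api"]
  }
}
```

When instances of the monitored services change, the watcher triggers this task and the driver applies the module with the updated service data.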

## Drivers

A driver encapsulates the resources required to communicate the updates to the network infrastructure. The following [drivers](/docs/nia/network-drivers#terraform) are supported:

- Terraform driver
- Terraform Cloud driver<EnterpriseAlert inline />

Each driver includes a set of providers that [enables support](/docs/nia/terraform-modules) for a wide variety of infrastructure applications.
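
As a rough sketch, the Terraform driver and the providers it loads are declared in the CTS configuration along these lines. The provider local name, source address, and arguments below are placeholders for whichever infrastructure application you automate.

```hcl
# Sketch of a Terraform driver configuration. The provider local name
# ("myfirewall") and its source address are illustrative placeholders.
driver "terraform" {
  # Include Terraform output in the CTS logs.
  log = true

  required_providers {
    myfirewall = {
      source = "example/myfirewall"
    }
  }
}

# Configures how the provider connects to the infrastructure
# application; the arguments depend on the provider you use.
terraform_provider "myfirewall" {
  hostname = "firewall.example.com"
}
```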

## State storage and persistence

The following types of state information are associated with CTS.

### Terraform state information

By default, CTS stores [Terraform state data](https://www.terraform.io/docs/state/index.html) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/docs/nia/configuration#backend). As long as the backend is not configured to a local location, the data persists if CTS stops.
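
For example, a driver block similar to the following sketch explicitly keeps Terraform state in the Consul KV store; omitting the `backend` block has the same effect because Consul is the default.

```hcl
# Sketch: explicitly store Terraform state in the Consul KV store.
# Leaving out the backend block falls back to the same default.
driver "terraform" {
  backend "consul" {
    # Compress state data before writing it to the KV store
    # (optional, shown for illustration).
    gzip = true
  }
}
```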

### CTS task and event data

By default, CTS stores task and event data in memory. This data is transient and does not persist unless you configure [CTS to run with high availability enabled](/docs/nia/usage/run-ha). High availability is an enterprise feature that promotes CTS resiliency. When high availability is enabled, CTS stores and persists task changes and events that occur when an instance stops.

The data stored when operating in high availability mode includes task changes made using the task API or CLI, such as creating a new task, deleting a task, or enabling or disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/docs/nia/cli/start#options).

## Instance compatibility checks (high availability)

If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently, which enables CTS to properly perform the automations configured in the state storage.

The only incompatibility CTS checks for is a task configured with a [local module](/docs/nia/configuration#module). CTS instances that do not include this module directory are incompatible. Example log:

```shell-session
[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory"
```

Refer to [Error Messages](/docs/nia/usage/errors-ref) for additional information.

CTS instances perform a compatibility check on start-up based on the stored state and every five minutes after starting. If the check detects an incompatible CTS instance, it generates a log so that an operator can address it.

CTS continues to run and logs the error message if it finds an incompatibility. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility do not run successfully. This is because all active CTS instances enter [`once-mode`](/docs/nia/cli/start#modes) and run the tasks once when initially elected.

## Security guidelines

We recommend following the network security guidelines described in the [Secure Consul-Terraform-Sync for Production](https://learn.hashicorp.com/tutorials/consul/consul-terraform-sync-secure?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. The tutorial contains a checklist of best practices to secure your CTS installation for a production environment.