Remove guides that live in learn.hashicorp.com now (#9563)

pull/9436/head
Luke Kysow 2021-01-14 08:46:55 -08:00 committed by GitHub
parent 8be5a4b38a
commit 10b8090db9
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
16 changed files with 0 additions and 5472 deletions


@ -1,50 +0,0 @@
---
layout: docs
page_title: ACL Guides
description: >-
Consul provides an optional Access Control List (ACL) system which can be used
to control access to data and APIs. Select the following guide for your use
case.
---
# ACL Documentation and Guides
Consul uses Access Control Lists (ACLs) to secure the UI, API, CLI, service communications, and agent communications. At the core, ACLs operate by grouping rules into policies, then associating one or more policies with a token.
The following documentation and guides will help you understand and implement
ACLs.
## ACL Documentation
### ACL System
Consul provides an optional Access Control List (ACL) system which can be used to control access to data and APIs. The ACL system is a capability-based system that relies on tokens, which can have fine-grained rules applied to them. The [ACL System documentation] details the functionality of Consul ACLs.
### ACL Rules
A core part of the ACL system is the rule language, which is used to describe the policy that must be enforced. The [ACL Rules documentation] is useful
when creating rule specifications.
### ACL Legacy System
The ACL system in Consul 1.3.1 and older is now called legacy. For information on bootstrapping the legacy system, ACL rules, and a general ACL system overview, read the legacy [documentation](/docs/acl/acl-legacy).
### ACL Migration
[The migration documentation](/docs/acl/acl-migrate-tokens) details how to upgrade
existing legacy tokens after upgrading to 1.4.0. It will briefly describe what changed, and then walk through the high-level migration process options, finally giving some specific examples of migration strategies. The new ACL system has improvements for the security and management of ACL tokens and policies.
## ACL Guides
We have several guides for setting up and configuring Consul's ACL system. They include how to bootstrap the ACL system in Consul version 1.4.0 and newer. Please select one of the following guides to get started.
~> Note: The following guides are located on HashiCorp Learn. By selecting
one of the guides, you will be directed to a new site.
### Bootstrapping the ACL System
Learn how to control access to Consul resources with this step-by-step [tutorial](https://learn.hashicorp.com/tutorials/consul/access-control-setup-production) on bootstrapping the ACL system in Consul 1.4.0 and newer. This guide also includes additional steps for configuring the anonymous token, setting up agent-specific default tokens, and creating tokens for Consul UI use.
### Securing Consul with ACLs
The _Bootstrapping the ACL System_ guide walks you through how to set up ACLs on a single datacenter. Because it introduces the basic concepts and syntax we recommend completing it before starting this guide. This guide builds on the first guide with recommendations for production workloads on a single datacenter.

File diff suppressed because it is too large


@ -1,242 +0,0 @@
---
name: ACL Replication for Multiple Datacenters
content_length: 15
id: acl-replication
products_used:
- Consul
description: 'Configure tokens, policies, and roles to work across multiple datacenters.'
---
You can configure tokens, policies and roles to work across multiple datacenters. ACL replication has several benefits.
1. It enables authentication of nodes and services between multiple datacenters.
1. The secondary datacenter can provide failover for all ACL components created in the primary datacenter.
1. Sharing policies reduces redundancy for the operator.
## Prerequisites
Before starting this guide, each datacenter will need to have ACLs enabled; the process is outlined in the [Securing Consul with ACLs
guide](/consul/security-networking/production-acls). This guide covers the additional ACL replication configuration for the Consul
agents that is not covered in the Securing Consul with ACLs guide.
Additionally,
[Basic Federation with WAN Gossip](/consul/security-networking/datacenters) is required.
## Introduction
In this guide, you will set up ACL replication. This is a multi-step process
that includes:
- Setting the `primary_datacenter` parameter on all Consul agents in the primary datacenter.
- Creating the replication token.
- Configuring the `primary_datacenter` parameter on all Consul agents in the secondary datacenter.
- Enabling token replication on the servers in the secondary datacenter.
- Applying the replication token to all the servers in the secondary datacenter.
You should complete this guide during the initial ACL bootstrapping
process.
-> After ACLs are enabled you must have a privileged token to complete any
operation on either datacenter. You can use the initial
`bootstrap` token as your privileged token.
## Configure the Primary Datacenter
~> Note, if your primary datacenter uses the default `datacenter` name of
`dc1`, you must set a different `datacenter` parameter on each secondary datacenter.
Otherwise, both datacenters will be named `dc1` and there will be conflicts.
### Consul Servers and Clients
You should explicitly set the `primary_datacenter` parameter on all servers
and clients, even though replication is enabled by default on the primary
datacenter. Your agent configuration should be similar to the example below.
```json
{
  "datacenter": "primary_dc",
  "primary_datacenter": "primary_dc",
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "down_policy": "extend-cache",
    "enable_token_persistence": true
  }
}
```
The `primary_datacenter`
[parameter](/docs/agent/options#primary_datacenter)
sets the primary datacenter to have authority for all ACL information. It
should also be set on clients so that they can forward API
requests to the servers.
Finally, start the agent.
```shell-session
$ consul agent -config-file=server.json
```
Complete this process on all agents. If you are configuring ACLs for the
first time, you will also need to [complete the bootstrapping process](/consul/security-networking/production-acls) now.
## Create the Replication Token for ACL Management
Next, create the replication token for managing ACLs
with the following privileges.
- acl = "write" which will allow you to replicate tokens.
- operator = "read" for replicating proxy-default configuration entries.
- service_prefix, policy = "read" and intentions = "read" for replicating
service-default configuration entries, CA, and intention data.
```hcl
acl = "write"
operator = "read"
service_prefix "" {
policy = "read"
intentions = "read"
}
```
Save these rules in a file named `replication-policy.hcl`. Now that you have the ACL rules defined, create a policy with those rules.
```shell-session
$ consul acl policy create -name replication -rules @replication-policy.hcl
ID:           240f1d01-6517-78d3-ec32-1d237f92ab58
Name:         replication
Description:
Datacenters:
Rules:
acl = "write"
operator = "read"
service_prefix "" {
  policy = "read"
  intentions = "read"
}
```
Finally, use your newly created policy to create the replication token.
```shell-session
$ consul acl token create -description "replication token" -policy-name replication
AccessorID: 67d55dc1-b667-1835-42ab-64658d64a2ff
SecretID: fc48e84d-3f4d-3646-4b6a-2bff7c4aaffb
Description: replication token
Local: false
Create Time: 2019-05-09 18:34:23.288392523 +0000 UTC
Policies:
240f1d01-6517-78d3-ec32-1d237f92ab58 - replication
```
## Enable ACL Replication on the Secondary Datacenter
Once you have configured the primary datacenter and created the replication
token, you can set up the secondary datacenter.
-> Note: you can use your initial `bootstrap` token as the privileged token
to complete any action on the secondary servers.
### Configure the Servers
You will need to set the `primary_datacenter` parameter to the name of your
primary datacenter and `enable_token_replication` to true on all the servers.
```json
{
  "datacenter": "dc_secondary",
  "primary_datacenter": "primary_dc",
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "down_policy": "extend-cache",
    "enable_token_persistence": true,
    "enable_token_replication": true
  }
}
```
Now you can start the agent.
```shell-session
$ consul agent -config-file=server.json
```
Repeat this process on all the servers.
### Apply the Replication Token to the Servers
Finally, apply the replication token to all the servers using the CLI.
```shell-session
$ consul acl set-agent-token replication <token>
ACL token "replication" set successfully
```
Once token replication has been enabled, you will also be able to create
datacenter local tokens.
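For example, once replication is active you could create a token that is valid only in the secondary datacenter by passing the `-local` flag when running the command against a secondary agent; the policy name here is just a placeholder for one of your own policies.
```shell-session
$ consul acl token create \
    -description "example local token" \
    -policy-name example-policy \
    -local
```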
Repeat this process on all servers. If you are configuring ACLs for the
first time, you will also need to [set the agent token](/consul/security-networking/production-acls#add-the-token-to-the-agent).
Note, the clients do not need the replication token.
### Configure the Clients
For the clients, you will need to set the `primary_datacenter` parameter to the
name of your primary datacenter and `enable_token_replication` to true.
```json
{
  "datacenter": "dc_secondary",
  "primary_datacenter": "primary_dc",
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "down_policy": "extend-cache",
    "enable_token_persistence": true,
    "enable_token_replication": true
  }
}
```
Now you can start the agent.
```shell-session
$ consul agent -config-file=server.json
```
Repeat this process on all clients. If you are configuring ACLs for the
first time, you will also need to [set the agent token](/consul/security-networking/production-acls#add-the-token-to-the-agent).
## Check Replication
Now that you have set up ACL replication, you can use the [HTTP API](/api/acl#check-acl-replication) to check
the configuration.
```shell-session
$ curl http://localhost:8500/v1/acl/replication?pretty
{
  "Enabled": true,
  "Running": true,
  "SourceDatacenter": "primary_dc",
  "ReplicationType": "tokens",
  "ReplicatedIndex": 19,
  "ReplicatedTokenIndex": 22,
  "LastSuccess": "2019-05-09T18:54:09Z",
  "LastError": "0001-01-01T00:00:00Z"
}
```
Notice, the "ReplicationType" should be "tokens". This means tokens, policies,
and roles are being replicated.
## Summary
In this guide you set up token replication on multiple datacenters. You can complete this process on an existing datacenter with minimal
modifications. Mainly, you will need to restart the Consul agent when updating the
agent configuration with ACL parameters.
If you have not configured other secure features of Consul,
[certificates](/consul/security-networking/certificates) and
[encryption](/consul/security-networking/agent-encryption),
we recommend doing so now.


@ -1,437 +0,0 @@
---
layout: docs
page_title: Connecting Services Across Datacenters
description: |-
Connect services and secure inter-service communication across datacenters
using Consul Connect and mesh gateways.
---
## Introduction
Consul Connect is Consul's service mesh offering, which allows users to observe
and secure service-to-service communication. Because Connect implements mutual
TLS between services, it also enables mesh gateways, which provide
users with a way to help services in different datacenters communicate with each
other. Mesh gateways take advantage of Server Name Indication (SNI), an
extension to TLS that allows them to see the destination of inter-datacenter
traffic without decrypting the message payload.
Using mesh gateways for inter-datacenter communication can prevent each Connect
proxy from needing an accessible IP address, and frees operators from worrying
about IP address overlap between datacenters.
In this guide, you will configure Consul Connect across multiple Consul
datacenters and use mesh gateways to enable inter-service traffic between them.
Specifically, you will:
1. Enable Connect in both datacenters
1. Deploy the two mesh gateways
1. Register services and Connect sidecar proxies
1. Configure intentions
1. Test that your services can communicate with each other
For the remainder of this guide we will refer to mesh gateways as "gateways".
Anywhere in this guide where you see the word gateway, assume it is specifically
a mesh gateway (as opposed to an API or other type of gateway).
## Prerequisites
To complete this guide you will need two wide area network (WAN) joined Consul
datacenters with access control list (ACL) replication enabled. If you are
starting from scratch, follow these guides to set up your datacenters, or use
them to check that you have the proper configuration:
- [Deployment Guide](/consul/datacenter-deploy/deployment-guide)
- [Securing Consul with ACLs](/consul/security-networking/production-acls)
- [Basic Federation with WAN Gossip](/consul/security-networking/datacenters)
You will also need to enable ACL replication, which you can do by following the
[ACL Replication for Multiple
Datacenters](/consul/day-2-operations/acl-replication) guide with the following
modification.
When creating the [replication token for ACL
management](/consul/day-2-operations/acl-replication#create-the-replication-token-for-acl-management),
it will need the following policy:
```json
{
  "acl": "write",
  "operator": "write",
  "service_prefix": {
    "": {
      "policy": "read"
    }
  }
}
```
The replication token needs different permissions depending on what you want to
accomplish. The above policy allows for ACL policy, role, and token replication
with `acl:write`, CA replication with `operator:write` and intention and
configuration entry replication with `service:*:read`.
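As in the ACL replication guide, you would then turn these rules into a policy and attach that policy to a token; a minimal sketch, assuming you saved the rules above as `replication-policy.json`:
```shell-session
$ consul acl policy create -name replication -rules @replication-policy.json
$ consul acl token create -description "replication token" -policy-name replication
```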
You will also need to install [Envoy](https://www.envoyproxy.io/) alongside your
Consul clients. Both the gateway and sidecar proxies will need to get
configuration and updates from a local Consul client.
Lastly, you should set [`enable_central_service_config = true`](/docs/agent/options#enable_central_service_config)
on your Consul clients, which will allow them to centrally configure the
sidecar and mesh gateway proxies.
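As a rough sketch, the setting could be appended to an existing client configuration file (the file name below is just an example) and the client restarted afterwards so the setting takes effect.
```shell-session
$ cat <<'EOF' >> client.hcl
enable_central_service_config = true
EOF
```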
## Enable Connect in Both Datacenters
Once you have your datacenters set up and ACL replication configured, it's time
to enable Connect in each of them sequentially. Connect's certificate authority
(which is distinct from the Consul certificate authority that you manage using
the CLI) will automatically bootstrap as soon as a server with Connect enabled
becomes the server cluster's leader. You can also use [Vault as a Connect
CA](/docs/connect/ca/vault).
!> **Warning:** If you are using this guide as a production playbook, we
strongly recommend that you enable Connect in each of your datacenters by
following the [Connect in Production
guide](/consul/developer-segmentation/connect-production),
which includes production security recommendations.
### Enable Connect in the primary datacenter
Enable Connect in the primary data center and bootstrap the Connect CA by adding
the following snippet to the server configuration for each of your servers.
```json
{
  "connect": {
    "enabled": true
  }
}
```
Load the new configuration by restarting each server one at a time, making sure
to maintain quorum. This will be a similar process to performing a [rolling
restart during
upgrades](/docs/upgrading#standard-upgrades).
Stop the first server by running the following [leave
command](/commands/leave).
```shell-session
$ consul leave
```
Once the server shuts down, restart it and make sure that it is healthy and
rejoins the other servers. Repeat this process until you've restarted all the
servers with Connect enabled.
### Enable Connect in the secondary datacenter
Once Connect is enabled in the primary datacenter, follow the same process to
enable Connect in the secondary datacenter. Add the following snippet to
your server configuration, and restart the servers one at a time, making sure
to maintain quorum.
```json
{
  "connect": {
    "enabled": true
  }
}
```
The `primary_datacenter` setting that was required in order to enable ACL
replication between datacenters also specifies which datacenter will write
intentions and act as the [root CA for Connect](/docs/connect/connect-internals#connections-across-datacenters).
Intentions, which allow or deny inter-service communication, are automatically
replicated to the secondary datacenter.
## Deploy Gateways
Connect mesh gateways proxy requests from services in one datacenter to services
in another, so you will need to deploy your gateways on nodes that can reach
each other over the network. As we mentioned in the prerequisites,
you will need to make sure that both Envoy and Consul are installed on the
gateway nodes. You won't want to run any services other than Consul and Envoy
on these nodes, because they will necessarily have access to the WAN.
### Generate Tokens for the Gateways
You'll need to [generate a
token](/consul/security-networking/production-acls#apply-individual-tokens-to-the-services)
for each gateway that gives it read access to the entire catalog.
Create a file named `mesh-gateway-policy.json` containing the following content.
```json
{
  "node_prefix": {
    "": {
      "policy": "read"
    }
  },
  "service_prefix": {
    "": {
      "policy": "read"
    }
  },
  "service": {
    "mesh-gateway": {
      "policy": "write"
    }
  }
}
```
Next, create and name a new ACL policy using the file you just made.
```shell-session
$ consul acl policy create \
-name mesh-gateway \
-rules @mesh-gateway-policy.json
```
Generate a token for each gateway from the new policy.
```shell-session
$ consul acl token create -description "mesh-gateway primary datacenter token" \
-policy-name mesh-gateway
```
```shell-session
$ consul acl token create \
-description "mesh-gateway secondary datacenter token" \
-policy-name mesh-gateway
```
You'll apply those tokens when you deploy the gateways.
### Deploy the Gateway for your primary datacenter
Register and start the gateway in your primary datacenter with the following
command.
```shell-session
$ consul connect envoy -mesh-gateway -register \
    -service-name "gateway-primary" \
    -address "<your private address>" \
    -wan-address "<your externally accessible address>" \
    -token=<token for the primary dc gateway>
```
### Deploy the Gateway for your Secondary Datacenter
Register and start the gateway in your secondary datacenter with the following
command.
```shell-session
$ consul connect envoy -mesh-gateway -register \
    -service-name "gateway-secondary" \
    -address "<your private address>" \
    -wan-address "<your externally accessible address>" \
    -token=<token for the secondary dc gateway>
```
### Configure Sidecar Proxies to use Gateways
Next, create a [centralized
configuration](/docs/connect/config-entries/proxy-defaults)
file for all the sidecar proxies in both datacenters called
`proxy-defaults.json`. This file will instruct the sidecar proxies to send all
their inter-datacenter traffic through the gateways. It should contain the
following:
```json
{
  "Kind": "proxy-defaults",
  "Name": "global",
  "MeshGateway": {
    "Mode": "local"
  }
}
```
Write the centralized configuration you just created with the following command.
```shell-session
$ consul config write proxy-defaults.json
```
Once this step is complete, you will have set up Consul Connect with gateways
across multiple datacenters. Now you are ready to register the services that
will use Connect.
## Register a Service in Each Datacenter to Use Connect
You can register a service to use a sidecar proxy by including a sidecar proxy
stanza in its registration file. For this guide, you can use socat to act as a
backend service and register a dummy service called web to represent the client
service. Those names are used in our examples. If you have services that you
would like to connect, feel free to use those instead.
~> **Caution:** Connect takes its default intention policy from Consul's default
ACL policy. If you have set your default ACL policy to deny (as is recommended
for secure operation) and are adding Connect to already registered services,
those services may lose connection to each other until you set an intention
between them to allow communication.
### Register a back end service in one datacenter
In one datacenter register a backend service and add an Envoy sidecar proxy
registration. To do this you will either create a new registration file or edit
an existing one to include a sidecar proxy stanza. If you are using socat as
your backend service, you will create a new file called `socat.json` that will
contain the below snippet. Since you have ACLs enabled, you will have to [create
a token for the
service](/consul/security-networking/production-acls#apply-individual-tokens-to-the-services).
```json
{
  "service": {
    "name": "socat",
    "port": 8181,
    "token": "<token here>",
    "connect": { "sidecar_service": {} }
  }
}
```
Note the Connect stanza of the registration with the `sidecar_service` and
`token` options. This is what you would add to an existing service registration
if you are not using socat as an example.
Reload the client with the new or modified registration.
```shell-session
$ consul reload
```
Then start Envoy specifying which service it will proxy.
```shell-session
$ consul connect envoy -sidecar-for socat
```
If you are using socat as your example, start it now on the port you specified
in your registration by running the following command.
```shell-session
$ socat -v tcp-l:8181,fork exec:"/bin/cat"
```
Check that the socat service is running by accessing it using netcat on the same
node. It will echo back anything you type.
```shell-session
$ nc 127.0.0.1 8181
hello
hello
echo
echo
```
Stop the running netcat service by typing `ctrl + c`.
### Register a front end service in the other datacenter
Now in your other datacenter, you will register a service (with a sidecar proxy)
that calls your backend service. Your registration will need to list the backend
service as your upstream. Like the backend service, you can use an example
service, which will be called web, or append the connect stanza to an existing
registration with some customization.
To use web as your front end service, create a registration file called
`web.json` that contains the following snippet.
```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "token": "<token here>",
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [
            {
              "destination_name": "socat",
              "datacenter": "primary",
              "local_bind_port": 8181
            }
          ]
        }
      }
    }
  }
}
```
Note the Connect part of the registration, which specifies socat as an
upstream. If you are using another service as a back end, replace `socat` with
its name and the `8181` with its port.
Reload the client with the new or modified registration.
```shell-session
$ consul reload
```
Then start Envoy and specify which service it will proxy.
```shell-session
$ consul connect envoy -sidecar-for web
```
## Configure Intentions to Allow Communication Between Services
Now that your services both use Connect, you will need to configure intentions
in order for them to communicate with each other. Add an intention to allow the
front end service to access the back end service. For web and socat the command
would look like this.
```shell-session
$ consul intention create web socat
```
Consul will automatically forward intentions initiated in the secondary
datacenter to the primary datacenter, where the servers will write them. The
servers in the primary datacenter will then automatically replicate the written
intentions back to the secondary datacenter.
## Test the connection
Now that you have services using Connect, verify that they can contact each
other. If you have been using the example web and socat services, from the node
and datacenter where you registered the web service, start netcat and type
something for it to echo.
```shell-session
$ nc 127.0.0.1 8181
hello
hello
echo
echo
```
## Summary
In this guide you configured two WAN-joined datacenters to use Consul Connect,
deployed gateways in each datacenter, and connected two services to each other
across datacenters.
Gateways know where to route traffic because of Server Name Indication (SNI),
where the client service sends the destination as part of the TLS handshake.
Because gateways rely on TLS to discover the traffic's destination, they require
Consul Connect to route traffic.
### Next Steps
Now that you've seen how to deploy gateways to proxy inter-datacenter traffic,
you can deploy multiple gateways for redundancy or availability. The gateways
and proxies will automatically round-robin load balance traffic between the
gateways.
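For example, you could register an additional gateway instance on another node in the primary datacenter using the same command and service name as before; the addresses and token below are placeholders for that node's values.
```shell-session
$ consul connect envoy -mesh-gateway -register \
    -service-name "gateway-primary" \
    -address "<second node private address>" \
    -wan-address "<second node externally accessible address>" \
    -token=<token for the primary dc gateway>
```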
If you are using Kubernetes, you can configure Connect and deploy gateways for
your Kubernetes cluster using the Helm chart. Learn more in the [Consul
Kubernetes documentation](/docs/platform/k8s/helm).
Visit the Consul documentation for a full list of configurations for [Consul
Connect](/docs/connect), including [mesh gateway
configuration options](/docs/connect/mesh-gateway).


@ -1,442 +0,0 @@
---
name: Secure Service-to-Service Communication
content_length: 12
id: connect-services
products_used:
- Consul
description: This guide demonstrates Consul Connect using internal proxies as sidecars.
level: Implementation
---
Consul Connect secures service-to-service communication with authorization and
encryption. Applications can use sidecar proxies in a service mesh
configuration to automatically establish TLS connections for inbound and
outbound connections without being aware of Connect at all. In addition to
securing your services, Connect can also intercept [data about service-to-service
communications][consul-l7] and surface it to monitoring tools.
In this guide, you will register two services and their [sidecar] proxies in
the Consul catalog. You will then start the services and sidecar proxies.
Finally, you will demonstrate that the service-to-service communication is going
through the proxies by stopping traffic with an "[intention]".
[![Flow diagram showing end user traffic being sent to the Dashboard Service at
port 9002. The dashboard service makes requests for the counting service to the
local Connect Proxy at port 5000. This traffic then traverses the Connect mesh
over dynamic ports. The traffic exits the Connect mesh from the counting service's
local proxy. The proxy sends this traffic to the counting service itself at port
9003.][img-flow]][img-flow]
While this guide uses elements that are not suitable for production
environments—Consul dev agents, internal proxies, and mock services—it will
teach you the common process for deploying your own services using Consul
Connect. At the end of this guide, we also present additional information
about adapting this process to more production-like environments.
## Prerequisites
To complete this guide, you will need a local [dev agent], which enables
Connect by default.
This guide uses the following example service applications. Download and unzip
the executables to follow along.
- [Counting Service]
- [Dashboard Service]
### Verify your Consul agent health
To ensure that Consul is running and accessible from the command line, use the
`consul members` command to verify your agent status.
```shell-session
$ consul members
Node Address Status Type Build Protocol DC Segment
hostname.local 127.0.0.1:8301 alive server 1.6.1 2 dc1 <all>
```
If you receive an error message, verify that you have a local Consul dev agent
running and try again.
## Register the services and sidecar proxies
Services have to be registered with Consul. Consul shares this information
around the cluster so that operators or other services can determine the
location of a service. Connect also uses service registrations to determine
where to send proxied traffic.
There are several ways to register services in Consul:
- directly from a Consul-aware application
- from an orchestrator, like [Nomad][services-nomad] or [Kubernetes][services-k8s]
- [using configuration files][services-config] that are loaded at node startup
- [using the API][services-api] to register them with a JSON or HCL
specification
- [using the CLI][services-cli] to simplify this submission process
For this guide, we will use the [`consul services register`][services-cli] CLI
command to load them into the catalog.
### Create the counting service definition
First, define the Counting service and its sidecar proxy in a file named
`counting.hcl`. The definition should include the name of the service, the port
the service listens on, and a [connect] block with the [sidecar_service] block.
This block is empty so Consul will use default parameters. The definition also
includes an optional service health check.
```hcl
service {
  name = "counting"
  id = "counting-1"
  port = 9003
  connect {
    sidecar_service {}
  }
  check {
    id = "counting-check"
    http = "http://localhost:9003/health"
    method = "GET"
    interval = "1s"
    timeout = "1s"
  }
}
```
Services and sidecar proxies can be defined in either HCL or JSON. There is a
[JSON version][counting-1.json] of the service definition in the
[demo-consul-101 project].
### Create the dashboard service definition
Create the Dashboard service and proxy definition in the same way. First,
create a file named `dashboard.hcl`.
```hcl
service {
  name = "dashboard"
  port = 9002
  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "counting"
            local_bind_port = 5000
          }
        ]
      }
    }
  }
  check {
    id = "dashboard-check"
    http = "http://localhost:9002/health"
    method = "GET"
    interval = "1s"
    timeout = "1s"
  }
}
```
There is a [JSON version][dashboard.json] of the service definition in the
[demo-consul-101 project].
Notice that the dashboard definition also includes an upstream block. Upstreams
are ports on the local host that will be proxied to the destination service.
The upstream block's `local_bind_port` value is the port your service will
communicate with to reach the service you depend on. The `destination_name` is
the Consul service name that the `local_bind_port` will proxy to.
In our scenario, the dashboard service depends on the counting service. With
this configuration, when the dashboard service connects to localhost:5000 it is
proxied across the service mesh to the counting service.
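Once the services and sidecar proxies are running (later in this guide), you could verify this binding from the dashboard node by calling the local upstream port directly; assuming the counting service answers over HTTP, the request is transparently carried across the mesh.
```shell-session
$ curl localhost:5000
```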
### Register the services and proxies
Finally, you can submit the service definitions to your Consul agent. If you
are using the JSON definitions, ensure that the filenames end in ".json"
instead of ".hcl".
```shell-session
$ consul services register counting.hcl
Registered service: counting
```
```shell-session
$ consul services register dashboard.hcl
Registered service: dashboard
```
-> **Challenge:** After completing the guide, try doing it again using one of
the other service registration mechanisms mentioned earlier in the guide to
register the services.
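For example, a minimal sketch of the HTTP API approach would `PUT` a JSON payload to the local agent. Note that the agent API expects the service object itself, without the outer `service` wrapper used in configuration files; the file name here is hypothetical.
```shell-session
$ curl --request PUT \
    --data @counting-api.json \
    http://localhost:8500/v1/agent/service/register
```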
### Verify the services are registered
Now that you have registered your services and sidecar proxies, run `consul catalog services` to verify that they are present.
```shell-session
$ consul catalog services
consul
counting
counting-sidecar-proxy
dashboard
dashboard-sidecar-proxy
```
### Create a Connect intention
Intentions define access control for services via Connect and are used to
control which services may establish connections. The default intention
behavior is defined by the [default ACL policy].
In this guide, this step is not necessary since the default ACL policy of the
dev agent is "allow all", so Connect connections are automatically allowed as
well. However, we will create explicit intentions as a part of deploying
Connect-enabled services.
-> **Best Practice:** Creating an explicit intention helps protect your
service against changes to the implied permissions. For example, a change in
`default_policy` or the introduction of a global deny-all intention would
impact services without explicit intentions defined.
```shell-session
$ consul intention create dashboard counting
Created: dashboard => counting (allow)
```
## Start the services and sidecar proxies
Now that you have created all the necessary configuration to describe your
service's connections, it's time to start your services and their sidecar
proxies. We are using the `&` operator to run the services as background tasks.
However, because they write to the console, it's best to run them in their own
shell session.
Run these commands to start the applications:
```shell-session
$ PORT=9002 COUNTING_SERVICE_URL="http://localhost:5000" ./dashboard-service &
$ PORT=9003 ./counting-service &
```
Next, start the sidecar proxies that will run as [sidecar] processes along with
the service applications.
We are using Consul Connect's built-in proxy for this guide. In a
production deployment, we recommend using Envoy instead.
```shell-session
$ consul connect proxy -sidecar-for counting > counting-proxy.log &
$ consul connect proxy -sidecar-for dashboard > dashboard-proxy.log &
```
### Check the dashboard interface
Open a browser and navigate to `http://localhost:9002`.
You should see a screen similar to the following. There is a connection
indicator in the top right that will turn green and say "Connected" when the
dashboard service is in communication with the counting service.
[![Image of Dashboard UI. There is white text on a magenta background, with the
page title "Dashboard" at the top left. There is a green indicator in the top
right with the word connected in white. There is a large number 19 to show
sample counting output. The node name that the counting service is running on,
host01, is in very small monospaced type underneath the large
numbers.][img-screenshot1]][img-screenshot1]
If your application is not connected, check that the Counting service is
running and healthy by viewing it in the Consul UI at `http://localhost:8500`.
## Test the sidecar proxy connections
To test that traffic is flowing through the sidecar proxies, you will control
traffic with an intention.
First, deny the Dashboard service access to the Counting service.
```shell-session
$ consul intention create -deny -replace dashboard counting
Created: dashboard => counting (deny)
```
Refresh your browser; the connection indicator in the Dashboard UI will now say
"Disconnected".
[![Image of Dashboard UI. There is white text on a magenta background, with the
page title "Dashboard" at the top left. There is a red indicator in the top
right with the words "Counting Service is Unreachable" in white. There is a
large number -1 to show sample counting output. The word "Unreachable"
surrounded by square brackets is in monospaced type underneath the large
numbers.][img-screenshot2]][img-screenshot2]
You can restore communication between the services by replacing the `deny`
intention with an `allow`.
```shell-session
$ consul intention create -allow -replace dashboard counting
```
Back in the browser, verify that the dashboard reconnects to the counting
service.
## Clean up
Once you are done with this guide, you can begin cleaning up by closing the
terminal in which your counting-service, dashboard-service, and proxies are
running. This should automatically stop these processes.
Delete the intention from Consul.
```shell-session
$ consul intention delete dashboard counting
Intention deleted.
```
Deregister the services.
```shell-session
$ consul services deregister counting.hcl
Deregistered service: counting
$ consul services deregister dashboard.hcl
Deregistered service: dashboard
```
## Extend these concepts
When you want to apply this learning to a proof-of-concept or production
environment, there are some additional considerations.
### Enable Connect and gRPC
When Consul is started with the `-dev` flag, it will automatically enable
Consul Connect and provide a default port for gRPC communication. These have to
be configured explicitly for regular Consul agents.
```hcl
# ...
ports {
"grpc" = 8502
}
connect {
enabled = true
}
```
For JSON configurations:
```json
{
// ...
"ports": {
"grpc": 8502
},
"connect": {
"enabled": true
}
}
```
### Download Envoy
In this guide we used the built-in Connect proxy. For production deployments
and to enable L7 features, you should use Envoy.
You can obtain container-based builds of Envoy directly from the [Envoy
Website], or you can obtain packages of Envoy binary builds from a
third-party project, [getenvoy.io].
Consul will need to be able to find the "envoy" binary on the path. You can
extract the binary from the official Envoy Docker containers.
To do this, create a container named "envoy-extract" based on the
"envoyproxy/envoy" container.
```shell-session
$ docker create --name "envoy-extract" "envoyproxy/envoy"
docker create --name "envoy-extract" "envoyproxy/envoy"
Unable to find image 'envoyproxy/envoy:latest' locally
latest: Pulling from envoyproxy/envoy
16c48d79e9cc: Pull complete
3c654ad3ed7d: Pull complete
6276f4f9c29d: Pull complete
a4bd43ad48ce: Pull complete
ef9506777d3e: Pull complete
2e7ad8d4ceb7: Pull complete
d9e379d45dad: Pull complete
b283a3f5aebc: Pull complete
095fe71f6465: Pull complete
Digest: sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09
Status: Downloaded newer image for envoyproxy/envoy:latest
8d7bb45ea75f4344c6e050e5e1d3423937c4a1a51700ce34c3cf62a5ce3960dd
```
Use the `docker cp` command to copy the envoy file out of the container into
the current directory.
```shell-session
$ docker cp "envoy-extract:/usr/local/bin/envoy" "envoy"
```
Now that you have the binary, you can remove the "envoy-extract" container.
```shell-session
$ docker rm "envoy-extract"
envoy-extract
```
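A short sketch for making the extracted binary available on your `PATH` (the destination directory is just one common choice):
```shell-session
$ chmod +x envoy
$ sudo mv envoy /usr/local/bin/envoy
$ envoy --version
```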
Once you have the binary extracted and in your path, Consul will automatically
use it when you run the `consul connect envoy` command. The following examples
demonstrate how to start the service sidecar proxies with Envoy.
```shell-session
$ consul connect envoy -sidecar-for counting > counting-proxy.log &
$ consul connect envoy -sidecar-for dashboard > dashboard-proxy.log &
```
## Summary
Now that you have completed this guide, you have familiarized yourself with a
basic Connect-enabled service deployment. You created and registered Consul
service definitions that describe how two services communicate with each other.
After starting the application and sidecar proxies, you used Consul Connect
intentions to control traffic flow between services. Finally, you learned about
the additional requirements for taking these concepts to a proof-of-concept
environment.
[connect]: https://www.consul.io/docs/connect
[consul-l7]: https://learn.hashicorp.com/consul/developer-mesh/l7-observability-k8s
[counting service]: https://github.com/hashicorp/demo-consul-101/releases/download/0.0.2/counting-service_linux_amd64.zip
[counting-1.json]: https://raw.githubusercontent.com/hashicorp/demo-consul-101/master/demo-config-localhost/counting-1.json
[dashboard service]: https://github.com/hashicorp/demo-consul-101/releases/download/0.0.2/dashboard-service_linux_amd64.zip
[dashboard.json]: https://raw.githubusercontent.com/hashicorp/demo-consul-101/master/demo-config-localhost/dashboard.json
[default acl policy]: https://www.consul.io/docs/agent/options#acl_default_policy
[demo-consul-101 project]: https://github.com/hashicorp/demo-consul-101
[dev agent]: https://learn.hashicorp.com/consul/getting-started/agent
[docker guide]: https://learn.hashicorp.com/consul/day-0/containers-guide
[envoy website]: https://www.envoyproxy.io/docs/envoy/latest/install/building#pre-built-binaries
[getenvoy.io]: https://www.getenvoy.io/
[img-flow]: /static/img/consul/connect-getting-started/consul_connect_demo_service_flow.png
[img-screenshot1]: /static/img/consul/connect-getting-started/screenshot1.png
[img-screenshot2]: /static/img/consul/connect-getting-started/screenshot2.png
[intention]: https://www.consul.io/docs/connect/intentions
[services-api]: https://www.consul.io/api/agent/service#register-service
[services-cli]: https://www.consul.io/commands/services
[services-config]: https://www.consul.io/docs/agent/services#service-definition
[services-nomad]: https://www.nomadproject.io/docs/job-specification/service
[sidecar]: https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar
[sidecar_service]: https://www.consul.io/docs/connect/registration/sidecar-service
[services-k8s]: https://www.consul.io/docs/platform/k8s/connect#installation-and-configuration


@ -1,323 +0,0 @@
In this guide you will use Consul to configure F5 BIG-IP nodes and server pools.
You will set up a basic F5 BIG-IP AS3 declaration that generates the load
balancer backend-server-pool configuration based on the available service
instances registered in Consul service discovery.
## Prerequisites
To complete this guide, you will need previous experience with F5 BIG-IP and
Consul. You can either manually deploy the necessary infrastructure, or use the
Terraform demo code.
### Watch the Video - Optional
Consul's integration with F5 was demonstrated in a webinar. If you would prefer
to learn about the integration but aren't ready to try it out, you can [watch the
webinar recording
instead](https://www.hashicorp.com/resources/zero-touch-application-delivery-with-f5-big-ip-terraform-and-consul).
### Manually deploy your infrastructure
You should configure the following infrastructure.
- A single Consul datacenter with server and client nodes, and the configuration
directory for Consul agents at `/etc/consul.d/`.
- A running instance of the F5 BIG-IP platform. If you don't already have one,
you can use a [hosted AWS
instance](https://aws.amazon.com/marketplace/pp/B079C44MFH) for this guide.
- The AS3 package version 3.7.0
[installed](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/installation.html)
on your F5 BIG-IP platform.
- Standard web server running on a node, listening on HTTP port 80. We will use
NGINX in this guide.
### Deploy a demo using Terraform - Optional
You can set up the prerequisites on your own, or use the Terraform
configuration in [this
repository](https://github.com/hashicorp/f5-terraform-consul-sd-webinar) to set
up a testing environment.
Once your environment is set up, you'll be able to visit the F5 GUI at
`<F5_IP>:8443/tmui/login.jsp` where `<F5_IP>` is the address provided in your
Terraform output. Log in with the username `admin` and the password from your
Terraform output.
### Verify your environment
Check your environment to ensure you have a healthy Consul datacenter by
checking your datacenter members. You can do this by running the `consul members`
command on the machine where Consul is running, or by accessing the
Consul web UI at the IP address of your Consul instances, on port 8500.
```shell-session
$ consul members
Node Address Status Type Build Protocol DC Segment
consul 10.0.0.100:8301 alive server 1.5.3 2 dc1 <all>
nginx 10.0.0.109:8301 alive client 1.5.3 2 dc1 <default>
```
In this sample environment we have one Consul server node, and one web server
node with a Consul client.
## Register a Web Service
To register the web service on one of your client nodes with Consul, create a
service definition in Consul's config directory `/etc/consul.d/` named
`nginx-service.json`. Paste in the following configuration, which includes a TCP
check for the web server so that Consul can monitor its health.
```json
{
  "service": {
    "name": "nginx",
    "port": 80,
    "checks": [
      {
        "id": "nginx",
        "name": "nginx TCP Check",
        "tcp": "localhost:80",
        "interval": "5s",
        "timeout": "1s"
      }
    ]
  }
}
```
Reload the client to read the new service definition.
```shell-session
$ consul reload
```
In a browser window, visit the services page of the Consul web UI at
`<your-consul-ip>:8500/ui/dc1/services/nginx`.
![Consul UI with NGINX registered](/static/img/consul-f5-nginx.png 'Consul web
UI with a healthy NGINX service')
You should notice your instance of the nginx service listed and healthy.
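If you prefer the command line, a rough equivalent is to query the health API for passing instances of the service; this assumes you run the command somewhere that can reach the Consul HTTP API.
```shell-session
$ curl http://<your-consul-ip>:8500/v1/health/service/nginx?passing
```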
## Apply an AS3 Declaration
Next you will configure BIG-IP to use Consul Service discovery with an AS3
declaration. You will use cURL to apply the declaration to the BIG-IP Instance.
First, construct an authorization header to authenticate your API call with
BIG-IP. You will need to use a username and password for your instance. Below is
an example for the username “admin” and the password “password”.
```shell-session
$ echo -n 'admin:password' | base64
YWRtaW46cGFzc3dvcmQ=
```
Now use cURL to send the authorized declaration to the BIG-IP instance. Use the
value you created above for your BIG-IP instance in the authorization header.
Remember to replace `<your-BIG-IP-mgmt-ip>` with the real IP address.
```shell-session
$ curl -X POST \
https://<your-BIG-IP-mgmt-ip>/mgmt/shared/appsvcs/declare \
-H 'authorization: Basic <your-authorization-header>' \
-d '{
  "class": "ADC",
  "schemaVersion": "3.7.0",
  "id": "Consul_SD",
  "controls": {
    "class": "Controls",
    "trace": true,
    "logLevel": "debug"
  },
  "Consul_SD": {
    "class": "Tenant",
    "Nginx": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualPort": 8080,
        "virtualAddresses": [
          "<your-BIG-IP-virtual-ip>"
        ],
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "monitors": [
          "http"
        ],
        "members": [
          {
            "servicePort": 80,
            "addressDiscovery": "consul",
            "updateInterval": 5,
            "uri": "http://<your-consul-ip>:8500/v1/catalog/service/nginx"
          }
        ]
      }
    }
  }
}'
```
You should get output similar to the following after you've applied your
declaration.
```json
{
"results": [
{
"message": "success",
"lineCount": 26,
"code": 200,
"host": "localhost",
"tenant": "Consul_SD",
"runTime": 3939
}
],
"declaration": {
"class": "ADC",
"schemaVersion": "3.7.0",
"id": "Consul_SD",
"controls": {
"class": "Controls",
"trace": true,
"logLevel": "debug",
"archiveTimestamp": "2019-09-06T03:12:06.641Z"
},
"Consul_SD": {
"class": "Tenant",
"Nginx": {
"class": "Application",
"template": "http",
"serviceMain": {
"class": "Service_HTTP",
"virtualPort": 8080,
"virtualAddresses": ["10.0.0.200"],
"pool": "web_pool"
},
"web_pool": {
"class": "Pool",
"monitors": ["http"],
"members": [
{
"servicePort": 80,
"addressDiscovery": "consul",
"updateInterval": 5,
"uri": "http://10.0.0.100:8500/v1/catalog/service/nginx"
}
]
}
}
},
"updateMode": "selective"
}
}
```
The above declaration does the following:
- Creates a partition (tenant) named `Consul_SD`.
- Defines a virtual server named `serviceMain` in the `Consul_SD` partition with:
  - A pool named `web_pool` monitored by the `http` health monitor.
  - NGINX pool members autodiscovered via Consul's [catalog HTTP API
    endpoint](/api-docs/catalog#list-nodes-for-service). For the `virtualAddresses`,
    make sure to substitute your BIG-IP virtual server address.
  - A URI specific to your Consul environment for the scheme, host, and port of
    your Consul address discovery. This could be a single server, load balanced
    endpoint, or co-located agent, depending on your requirements. Make sure to
    replace the `uri` in your configuration with the IP of your Consul client.
    You can query this endpoint directly to preview the data BIG-IP will poll, as
    shown below.
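For instance, the catalog endpoint that BIG-IP polls can be queried with cURL (replace the address with your Consul client's IP).
```shell-session
$ curl http://<your-consul-ip>:8500/v1/catalog/service/nginx
```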
You can find more information on Consul SD declarations in [F5's Consul service
discovery
documentation](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/discovery.html#service-discovery-using-hashicorp-consul).
You can read more about composing AS3 declarations in the [F5 documentation](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/composing-a-declaration.html). The Terraform provider for BIG-IP [also supports AS3 resources](https://www.terraform.io/docs/providers/bigip/r/bigip_as3.html).
## Verify BIG-IP Consul Communication
Use the `consul monitor` command on the Consul agent specified in the AS3 URI to
verify that you are receiving catalog requests from the BIG-IP instance.
```shell-session
$ consul monitor -log-level=debug
2019/09/06 03:16:50 [DEBUG] http: Request GET /v1/catalog/service/nginx (103.796µs) from=10.0.0.200:29487
2019/09/06 03:16:55 [DEBUG] http: Request GET /v1/catalog/service/nginx (104.95µs) from=10.0.0.200:42079
2019/09/06 03:17:00 [DEBUG] http: Request GET /v1/catalog/service/nginx (98.652µs) from=10.0.0.200:45536
2019/09/06 03:17:05 [DEBUG] http: Request GET /v1/catalog/service/nginx (101.242µs) from=10.0.0.200:45940
```
Check that the interval matches the value you supplied in your AS3 declaration.
## Verify the BIG-IP Dynamic Pool
Check the network map of the BIG-IP instance to make sure that the NGINX
instances registered in Consul are also in your BIG-IP dynamic pool.
To check the network map, open a browser window and navigate to
`https://<your-big-IP-mgmt-ip>/tmui/tmui/locallb/network_map/app/?xui=false#!/?p=Consul_SD`.
Remember to replace the IP address.
![NGINX instances in BIG-IP](/static/img/consul-f5-partition.png 'NGINX
instances listed in the BIG-IP web graphical user interface')
You can read more about the network map in the [F5
documentation](https://support.f5.com/csp/article/K20448153#accessing%20map).
## Test the BIG-IP Virtual Server
Now that you have a healthy virtual service, you can use it to access your web
server.
```shell-session
$ curl <your-BIG-IP-virtual-ip>:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
## Summary
The F5 BIG-IP AS3 service discovery integration with Consul queries Consul's
catalog on a regular, configurable basis to get updates about changes for a
given service, and adjusts the node pools dynamically without operator
intervention.
In this guide you configured an F5 BIG-IP instance to natively integrate with
Consul for service discovery. You were able to monitor dynamic node registration
for a web server pool member, and test it with a virtual server.
As a follow-up, you can add or remove web server nodes registered with Consul
and validate that the network map on the F5 BIG-IP updates automatically.
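For example, to remove the web server node's service you could delete the registration file created earlier and reload the client; after a few update intervals the member should disappear from the BIG-IP pool. This sketch assumes the file path used earlier in this guide.
```shell-session
$ sudo rm /etc/consul.d/nginx-service.json
$ consul reload
```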


@ -1,460 +0,0 @@
---
name: Traffic Splitting for Service Deployments
content_length: 15
id: connect-splitting
products_used:
- Consul
description: >-
In this guide you will split layer-7 traffic, using Envoy proxies configured
by
Consul, to roll out a new version of a service. You can use this method for
zero-downtime, blue-green, and canary deployments.
level: Implementation
---
-> **Note:** This guide requires Consul 1.6.0 or newer.
When you deploy a new version of a service, you need a way to start using the
new version without causing downtime for your end users. You can't just take the
old version down and deploy the new one, because for a brief period you would
cause downtime. This method runs the additional risk of being hard to roll back
if there are unexpected problems with the new version of the service.
You can solve this problem by deploying the new service, making sure it works in
your production environment with a small amount of traffic at first, then slowly
shifting traffic over as you gain confidence (from monitoring) that it is
performing as expected. Depending on the rate at which you shift the traffic and
the level of monitoring you have in place, a deployment like this might be
called a zero-downtime, blue-green, canary deployment, or something else.
In this guide you will deploy a new version of a service and shift HTTP
traffic slowly to the new version.
## Prerequisites
The steps in this guide use Consul's service mesh feature, Consul Connect. If
you aren't already familiar with Connect, you can learn more by following [this
guide](https://learn.hashicorp.com/tutorials/consul/get-started-service-networking).
We created a demo environment for the steps we describe here. The environment
relies on Docker and Docker Compose. If you do not already have Docker and
Docker Compose, you can install them from [Docker's install
page](https://docs.docker.com/install/).
## Environment
This guide uses a two-tiered application made up of three services: a
public web service, two versions of an API service, and Consul. The web service
accepts incoming traffic and makes an upstream call to the API service. At the
start of this scenario, version 1 of the API service is already running in
production and handling traffic. Version 2 contains some changes you will ship
in a canary deployment.
![Architecture diagram of the splitting demo. A web service directly connects to two different versions of the API service through proxies. Consul configures those proxies.](/static/img/consul-splitting-architecture.png)
## Start the Environment
First clone the repo containing the source and examples for this guide.
```shell-session
$ git clone git@github.com:hashicorp/consul-demo-traffic-splitting.git
```
Change directories into the cloned folder, and start the demo environment with
`docker-compose up`. This command will run in the foreground, so you'll need to
open a new terminal window after you run it.
```shell-session
$ docker-compose up
Creating consul-demo-traffic-splitting_api_v1_1 ... done
Creating consul-demo-traffic-splitting_consul_1 ... done
Creating consul-demo-traffic-splitting_web_1 ... done
Creating consul-demo-traffic-splitting_web_envoy_1 ... done
Creating consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Attaching to consul-demo-traffic-splitting_consul_1, consul-demo-traffic-splitting_web_1, consul-demo-traffic-splitting_api_v1_1, consul-demo-traffic-splitting_web_envoy_1, consul-demo-traffic-splitting_api_proxy_v1_1
```
Consul is preconfigured to run as a single server, with all the
configurations for splitting enabled.
- Connect is enabled - Traffic shaping requires that you use Consul Connect.
- gRPC is enabled - splitting also requires that you use Envoy as a sidecar
proxy, and Envoy gets its configuration from Consul via gRPC.
- Central service configuration is enabled - you will use configuration entries
to specify the API service protocol, and define your splitting ratios.
These settings are defined in the Consul configuration file at
`consul_config/consul.hcl`, which contains the following.
```hcl
data_dir = "/tmp/"
log_level = "DEBUG"
server = true
bootstrap_expect = 1
ui = true
bind_addr = "0.0.0.0"
client_addr = "0.0.0.0"
connect {
enabled = true
}
ports {
grpc = 8502
}
enable_central_service_config = true
```
You can find the service definitions for this demo in the `service_config`
folder. Note the metadata stanzas in the registrations for versions 1 and 2 of
the API service. Consul will use the metadata you define here to split traffic
between the two services. The metadata stanza contains the following.
```json
"meta": {
"version": "1"
},
```
Once everything is up and running, you can view the health of the registered
services by checking the Consul UI at
[http://localhost:8500](http://localhost:8500). The docker compose file has
started and registered Consul, the web service, a sidecar for the web service,
version 1 of the API service, and a sidecar for the API service.
![List of services in the Consul UI including Consul, and the web and API services with their proxies](/static/img/consul-splitting-services.png)
Curl the Web endpoint to make sure that the whole application is running. The
Web service will get a response from version 1 of the API service.
```shell-session
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V1
```
Initially, you will want to deploy version 2 of the API service to production
without sending any traffic to it, to make sure that it performs well in a new
environment. To prevent traffic from flowing to version 2 when you register it, you
will preemptively set up a traffic split to send 100% of your traffic to
version 1 of the API service, and 0% to the not-yet-deployed version 2.
## Configure Traffic Splitting
Traffic splitting makes use of configuration entries to centrally configure
services and Envoy proxies. There are three configuration entries you need to
create to enable traffic splitting:
- Service defaults for the API service to set the protocol to HTTP.
- Service splitter which defines the traffic split between the service subsets.
- Service resolver which defines which service instances are version 1 and 2.
### Configuring Service Defaults
Traffic splitting requires that the upstream application uses HTTP, because
splitting happens on layer 7 (on a request-by-request basis). You will tell
Consul that your upstream service uses HTTP by setting the protocol in a
service-defaults configuration entry for the API service. This configuration
is already in your demo environment at `l7_config/api_service_defaults.json`. It
contains the following.
```json
{
"kind": "service-defaults",
"name": "api",
"protocol": "http"
}
```
To apply the configuration, you can either use the Consul CLI or the API. In
this example we'll use the CLI to write the configuration, providing the file location.
```shell-session
$ consul config write l7_config/api_service_defaults.json
```
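If you want to confirm what Consul stored, you can optionally read the entry back with the CLI; the command below assumes the default local agent address.
```shell-session
$ consul config read -kind service-defaults -name api
```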
Find more information on `service-defaults` configuration entries in the
[documentation](/docs/connect/config-entries/service-defaults).
-> **Automation Tip:** To automate interactions with configuration entries, use
the HTTP API endpoint [`http://localhost:8500/v1/config`](/api/config).
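For example, the entry you just wrote can be fetched with a plain HTTP GET; this sketch assumes the agent's HTTP API is reachable at `localhost:8500`.
```shell-session
$ curl http://localhost:8500/v1/config/service-defaults/api
```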
### Configuring the Service Resolver
The next configuration entry you need to add is the service resolver, which
allows you to define how Consuls service discovery selects service instances
for a given service name.
Service resolvers allow you to filter for subsets of services based on
information in the service registration. In this example, we are going to define
the subsets “v1” and “v2” for the API service, based on their registered
metadata. API service version 1 in the demo is already registered with the
service metadata `version:1`, and an optional tag, `v1`, to make the version
number appear in the UI. When you register version 2 you will give it the
metadata `version:2`, which Consul will use to find the right service, and
optional tag `v2`. The `name` field is set to the name of the service in the
Consul service catalog.
The service resolver is already in your demo environment at
`l7_config/api_service_resolver.json` and it contains the following
configuration.
```json
{
"kind": "service-resolver",
"name": "api",
"subsets": {
"v1": {
"filter": "Service.Meta.version == 1"
},
"v2": {
"filter": "Service.Meta.version == 2"
}
}
}
```
Write the service resolver configuration entry using the CLI and providing the
location, just like in the previous example.
```shell-session
$ consul config write l7_config/api_service_resolver.json
```
Find more information about service resolvers in the
[documentation](/docs/connect/config-entries/service-resolver).
### Configure Service Splitting - 100% of traffic to Version 1
Next, youll create a configuration entry that will split percentages of traffic
to the subsets of your upstream service that you just defined. Initially, you
want the splitter to send all traffic to v1 of your upstream service, which
prevents any traffic from being sent to v2 when you register it. In a production
scenario, this would give you time to make sure that v2 of your service is up
and running as expected before sending it any real traffic.
The configuration entry for service splitting has the `kind` of
`service-splitter`. Its `name` specifies which service that the splitter will
act on. The `splits` field takes an array which defines the different splits; in
this example, there are only two splits; however, it is [possible to configure
multiple sequential
splits](/docs/connect/l7-traffic-management#splitting).
Each split has a `weight` which defines the percentage of traffic to distribute
to each service subset. The total weights for all splits must equal 100. For
your initial split, configure all traffic to be directed to the service subset
v1.
The service splitter already exists in your demo environment at
`l7_config/api_service_splitter_100_0.json` and contains the following
configuration.
```json
{
"kind": "service-splitter",
"name": "api",
"splits": [
{
"weight": 100,
"service_subset": "v1"
},
{
"weight": 0,
"service_subset": "v2"
}
]
}
```
Write this configuration entry using the CLI as well.
```shell-session
$ consul config write l7_config/api_service_splitter_100_0.json
```
This concludes the setup of the first stage of your deployment; you can now
launch the new version of the API service without it immediately being used.
### Start and Register API Service Version 2
Next youll start version 2 of the API service, and register it with the
settings that you used in the configuration entries for resolution and
splitting. Start the service, register it, and start its connect sidecar with
the following command. This command will run in the foreground, so youll need
to open a new terminal window after you run it.
```shell-session
$ docker-compose -f docker-compose-v2.yml up
```
Check that the service and its proxy have registered by checking for new `v2`
tags next to the API service and API sidecar proxies in the Consul UI.
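You can also confirm the registration from the command line by querying the catalog for the API service and inspecting its metadata; this sketch assumes `jq` is installed and that Consul's HTTP port is mapped to `localhost:8500` as in the demo compose file.
```shell-session
$ curl -s http://localhost:8500/v1/catalog/service/api | jq '.[].ServiceMeta'
```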
### Configure Service Splitting - 50% Version 1, 50% Version 2
Now that version 2 is running and registered, the next step is to gradually
increase traffic to it by changing the weight of the v2 service subset in the
service splitter configuration. In this example you will increase the percent of
traffic destined for the v2 service to 50%. In a production rollout you
would typically set the initial percent to be much lower. You can specify
percentages as low as 0.01%.
Remember, the total weight across all splits must equal 100, so in this example you will
reduce the percent of the v1 subset to 50. The configuration file is already in
your demo environment at `l7_config/api_service_splitter_50_50.json` and it
contains the following.
```json
{
"kind": "service-splitter",
"name": "api",
"splits": [
{
"weight": 50,
"service_subset": "v1"
},
{
"weight": 50,
"service_subset": "v2"
}
]
}
```
Write the new configuration using the CLI.
```shell-session
$ consul config write l7_config/api_service_splitter_50_50.json
```
Now that youve increased the percentage of traffic to v2, curl the web service
again. Consul will equally distribute traffic across both of the service
subsets.
```shell-session
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V1
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V2
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V1
```
### Configure Service Splitting - 100% Version 2
Once you are confident that the new version of the service is operating
correctly, you can send 100% of traffic to the version 2 subset. The
configuration for a 100% split to version 2 contains the following.
```json
{
"kind": "service-splitter",
"name": "api",
"splits": [
{
"weight": 0,
"service_subset": "v1"
},
{
"weight": 100,
"service_subset": "v2"
}
]
}
```
Apply it with the CLI, providing the path to the configuration entry.
```shell-session
$ consul config write l7_config/api_service_splitter_0_100.json
```
Now when you curl the web service again, 100% of traffic goes to the version
2 subset.
```shell-session
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V2
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V2
$ curl localhost:9090
Hello World
###Upstream Data: localhost:9091###
Service V2
```
Typically in a production environment, you would now remove the version 1
service to release capacity in your cluster. Once you remove version 1's
registration from Consul you can either remove the splitter and resolver
entirely, or leave them in place, removing the stanza that sends traffic to
version 1, so that you can eventually deploy version 3 without it receiving any
initial traffic.
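If you decide to remove the configuration entries entirely, a minimal sketch of that cleanup looks like the following.
```shell-session
$ consul config delete -kind service-splitter -name api
$ consul config delete -kind service-resolver -name api
```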
Congratulations, youve now completed the deployment of version 2 of your
service.
## Demo Cleanup
To stop and remove the containers and networks that you created, you will run
`docker-compose down` twice: once for each of the docker compose commands you
ran. Because the containers you created with the second compose command run on
the network you created with the first command, you will need to bring down the
environments in the reverse order in which you created them.
First, you'll stop and remove the containers created for v2 of the API service.
```shell-session
$ docker-compose -f docker-compose-v2.yml down
Stopping consul-demo-traffic-splitting_api_proxy_v2_1 ... done
Stopping consul-demo-traffic-splitting_api_v2_1 ... done
WARNING: Found orphan containers (consul-demo-traffic-splitting_api_proxy_v1_1, consul-demo-traffic-splitting_web_envoy_1, consul-demo-traffic-splitting_consul_1, consul-demo-traffic-splitting_web_1, consul-demo-traffic-splitting_api_v1_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing consul-demo-traffic-splitting_api_proxy_v2_1 ... done
Removing consul-demo-traffic-splitting_api_v2_1 ... done
Network consul-demo-traffic-splitting_vpcbr is external, skipping
```
Then, youll stop and remove the containers and the network that you created in
the first docker compose command.
```shell-session
$ docker-compose down
Stopping consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Stopping consul-demo-traffic-splitting_web_envoy_1 ... done
Stopping consul-demo-traffic-splitting_consul_1 ... done
Stopping consul-demo-traffic-splitting_web_1 ... done
Stopping consul-demo-traffic-splitting_api_v1_1 ... done
Removing consul-demo-traffic-splitting_api_proxy_v1_1 ... done
Removing consul-demo-traffic-splitting_web_envoy_1 ... done
Removing consul-demo-traffic-splitting_consul_1 ... done
Removing consul-demo-traffic-splitting_web_1 ... done
Removing consul-demo-traffic-splitting_api_v1_1 ... done
Removing network consul-demo-traffic-splitting_vpcbr
```
## Summary
In this guide, we walked you through the steps required to perform Canary
deployments using traffic splitting and resolution.
Find out more about L7 traffic management settings in the
[documentation](/docs/connect/l7-traffic-management).
@ -1,329 +0,0 @@
---
name: Consul with Containers
content_length: 15
id: containers-guide
products_used:
- Consul
description: >-
HashiCorp provides an official Docker image for running Consul and this guide
demonstrates its basic usage.
level: Implementation
---
# Consul with Containers
In this guide, you will learn how to deploy two joined Consul agents, each running in a separate Docker container. You will also register a service and perform basic maintenance operations. The two Consul agents will form a small datacenter.
By following this guide you will learn how to:
1. Get the Docker image for Consul
1. Configure and run a Consul server
1. Configure and run a Consul client
1. Interact with the Consul agents
1. Perform maintenance operations (backup your Consul data, stop a Consul agent, etc.)
The guide is Docker-focused, but the principles you will learn apply to other container runtimes as well.
!> Security Warning This guide is not for production use. Please refer to the [Consul Reference Architecture](https://learn.hashicorp.com/tutorials/consul/reference-architecture) for Consul best practices and the [Docker Documentation](https://docs.docker.com/) for Docker best practices.
## Prerequisites
### Docker
You will need a local install of Docker running on your machine for this guide. You can find the instructions for installing Docker on your specific operating system [here](https://docs.docker.com/install/).
### Consul (Optional)
If you would like to interact with your containerized Consul agents using a local install of Consul, follow the instructions [here](/docs/install) and install the binary somewhere on your PATH.
## Get the Docker Image
First, pull the latest image. You will use Consul's official Docker image in this guide.
```shell-session
$ docker pull consul
```
Check the image was downloaded by listing Docker images that match `consul`.
```shell-session
$ docker images -f 'reference=consul'
REPOSITORY TAG IMAGE ID CREATED SIZE
consul latest c836e84db154 4 days ago 107MB
```
## Configure and Run a Consul Server
Next, you will use Docker command-line flags to start the agent as a server, configure networking, and bootstrap the datacenter when one server is up.
```shell-session
$ docker run \
-d \
-p 8500:8500 \
-p 8600:8600/udp \
--name=badger \
consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0
```
Since you started the container in detached mode, `-d`, the process will run in the background. You also set port mapping to your local machine as well as binding the client interface of your agent to 0.0.0.0. This allows you to work directly with the Consul datacenter from your local machine and to access Consul's UI and DNS over localhost. Finally, you are using Docker's default bridge network.
Note, the Consul Docker image sets up the Consul configuration directory at `/consul/config` by default. The agent will load any configuration files placed in that directory.
~> The configuration directory is **not** exposed as a volume and will not persist data. Consul uses it only during startup and does not store any state there.
To avoid mounting volumes or copying files to the container you can also save [configuration JSON](/docs/agent/options#configuration-files) to that directory via the environment variable `CONSUL_LOCAL_CONFIG`.
### Discover the Server IP Address
You can find the IP address of the Consul server by executing the `consul members` command inside of the `badger` container.
```shell-session
$ docker exec badger consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.4.4 2 dc1 <all>
```
## Configure and Run a Consul Client
Next, deploy a containerized Consul client and instruct it to join the server by giving it the server's IP address. Do not use detached mode, so you can reference the client logs during later steps.
```shell-session
$ docker run \
--name=fox \
consul agent -node=client-1 -join=172.17.0.2
==> Starting Consul agent...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Version: 'v1.4.4'
Node ID: '4b6da3c6-b13f-eba2-2b78-446ffa627633'
Node name: 'client-1'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 172.17.0.4 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
```
In a new terminal, check that the client has joined by executing the `consul members` command again in the Consul server container.
```shell-session
$ docker exec badger consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.4.3 2 dc1 <all>
client-1 172.17.0.3:8301 alive client 1.4.3 2 dc1 <default>
```
Now that you have a small datacenter, you can register a service and
perform maintenance operations.
## Register a Service
Start a service in a third container and register it with the Consul client. The basic service increments a number every time it is accessed and returns that number.
Pull the container and run it with port forwarding so that you can access it from your web browser by visiting [http://localhost:9001](http://localhost:9001).
```shell-session
$ docker pull hashicorp/counting-service:0.0.2
$ docker run \
-p 9001:9001 \
-d \
--name=weasel \
hashicorp/counting-service:0.0.2
```
Next, you will register the counting service with the Consul client by adding a service definition file called `counting.json` in the directory `/consul/config`.
```shell-session
$ docker exec fox /bin/sh -c "echo '{\"service\": {\"name\": \"counting\", \"tags\": [\"go\"], \"port\": 9001}}' >> /consul/config/counting.json"
```
Since the Consul client does not automatically detect changes in the
configuration directory, you will need to issue a reload command for the same container.
```shell-session
$ docker exec fox consul reload
Configuration reload triggered
```
If you go back to the terminal window where you started the client, you should see logs showing that the Consul client received the hangup signal, reloaded its configuration, and synced the counting service.
```shell
2019/07/01 21:49:49 [INFO] agent: Caught signal: hangup
2019/07/01 21:49:49 [INFO] agent: Reloading configuration...
2019/07/01 21:49:49 [INFO] agent: Synced service "counting"
```
### Use Consul DNS to Discover the Service
Now you can query Consul for the location of your service using the following dig command against Consul's DNS.
```shell-session
$ dig @127.0.0.1 -p 8600 counting.service.consul
; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 counting.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47570
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;counting.service.consul. IN A
;; ANSWER SECTION:
counting.service.consul. 0 IN A 172.17.0.3
;; ADDITIONAL SECTION:
counting.service.consul. 0 IN TXT "consul-network-segment="
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue Jul 02 09:02:38 PDT 2019
;; MSG SIZE rcvd: 104
```
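You can also look the service up over the HTTP API, which returns the registration details as JSON; this assumes the server container's port 8500 is still mapped to localhost as shown earlier.
```shell-session
$ curl http://localhost:8500/v1/catalog/service/counting
```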
You can also see your newly registered service in Consul's UI, [http://localhost:8500](http://localhost:8500).
![Consul UI with Registered Service](/img/consul-containers-ui-services.png 'Consul UI with Registered Service')
## Consul Container Maintenance Operations
### Accessing Containers
You can access a containerized Consul datacenter in several different ways.
#### Docker Exec
You can execute Consul commands directly inside of your Consul containers using `docker exec`.
```shell-session
$ docker exec <container_id> consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.5.2 2 dc1 <all>
client-1 172.17.0.3:8301 alive client 1.5.2 2 dc1 <default>
```
#### Docker Exec Attach
You can also issue commands inside of your container by opening an interactive shell and using the Consul binary included in the container.
```shell-session
$ docker exec -it <container_id> /bin/sh
/ # consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.5.2 2 dc1 <all>
client-1 172.17.0.3:8301 alive client 1.5.2 2 dc1 <default>
```
#### Local Consul Binary
If you have a local Consul binary in your PATH you can also export the `CONSUL_HTTP_ADDR` environment variable to point to the HTTP address of a remote Consul server. This will allow you to bypass `docker exec <container_id> consul <command>` and use `consul <command>` directly.
```shell-session
$ export CONSUL_HTTP_ADDR=<consul_server_ip>:8500
$ consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.5.2 2 dc1 <all>
client-1 172.17.0.3:8301 alive client 1.5.2 2 dc1 <default>
```
In this guide, you are binding your containerized Consul server's client address to 0.0.0.0, which allows you to communicate with your Consul datacenter using a local Consul install. By default, the client address is bound to localhost.
```shell-session
$ which consul
/usr/local/bin/consul
$ consul members
Node Address Status Type Build Protocol DC Segment
server-1 172.17.0.2:8301 alive server 1.5.2 2 dc1 <all>
client-1 172.17.0.3:8301 alive client 1.5.2 2 dc1 <default>
```
### Stopping, Starting, and Restarting Containers
The official Consul container supports stopping, starting, and restarting. To stop a container, run `docker stop`.
```shell-session
$ docker stop <container_id>
```
To start a container, run `docker start`.
```shell-session
$ docker start <container_id>
```
To do an in-memory reload, send a SIGHUP to the container.
```shell-session
$ docker kill --signal=HUP <container_id>
```
### Removing Servers from the Datacenter
As long as there are enough servers in the datacenter to maintain [quorum](/docs/internals/consensus#deployment-table), Consul's [autopilot](/docs/guides/autopilot) feature will handle removing servers whose containers were stopped. Autopilot's default settings are already configured correctly. If you override them, make sure that the following [settings](/docs/agent/options#autopilot) are appropriate.
- `cleanup_dead_servers` must be set to true to make sure that a stopped container is removed from the datacenter.
- `last_contact_threshold` should be reasonably small, so that dead servers are removed quickly.
- `server_stabilization_time` should be sufficiently large (on the order of several seconds) so that unstable servers are not added to the datacenter until they stabilize.
If the container running the currently-elected Consul server leader is stopped, a leader election will be triggered.
When a previously stopped server container is restarted using `docker start <container_id>`, and it is configured to obtain a new IP, autopilot will add it back to the set of Raft peers with the same node-id and the new IP address, after which it can participate as a server again.
### Backing-up Data
You can back-up your Consul datacenter using the [consul snapshot](/commands/snapshot) command.
```shell-session
$ docker exec <container_id> consul snapshot save backup.snap
```
This will leave the `backup.snap` snapshot file inside of your container. If you are not saving your snapshot to a [persistent volume](https://docs.docker.com/storage/volumes/) then you will need to use `docker cp` to move your snapshot to a location outside of your container.
```shell-session
$ docker cp <container_id>:backup.snap ./
```
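If you later need to restore the datacenter from that snapshot, a minimal sketch looks like the following; it assumes the snapshot file is present inside the container you are restoring to.
```shell-session
$ docker exec <container_id> consul snapshot restore backup.snap
```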
Users running the Consul Enterprise Docker containers can run the [consul snapshot agent](/commands/snapshot/agent) to save backups automatically. Consul Enterprise's snapshot agent also allows you to save snapshots to Amazon S3 and Azure Blob Storage.
### Environment Variables
You can add configuration by passing the configuration JSON via the environment variable `CONSUL_LOCAL_CONFIG`.
```shell-session
$ docker run \
-d \
-e CONSUL_LOCAL_CONFIG='{
"datacenter":"us_west",
"server":true,
"enable_debug":true
}' \
consul agent -server -bootstrap-expect=3
```
Setting `CONSUL_CLIENT_INTERFACE` or `CONSUL_BIND_INTERFACE` on `docker run` is equivalent to passing the `-client` flag (documented [here](/docs/agent/options#_client)) or `-bind` flag (documented [here](/docs/agent/options#_bind)) to Consul on startup.
Setting `CONSUL_ALLOW_PRIVILEGED_PORTS` runs `setcap` on the Consul binary, allowing it to bind to privileged ports. Note that not all Docker storage backends support this feature (notably AUFS).
```shell-session
$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul -dns-port=53 -recursor=8.8.8.8
```
## Summary
In this guide you learned to deploy a containerized Consul datacenter. You also learned how to deploy a containerized service and how to configure your Consul client to register that service with your Consul datacenter.
You can continue learning how to deploy a Consul datacenter in production by completing the [Day 1 track](/consul/datacenter-deploy/day1-deploy-intro). The track includes securing the datacenter with Access Control Lists and encryption, DNS configuration, and datacenter federation.
For additional reference documentation on the official Docker image for Consul, refer to the following websites:
- [Consul Documentation](/docs)
- [Docker Documentation](https://docs.docker.com/)
- [Consul @ Dockerhub](https://hub.docker.com/_/consul)
- [hashicorp/docker-consul GitHub Repository](https://github.com/hashicorp/docker-consul)
@ -1,208 +0,0 @@
---
name: '[Enterprise] Register and Discover Services within Namespaces'
content_length: 8
id: discovery-namespaces
products_used:
- Consul
description: In this guide you will register and discover services within a namespace.
level: Implementation
---
!> **Warning:** This guide is a draft and has not been fully tested.
!> **Warning:** Consul 1.7 is currently a beta release.
Namespaces allow multiple teams within the same organization to share the same
Consul datacenter(s) by separating services, key/value pairs, and other Consul
data per team. This provides operators with the ability to more easily run
Consul as a service. Namespaces also enable operators to [delegate ACL
management](/consul/namespaces/secure-namespaces).
Any service that is not registered in a namespace will be added to the `default`
namespace. This means that all services are namespaced in Consul 1.7 and newer,
even if the operator has not created any namespaces.
By the end of this guide, you will register two services in the Consul catalog:
one in the `default` namespace and one in an operator-configured namespace.
After you have registered the services, you will then use the Consul CLI, API
and UI to discover all the services registered in the Consul catalog.
## Prerequisites
To complete this guide you will need at least a [local dev
agent](/consul/getting-started/install) running Consul Enterprise 1.7 or newer.
Review the documentation for downloading the [Enterprise
binary](/docs/enterprise#applied-after-bootstrapping).
You can also use an existing Consul datacenter that is running Consul Enterprise
1.7 or newer.
You should have at least one namespace configured. Review the [namespace
management]() documentation or execute the following command to create a
namespace.
```shell-session
$ consul namespace create app-team
```
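You can confirm the namespace was created by listing all namespaces; this is an optional check against your local agent.
```shell-session
$ consul namespace list
```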
## Register services in namespaces
You can register services in a namespace by using your existing workflow and
adding namespace information to the registration. There are two ways to add a
service to a namespace:
- adding the `namespace` option to the service registration file.
- using the `namespace` flag with the API or CLI at registration time.
If you would like to migrate an existing service into a new namespace,
re-register the service with the new namespace information.
### Default namespace
To register a service in the `default` namespace, use your existing registration
workflow; you do not need to add namespace information. In the example below,
you will register the `mysql` service in the default namespace.
First, create a service registration file for the MySQL service and its sidecar
proxy.
```hcl
service {
  name = "mysql"
  port = 9003

  connect {
    sidecar_service {}
  }
}
```
Next, register the service and its sidecar proxy using the Consul CLI by
specifying the registration file.
```shell-session
$ consul services register mysql.hcl
```
### App-team namespace
To register a service in a user-defined namespace, include the namespace in the
registration file, or pass it with a flag at registration time. In this guide,
we will include the namespace in the file.
First, create the service registration file named `wordpress.hcl`. Paste in the
following registration, which includes the service name and port, and a sidecar
proxy, along with the namespace.
```hcl
service {
  name = "wordpress"
  port = 9003

  connect {
    sidecar_service {}
  }

  namespace = "app-team"
}
```
Next register the service and its sidecar proxy.
```shell-session
$ consul services register -namespace app-team wordpress.hcl
```
## Discover services
You can discover namespaced services using all the usual methods for service
discovery in Consul: the CLI, web UI, DNS interface, and HTTP API.
### Consul CLI
To get a list of services in the `default` namespace, use the `consul catalog` CLI
command. You do not need to add a namespace flag to discover services in the
`default` namespace.
```shell-session
$ consul catalog services
consul
mysql
mysql-proxy
```
Notice that you do not see services that are registered in the app-team
namespace.
Add the `-namespace` flag to discover services within a user-created namespace.
In the example below, you will use the `-namespace` flag with the CLI to
discover all services registered in the app-team namespace.
```shell-session
$ consul catalog services -namespace app-team
consul
wordpress
wordpress-proxy
```
Notice that you do not see services that are registered in the default
namespace. To discover all services in the catalog, you will need to query all
Consul namespaces.
```shell-session
$ consul catalog services
consul
mysql
mysql-proxy
$ consul catalog services -namespace app-team
consul
wordpress
wordpress-proxy
```
### Consul UI
You can also view namespaced services in the Consul UI. Select a namespace using
the drop-down menu in the top navigation. Then go to the “Services”
tab to see the services within the namespace.
Before you select a namespace the UI will list the services in the `default`
namespace.
![IMAGE FROM RFC! REPLACE ME AT BETA LAUNCH](/static/img/consul/namespaces/consul-namespace-dropdown.png)
### DNS Interface
~> **Note:** To default to the `namespace` parameter in the DNS query, you must
set the `prefer_namespace` option to `true` in the [agent's configuration]().
The new query structure will be `service.namespace.consul`. This will disable
the ability to query by datacenter only. However, you can add both namespace and
datacenter to the query, `service.namespace.datacenter.consul`.
To discover the location of service instances, you can use the DNS interface.
```shell-session
$ dig @127.0.0.1 -p 8600 wordpress.service.app-team.consul
<output should show one service>
```
If you dont specify a namespace in the query, you will get results from the
default namespace.
```shell-session
$ dig @127.0.0.1 -p 8600 wordpress.service.consul
<output should show no services>
```
### Consul HTTP API
The Consul HTTP API is more verbose than the DNS API; it allows you to discover
the service locations and additional metadata. To discover service information
within a namespace, add the `ns=` query parameter to the call.
```shell
curl http://127.0.0.1:8500/v1/catalog/service/wordpress?ns=app-team
<output shows one service>
```
## Summary
In this guide, you registered two services: the WordPress service in the
app-team namespace and the MySQL service in the `default` namespace. You then
used the Consul CLI to discover services in both namespaces.
You can use ACLs to secure access to data, including services, in namespaces.
After ACLs are enabled, you will be able to restrict access to the namespaces
and all the data registered in that namespace.
@ -1,202 +0,0 @@
---
layout: docs
page_title: Deploy Consul with Kubernetes
description: Deploy Consul on Kubernetes with the official Helm chart.
---
# Deploy Consul with Kubernetes
In this guide you will deploy a Consul datacenter with the official Helm chart.
You do not need to update any values in the Helm chart for a basic
installation. However, you can create a values file with parameters to allow
access to the Consul UI.
~> **Security Warning** This guide is not for production use. By default, the
chart will install an insecure configuration of Consul. Please refer to the
[Kubernetes documentation](/docs/platform/k8s)
to determine how you can secure Consul on Kubernetes in production.
Additionally, it is highly recommended to use a properly secured Kubernetes
cluster or make sure that you understand and enable the recommended security
features.
To complete this guide successfully, you should have an existing Kubernetes
cluster, and locally configured [Helm](https://helm.sh/docs/using_helm/) and
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). If you do not have an
existing Kubernetes cluster you can use the [Minikube with Consul guide](/docs/guides/minikube) to get started
with Consul on Kubernetes.
## Deploy Consul
You can deploy a complete Consul datacenter using the official Helm chart. By
default, the chart will install three Consul
servers and a client on each Kubernetes node. You can review the
[Helm chart
values](/docs/platform/k8s/helm#configuration-values)
to learn more about the default settings.
### Download the Helm Chart
First, you will need to clone the official Helm chart from HashiCorp's Github
repo.
```shell-session
$ git clone https://github.com/hashicorp/consul-helm.git
```
You do not need to update the Helm chart before deploying Consul; it comes with
reasonable defaults. Review the [Helm chart
documentation](/docs/platform/k8s/helm) to learn more
about the chart.
### Helm Install Consul
To deploy Consul you will need to be in the same directory as the chart.
```shell-session
$ cd consul-helm
```
Now, you can deploy Consul using `helm install`. This will deploy three servers
and agents on all Kubernetes nodes. The process should be quick, less than 5
minutes.
```shell-session
$ helm install ./consul-helm
NAME: mollified-robin
LAST DEPLOYED: Mon Feb 25 15:57:18 2019
NAMESPACE: default
STATUS: DEPLOYED
NAME READY STATUS RESTARTS AGE
mollified-robin-consul-25r6z 0/1 ContainerCreating 0 0s
mollified-robin-consul-4p6hr 0/1 ContainerCreating 0 0s
mollified-robin-consul-n82j6 0/1 ContainerCreating 0 0s
mollified-robin-consul-server-0 0/1 Pending 0 0s
mollified-robin-consul-server-1 0/1 Pending 0 0s
mollified-robin-consul-server-2 0/1 Pending 0 0s
```
The output above has been reduced for readability. However, you can see that
there are three Consul servers and three Consul clients on this three node
Kubernetes cluster.
## Access Consul UI
To access the UI you will need to update the `ui` values in the Helm chart.
Alternatively, if you do not wish to upgrade your cluster, you can set up [port
forwarding](/docs/platform/k8s/run#viewing-the-consul-ui) with
`kubectl`.
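As a quick sketch of the port-forwarding approach, you can forward the HTTP port from one of the server pods; the pod name below assumes the `mollified-robin` release from the install output above, so substitute your own release name. The UI is then available at [http://localhost:8500](http://localhost:8500) for as long as the command runs.
```shell-session
$ kubectl port-forward mollified-robin-consul-server-0 8500:8500
```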
### Create Values File
First, create a values file that can be passed on the command line when
upgrading.
```yaml
# values.yaml
global:
datacenter: hashidc1
syncCatalog:
enabled: true
ui:
service:
type: 'LoadBalancer'
server:
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "consul.name" . }}
release: "{{ .Release.Name }}"
component: server
topologyKey: kubernetes.io/hostname
```
This file renames your datacenter, enables catalog sync, sets up a load
balancer service for the UI, and enables [affinity](/docs/platform/k8s/helm#v-server-affinity) to allow only one
Consul pod per Kubernetes node.
The catalog sync parameters will allow you to see
the Kubernetes services in the Consul UI.
### Initiate Rolling Upgrade
Finally, initiate the
[upgrade](/docs/platform/k8s/run#upgrading-consul-on-kubernetes)
with `helm upgrade` and the `-f` flag that passes in your new values file. This
process should also be quick, less than a minute.
```shell-session
$ helm upgrade -f values.yaml mollified-robin ./consul-helm
```
You can now use `kubectl get services` to discover the external IP of your Consul UI.
```shell-session
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 11d
kubernetes ClusterIP 122.16.14.1 <none> 443/TCP 137d
mollified-robin-consul-dns ClusterIP 122.16.14.25 <none> 53/TCP,53/UDP 13d
mollified-robin-consul-server ClusterIP None <none> 8500/TCP 13d
mollified-robin-consul-ui LoadBalancer 122.16.31.395 36.276.67.195 80:32718/TCP 13d
```
Additionally, you can use `kubectl get pods` to view the new catalog sync
process. The [catalog sync](/docs/platform/k8s/helm#v-synccatalog) process will sync
Consul and Kubernetes services bidirectionally by
default.
```shell-session
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mollified-robin-consul-d8mnp 1/1 Running 0 15d
mollified-robin-consul-p4m89 1/1 Running 0 15d
mollified-robin-consul-qclqc 1/1 Running 0 15d
mollified-robin-consul-server-0 1/1 Running 0 15d
mollified-robin-consul-server-1 1/1 Running 0 15d
mollified-robin-consul-server-2 1/1 Running 0 15d
mollified-robin-consul-sync-catalog-f75cd5846-wjfdk 1/1 Running 0 13d
```
The service should have `consul-ui` appended to the deployment name. Note, you
do not need to specify a port when accessing the dashboard.
## Access Consul
In addition to accessing Consul with the UI, you can manage Consul with the
HTTP API or by directly connecting to the pod with `kubectl`.
### Kubectl
To access the pod and data directory you can exec into the pod with `kubectl` to start a shell session.
```shell-session
$ kubectl exec -it mollified-robin-consul-server-0 /bin/sh
```
This will allow you to navigate the file system and run Consul CLI commands on
the pod. For example you can view the Consul members.
```shell-session
$ consul members
Node Address Status Type Build Protocol DC Segment
mollified-robin-consul-server-0 172.20.2.18:8301 alive server 1.4.2 2 hashidc1 <all>
mollified-robin-consul-server-1 172.20.0.21:8301 alive server 1.4.2 2 hashidc1 <all>
mollified-robin-consul-server-2 172.20.1.18:8301 alive server 1.4.2 2 hashidc1 <all>
gke-tier-2-cluster-default-pool-leri5 172.20.1.17:8301 alive client 1.4.2 2 hashidc1 <default>
gke-tier-2-cluster-default-pool-gnv4 172.20.2.17:8301 alive client 1.4.2 2 hashidc1 <default>
gke-tier-2-cluster-default-pool-zrr0 172.20.0.20:8301 alive client 1.4.2 2 hashidc1 <default>
```
### Consul HTTP API
You can use the Consul HTTP API by communicating to the local agent running on
the Kubernetes node. You can read the
[documentation](/docs/platform/k8s/run#accessing-the-consul-http-api)
if you are interested in learning more about using the Consul HTTP API with Kubernetes.
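As one possible sketch, you could also port-forward a server pod and query the catalog from your workstation; the pod name assumes the `mollified-robin` release shown earlier.
```shell-session
$ kubectl port-forward mollified-robin-consul-server-0 8500:8500 &
$ curl http://127.0.0.1:8500/v1/catalog/services
```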
## Summary
In this guide, you deployed a Consul datacenter in Kubernetes using the
official Helm chart. You also configured access to the Consul UI. To learn more
about deploying applications that can use Consul's service discovery and
Connect, read the example in the [Minikube with Consul
guide](/docs/guides/minikube#step-2-deploy-custom-applications).
@ -1,369 +0,0 @@
---
layout: docs
page_title: Layer 7 Observability with Kubernetes and Consul Connect
description: |-
Collect and visualize layer 7 metrics from services in your Kubernetes cluster
using Consul Connect, Prometheus, and Grafana.
---
A service mesh is made up of proxies deployed locally alongside each service
instance, which control network traffic between their local instance and other
services on the network. These proxies "see" all the traffic that runs through
them, and in addition to securing that traffic, they can also collect data about
it. Starting with version 1.5, Consul Connect is able to configure Envoy proxies
to collect layer 7 metrics including HTTP status codes and request latency, along
with many others, and export those to monitoring tools like Prometheus.
In this guide, you will deploy a basic metrics collection and visualization
pipeline on a Kubernetes cluster using the official Helm charts for Consul,
Prometheus, and Grafana. This pipeline will collect and display metrics from a
demo application.
-> **Tip:** While this guide shows you how to deploy a metrics pipeline on
Kubernetes, all the technologies the guide uses are platform agnostic;
Kubernetes is not necessary to collect and visualize layer 7 metrics with Consul
Connect.
Learning Objectives:
- Configure Consul Connect with metrics using Helm
- Install Prometheus and Grafana using Helm
- Install and start the demo application
- Collect metrics
## Prerequisites
If you already have a Kubernetes cluster with Helm and kubectl up and running,
you can start on the demo right away. If not, set up a Kubernetes cluster using
your favorite method that supports persistent volume claims, or install and
start [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/). If
you do use Minikube, you may want to start it with a little bit of extra memory.
```shell-session
$ minikube start --memory 4096
```
You will also need to install
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl),
and both install and initialize
[Helm](https://helm.sh/docs/using_helm/#installing-helm) by following their
official instructions.
If you already had Helm installed, check that you have up
to date versions of the Grafana, Prometheus, and Consul charts. You can update
all your charts to the latest versions by running `helm repo update`.
Clone the GitHub repository that contains the configuration files you'll use
while following this guide, and change directories into it. We'll refer to this
directory as your working directory, and you'll run the rest of the commands in
this guide from inside it.
```shell-session
$ git clone https://github.com/hashicorp/consul-k8s-l7-obs-guide.git
$ cd consul-k8s-l7-obs-guide
```
## Deploy Consul Connect Using Helm
Once you have set up the prerequisites, you're ready to install Consul. Start by
cloning the official Consul Helm chart into your working directory.
```shell-session
$ git clone https://github.com/hashicorp/consul-helm.git
```
Open the file in your working directory called `consul-values.yaml`. This file
will configure the Consul Helm chart to:
- specify a name for your Consul datacenter
- enable the Consul web UI
- enable secure communication between pods with Connect
- configure the Consul settings necessary for layer 7 metrics collection
- specify that this Consul cluster should run one server
- enable metrics collection on servers and agents so that you can monitor the
Consul cluster itself
You can override many of the values in Consul's values file using annotations on
specific services. For example, later in the guide you will override the
centralized configuration of `defaultProtocol`.
```yaml
# name your datacenter
global:
datacenter: dc1
server:
# use 1 server
replicas: 1
bootstrapExpect: 1
disruptionBudget:
enabled: true
maxUnavailable: 0
client:
enabled: true
# enable grpc on your client to support consul connect
grpc: true
ui:
enabled: true
connectInject:
enabled: true
# inject an envoy sidecar into every new pod,
# except for those with annotations that prevent injection
default: true
# these settings enable L7 metrics collection and are new in 1.5
centralConfig:
enabled: true
# set the default protocol (can be overwritten with annotations)
defaultProtocol: 'http'
# tell envoy where to send metrics
proxyDefaults: |
{
"envoy_dogstatsd_url": "udp://127.0.0.1:9125"
}
```
!> **Warning:** By default, the chart will install an insecure configuration of
Consul. This provides a less complicated out-of-box experience for new users but
is not appropriate for a production setup. Make sure that your Kubernetes
cluster is properly secured to prevent unwanted access to Consul, or that you
understand and enable the
[recommended Consul security features](/docs/internals/security).
Currently, some of these features are not supported in the Helm chart and
require additional manual configuration.
Now install Consul in your Kubernetes cluster and give Kubernetes a name for
your Consul installation. The output will be a list of all the Kubernetes
resources created (abbreviated in the code snippet).
```shell-session
$ helm install -f consul-values.yaml --name l7-guide ./consul-helm
NAME: consul
LAST DEPLOYED: Wed May 1 16:02:40 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
```
Check that Consul is running in your Kubernetes cluster via the Kubernetes
dashboard or CLI. If you are using Minikube, the below command will run in your
current terminal window and automatically open the dashboard in your browser.
```shell-session
$ minikube dashboard
```
Open a new terminal tab to let the dashboard run in the current one, and change
directories back into `consul-k8s-l7-obs-guide`. Next, forward the port for the
Consul UI to localhost:8500 and navigate to it in your browser. Once you run the
below command it will continue to run in your current terminal window for as
long as it is forwarding the port.
```shell-session
$ kubectl port-forward l7-guide-consul-server-0 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500
Handling connection for 8500
```
Let the consul dashboard port forwarding run and open a new terminal tab to the
`consul-k8s-l7-obs-guide` directory.
## Deploy the Metrics Pipeline
In this guide, you will use Prometheus and Grafana to collect and visualize
metrics. Consul Connect can integrate with a variety of other metrics tooling as
well.
### Deploy Prometheus with Helm
You'll follow a similar process as you did with Consul to install Prometheus via
Helm. First, open the file named `prometheus-values.yaml` that configures the
Prometheus Helm chart.
The file specifies how often Prometheus should scrape for metrics, and which
endpoints it should scrape from. By default, Prometheus scrapes all the
endpoints that Kubernetes knows about, even if those endpoints don't expose
Prometheus metrics. To prevent Prometheus from scraping these endpoints
unnecessarily, the values file includes some relabel configurations.
Install the official Prometheus Helm chart using the values in
`prometheus-values.yaml`.
```shell-session
$ helm install -f prometheus-values.yaml --name prometheus stable/prometheus
NAME: prometheus
LAST DEPLOYED: Wed May 1 16:09:48 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
```
The output above has been abbreviated; you will see all the Kubernetes resources
that the Helm chart created. Once Prometheus has come up, you should be able to
see your new services on the Minikube dashboard and in the Consul UI. This
might take a short while.
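To check on the Prometheus pods from the command line, a minimal sketch follows; the `app=prometheus` label is an assumption based on the stable/prometheus chart's default labels.
```shell-session
$ kubectl get pods -l "app=prometheus"
```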
### Deploy Grafana with Helm
Installing Grafana will follow a similar process. Open and look through the file
named `grafana-values.yaml`. It configures Grafana to use Prometheus as its
datasource.
Use the official Helm chart to install Grafana with your values file.
```shell-session
$ helm install -f grafana-values.yaml --name grafana stable/grafana
NAME: grafana
LAST DEPLOYED: Wed May 1 16:57:11 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
...
NOTES:
1. Get your 'admin' user password by running:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
grafana.default.svc.cluster.local
Get the Grafana URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
```
Again, the above output has been abbreviated. At the bottom of your terminal
output are shell-specific instructions to access your Grafana UI and log in,
displayed as a numbered list. Accessing Grafana involves:
1. Getting the secret that serves as your Grafana password
1. Forwarding the Grafana UI to localhost:3000, which will not succeed until
Grafana is running
1. Visiting the UI and logging in
Once you have logged into the Grafana UI, hover over the dashboards icon (four
squares in the left hand menu) and then click the "manage" option. This will
take you to a page that gives you some choices about how to upload Grafana
dashboards. Click the black "Import" button on the right hand side of the
screen.
![Add a dashboard using the Grafana GUI](/img/consul-grafana-add-dash.png)
Open the file called `overview-dashboard.json` and copy the contents into the
json window of the Grafana UI. Click through the rest of the options, and you
will end up with a blank dashboard, waiting for data to display.
### Deploy a Demo Application on Kubernetes
Now that your monitoring pipeline is set up, deploy a demo application that will
generate data. We will be using Emojify, an application that recognizes faces in
an image and pastes emojis over them. The application consists of a few
different services and automatically generates traffic and HTTP error codes.
All the files defining Emojify are in the `app` directory. Open `app/cache.yml`
and take a look at the file. Most of the services that make up Emojify communicate
over HTTP, but the cache service uses gRPC. In the annotations section of the
file you'll see where `consul.hashicorp.com/connect-service-protocol` specifies
gRPC, overriding the `defaultProtocol` of HTTP that we centrally configured in
Consul's value file.
At the bottom of each file defining part of the Emojify app, notice the block
defining a `prometheus-statsd` pod. These pods translate the metrics that Envoy
exposes to a format that Prometheus can scrape. They won't be necessary anymore
once Consul Connect becomes compatible with Envoy 1.10. Apply the configuration
to deploy Emojify into your cluster.
```shell-session
$ kubectl apply -f app
```
Emojify will take a little while to deploy. Once it's running you can check that
it's healthy by taking a look at your Kubernetes dashboard or Consul UI. Next,
visit the Emojify UI. This will be located at the IP address of the host where
the ingress server is running, at port 30000. If you're using Minikube you can
find the UI with the following command.
```shell-session
$ minikube service emojify-ingress --url
http://192.168.99.106:30000
```
Test the application by emojifying a picture. You can do this by pasting the
following URL into the URL bar and clicking the submit button. (We provide a
demo URL because Emojify can be picky about processing some image URLs if they
don't link directly to the actual picture.)
`https://emojify.today/pictures/1.jpg`
Now that you know the application is working, start generating automatic load so
that you will have some interesting metrics to look at.
```shell-session
$ kubectl apply -f traffic.yaml
```
## Collect Application Metrics
Envoy exposes a huge number of
[metrics](https://www.envoyproxy.io/docs/envoy/v1.10.0/operations/stats_overview),
but you will probably only want to monitor or alert on a subset of them. Which
metrics are important to monitor will depend on your application. For this
getting-started guide we have preconfigured an Emojify-specific Grafana
dashboard with a couple of basic metrics, but you should systematically consider
what others you will need to collect as you move from testing into production.
### Review Dashboard Metrics
Now that you have metrics flowing through your pipeline, navigate back to your
Grafana dashboard at `localhost:3000`. The top row of the dashboard displays
general metrics about the Emojify application as a whole, including request and
error rates. Although changes in these metrics can reflect application health
issues once you understand their baseline levels, they don't provide enough
information to diagnose specific issues.
The following rows of the dashboard report on some of the specific services that
make up the emojify application: the website, API, and cache services. The
website and API services show request count and response time, while the cache
reports on request count and methods.
## Clean up
If you've been using Minikube, you can tear down your environment by running
`minikube delete`.
If you want to get rid of the configurations files and Consul Helm chart,
recursively remove the `consul-k8s-l7-obs-guide` directory.
```shell-session
$ cd ..
$ rm -rf consul-k8s-l7-obs-guide
```
## Summary
In this guide, you set up layer 7 metrics collection and visualization in a
Minikube cluster using Consul Connect, Prometheus, and Grafana, all deployed via
Helm charts. Because all of these programs can run outside of Kubernetes, you
can set this pipeline up in any environment or collect metrics from workloads
running on mixed infrastructure.
To learn more about the configuration options in Consul that enable layer 7
metrics collection with or without Kubernetes, refer to [our
documentation](/docs/connect/proxies/envoy). For more information on
centrally configuring Consul, take a look at the [centralized configuration
documentation](/docs/agent/config-entries).
@ -1,283 +0,0 @@
---
name: Consul-Kubernetes Deployment Guide
content_length: 14
id: kubernetes-production-deploy
products_used:
- Consul
description: >-
This guide covers the necessary steps to install and configure a new Consul
cluster on Kubernetes.
level: Advanced
---
This guide covers the necessary steps to install and configure a new Consul
cluster on Kubernetes, as defined in the [Consul Reference Architecture
guide](/consul/day-1-operations/kubernetes-reference#consul-datacenter-deployed-in-kubernetes).
By the end of this guide, you will be able to identify the installation
prerequisites, customize the Helm chart to fit your environment requirements,
and interact with your new Consul cluster.
~> You should have the following configured before starting this guide: Helm
installed and configured locally, tiller running in the Kubernetes cluster, and
the Kubernetes CLI configured.
## Configure Kubernetes Permissions to Deploy Consul
Before deploying Consul, you will need to create a new Kubernetes service
account with the correct permissions and to authenticate it on the command
line. You will need Kubernetes operators permissions to create and modify
policies, deploy services, access the Kubernetes dashboard, create secrets, and
create RBAC objects. You can find documentation for RBAC and service accounts
for the following cloud providers.
- [AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal)
- [EKS](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
- [GCP](https://console.cloud.google.com/iam-admin/serviceaccounts)
Note, Consul can be deployed on any properly configured Kubernetes cluster in
the cloud or on premises.
Once you have a service account, you will also need to add a permission to
deploy the helm chart. This is done with the `clusterrolebinding` method.
```shell-session
$ kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
```
Finally, you may need to create Kubernetes secrets to store Consul data. You
can reference these secrets in the customized Helm chart values file.
- If you have purchased Enterprise Consul, the enterprise license file should be
used with the official image, `hashicorp/consul-enterprise:1.5.0-ent`.
- Enable
[encryption](/docs/agent/encryption#gossip-encryption) to secure gossip traffic within the Consul cluster.
~> Note, depending on your environment, the previous secrets may not be
necessary.
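As a sketch of creating the gossip encryption secret mentioned in the list above, you could generate a key with `consul keygen` and store it in a Kubernetes secret; the secret name `encrypt-key` and key `key` are assumptions chosen to match the example values file later in this guide.
```shell-session
$ kubectl create secret generic encrypt-key --from-literal=key=$(consul keygen)
```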
## Configure Helm Chart
Now that you have prepared your Kubernetes cluster, you can customize the Helm
chart. First, you will need to download the latest official Helm chart.
```shell-session
$ git clone https://github.com/hashicorp/consul-helm.git
```
The `consul-helm` directory will contain a `values.yaml` file with example
parameters. You can update this file to customize your Consul deployment. Below
we detail some of the parameters you should customize and provide an example
file, however you should consider your particular production needs when
configuring your chart.
### Global Values
The global values will affect all the other parameters in the chart.
To enable all of the Consul components in the Helm chart, set `enabled` to
`true`. This means servers, clients, Consul DNS, and the Consul UI will be
installed with their defaults. You should also set the following global
parameters based on your specific environment requirements.
- `image` is the name and tag of the Consul Docker image.
- `imagek8s` is the name and tag of the Docker image for the consul-k8s binary.
- `datacenter` the name of your Consul datacenter.
- `domain` the domain Consul uses for DNS queries.
For security, set the `bootstrapACLs` parameter to true. This will enable
Kubernetes to initially set up Consul's [ACL
system](/docs/acl/acl-system).
Read the Consul Helm chart documentation to review all the [global
parameters](/docs/platform/k8s/helm#v-global).
### Consul UI
To enable the Consul web UI update the `ui` section to your values file and set
`enabled` to `true`.
Note, you can also set up a [loadbalancer
resource](https://github.com/hashicorp/demo-consul-101/tree/master/k8s#implement-load-balancer)
or other service type in Kubernetes to make it easier to access the UI.
### Consul Servers
For production deployments, you will need to deploy [3 or 5 Consul
servers](/docs/internals/consensus#deployment-table)
for quorum and failure tolerance. For most deployments, 3 servers are adequate.
In the server section set both `replicas` and `bootstrapExpect` to 3. This will
deploy three servers and cause Consul to wait to perform leader election until
all three are healthy. The `resources` will depend on your environment; in the
example at the end of the guide, the resources are set for a large environment.
#### Affinity
To ensure the Consul servers are placed on different Kubernetes nodes, you will
need to configure affinity. Otherwise, the failure of one Kubernetes node could
cause the loss of multiple Consul servers, and result in quorum loss. By
default, the example `values.yaml` has affinity configured correctly.
#### Enterprise License
If you have an [Enterprise
license](https://www.hashicorp.com/products/consul/enterprise) you should
reference the Kubernetes secret in the `enterpriseLicense` parameter.
Read the Consul Helm chart documentation to review all the [server
parameters](/docs/platform/k8s/helm#v-server)
### Consul Clients
A Consul client is deployed on every Kubernetes node, so you do not need to
specify the number of clients for your deployments. You will need to specify
resources and enable gRPC. The resources in the example at the end of this guide
should be
sufficient for most production scenarios since Consul clients are designed for
horizontal scalability. Enabling `grpc` enables the GRPC listener on port 8502
and exposes it to the host. It is required to use Consul Connect.
Read the Consul Helm chart documentation to review all the [client
parameters](/docs/platform/k8s/helm#v-client)
### Consul Connect Injection Security
Even though you enabled Consul server communication over Connect in the server section, you will also
need to enable `connectInject` by setting `enabled` to `true`. In the
`connectInject` section you will also configure security features. Enabling the
`default` parameter will allow the injector to automatically inject the Connect
sidecar into all pods. If you would prefer to manually annotate which pods to inject, you
can set this to false. Setting the `aclBindingRuleSelector` parameter to `serviceaccount.name!=default` ensures that new services do not all receive the
same token if you are only using a default service account. This setting is
only necessary if you have enabled ACLs in the global section.
Read more about the [Connect Inject
parameters](/docs/platform/k8s/helm#v-connectinject).
## Complete Example
Your finished values file should resemble the following example. For more
complete descriptions of all the available parameters see the `values.yaml`
file provided with the Helm chart and the [reference
documentation](/docs/platform/k8s/helm).
```yaml
# Configure global settings in this section.
global:
  # Enable all the components within this chart by default.
  enabled: true
  # Specify the Consul and consul-k8s images to use
  image: 'consul:1.5.0'
  imagek8s: 'hashicorp/consul-k8s:0.8.1'
  domain: consul
  datacenter: primarydc
  # Bootstrap ACLs within Consul. This is highly recommended.
  bootstrapACLs: true
  # Gossip encryption
  gossipEncryption:
    secretName: "encrypt-key"
    secretKey: "key"

# Configure your Consul servers in this section.
server:
  enabled: true
  connect: true
  # Specify three servers that wait till all are healthy to bootstrap the Consul cluster.
  replicas: 3
  bootstrapExpect: 3
  # Specify the resources that servers request for placement. These values will serve a large environment.
  resources: |
    requests:
      memory: "32Gi"
      cpu: "4"
      disk: "50Gi"
    limits:
      memory: "32Gi"
      cpu: "4"
      disk: "50Gi"
  # If using Enterprise, reference the Kubernetes secret that holds your license here.
  enterpriseLicense:
    secretName: 'consul-license'
    secretKey: 'key'
  # Prevent Consul servers from co-location on Kubernetes nodes.
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "consul.name" . }}
              release: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

# Configure Consul clients in this section.
client:
  enabled: true
  # Specify the resources that clients request for deployment.
  resources: |
    requests:
      memory: "8Gi"
      cpu: "2"
      disk: "15Gi"
    limits:
      memory: "8Gi"
      cpu: "2"
      disk: "15Gi"
  grpc: true

# Enable and configure the Consul UI.
ui:
  enabled: true

# Configure security for Consul Connect pod injection.
connectInject:
  enabled: true
  default: true
  namespaceSelector: 'my-namespace'
  aclBindingRuleSelector: "serviceaccount.name!=default"
```
## Deploy Consul
Now that you have customized the `values.yaml` file, you can deploy Consul with
Helm. This should only take a few minutes. The Consul pods should appear in the
Kubernetes dashboard immediately and you can monitor the deployment process
there.
```shell-session
$ helm install ./consul-helm -f values.yaml
```
To check the deployment process on the command line you can use `kubectl`.
```shell-session
$ kubectl get pods
```
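For example, you could watch the pods created by the chart until they are all running. This assumes the chart labels its pods with `app: consul`, as shown in the affinity example above.

```shell-session
$ kubectl get pods --selector app=consul --watch
```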
## Summary
In this guide, you configured Consul, using the Helm chart, for a production
environment. This involved ensuring that your cluster had a properly
distributed server cluster, specifying enough resources for your agents,
securing the cluster with ACLs and gossip encryption, and enabling other Consul
functionality including Connect and the Consul UI.
Now you can interact with your Consul cluster through the UI or CLI.
If you exposed the UI using a load balancer it will be available at the
`LoadBalancer Ingress` IP address and `Port` that is output from the following
command. Note, you will need to replace `consul-server` with the server service name
from your cluster.
```shell-session
$ kubectl describe services consul-server
```
To access the Consul CLI, open a terminal session using the Kubernetes CLI.
```shell-session
$ kubectl exec -it <pod name> -- /bin/ash
```
To learn more about how to interact with your Consul cluster or use it for
service discovery, configuration, or segmentation, try one of Learn's
[Operations or Development tracks](/consul/#advanced). Follow the [Security and
Networking track](/consul/?track=security-networking#security-networking) to
learn more about securing your Consul cluster.
@ -1,151 +0,0 @@
---
name: Managing ACL Policies
content_length: 15
id: managing-acl-policies
products_used:
- Consul
description: >-
In this guide, you'll learn how to discover the minimum privileges required to
complete operations within your Consul datacenter and how to manage access
using the operator-only implementation method.
level: Implementation
---
This guide is for Operators responsible for creating and managing ACL tokens for a Consul datacenter. It includes several recommendations on how to discover the minimum privileges required to complete operations. Throughout the guide we provide examples and use cases that you can adapt to your environment; however, it does not include environment-specific recommendations. After completing this guide, you will have a better understanding of how to effectively manage ACL policies and tokens.
We expect operators to automate the policy and token generation process in production environments. Additionally, if you are using a container orchestrator, the process will vary even though the concepts in this guide will still be applicable. If you are using the official Consul-Kubernetes Helm chart to deploy Consul, use the [authentication method documentation](/docs/acl/auth-methods) instead of generating policies manually or automating the methods here.
## Prerequisites
We provide high-level recommendations in this guide, however, we will not describe the command by command token generation process. To learn how to create tokens, read the [ACL bootstrapping guide](https://learn.hashicorp.com/tutorials/consul/access-control-setup-production).
This guide assumes the `default_policy` of `deny` is set on all agents, in accordance to the [security model documentation](/docs/internals/security#secure-configuration).
## Security and Usability
The examples in this guide illustrate how to create multiple policies that can be used to accomplish the same task. For example, using an exact-match resource rule is the most secure approach; it grants the least privileges necessary to accomplish the task. Generally, creating policies and tokens with the least privileges will result in more policy definitions. Alternatively, for a simplified process, prefix resource rules can apply to zero-to-many objects. The trade-off of a less complicated token creation process is a wider potential blast radius on token or workload compromise.
## Discover Required Privileges
After bootstrapping the ACL system and configuring Consul agents with tokens, you will need to create tokens to complete any additional task within the datacenter including registering services.
Before discovering the minimum privileges, it's important to understand the basic components of a token. A rule is a specific privilege and the basic unit of a token. Rules are combined to create policies. There are two main parts of a rule: the resource and the policy disposition. The resource is the object that the rule applies to, and the policy disposition dictates the privileges. The example below applies to any service object named "web". The policy disposition grants read privileges.
![ACL Rule Diagram](/static/img/consul/ACL-rule.png 'ACL Diagram with rules')
To discover the minimum privileges required for a specific operation, we have three recommendations.
First, focus on the data in your environment that needs to be secured. Ensure your sensitive data has policies that are specific and limited. Since policies can be combined to create tokens, you will usually write more policies for sensitive data. Sensitive data could be a specific application or a set of values in the key-value store.
Second, reference the Consul docs, both the [rules page](/docs/acl/acl-rules) and [API pages](/api), often to understand the required privileges for any given operation.
The rules documentation explains the 11 rule resources. The following four resource types are critical for any operating datacenter with ACLs enabled.
| Rule | Summary |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| `acl` | ACL rules grant privileges for ACL operations, including creating, updating, and viewing tokens and policies. |
| `node` and `node_prefix` | Node rules grant privileges for node-level registration, including adding agents to the datacenter and catalog. |
| `service` and `service_prefix` | Service rules grant privileges for service-level registration, including adding services to the catalog. |
| `operator` | Operator rules grant privileges for datacenter operations, including interacting with Raft. |
On the API pages, each endpoint will have a table that includes required ACLs. The node health endpoint, shown below, requires node and service read to view all checks for the specified node.
![API Health Endpoint](/static/img/consul/api-endpoint.png 'Screenshot of Health endpoint page')
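For example, a token intended for this endpoint could be tested with a direct HTTP call, passing the token in the `X-Consul-Token` header. The address and node name below are placeholders.

```shell-session
$ curl --header "X-Consul-Token: <your token>" \
    http://127.0.0.1:8500/v1/health/node/<node name>
```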
Finally, before using a token in production, you should test that it has the correct privileges. If the token being used is missing a privilege, then Consul will issue a 403 permission denied error.
```shell-session
$ consul operator raft list-peers
Error getting peers: Failed to retrieve raft configuration: Unexpected response code: 403 (rpc error making call: Permission denied)
```
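One way to test a token before distributing it is to pass it explicitly to a command that exercises the privileges you expect it to have. For example, the following sketch re-runs the command above with a specific token.

```shell-session
$ consul operator raft list-peers -token "<your token>"
```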
### Operations Example: Listing Consul Agents
To view all Consul agents in the datacenter, you can use the `consul members` CLI command. You will need at least read privileges for every agent in the datacenter. Depending on your threat model you could have either of the following policies.
An individual rule and policy for each agent: this requires you to add a new policy to the token every time a new agent is added to the datacenter. Your token would have as many policies as agents in the datacenter. This method is ideal for a static environment, and for maximum security.
```hcl
agent "server-one"
{ policy = "read"
}
```
If you have a dynamic or very large environment, you may want to consider creating one policy that can apply to all agents, to reduce the operations team's workload.
```hcl
agent_prefix "" {
  policy = "read"
}
```
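As a sketch of the workflow, you could save the rule above to a file, create a policy from it, and then create a token that uses the policy. The file name, policy name, and description below are only examples.

```shell-session
$ consul acl policy create -name agents-read -rules @agents-read.hcl
$ consul acl token create -description "Read access to all agents" -policy-name agents-read
```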
## Secure Access Control: Operator-Only Access
The most secure access control implementation restricts tokens with `acl="write"` policies to only one or a few trusted operators. Tokens with the policy `acl = "write"` grant the holder unlimited privileges, because they can generate tokens with any other resource and policy. The operators are responsible for creating all other policies and tokens that grant limited access to the datacenter. We refer to this implementation as the operator-only implementation. This implementation type is the most secure, and most complex to manage.
For this implementation type operators are responsible for managing policies and tokens for:
- service registration
- Connect proxy registration
- intention management
- agent management
- API, CLI, and UI access
For static or non-containerized workflows, this implementation type is straightforward and the operator's workload scales linearly. If the workflow is dynamic, implementing an automation tool will ensure the operator's workload does not scale exponentially.
!> If you need to share token generation responsibilities with other teams, anyone with the ability to create tokens (`acl = "write"`) effectively has unrestricted access to the datacenter. That individual will have the ability to access or delete any data in Consul including ACL tokens, Connect Intentions and all Catalog and KV data.
### Operator-Only Implementation Example
In the following example, Operators retain responsibility for service token management, but delegate access control between Connect-enabled services to the security team.
-> Note: The service registration examples describe the token generation process for per-service tokens, therefore, they are only applicable if you are able to create per-service tokens.
Initially, the Operator creates a single token with intention management privileges for the Security Team, and another service token for the Developer. For intention management, the `intentions` policy disposition should be included in the service rule.
```hcl
service "wordpress"
{ policy = "read"
intentions = "write"
}
```
This enables the security team to create an intention that allows the `wordpress` service to open new connections to the upstream `mysql` service.
```shell-session
$ consul intention create wordpress mysql
```
The Security Team responsible for managing intentions would need to create allow intentions when the `default_policy` is set to `deny`, since deny all will be inherited from the ACL configuration.
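After creating an intention, the Security Team can verify the result. For example, `consul intention check` reports whether a connection between the two services would be allowed.

```shell-session
$ consul intention check wordpress mysql
```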
In this implementation type, Developers are responsible for requesting a token for their service. For a Connect enabled service, the Operator would need to create a policy that provides write privileges for the service and proxy, and read privileges for all services and nodes to enable discovery of other upstream dependencies.
```hcl
service "mysql" {
policy = "write"
}
service "mysql-sidecar-proxy" {
policy = "write"
}
service_prefix "" {
policy = "read"
}
node_prefix "" {
policy = "read"
}
```
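As a sketch, the Operator could turn these rules into a policy and token before handing the token to the Developer. The file and policy names below are only examples.

```shell-session
$ consul acl policy create -name mysql-service -rules @mysql-policy.hcl
$ consul acl token create -description "Token for the mysql service" -policy-name mysql-service
```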
With the token the Developer can then register the service with Consul using the `register` command.
```shell-session
$ consul services register mysql
```
The Operator example above illustrates creating policies along the security spectrum. The first example, using an exact-match resource rule, is the most secure. It grants the least privileges necessary to accomplish the task. Generally, creating policies and tokens with the least privileges will result in more policy definitions. However, this will help you create the most secure environment. Alternatively, the prefix rule can apply to zero-to-many objects. The trade-off of a less complicated token creation process is reduced security. Note, this applies to all rule types, not just agent rules.
## Next Steps
After setting up access control processes, you will need to implement a token rotation policy. If you are using a third-party tool to generate tokens, such as Vault, Consul ACL tokens will adhere to the TTLs set in that third-party tool. If you are manually rotating tokens or need to revoke access, you can delete a token at any time with the [API](/api/acl/tokens#delete-a-token).
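For example, to revoke access manually you could delete a token by its accessor ID with the CLI; the ID below is a placeholder.

```shell-session
$ consul acl token delete -id <accessor id>
```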
@ -1,451 +0,0 @@
---
layout: docs
page_title: Securing Consul with ACLs
description: This guide walks though securing your production Consul datacenter with ACLs.
---
The [Bootstrapping the ACL System guide](/advanced/day-1-operations/acl-guide)
walks you through how to set up ACLs on a single datacenter. Because it
introduces the basic concepts and syntax we recommend completing it before
starting this guide. This guide builds on the first guide with recommendations
for production workloads on a single datacenter.
After [bootstrapping the ACL
system](/advanced/day-1-operations/production-acls#bootstrap-the-acl-system),
you will learn how to create tokens with minimum privileges for:
- [Servers and Clients](/advanced/day-1-operations/production-acls#apply-individual-tokens-to-agents)
- [Services](/advanced/day-1-operations/production-acls#apply-individual-tokens-to-services)
- [DNS](/advanced/day-1-operations/production-acls#token-for-dns)
- [Consul KV](/advanced/day-1-operations/production-acls#consul-kv-tokens)
- [Consul UI](/advanced/day-1-operations/production-acls#consul-ui-tokens)
~> **Important:** For best results, use this guide during the [initial
deployment](/advanced/day-1-operations/deployment-guide) of a Consul (version
1.4.3 or newer) datacenter. Specifically, you should have already installed all
agents and configured initial service definitions, but you should not yet rely
on Consul for any service discovery or service configuration operations.
## Bootstrap the ACL System
You will bootstrap the ACL system in two steps, enable ACLs and create the
bootstrap token.
### Enable ACLs on the Agents
To enable ACLs, add the following [ACL
parameters](/docs/agent/options#configuration-key-reference)
to the agent's configuration file and then restart the Consul service. If you
want to reduce Consul client restarts, you can enable the ACLs
on them when you apply the token.
```
# agent.hcl
{
acl = {
enabled = true
default_policy = "deny"
enable_token_persistence = true
}
}
```
~> Note: Token persistence was introduced in Consul 1.4.3. In older versions
of Consul, you cannot persist tokens when using the HTTP API.
In this example, you configured the default policy of "deny", which means you
are in allowlist mode. You also enabled token persistence when using the HTTP
API. With persistence enabled, tokens will be persisted to disk and
reloaded when an agent restarts.
~> Note: If you are bootstrapping ACLs on an existing datacenter, enable the
ACLs on the agents first with `default_policy=allow`. Default policy allow will
enable ACLs, but will allow all operations, allowing the cluster to function
normally while you create the tokens and apply them. This will reduce downtime.
You should update the configuration files on all the servers first and then
initiate a rolling restart.
### Create the Initial Bootstrap Token
To create the initial bootstrap token, use the `acl bootstrap` command on one
of the servers.
```shell-session
$ consul acl bootstrap
```
The output gives you important information about the token, including the
associated policy `global-management` and `SecretID`.
~> Note: By default, Consul assigns the `global-management` policy to the
bootstrap token, which has unrestricted privileges. It is important to have one
token with unrestricted privileges in case of emergencies; however you should
only give a small number of administrators access to it. The `SecretID` is a
UUID that you will use to identify the token when using the Consul CLI or HTTP
API.
While you are setting up the ACL system, set the `CONSUL_HTTP_TOKEN`
environment variable to the bootstrap token on one server, for this guide
the example is on server "consul-server-one". This gives you the necessary
privileges to continue
creating policies and tokens. Set the environment variable temporarily with
`export`, so that it will not persist once you've closed the session.
```shell-session
$ export CONSUL_HTTP_TOKEN=<your_token_here>
```
Now, all of the following commands in this guide can
be completed on the same server, in this
case server "consul-server-one".
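To confirm the environment variable is set correctly, you can read back the token you are currently using.

```shell-session
$ consul acl token read -self
```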
## Apply Individual Tokens to Agents
Adding tokens to agents is a three step process.
1. [Create the agent
policy](/advanced/day-1-operations/production-acls/create-the-agent-policy).
2. [Create the token with the newly created
policy](/advanced/day-1-operations/production-acls/create-the-agent-token).
3. [Add the token to the agent](/advanced/day-1-operations/production-acls/add-the-token-to-the-agent).
### Create the Agent Policy
We recommend creating agent policies that have write privileges for node-related
actions, including registering itself in the catalog, updating node-level health
checks, and having write access to its configuration file. The example below has
unrestricted privileges for node related actions for "consul-server-one" only.
```
# consul-server-one-policy.hcl
node "consul-server-one" {
policy = "write"
}
```
When creating agent policies, review the [node rules](/docs/agent/acl-rules#node-rules). Now that
you have
specified the policy, you can initialize it using the Consul
CLI. To create a programmatic process, you could also use
the HTTP API.
```shell-session
$ consul acl policy create -name consul-server-one -rules @consul-server-one-policy.hcl
```
The command output will include the policy information.
Repeat this process for all servers and clients in the Consul datacenter. Each agent should have its own policy, based on the
node name, that grants write privileges to it.
### Create the Agent Token
After creating the per-agent policies, create individual tokens for all the
agents. You will need to include the policy in the `consul acl token create`
command.
```shell-session
$ consul acl token create -description "consul-server-one agent token" -policy-name consul-server-one
```
This command returns the token information, which should include a description
and policy information.
Repeat this process for each agent. It is the responsibility of the operator to
save tokens in a secure location; we recommend
[Vault](https://www.vaultproject.io/).
### Add the Token to the Agent
Finally, apply the tokens to the agents, either with the CLI as shown below or with
the HTTP API. Start with the servers and ensure they are working correctly before
applying the client tokens. Please review the Bootstrapping the ACL System
[guide](/advanced/day-1-operations/acl-guide) for an example of setting the token in
the agent configuration file.
```shell-session
$ consul acl set-agent-token -token "<your token here>" agent "<agent token here>"
```
If you apply the token with the HTTP API instead of the CLI, the request body is a data file that must contain a valid token.
```
# consul-server-one-token.json
{
"Token": "adf4238a-882b-9ddc-4a9d-5b6758e4159e"
}
```
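For example, applying the token with the HTTP API instead of the CLI might look like the following, authenticated with your bootstrap token.

```shell-session
$ curl --request PUT \
    --header "X-Consul-Token: <your token here>" \
    --data @consul-server-one-token.json \
    http://127.0.0.1:8500/v1/agent/token/agent
```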
At this point, every agent that has a token can once
again read and write information to Consul, but only for node-related actions.
Actions for individual services are not yet allowed.
~> Note: If you are bootstrapping ACLs on an existing datacenter, remember to
update the default policy to `default_policy = deny` and initiate another
rolling restart after applying the tokens.
## Apply Individual Tokens to the Services
The token creation and application process for services is similar to agents.
Create a policy. Use that policy to create a token. Add the token to the
service. Service tokens are necessary for
agent anti-entropy, registering and deregistering the service, and
registering and deregistering the service's checks.
Review the [service
rules](/docs/agent/acl-rules#service-rules) before
getting started.
Below is an example service definition that needs a token after bootstrapping
the ACL system.
```json
{
"service": {
"name": "dashboard",
"port": 9002,
"check": {
"id": "dashboard-check",
"http": "http://localhost:9002/health",
"method": "GET",
"interval": "1s",
"timeout": "1s"
}
}
}
```
This service definition should be located in the [configuration
directory](/docs/agent/options#_config_dir) on one of
the clients.
First, create the policy that will grant write privileges to only the
"dashboard" service. This means the "dashboard" service can register
itself, update its health checks, and write any of the fields in the [service
definition](/docs/agent/services).
```
# dashboard-policy.hcl
service "dashboard" {
policy = "write"
}
```
Use the policy definition to create the policy.
```shell-session
$ consul acl policy create -name "dashboard-service" -rules @dashboard-policy.hcl
```
Next, create a token with the policy.
```shell-session
$ consul acl token create -description "Token for Dashboard Service" -policy-name dashboard-service
```
The command will return information about the token, which should include a
description and policy information. As usual, save the token to a secure
location.
Finally, add the token to the service definition.
```
{
"service": {
"name": "dashboard",
"port": 9002,
"token": "57c5d69a-5f19-469b-0543-12a487eecc66",
"check": {
"id": "dashboard-check",
"http": "http://localhost:9002/health",
"method": "GET",
"interval": "1s",
"timeout": "1s"
}
}
}
```
If the service is running, you will need to restart it. Unlike with agent
tokens, there is no HTTP API endpoint to apply the token directly to the
service. If the service is registered with a configuration file, you must
also set the token in the configuration file. However, if you register a
service with the HTTP API, you can pass the token in the [header](/api#authentication) with
`X-Consul-Token` and it will be used by the service.
If you are using a sidecar proxy, it can inherit the token from the service
definition. Alternatively, you can create a separate token.
## Token for DNS
Depending on your use case, the token used for DNS may need policy rules for
[nodes](/docs/agent/acl-rules#node-rules),
[services](/docs/agent/acl-rules#service-rules), and
[prepared queries](/docs/agent/acl-rules#prepared-query-rules).
You should apply the token to the Consul agent serving DNS requests. When the
DNS server makes a request to Consul, it will include the token in the request.
Consul can either authorize or deny the request, depending on the token's
privileges. The token creation for DNS is the same three step process you used
for agents and services, create a policy, create a token, apply the
token.
Below is an example of a policy that provides read privileges for all services,
nodes, and prepared queries.
```
# dns-request-policy.hcl
node_prefix "" {
policy = "read"
}
service_prefix "" {
policy = "read"
}
# only needed if using prepared queries
query_prefix "" {
policy = "read"
}
```
First, create the policy.
```shell-session
$ consul acl policy create -name "dns-requests" -rules @dns-request-policy.hcl
```
Next, create the token.
```shell-session
$ consul acl token create -description "Token for DNS Requests" -policy-name dns-requests
```
Finally, apply the token as the `default` token on the Consul agent serving DNS
requests.
```shell-session
$ consul acl set-agent-token -token "<your token here>" default "<dns token>"
```
As before, if you apply the token with the HTTP API instead of the CLI, the request body is a data file that must contain a valid token.
```
# dns-token.json
{
"Token": "5467d69a-5f19-469b-0543-12a487eecc66"
}
```
Note, if you have multiple agents serving DNS requests, you can use the same
policy to create individual tokens for all of them, as long as they use the same rules.
## Consul KV Tokens
The process of creating tokens for Consul KV follows the same three step
process as nodes and services. First create a policy, then a token, and finally
apply or use the token. However, unlike tokens for nodes and services, Consul KV
has many varied use cases.
- Services may need to access configuration data in the key-value store.
- You may want to store distributed lock information for sessions.
- Operators may need access to update configuration values in the key-value store.
The [rules for
KV](/docs/agent/acl-rules#key-value-rules) have four
policy levels: `deny`, `write`, `read`, and `list`. Let's review several
examples of `read` and `write`.
Depending on the use case, the token will be applied differently. For services
you will add the token to the HTTP client. For operator use, the
operator will use the token when issuing commands, either with the CLI or API.
### Recursive Reads
```
key_prefix "redis/" {
policy = "read"
}
```
In the above example, we are allowing any key with the prefix `redis/` to be
read. If you issued the command `consul kv get -recurse redis/ -token=<your token>` you would get a list of key/values for `redis/`.
This type of policy is good for allowing operators to recursively read
configuration parameters stored in the KV. Similarly, a "write" policy with the
same prefix would allow you to update any keys that begin with "redis/".
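For example, with a matching write policy an operator could update a configuration value under that prefix. The key and value below are only illustrative.

```shell-session
$ consul kv put -token=<your token> redis/config/connections 5
```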
### Write Privileges for One Key
```
key "dashboard-app" {
policy = "write"
}
```
In the above example, we are granting read and write privileges on the
`dashboard-app` key. This allows for `get`, `delete`, and `put` operations.
This type of token would allow an application to update and read a value in the
KV store. It would also be useful for operators who need access to set specific
keys.
### Read Privileges for One Key
```
key "counting-app" {
policy = "read"
}
```
In the above example, we are setting read privileges for a single key,
`counting-app`. This allows only `get` operations.
This type of token allows an application to simply read from a key to get the
value. This is useful for configuration parameter updates.
## Consul UI Token
Once you have bootstrapped the ACL system, access to the UI will be limited.
The anonymous token grants UI access if no [default
token](/docs/agent/options#acl_tokens_default) is set
on the agents, and all operations will be denied, including viewing nodes and
services.
You can re-enable UI features (with flexible levels of access) by creating and
distributing tokens to operators. Once you have a token, you can use it in the
UI by adding it to the "ACL" page:
![Access Controls](/img/guides/access-controls.png 'Access Controls')
After saving a new token, you will be able to see your tokens.
![Tokens](/img/guides/tokens.png 'Tokens')
The browser stores tokens that you add to the UI. This allows you to distribute
different levels of privileges to operators. Like other tokens, it's up to the
operator to decide the per-token privileges.
Below is an example of a policy that
will allow the operator to have read access to the UI for services, nodes,
key/values, and intentions. You need to have `acl = "read"` to view policies
and tokens. Otherwise you will not be able to access the ACL section of the UI,
not even to view the token you used to access the UI.
```
# operator-ui.hcl
service_prefix "" {
policy = "read"
}
key_prefix "" {
policy = "read"
}
node_prefix "" {
policy = "read"
}
```
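As with the earlier tokens, you would create a policy from this file and then a token for the operator. The policy name and description below are only examples.

```shell-session
$ consul acl policy create -name operator-ui -rules @operator-ui.hcl
$ consul acl token create -description "Read-only token for the UI" -policy-name operator-ui
```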
## Summary
In this guide you bootstrapped the ACL system for Consul and applied tokens to agents and services. You assigned tokens for DNS, Consul KV, and the Consul UI.
To learn more about Consul's security model, read the [internals documentation](/docs/internals/security). You can find commands relating to ACLs in our [reference documentation](/commands/acl).
@ -1,138 +0,0 @@
---
name: '[Enterprise] Setup Secure Namespaces'
content_length: 14
id: secure-namespaces
products_used:
- Consul
description: In this guide you setup secure namespaces with ACLs.
level: Implementation
---
!> Warning: This guide is a draft and has not been fully tested.
!> Warning: Consul 1.7 is currently a beta release.
Namespaces can provide separation for teams within a single organization so that they can share access to one or more Consul datacenters without conflict. This allows teams to deploy services without name conflicts and create more granular access to the cluster with namespaced ACLs.
Additionally, namespaced ACLs will allow you to delegate access control to specific resources within the datacenter including services, Connect proxies, key/value pairs, and sessions.
This guide has two main sections, configuring namespaces and creating ACL tokens within a namespace. You must configure namespaces before creating namespaced tokens.
## Prerequisites
To execute the example commands in this guide, you will need a Consul 1.7.x Enterprise datacenter with [ACLs enabled](/consul/security-networking/production-acls). If you do not have an existing datacenter, you can use a single [local agent](/consul/getting-started/agent) or [Consul in containers](/consul/day-0/containers-guide).
You will also need an ACL token with `operator=write` and `acl=write` privileges or you can use a token with the [built-in global management policy](/docs/acl/acl-system#builtin-policies).
### [Optional] Configure Consul CLI
If you are using Docker or other non-local deployment, you can configure a local Consul binary to interact with the deployment. Set the `CONSUL_HTTP_ADDR` [variable](/docs/commands#consul_http_addr) on your local machine or jumphost to the IP address of a client.
```shell-session
$ export CONSUL_HTTP_ADDR=192.17.23.4
```
Note, this jumphost will need to use an ACL token to access the datacenter. The token's necessary privileges will depend on the operations; learn more about privileges by reviewing the ACL [rule documentation](/docs/acl/acl-rules). You can export the token with the `CONSUL_HTTP_TOKEN` [variable](/docs/commands#consul_http_token). Additionally, if you have [TLS encryption configured](/consul/security-networking/certificates) you will need to use valid certificates.
## Configure namespaces
In this section, you will create two namespaces that allow you to separate the datacenter for two teams. Each namespace will have an operator responsible for managing data and access within their namespace. The namespaced-operator will only have access to view and update data and access in their namespace. This allows for complete isolation of data between teams.
To configure and manage namespaces, you will need one super-operator who has visibility into the entire datacenter. It will be their responsibility to set up namespaces. For this guide, you should be the super-operator.
### Create namespace definitions
First, create files to contain the namespace definitions for the `app-team` and `db-team` respectively. The definitions can be JSON or HCL. Save the following configurations, which specify the name and description of each namespace, to the files.
```json
{
"name": "app-team",
"description": "Namespace for app-team managing the production dashboard application"
}
```
```json
{
"name": "db-team",
"description": "Namespace for db-team managing the production counting application"
}
```
These namespace definitions use a minimal configuration; you can learn more about the available namespace options in the namespace documentation.
### Initialize the namespaces
Next, use the Consul CLI to create each namespace by providing Consul with the namespace definition files. You will need `operator=write` privileges.
```shell-session
$ consul namespace write app-team.json
$ consul namespace write db-team.json
```
Finally, ensure both namespaces were created by viewing all namespaces. You will need `operator=read` privileges, which are included in the `operator=write` privileges required in the prerequisites.
```shell-session
$ consul namespace list
app-team:
Description:
Namespace for app-team managing the production dashboard application
db-team:
Description:
Namespace for db-team managing the production counting application
default:
Description:
Builtin Default Namespace
```
Alternatively, you can view each namespace with `consul namespace read`. After you create a namespace, you can also update or delete it.
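For example, to inspect a single namespace:

```shell-session
$ consul namespace read app-team
```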
## Delegate token management with namespaces
In this section, you will delegate token management to multiple operators. One of the key benefits of namespaces is the ability to delegate responsibilities of token management to more operators. This allows you to provide unrestricted access to portions of the datacenter, ideally to one or a few operators per namespace.
The namespaced-operators are then responsible for managing access to services, Consul KV, and other resources within their namespaces. Additionally, the namespaced-operator should further delegate service-access privileges to developers or end-users. This is consistent with the current ACL management workflow. Before namespaces, only one or a few operators managed tokens for an entire datacenter.
Note that namespaces control access to Consul data and services. They don't have any impact on compute or other node resources, and nodes themselves are not namespaced.
Namespaced-operators will only be aware of data within their namespaces. Without global privileges, they will not be able to see other namespaces.
Nodes are not namespaced, so namespace-operators will be able to see all the nodes in the datacenter.
### Create namespace management tokens
First, the super-operator should use the [built-in namespace-policy](/docs/acl/acl-system#builtin-policies) to create a token for the namespaced-operators. Note, the namespace-management policy ultimately grants them unrestricted privileges for their namespace. You will need `acl=write` privileges to create namespaced-tokens.
```shell-session
$ consul acl token create \
-namespace app-team \
-description "App Team Administrator" \
-policy-name "namespace-management"
<output>
```
```shell-session
$ consul acl token create \
-namespace db-team \
-description "DB Team Administrator" \
-policy-name "namespace-management"
<output>
```
These policies grant privileges to create tokens, which enable the token holders to grant themselves any additional privileges needed for any operation within their namespace. The namespace-policy privileges have the following rules.
```hcl
acl = "write"
key_prefix "" {
  policy = "write"
}
service_prefix "" {
  policy = "write"
}
session_prefix "" {
  policy = "write"
}
```
The namespaced-operators can now create tokens for developers or end-users that allow access to see or modify namespaced data, including service registrations, key/value pairs, and sessions.
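For example, an app-team namespaced-operator could create a policy and token scoped to their namespace. The policy name and rules file below are hypothetical.

```shell-session
$ consul acl policy create -namespace app-team -name dashboard-service -rules @dashboard-policy.hcl
$ consul acl token create -namespace app-team -description "Token for the dashboard service" -policy-name dashboard-service
```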
## Summary
In this guide, you learned how to create namespaces and how to secure the resources within a namespace.
Note, the super-operator can also create policies that can be shared by all namespaces. Shared policies are universal and should be created in the `default` namespace.
@ -1,197 +0,0 @@
---
layout: docs
page_title: Adding & Removing Servers
description: >-
Consul is designed to require minimal operator involvement, however any
changes to the set of Consul servers must be handled carefully. To better
understand why, reading about the consensus protocol will be useful. In short,
the Consul servers perform leader election and replication. For changes to be
processed, a minimum quorum of servers (N/2)+1 must be available. That means
if there are 3 server nodes, at least 2 must be available.
---
# Adding & Removing Servers
Consul is designed to require minimal operator involvement, however any changes
to the set of Consul servers must be handled carefully. To better understand
why, reading about the [consensus protocol](/docs/internals/consensus) will
be useful. In short, the Consul servers perform leader election and replication.
For changes to be processed, a minimum quorum of servers (N/2)+1 must be available.
That means if there are 3 server nodes, at least 2 must be available.
In general, if you are ever adding and removing nodes simultaneously, it is better
to first add the new nodes and then remove the old nodes.
In this guide, we will cover the different methods for adding and removing servers.
## Manually Add a New Server
Manually adding new servers is generally straightforward: start the new
agent with the `-server` flag. At this point the server will not be a member of
any cluster, and should emit something like:
```shell-session
$ consul agent -server
[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
```
This means that it does not know about any peers and is not configured to elect itself.
This is expected, and we can now add this node to the existing cluster using `join`.
From the new server, we can join any member of the existing cluster:
```shell-session
$ consul join <Existing Node Address>
Successfully joined cluster by contacting 1 nodes.
```
It is important to note that any node, including a non-server, may be specified for
the join. Generally, this method is good for testing purposes but not recommended for production
deployments. For production clusters, you will likely want to use the agent configuration
option to add additional servers.
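Whichever method you use, you can confirm that the new server has joined by listing the members of the cluster from any agent.

```shell-session
$ consul members
```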
## Add a Server with Agent Configuration
In production environments, you should use the [agent configuration](/docs/agent/options) option, `retry_join`. `retry_join` can be used as a command line flag or in the agent configuration file.
With the Consul CLI:
```shell-session
$ consul agent -retry-join=52.10.110.11 -retry-join=52.10.110.12 -retry-join=52.10.100.13
```
In the agent configuration file:
```json
{
"bootstrap": false,
"bootstrap_expect": 3,
"server": true,
"retry_join": ["52.10.110.11", "52.10.110.12", "52.10.100.13"]
}
```
[`retry_join`](https://www.consul.io/docs/agent/options#retry-join)
will ensure that if any server loses connection
with the cluster for any reason, including the node restarting, it can
rejoin when it comes back. In addition to working with static IPs, it
can also be useful for other discovery mechanisms, such as auto joining
based on cloud metadata and discovery. Both servers and clients can use this method.
### Server Coordination
To ensure Consul servers are joining the cluster properly, you should monitor
the server coordination. The gossip protocol is used to properly discover all
the nodes in the cluster. Once the node has joined, the existing cluster
leader should log something like:
```text
[INFO] raft: Added peer 127.0.0.2:8300, starting replication
```
This means that raft, the underlying consensus protocol, has added the peer and begun
replicating state. Since the existing cluster may be very far ahead, it can take some
time for the new node to catch up. To check on this, run `info` on the leader:
```shell-session
$ consul info
...
raft:
applied_index = 47244
commit_index = 47244
fsm_pending = 0
last_log_index = 47244
last_log_term = 21
last_snapshot_index = 40966
last_snapshot_term = 20
num_peers = 4
state = Leader
term = 21
...
```
This will provide various information about the state of Raft. In particular
the `last_log_index` shows the last log that is on disk. The same `info` command
can be run on the new server to see how far behind it is. Eventually the server
will be caught up, and the values should match.
It is best to add servers one at a time, allowing them to catch up. This avoids
the possibility of data loss in case the existing servers fail while bringing
the new servers up-to-date.
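You can also confirm which servers are currently acting as Raft peers, and which one is the leader, with the `operator` subcommand.

```shell-session
$ consul operator raft list-peers
```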
## Manually Remove a Server
Removing servers must be done carefully to avoid causing an availability outage.
For a cluster of N servers, at least (N/2)+1 must be available for the cluster
to function. See this [deployment table](/docs/internals/consensus#toc_4).
If you have 3 servers and 1 of them is currently failing, removing any other servers
will cause the cluster to become unavailable.
To avoid this, it may be necessary to first add new servers to the cluster,
increasing the failure tolerance of the cluster, and then to remove old servers.
Even if all 3 nodes are functioning, removing one leaves the cluster in a state
that cannot tolerate the failure of any node.
Once you have verified the existing servers are healthy, and that the cluster
can handle a node leaving, the actual process is simple: issue a
`leave` command on the server you want to remove.
```shell
consul leave
```
The server leaving should contain logs like:
```text
...
[INFO] consul: server starting leave
...
[INFO] raft: Removed ourself, transitioning to follower
...
```
The leader should also emit various logs including:
```text
...
[INFO] consul: member 'node-10-0-1-8' left, deregistering
[INFO] raft: Removed peer 10.0.1.8:8300, stopping replication
...
```
At this point the node has been gracefully removed from the cluster, and
will shut down.
~> Running `consul leave` on a server explicitly will reduce the quorum size. Even if the cluster used `bootstrap_expect` to set a quorum size initially, issuing `consul leave` on a server will reconfigure the cluster to have fewer servers. This means you could end up with just one server that is still able to commit writes because quorum is only 1, but those writes might be lost if that server fails before more are added.
To remove all agents that accidentally joined the wrong set of servers, clear out the contents of the data directory (`-data-dir`) on both client and server nodes.
These graceful methods to remove servers assume you have a healthy cluster.
If the cluster has no leader due to loss of quorum or data corruption, you should
plan for [outage recovery](/docs/guides/outage#manual-recovery-using-peers-json).
!> **WARNING** Removing data on server nodes will destroy all state in the cluster.
## Manual Forced Removal
In some cases, it may not be possible to gracefully remove a server. For example,
if the server simply fails, then there is no ability to issue a leave. Instead,
the cluster will detect the failure and replication will continuously retry.
If the server can be recovered, it is best to bring it back online and then gracefully
leave the cluster. However, if this is not a possibility, then the `force-leave` command
can be used to force removal of a server.
```shell
consul force-leave <node>
```
This is done by invoking that command with the name of the failed node. At this point,
the cluster leader will mark the node as having left the cluster and it will stop attempting to replicate.
## Summary
In this guide we learned the straightforward process of adding and removing servers including:
manually adding servers, adding servers through the agent configuration, gracefully removing
servers, and forcing removal of servers. Finally, we should restate that manually adding servers
is good for testing purposes; however, for production it is recommended to add servers with
the agent configuration.