---
layout: docs
page_title: Consul on Kubernetes Architecture
description: >-
  A high level overview of Consul on Kubernetes Architecture
---
# Architecture

This topic describes the architecture, components, and resources associated with Consul deployments to Kubernetes. Consul employs the same architectural design on Kubernetes as it does on other platforms (see [Architecture](/docs/architecture)), but Kubernetes provides additional benefits that make operating a Consul cluster easier.

Refer to the standard [production deployment guide](https://learn.hashicorp.com/consul/datacenter-deploy/deployment-guide) for important information, regardless of the deployment platform.
## Server Agents

The server agents are deployed as a `StatefulSet` and use persistent volume claims to store the server state. This also ensures that the [node ID](/docs/agent/options#_node_id) is persisted so that servers can be rescheduled onto new IP addresses without causing issues. The server agents are configured with [anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) rules so that they are placed on different nodes. A readiness probe is configured that marks the pod as ready only when it has established a leader.
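
Below is a minimal, hypothetical sketch of how these pieces fit together on the server `StatefulSet`. The resource name, labels, replica count, probe command, and storage size are illustrative assumptions, not the exact manifest rendered by the official Helm chart.

```yaml
# Illustrative fragment only -- not the literal Helm chart output.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-server
spec:
  serviceName: consul-server      # stable DNS name for each server pod
  replicas: 3
  selector:
    matchLabels:
      app: consul
      component: server
  template:
    metadata:
      labels:
        app: consul
        component: server
    spec:
      # Anti-affinity: never schedule two server pods on the same node.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: consul
                  component: server
              topologyKey: kubernetes.io/hostname
      containers:
        - name: consul
          image: hashicorp/consul   # tag omitted; pin a version in practice
          # Ready only once the cluster has elected a leader.
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -ec
                - |
                  curl -s http://127.0.0.1:8500/v1/status/leader | grep -E '".+"'
            initialDelaySeconds: 5
            periodSeconds: 3
  # The persistent volume claim keeps the data directory (and with it the
  # node ID) across rescheduling.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
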
A Kubernetes `Service` is registered to represent the servers and exposes the ports required to communicate with the Consul server pods. The servers use the DNS address of this Service to join the Consul cluster, without requiring any other access to the Kubernetes cluster. The Service also publishes non-ready endpoints, so that additional Consul servers can use it to join the cluster during bootstrap and upgrades.
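
As a rough sketch, such a Service could be headless and publish not-ready addresses so that servers can discover each other before they pass their readiness checks. The name and port list below are assumptions for illustration, not the chart's actual output; the ports shown are Consul's default server RPC, Serf LAN, and HTTP ports.

```yaml
# Illustrative only; the name and port list are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: consul-server
spec:
  clusterIP: None                  # headless: DNS returns the server pod IPs
  publishNotReadyAddresses: true   # expose pods even before readiness passes
  selector:
    app: consul
    component: server
  ports:
    - name: server
      port: 8300
      targetPort: 8300
    - name: serflan
      port: 8301
      targetPort: 8301
    - name: http
      port: 8500
      targetPort: 8500
```

Servers could then join using only cluster DNS, for example with a flag along the lines of `-retry-join=consul-server.default.svc:8301` (hypothetical service name and namespace).
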
Additionally, a **PodDisruptionBudget** is configured so the Consul server cluster maintains quorum during voluntary operational events. The maximum unavailable is `(n/2)-1`, where `n` is the number of server agents.
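
For example, with five servers at most `(5/2)-1 = 1` pod (integer division) may be voluntarily disrupted at a time, which keeps four servers running, more than the three needed for quorum. A hypothetical budget for that case might look like the following; the resource name and labels are illustrative.

```yaml
# Illustrative only; name and labels are assumptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: consul-server
spec:
  maxUnavailable: 1        # (5/2) - 1 = 1 for a five-server cluster
  selector:
    matchLabels:
      app: consul
      component: server
```
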
-> **Note:** Kubernetes and Helm do not delete Persistent Volumes or Persistent Volume Claims when a [StatefulSet is deleted](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage), so this must be done manually when removing servers.
## Client Agents

The client agents are run as a **DaemonSet**. This places one agent (within its own pod) on each Kubernetes node. The clients expose the Consul HTTP API via a static port (8500) bound to the host port. This enables all other pods on the node to connect to the node-local agent using the host IP that can be retrieved via the Kubernetes downward API. See [accessing the Consul HTTP API](/docs/k8s/installation/install#accessing-the-consul-http-api) for an example.
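
The two halves of this pattern can be sketched as follows: the client `DaemonSet` binds the agent's HTTP port to the host, and an application pod discovers the node-local agent through the downward API. All names, labels, and images here are hypothetical placeholders, not the Helm chart's rendered output.

```yaml
# Illustrative fragments only; names, labels, and images are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul-client
spec:
  selector:
    matchLabels:
      app: consul
      component: client
  template:
    metadata:
      labels:
        app: consul
        component: client
    spec:
      containers:
        - name: consul
          image: hashicorp/consul   # tag omitted; pin a version in practice
          ports:
            - name: http
              containerPort: 8500
              hostPort: 8500        # exposes the agent's HTTP API on the node itself
---
# A hypothetical application pod reads the node's IP via the downward API
# and points its Consul client at the node-local agent.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: example/app           # hypothetical application image
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CONSUL_HTTP_ADDR
          value: "http://$(HOST_IP):8500"
```
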
We do not use a **NodePort** Kubernetes Service because requests to node ports are randomly routed to any pod in the Service, and we need to route directly to the Consul client running on the local node.
-> **Note:** There is no way to bind a host port to the node's local interface only, so any other node can connect to the agent. Keep this in mind when securing your deployment; an agent properly secured for production with TLS and ACLs is safe to expose this way.
We run Consul clients as a **DaemonSet** instead of running a client as a sidecar in each application pod, because a sidecar would turn every pod into a "node" in Consul and would also cause an explosion of resource usage, since every pod would need its own Consul agent. Service registration should be handled via the catalog syncing feature with Services rather than pods.
-> **Note:** Due to a limitation of anti-affinity rules with DaemonSets, a client-mode agent runs alongside server-mode agents in Kubernetes. This duplication wastes some resources, but otherwise functions perfectly fine.