Backport of docs: remaining agentless docs updates into release/1.14.x (#15461)

* backport of commit ca414959df

* backport of commit bb4f27a87f

* backport of commit 18fb766e9b

* backport of commit 36bc043649

* backport of commit 0fa6507b90

Co-authored-by: Iryna Shustava <iryna@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Committed by hc-github-team-consul-core on 2022-11-18 13:11:53 -05:00 (via GitHub)
parent bcfa4879e6
commit e8563be54a
16 changed files with 140 additions and 247 deletions


@@ -37,8 +37,6 @@ Ensure that the environment you are deploying Consul API Gateway in meets the re
   name: consul
 connectInject:
   enabled: true
-controller:
-  enabled: true
 apiGateway:
   enabled: true
   image: hashicorp/consul-api-gateway:$VERSION


@@ -64,8 +64,6 @@ Complete the following procedure after you have provisioned a Kubernetes cluster
 dns:
   enabled: true
   enableRedirection: true
-controller:
-  enabled: true
 meshGateway:
   enabled: true
   replicas: 1


@@ -2,7 +2,7 @@
 layout: docs
 page_title: Configure Health Checks for Consul on Kubernetes
 description: >-
-  Kubernetes has built-in health probes you can sync with Consul's health checks to ensure service mesh traffic is routed to healthy pods. Learn how to register a TTL Health check and use mutating webhooks to redirect k8s liveness, readiness, and startup probes through Envoy proxies.
+  Kubernetes has built-in health probes you can sync with Consul's health checks to ensure service mesh traffic is routed to healthy pods.
 ---
 
 # Configure Health Checks for Consul on Kubernetes
@@ -14,14 +14,17 @@ Health check synchronization with Consul is done automatically whenever `connect
 For each Kubernetes pod that is connect-injected the following will be configured:
 
-1. A [TTL health check](/docs/discovery/checks#ttl) is registered within Consul.
-   The Consul health check's state will reflect the pod's readiness status,
-   which is the combination of all Kubernetes probes registered with the pod.
+1. A [Consul health check](/consul/api-docs/catalog#register-entity) is registered within Consul catalog.
+   The Consul health check's state will reflect the pod's readiness status.
 
-1. If the pod is utilizing [Transparent Proxy](/docs/connect/transparent-proxy) mode, the mutating webhook will mutate all `http` based Startup, Liveness, and Readiness probes in the pod to redirect through the Envoy proxy.
-   This is done with [`ExposePaths` configuration](/docs/connect/registration/service-registration#expose-paths-configuration-reference) for each probe so that kubelet can access the endpoint through the Envoy proxy.
+1. If the pod is using [Transparent Proxy](/docs/connect/transparent-proxy) mode,
+   the mutating webhook will mutate all `http` based Startup, Liveness, and Readiness probes in the pod to redirect through the Envoy proxy.
+   This is done with
+   [`ExposePaths` configuration](/docs/connect/registration/service-registration#expose-paths-configuration-reference)
+   for each probe so that kubelet can access the endpoint through the Envoy proxy.
 
-~> The mutation behavior can be disabled by either setting the `consul.hashicorp.com/transparent-proxy-overwrite-probes` pod annotation to `false` or the `connectInject.defaultOverwriteProbes` Helm value to `false`.
+~> The mutation behavior can be disabled by either setting the `consul.hashicorp.com/transparent-proxy-overwrite-probes`
+pod annotation to `false` or the `connectInject.defaultOverwriteProbes` Helm value to `false`.
 
 When readiness probes are set for a pod, the status of the pod will be reflected within Consul and will cause Consul to redirect service
 mesh traffic to the pod based on the pod's health. If the pod has failing health checks, Consul will no longer use
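For reference, disabling the probe mutation chart-wide is a one-line values change; the following is a minimal sketch using only the value and annotation named in the hunk above:

```yaml
# values.yaml -- stop rewriting HTTP probes through the Envoy proxy
connectInject:
  enabled: true
  defaultOverwriteProbes: false
# The per-pod alternative is this annotation on the pod template:
#   consul.hashicorp.com/transparent-proxy-overwrite-probes: 'false'
```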


@@ -382,9 +382,6 @@ upgrade the installation using `helm upgrade` for existing installs or
 ```yaml
 connectInject:
   enabled: true
-controller:
-  enabled: true
 ```
 
 This will configure the injector to inject when the
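For orientation, a workload usually opts in to injection with the standard annotation; a minimal sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    # Standard opt-in annotation read by the connect injector
    consul.hashicorp.com/connect-inject: 'true'
spec:
  containers:
    - name: web
      image: nginxdemos/hello  # placeholder image
```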


@@ -34,8 +34,6 @@ global:
   name: consul
 connectInject:
   enabled: true
-controller:
-  enabled: true
 ingressGateways:
   enabled: true
   gateways:
@@ -268,8 +266,6 @@ global:
   name: consul
 connectInject:
   enabled: true
-controller:
-  enabled: true
 ingressGateways:
   enabled: false # Set to false
   gateways:
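Both hunks cut off at the `gateways:` list; for reference, a typical entry under it looks like the following sketch (the gateway name and service type are placeholders):

```yaml
ingressGateways:
  enabled: true
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer  # placeholder; NodePort also works
```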


@@ -60,11 +60,6 @@ new and existing services:
 1. Next, modify your Helm values:
    1. Remove the `defaultProtocol` config. This won't affect existing services.
-   1. Set:
-      ```yaml
-      controller:
-        enabled: true
-      ```
 1. Now you can upgrade your Helm chart to the latest version with the new Helm values.
 1. From now on, any new service will require a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults)
    resource to set its protocol:
@@ -164,13 +159,6 @@ You will need to perform the following steps to upgrade:
 1. Next, remove this annotation from existing deployments. This will have no
    effect on the deployments because the annotation was only used when the
    service was first created.
-1. Modify your Helm values and add:
-   ```yaml
-   controller:
-     enabled: true
-   ```
 1. Now you can upgrade your Helm chart to the latest version.
 1. From now on, any new service will require a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults)
    resource to set its protocol:
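Both hunks end just before the example resource; for reference, a minimal `ServiceDefaults` sketch (the service name `web` is a placeholder):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web  # must match the service's name
spec:
  protocol: http
```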


@@ -68,9 +68,6 @@ connectInject:
   # Consul Connect service mesh must be enabled for federation.
   enabled: true
-controller:
-  enabled: true
 meshGateway:
   # Mesh gateways are gateways between datacenters. They must be enabled
   # for federation in Kubernetes since the communication between datacenters
@@ -358,8 +355,6 @@ global:
     secretKey: gossipEncryptionKey
 connectInject:
   enabled: true
-controller:
-  enabled: true
 meshGateway:
   enabled: true
 server:


@@ -374,8 +374,6 @@ global:
 connectInject:
   enabled: true
-controller:
-  enabled: true
 meshGateway:
   enabled: true
 server:


@@ -8,25 +8,22 @@ description: >-
 # Join External Servers to Consul on Kubernetes
 
 If you have a Consul cluster already running, you can configure your
-Consul clients inside Kubernetes to join this existing cluster.
+Consul on Kubernetes installation to join this existing cluster.
 
 The below `values.yaml` file shows how to configure the Helm chart to install
-Consul clients that will join an existing cluster.
+Consul that will join an existing Consul server cluster.
 
-The `global.enabled` value first disables all chart components by default
-so that each component is opt-in. This allows us to _only_ setup the client
-agents. We then opt-in to the client agents by setting `client.enabled` to
-`true`.
-
-Next, `client.exposeGossipPorts` can be set to `true` or `false` depending on if
-you want the clients to be exposed on the Kubernetes internal node IPs (`true`) or
-their pod IPs (`false`).
-
-Finally, `client.join` is set to an array of valid
-[`-retry-join` values](/docs/agent/config/cli-flags#retry-join). In the
-example above, a fake [cloud auto-join](/docs/install/cloud-auto-join)
-value is specified. This should be set to resolve to the proper addresses of
-your existing Consul cluster.
+Next, configure `externalServers` to point to your Consul servers.
+The `externalServers.hosts` value must be provided and should be set to a DNS name, an IP,
+or an `exec=` string with a command returning Consul IPs. Please see the [go-netaddrs documentation](https://github.com/hashicorp/go-netaddrs)
+on how the `exec=` string works.
+Other values in the `externalServers` section are optional. Please refer to the
+[Helm Chart configuration](https://developer.hashicorp.com/consul/docs/k8s/helm#h-externalservers) for more details.
 
 <CodeBlockConfig filename="values.yaml">
@@ -34,26 +31,16 @@ your existing Consul cluster.
 global:
   enabled: false
-client:
-  enabled: true
-  # Set this to true to expose the Consul clients using the Kubernetes node
-  # IPs. If false, the pod IPs must be routable from the external servers.
-  exposeGossipPorts: true
-  join:
-    - 'provider=my-cloud config=val ...'
+externalServers:
+  hosts: [<consul server DNS, IP or exec= string>]
 ```
 
 </CodeBlockConfig>
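For example, the placeholder in `externalServers.hosts` could be filled in as in this sketch (the hostname and script path are hypothetical):

```yaml
externalServers:
  hosts: ['consul.example.com']
  # or resolve server IPs at runtime with a command, using go-netaddrs syntax:
  # hosts: ['exec=/usr/local/bin/discover-consul-ips']
```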
--> **Networking:** Note that for the Kubernetes nodes to join an existing
-cluster, the nodes (and specifically the agent pods) must be able to connect
-to all other server and client agents inside and _outside_ of Kubernetes over [LAN](/docs/install/glossary#lan-gossip).
-If this isn't possible, consider running a separate Consul cluster inside Kubernetes
-and federating it with your cluster outside Kubernetes.
-You may also consider adopting Consul Enterprise for
-[network segments](/docs/enterprise/network-segments).
+-> **Note:** If you are looking to join Consul clients to an existing Consul server cluster,
+please see [this documentation](https://developer.hashicorp.com/consul/docs/v1.13.x/k8s/deployment-configurations/servers-outside-kubernetes).
 
-## Configuring TLS with Auto-encrypt
+## Configuring TLS
 
 -> **Note:** Consul on Kubernetes currently does not support external servers that require mutual authentication
 for the HTTPS clients of the Consul servers, that is when servers have either
@@ -62,10 +49,9 @@ As noted in the [Security Model](/docs/security#secure-configuration),
 that setting isn't strictly necessary to support Consul's threat model as it is recommended that
 all requests contain a valid ACL token.
 
-Consul's auto-encrypt feature allows clients to automatically provision their certificates by making a request to the servers at startup.
-If you would like to use this feature with external Consul servers, you need to configure the Helm chart with information about the servers
-so that it can retrieve the clients' CA to use for securing the rest of the cluster.
-To do that, you must add the following values, in addition to the values mentioned above:
+If the Consul server has TLS enabled, you will also need to provide the CA certificate that Consul on Kubernetes
+needs in order to talk to the server. First save this certificate in a Kubernetes secret and then provide it in your Helm values below,
+in addition to the values mentioned above:
 
 <CodeBlockConfig filename="values.yaml" highlight="2-8">
@@ -73,19 +59,17 @@ To do that, you must add the following values, in addition to the values mention
 global:
   tls:
     enabled: true
-    enableAutoEncrypt: true
+    caCert:
+      secretName: <CA certificate secret name>
+      secretKey: <CA certificate secret key>
 externalServers:
   enabled: true
-  hosts:
-    - 'provider=my-cloud config=val ...'
+  hosts: [<consul server DNS, IP or exec= string>]
 ```
 
 </CodeBlockConfig>
 
-In most cases, `externalServers.hosts` will be the same as `client.join`, however, both keys must be set because
-they are used for different purposes: one for Serf LAN and the other for HTTPS connections.
-Please see the [reference documentation](/docs/k8s/helm#v-externalservers-hosts)
-for more info. If your HTTPS port is different from Consul's default `8501`, you must also set
+If your HTTPS port is different from Consul's default `8501`, you must also set
 `externalServers.httpsPort`.
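For instance, a sketch pointing at servers that expose HTTPS on 443 instead (the hostname is a placeholder):

```yaml
externalServers:
  enabled: true
  hosts: ['consul.example.com']  # placeholder
  httpsPort: 443                 # override the 8501 default
```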
## Configuring ACLs
@@ -137,8 +121,7 @@ with `consul login`.
 ```yaml
 externalServers:
   enabled: true
-  hosts:
-    - 'provider=my-cloud config=val ...'
+  hosts: [<consul server DNS, IP or exec= string>]
   k8sAuthMethodHost: 'https://kubernetes.example.com:443'
 ```
@@ -156,17 +139,9 @@ global:
     bootstrapToken:
       secretName: bootstrap-token
       secretKey: token
-client:
-  enabled: true
-  # Set this to true to expose the Consul clients using the Kubernetes node
-  # IPs. If false, the pod IPs must be routable from the external servers.
-  exposeGossipPorts: true
-  join:
-    - 'provider=my-cloud config=val ...'
 externalServers:
   enabled: true
-  hosts:
-    - 'provider=my-cloud config=val ...'
+  hosts: [<consul server DNS, IP or exec= string>]
   k8sAuthMethodHost: 'https://kubernetes.example.com:443'
 ```
@@ -184,17 +159,9 @@ global:
   enabled: false
   acls:
     manageSystemACLs: true
-client:
-  enabled: true
-  # Set this to true to expose the Consul clients using the Kubernetes node
-  # IPs. If false, the pod IPs must be routable from the external servers.
-  exposeGossipPorts: true
-  join:
-    - 'provider=my-cloud config=val ...'
 externalServers:
   enabled: true
-  hosts:
-    - 'provider=my-cloud config=val ...'
+  hosts: [<consul server DNS, IP or exec= string>]
   k8sAuthMethodHost: 'https://kubernetes.example.com:443'
 ```


@@ -10,18 +10,12 @@ description: >-
 ~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network.
 
 This page describes deploying a single Consul datacenter in multiple Kubernetes clusters,
-with servers and clients running in one cluster and only clients in the rest of the clusters.
+with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters.
 This example uses two Kubernetes clusters, but this approach could be extended to using more than two.
 
 ## Requirements
 
-* Consul-Helm version `v0.32.1` or higher
-* This deployment topology requires that the Kubernetes clusters have a flat network
-  for both pods and nodes so that pods or nodes from one cluster can connect
-  to pods or nodes in another. In many hosted Kubernetes environments, this may have to be explicitly configured based on the hosting provider's network. Refer to the following documentation for instructions:
-  * [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)
-  * [AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html)
-  * [GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
+* Consul-Helm version `v1.0.0` or higher
 * Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix.
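For instance, a sketch that gives the second installation a unique name (`consul2` is a placeholder):

```yaml
global:
  name: consul2  # must differ from the name used in the first cluster
```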
 ## Prepare Helm release name ahead of installs
@@ -34,14 +28,14 @@ Before proceeding with installation, prepare the Helm release names as environme
 ```shell-session
 $ export HELM_RELEASE_SERVER=server
-$ export HELM_RELEASE_CLIENT=client
+$ export HELM_RELEASE_CONSUL=consul
 ...
-$ export HELM_RELEASE_CLIENT2=client2
+$ export HELM_RELEASE_CONSUL2=consul2
 ```
 
-## Deploying Consul servers and clients in the first cluster
+## Deploying Consul servers in the first cluster
 
-First, deploy the first cluster with Consul Servers and Clients with the example Helm configuration below.
+First, deploy the Consul servers in the first cluster using the example Helm configuration below.
 
 <CodeBlockConfig filename="cluster1-values.yaml">
@@ -56,10 +50,6 @@ global:
   gossipEncryption:
     secretName: consul-gossip-encryption-key
     secretKey: key
-connectInject:
-  enabled: true
-controller:
-  enabled: true
 ui:
   service:
     type: NodePort
@@ -86,17 +76,15 @@ Now install Consul cluster with Helm:
 $ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul
 ```
 
 Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the below command) and applied to the second Kubernetes cluster.
 
-* The Gossip encryption key created
 * The CA certificate generated during installation
 * The ACL bootstrap token generated during installation
 
 ```shell-session
-$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
+$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
 ```
 
-## Deploying Consul clients in the second cluster
+## Deploying Consul Kubernetes in the second cluster
 
 ~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.
 
 Switch to the second Kubernetes cluster where Consul clients will be deployed
@@ -124,38 +112,27 @@ global:
     bootstrapToken:
       secretName: cluster1-consul-bootstrap-acl-token
       secretKey: token
-  gossipEncryption:
-    secretName: consul-gossip-encryption-key
-    secretKey: key
   tls:
     enabled: true
-    enableAutoEncrypt: true
     caCert:
       secretName: cluster1-consul-ca-cert
       secretKey: tls.crt
 externalServers:
   enabled: true
-  # This should be any node IP of the first k8s cluster
+  # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI.
   hosts: ["10.0.0.4"]
-  # The node port of the UI's NodePort service
+  # The node port of the UI's NodePort service or the load balancer port.
   httpsPort: 31557
   tlsServerName: server.dc1.consul
   # The address of the kube API server of this Kubernetes cluster
   k8sAuthMethodHost: https://kubernetes.example.com:443
-client:
-  enabled: true
-  join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
-  extraVolumes:
-    - type: secret
-      name: cluster1-kubeconfig
-      load: false
 connectInject:
   enabled: true
 ```
 
 </CodeBlockConfig>
 
-Note the references to the secrets extracted and applied from the first cluster in ACL, gossip, and TLS configuration.
+Note the references to the secrets extracted and applied from the first cluster in ACL and TLS configuration.
 
 The `externalServers.hosts` and `externalServers.httpsPort`
 refer to the IP and port of the UI's NodePort service deployed in the first cluster.
@@ -187,23 +164,10 @@ reach the Kubernetes API in that cluster.
 The easiest way to get it is from the `kubeconfig` by running `kubectl config view` and grabbing
 the value of `cluster.server` for the second cluster.
 
-Lastly, set up the clients so that they can discover the servers in the first cluster.
-For this, Consul's cloud auto-join feature
-for the [Kubernetes provider](/docs/install/cloud-auto-join#kubernetes-k8s) can be used.
-
-This can be configured by saving the `kubeconfig` for the first cluster as a Kubernetes secret in the second cluster
-and referencing it in the `clients.join` value. Note that the secret is made available to the client pods
-by setting it in `client.extraVolumes`.
-
-~> **Note:** The kubeconfig provided to the client should have minimal permissions.
-The cloud auto-join provider will only need permission to read pods.
-Please see [Kubernetes Cloud auto-join](/docs/install/cloud-auto-join#kubernetes-k8s)
-for more details.
-
 Now, proceed with the installation of the second cluster.
 
 ```shell-session
-$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-values.yaml hashicorp/consul
+$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul
 ```
 
 ## Verifying the Consul Service Mesh works


@@ -42,7 +42,7 @@ It includes things like terminating gateways, ingress gateways, etc.)
 |[ACL Replication token](/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)|
 |[Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
 |[Gossip encryption key](/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
-|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul snapshot agent | [`global.secretsBackend.vault.consulSnapshotAgentRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulsnapshotagentrole)|
+|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
 |[Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers<br/>Consul clients<br/>Consul components | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)<br/>[`global.secretsBackend.vault.consulCARole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)|
 |[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
 |[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|
@@ -61,7 +61,7 @@ The mapping for secondary data centers is similar with the following differences
 |[ACL Replication token](/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job<br/>Consul servers | [`global.secretsBackend.vault.manageSystemACLsRole`](/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)<br/>[`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
 |[Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
 |[Gossip encryption key](/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
-|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul snapshot agent | [`global.secretsBackend.vault.consulSnapshotAgentRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulsnapshotagentrole)|
+|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
 |[Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers<br/>Consul clients<br/>Consul components | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)<br/>[`global.secretsBackend.vault.consulCARole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)|
 |[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
 |[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|


@@ -56,21 +56,23 @@ $ vault policy write snapshot-agent-config-policy snapshot-agent-config-policy.h
 ## Create Vault Authorization Roles for Consul
 
-Next, you will create a Kubernetes auth role for the Consul snapshot agent:
+Next, you will add this policy to your Consul server Kubernetes auth role:
 
 ```shell-session
 $ vault write auth/kubernetes/role/consul-server \
-    bound_service_account_names=<Consul snapshot agent service account> \
+    bound_service_account_names=<Consul server service account> \
     bound_service_account_namespaces=<Consul installation namespace> \
     policies=snapshot-agent-config-policy \
     ttl=1h
 ```
 
+Note that if you have other policies associated
+with the Consul server service account, you will need to make sure to include those as well.
 
 To find out the service account name of the Consul server,
 you can run the following `helm template` command with your Consul on Kubernetes values file:
 
 ```shell-session
-$ helm template --release-name ${RELEASE_NAME} -s templates/client-snapshot-agent-serviceaccount.yaml hashicorp/consul -f values.yaml
+$ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml
 ```
 
 ## Update Consul on Kubernetes Helm chart
@@ -85,7 +87,7 @@ global:
   secretsBackend:
     vault:
       enabled: true
-      consulSnapshotAgentRole: snapshot-agent
+      consulServerRole: consul-server
 client:
   snapshotAgent:
     configSecret:
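The hunk cuts off at `configSecret:`; for reference, the fields under it name the Vault location of the snapshot agent config, roughly as in this sketch (the path and key are placeholders):

```yaml
client:
  snapshotAgent:
    configSecret:
      secretName: consul-kv/data/snapshot-agent-config  # Vault secret path (placeholder)
      secretKey: config                                 # key within that secret (placeholder)
```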


@@ -204,11 +204,6 @@ global:
       consulCARole: "consul-ca"
       controllerRole: "controller-role"
       connectInjectRole: "connect-inject-role"
-      controller:
-        caCert:
-          secretName: "controller/cert/ca"
-        tlsCert:
-          secretName: "controller/issue/controller-role"
       connectInject:
         caCert:
           secretName: "connect-inject/cert/ca"
@@ -228,8 +223,6 @@ server:
     load: "false"
 connectInject:
   enabled: true
-controller:
-  enabled: true
 ```
 
 </CodeBlockConfig>


@@ -475,8 +475,6 @@ Repeat the following steps for each datacenter in the cluster:
 connectInject:
   replicas: 1
   enabled: true
-controller:
-  enabled: true
 meshGateway:
   enabled: true
   replicas: 1


@@ -241,8 +241,6 @@ No existing installations found.
 Overrides:
 connectInject:
   enabled: true
-controller:
-  enabled: true
 
 Proceed with installation? (y/N) y


@@ -472,8 +472,6 @@ $ consul-k8s status
     defaultEnableMerging: true
     defaultEnabled: true
     enableGatewayMetrics: true
-controller:
-  enabled: true
 global:
   metrics:
     enableAgentMetrics: true