Commit Graph

249 Commits (ca864a4bbb60a4ab13b799b7d2677bd57c1d4850)

Author SHA1 Message Date
Pierre Souchay be50400c62 Distinguish between DC not existing and not being available (#6399) 2019-09-03 09:46:24 -06:00
R.B. Boyer fd1c62ee8b
connect: ensure time.Duration fields retain their human readable forms in the API (#6348)
This applies for both config entries and the compiled discovery chain.

Also omit some other config entry fields when empty.
2019-08-19 15:31:05 -05:00
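
As a rough illustration of the behavior described in #6348 above, a duration field in a config entry keeps its human readable string form when read back over the API. A minimal sketch with a hypothetical "api" service; ConnectTimeout is the service-resolver duration field:

    Kind           = "service-resolver"
    Name           = "api"     # hypothetical service name
    ConnectTimeout = "15s"     # read back as "15s" rather than a nanosecond integer
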
R.B. Boyer 561b2fe606
connect: generate the full SNI names for discovery targets in the compiler rather than in the xds package (#6340) 2019-08-19 13:03:03 -05:00
R.B. Boyer ae79cdab1b
connect: introduce ExternalSNI field on service-defaults (#6324)
Compiling this will set an optional SNI field on each DiscoveryTarget.
When set this value should be used for TLS connections to the instances
of the target. If not set the default should be used.

Setting ExternalSNI will disable mesh gateway use for that target. It also 
disables several service-resolver features that do not make sense for an 
external service.
2019-08-19 12:19:44 -05:00
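
A minimal sketch of the ExternalSNI field introduced in #6324 above, assuming a hypothetical external service registered as "payments" whose real TLS endpoint presents "payments.example.com":

    Kind        = "service-defaults"
    Name        = "payments"                # hypothetical service name
    Protocol    = "http"
    ExternalSNI = "payments.example.com"    # SNI used for TLS connections to this target;
                                            # also disables mesh gateway use for it
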
R.B. Boyer 1a485011d0
connect: updating a service-defaults config entry should leave an unset protocol alone (#6342)
If the entry is updated for reasons other than protocol it is surprising
that the value is explicitly persisted as 'tcp' rather than leaving it
empty and letting it fall back dynamically on the proxy-defaults value.
2019-08-19 10:44:06 -05:00
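
The fallback behavior from #6342 above can be pictured with two hypothetical config entries; leaving Protocol unset in service-defaults lets the proxy-defaults value continue to apply even after the entry is updated for other reasons:

    Kind = "proxy-defaults"
    Name = "global"
    Config {
      protocol = "http"        # cluster-wide default protocol
    }

    Kind = "service-defaults"
    Name = "web"               # hypothetical service; Protocol intentionally omitted
    MeshGateway {              # so an update like this does not pin the protocol to "tcp"
      Mode = "local"
    }
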
R.B. Boyer 3975cb89bf
agent: blocking central config RPC iterations should not interfere with each other (#6316) 2019-08-14 09:08:46 -05:00
hashicorp-ci 5919c7c184 Merge Consul OSS branch 'master' at commit 8f7586b339 2019-08-13 02:00:43 +00:00
Sarah Adams 8ff1f481fe
add flag to allow /operator/keyring requests to only hit local servers (#6279)
Add parameter local-only to operator keyring list requests to force queries to only hit local servers (no WAN traffic).

HTTP API: GET /operator/keyring?local-only=true
CLI: consul keyring -list --local-only

Sending the local-only flag with any non-GET/list request will result in an error.
2019-08-12 11:11:11 -07:00
Mike Morris 65be58703c
connect: remove managed proxies (#6220)
* connect: remove managed proxies implementation and all supporting config options and structs

* connect: remove deprecated ProxyDestination

* command: remove CONNECT_PROXY_TOKEN env var

* agent: remove entire proxyprocess proxy manager

* test: remove all managed proxy tests

* test: remove irrelevant managed proxy note from TestService_ServerTLSConfig

* test: update ContentHash to reflect managed proxy removal

* test: remove deprecated ProxyDestination test

* telemetry: remove managed proxy note

* http: remove /v1/agent/connect/proxy endpoint

* ci: remove deprecated test exclusion

* website: update managed proxies deprecation page to note removal

* website: remove managed proxy configuration API docs

* website: remove managed proxy note from built-in proxy config

* website: add note on removing proxy subdirectory of data_dir
2019-08-09 15:19:30 -04:00
R.B. Boyer 8e22d80e35
connect: fix failover through a mesh gateway to a remote datacenter (#6259)
Failover is pushed entirely down to the data plane by creating envoy
clusters and putting each successive destination in a different load
assignment priority band. For example this shows that normally requests
go to 1.2.3.4:8080 but when that fails they go to 6.7.8.9:8080:

- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080

Mesh gateways route requests based solely on the SNI header tacked onto
the TLS layer. Envoy currently only lets you configure the outbound SNI
header at the cluster layer.

If you try to failover through a mesh gateway you ideally would
configure the SNI value per endpoint, but that's not possible in envoy
today.

This PR introduces a simpler way around the problem for now:

1. We identify any target of failover that will use mesh gateway mode local or
   remote and then further isolate any resolver node in the compiled discovery
   chain that has a failover destination set to one of those targets.

2. For each of these resolvers we will perform a small measurement of
   comparative healths of the endpoints that come back from the health API for the
   set of primary target and serial failover targets. We walk the list of targets
   in order and if any endpoint is healthy we return that target, otherwise we
   move on to the next target.

3. The CDS and EDS endpoints both perform the measurements in (2) for the
   affected resolver nodes.

4. For CDS this measurement selects which TLS SNI field to use for the cluster
   (note the cluster is always going to be named for the primary target)

5. For EDS this measurement selects which set of endpoints will populate the
   cluster. Priority tiered failover is ignored.

One of the big downsides to this approach to failover is that the failover
detection and correction is going to be controlled by consul rather than
deferring that entirely to the data plane as with the prior version. This also
means that we are bound to only failover using official health signals and
cannot make use of data plane signals like outlier detection to affect
failover.

In this specific scenario the lack of data plane signals is okay because
their effectiveness is already muted anyway: the ultimate destination
endpoints have their data plane signals scrambled when they pass through
the mesh gateway wrapper, so we're not losing much.

Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
2019-08-05 13:30:35 -05:00
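
As a sketch of the kind of setup #6259 above addresses (hypothetical names and datacenters), a resolver that fails over to other datacenters while cross-datacenter traffic is routed through mesh gateways:

    Kind = "service-resolver"
    Name = "api"                       # hypothetical service
    Failover = {
      "*" = {
        Datacenters = ["dc2", "dc3"]   # tried in order when dc-local instances are unhealthy
      }
    }

    Kind = "proxy-defaults"
    Name = "global"
    MeshGateway {
      Mode = "local"                   # cross-dc traffic, including failover, goes via gateways
    }
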
R.B. Boyer c395affc93
connect: expose an API endpoint to compile the discovery chain (#6248)
In addition to exposing compilation over the API, this cleans up the structures that are exchanged so they are easier to support and understand.

Also removed ability to configure the envoy OverprovisioningFactor.
2019-08-02 15:34:54 -05:00
R.B. Boyer dcb609af83
connect: detect and prevent circular discovery chain references (#6246) 2019-08-02 09:18:45 -05:00
R.B. Boyer f02924fafe
connect: simplify the compiled discovery chain data structures (#6242)
This should make them better for sending over RPC or the API.

Instead of a chain implemented explicitly like a linked list (nodes
holding pointers to other nodes), switch to a flat map of named nodes,
with nodes linking to other nodes by name. The shipped structure is
just a map and a string indicating which key to start from.

Other changes:

* inline the compiler option InferDefaults as true

* introduce compiled target config to avoid needing to send back
  additional maps of Resolvers; future target-specific compiled state
  can go here

* move compiled MeshGateway out of the Resolver and into the
  TargetConfig where it makes more sense.
2019-08-01 22:44:05 -05:00
R.B. Boyer 6393edba53
connect: reconcile how upstream configuration works with discovery chains (#6225)
* connect: reconcile how upstream configuration works with discovery chains

The following upstream config fields for connect sidecars sanely
integrate into discovery chain resolution:

- Destination Namespace/Datacenter: Compilation occurs locally but using
different default values for namespaces and datacenters. The xDS
clusters that are created are named as they normally would be.

- Mesh Gateway Mode (single upstream): If set this value overrides any
value computed for any resolver for the entire discovery chain. The xDS
clusters that are created may be named differently (see below).

- Mesh Gateway Mode (whole sidecar): If set this value overrides any
value computed for any resolver for the entire discovery chain. If this
is specifically overridden for a single upstream this value is ignored
in that case. The xDS clusters that are created may be named differently
(see below).

- Protocol (in opaque config): If set this value overrides the value
computed when evaluating the entire discovery chain. If the normal chain
would be TCP or if this override is set to TCP then the result is that
we explicitly disable L7 Routing and Splitting. The xDS clusters that
are created may be named differently (see below).

- Connect Timeout (in opaque config): If set this value overrides the
value for any resolver in the entire discovery chain. The xDS clusters
that are created may be named differently (see below).

If any of the above overrides affect the actual result of compiling the
discovery chain (e.g. "tcp" becomes "grpc" instead of being a no-op
override to "tcp") then the relevant parameters are hashed and provided
to the xDS layer as a prefix for use in naming the Clusters. This is to
ensure that if one Upstream discovery chain has no overrides and
tangentially needs a cluster named "api.default.XXX", and another
Upstream does have overrides for "api.default.XXX" that they won't
cross-pollinate against the operator's wishes.

Fixes #6159
2019-08-01 22:03:34 -05:00
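
A sketch of the per-upstream overrides discussed in #6225 above, using a hypothetical "web" sidecar with an "api" upstream; the field names follow the usual upstream and opaque-config keys and are not taken from the commit itself:

    service {
      name = "web"
      port = 8080
      connect {
        sidecar_service {
          proxy {
            upstreams = [
              {
                destination_name = "api"        # hypothetical upstream
                datacenter       = "dc2"        # destination datacenter override
                local_bind_port  = 9191
                mesh_gateway {
                  mode = "local"                # overrides the mode computed for the chain
                }
                config {
                  protocol           = "grpc"   # protocol override (may rename xDS clusters)
                  connect_timeout_ms = 3000     # connect timeout override
                }
              }
            ]
          }
        }
      }
    }
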
R.B. Boyer 8564b6bb38
connect: validate upstreams and prevent duplicates (#6224)
* connect: validate upstreams and prevent duplicates

* Actually run Upstream.Validate() instead of ignoring it as dead code.

* Prevent two upstreams from declaring the same bind address and port.
  It wouldn't work anyway.

* Prevent two upstreams from being declared that use the same
  type+name+namespace+datacenter. Due to how the Upstream.Identity()
  function worked this ended up mostly being enforced in xDS at use-time,
  but it should be enforced more clearly at register-time.
2019-08-01 13:26:02 -05:00
Paul Banks e87cef2bb8 Revert "connect: support AWS PCA as a CA provider" (#6251)
This reverts commit 3497b7c00d.
2019-07-31 09:08:10 -04:00
Todd Radel 3497b7c00d
connect: support AWS PCA as a CA provider (#6189)
Port AWS PCA provider from consul-ent
2019-07-30 22:57:51 -04:00
Todd Radel 2552f4a11a
connect: Support RSA keys in addition to ECDSA (#6055)
Support RSA keys in addition to ECDSA
2019-07-30 17:47:39 -04:00
R.B. Boyer c6c4a2251a Merge Consul OSS branch master at commit b3541c4f34 2019-07-26 10:34:24 -05:00
Jeff Mitchell 94c73d0c92 Chunking support (#6172)
* Initial chunk support

This uses the go-raft-middleware library to allow for chunked commits to the KV
2019-07-24 17:06:39 -04:00
Matt Keeler 3053342198
Envoy Mesh Gateway integration tests (#6187)
* Allow setting the mesh gateway mode for an upstream in config files

* Add envoy integration test for mesh gateways

This necessitated many supporting changes in most of the other test cases.

Add remote mode mesh gateways integration test
2019-07-24 17:01:42 -04:00
R.B. Boyer ad9e7b6ae9
connect: allow L7 routers to match on http methods (#6164)
Fixes #6158
2019-07-23 20:56:39 -05:00
R.B. Boyer 85cf2706e6
connect: change router syntax for matching query parameters to resemble the syntax for matching paths and headers for consistency. (#6163)
This is a breaking change, but only in the context of the beta series.
2019-07-23 20:55:26 -05:00
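
The two router changes above (#6164 and #6163) can be sketched together in one service-router route; names and values are hypothetical and the field shapes follow the post-change syntax:

    Kind = "service-router"
    Name = "web"                        # hypothetical service
    Routes = [
      {
        Match {
          HTTP {
            PathPrefix = "/admin"
            Methods    = ["GET", "PUT"]    # L7 method matching (#6164)
            QueryParam = [
              {
                Name  = "x-debug"
                Exact = "1"                # same shape as path/header matching (#6163)
              },
            ]
          }
        }
        Destination {
          Service = "web-admin"            # hypothetical destination service
        }
      },
    ]
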
R.B. Boyer 1dbd92e091
connect: validate and test more of the L7 config entries (#6156) 2019-07-23 20:50:23 -05:00
R.B. Boyer e039dfd7f8
connect: rework how the service resolver subset OnlyPassing flag works (#6173)
The main change is that we no longer filter service instances by health,
preferring instead to render all results down into EDS endpoints in
envoy and merely label the endpoints as HEALTHY or UNHEALTHY.

When OnlyPassing is set to true we will force consul checks in a
'warning' state to render as UNHEALTHY in envoy.

Fixes #6171
2019-07-23 20:20:24 -05:00
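
A sketch of a service-resolver subset using the reworked OnlyPassing flag from #6173 above (hypothetical service and subset names):

    Kind          = "service-resolver"
    Name          = "web"                # hypothetical service
    DefaultSubset = "healthy"
    Subsets = {
      "healthy" = {
        OnlyPassing = true               # instances with 'warning' checks render as UNHEALTHY in envoy
      }
    }
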
Matt Keeler d7fe8befa9
Update go-bexpr (#6190)
* Update go-bexpr to v0.1.1

This brings in:

• `in`/`not in` operators to do substring matching
• `matches` / `not matches` operators to perform regex string matching.

* Add the capability to auto-generate the filtering selector ops tables for our docs
2019-07-23 14:45:20 -04:00
Matt Keeler 4728329aeb
Various Gateway Fixes (#6093)
* Ensure the mesh gateway configuration comes back in the api within each upstream

* Add a test for the MeshGatewayConfig in the ToAPI functions

* Ensure we don’t use gateways for dc local connections

* Update the svc kind index for deletions

* Replace the proxycfg.state cache with an interface for testing

Also start implementing proxycfg state testing.

* Update the state tests to verify some gateway watches for upstream-targets of a discovery chain.
2019-07-12 17:19:37 -04:00
R.B. Boyer bcd2de3a2e
implement some missing service-router features and add more xDS testing (#6065)
- also implement OnlyPassing filters for non-gateway clusters
2019-07-12 14:16:21 -05:00
R.B. Boyer 9138a97054
Fix bug in service-resolver redirects if the destination uses a default resolver. (#6122)
Also:
- add back an internal http endpoint to dump a compiled discovery chain for debugging purposes

Before, the CompiledDiscoveryChain.IsDefault() method would test:

- is this chain just one resolver step?
- is that resolver step just the default?

But what I forgot to test:

- is that resolver step for the same service that the chain represents?

This last point is important because if you configured just one config
entry:

    kind = "service-resolver"
    name = "web"
    redirect {
      service = "other"
    }

and requested the chain for "web" you'd get back a **default** resolver
for "other".  In the xDS code the IsDefault() method is used to
determine if this chain is "empty". If it is then we use the
pre-discovery-chain logic that just uses data embedded in the Upstream
object (and still lets the escape hatches function).

In the example above that means certain parts of the xDS code were going
to try referencing a cluster named "web..." despite the other parts of
the xDS code maintaining clusters named "other...".
2019-07-12 12:21:25 -05:00
R.B. Boyer 67a36e3452
handle structs.ConfigEntry decoding similarly to api.ConfigEntry decoding (#6106)
Both 'consul config write' and server bootstrap config entries take a
decoding detour through mapstructure on the way from HCL to an actual
struct. They both may take in snake_case or CamelCase (for consistency)
so need very similar handling.

Unfortunately since they are operating on mirror universes of structs
(api.* vs structs.*) the code cannot be identical, so try to share the
kind-configuration and duplicate the rest for now.
2019-07-12 12:20:30 -05:00
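
To illustrate the dual casing handled by #6106 above, the same (hypothetical) entry can be written either way and should decode to the same struct:

    # CamelCase
    Kind     = "service-defaults"
    Name     = "web"
    Protocol = "http"

    # snake_case equivalent
    kind     = "service-defaults"
    name     = "web"
    protocol = "http"
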
Matt Keeler 6e65811db2
Envoy CLI bind addresses (#6107)
* Ensure we MapWalk the proxy config in the NodeService and ServiceNode structs

This gets rid of some json encoder errors in the catalog endpoints

* Allow passing explicit bind addresses to envoy

* Move map walking to the ConnectProxyConfig struct

Any place where this struct gets JSON encoded will benefit as opposed to having to implement it everywhere.

* Fail when a non-empty address is provided and not bindable

* camel case

* Update command/connect/envoy/envoy.go

Co-Authored-By: Paul Banks <banks@banksco.de>
2019-07-12 12:57:31 -04:00
Matt Keeler 3eb3ee5a15
Merge pull request #6053 from hashicorp/gateways_and_resolvers
Integrate Mesh Gateways with ServiceResolverSubsets
2019-07-02 12:05:08 -04:00
R.B. Boyer 43770b9391
digest the proxy-defaults protocol into the graph (#6050) 2019-07-02 11:01:17 -05:00
Matt Keeler 3b6d5e382a Implement caching for config entry lists
Update agent/cache-types/config_entry.go

Co-Authored-By: R.B. Boyer <public@richardboyer.net>
2019-07-02 10:11:19 -04:00
R.B. Boyer 4bdb690a25
activate most discovery chain features in xDS for envoy (#6024) 2019-07-01 22:10:51 -05:00
Matt Keeler bdebe62fd0
Fix some tests that I broke when refactoring the ConfigSnapshot (#6051)
* Fix some tests that I broke when refactoring the ConfigSnapshot

* Make sure the MeshGateway config is added to all the right api structs

* Fix some more tests
2019-07-01 19:47:58 -04:00
Matt Keeler 8d953f5840 Implement Mesh Gateways
This includes both ingress and egress functionality.
2019-07-01 16:28:30 -04:00
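
For context on the mesh gateway work above, a gateway can be registered like any other service with a dedicated kind; this is a sketch with assumed name, port, and address, not a complete deployment:

    service {
      kind    = "mesh-gateway"      # registers this instance as a mesh gateway
      name    = "mesh-gateway"      # hypothetical name
      port    = 8443
      address = "203.0.113.10"      # hypothetical WAN-reachable address
    }
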
Matt Keeler 4bc1277315 Include a content hash of the intention for use during replication 2019-07-01 16:28:30 -04:00
Matt Keeler 3943e38133 Implement Kind based ServiceDump and caching of the ServiceDump RPC 2019-07-01 16:28:30 -04:00
R.B. Boyer 2ad516aeaf
do some initial config entry graph validation during writes (#6047) 2019-07-01 15:23:36 -05:00
hashicorp-ci 43bda6fb76 Merge Consul OSS branch 'master' at commit e91f73f592 2019-06-30 02:00:31 +00:00
Hans Hasselberg 33a7df3330
tls: auto_encrypt enables automatic RPC cert provisioning for consul clients (#5597) 2019-06-27 22:22:07 +02:00
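A sketch of the auto_encrypt feature from #5597 above, assuming agent TLS and CA material are already configured; the stanza differs between servers and clients:

    # server agent config
    auto_encrypt {
      allow_tls = true     # servers may issue RPC certificates to clients
    }

    # client agent config
    auto_encrypt {
      tls = true           # client requests its RPC certificate from the servers
    }
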
R.B. Boyer 6a52f9f9fb
initial version of L7 config entry compiler (#5994)
With this you should be able to fetch all of the relevant discovery
chain config entries from the state store in one query and then feed
them into the compiler outside of a transaction.

There are a lot of TODOs scattered through here, but they're mostly
around handling fun edge cases and can be deferred until more of the
plumbing works completely.
2019-06-27 13:38:21 -05:00
R.B. Boyer ceef44bbc9
adding new config entries for L7 discovery chain (unused) (#5987) 2019-06-27 12:37:43 -05:00
hashicorp-ci f4304e2e5b Merge Consul OSS branch 'master' at commit 4eb73973b6 2019-06-27 02:00:41 +00:00
Pierre Souchay 0e907f5aa8 Support for maximum size for Output of checks (#5233)
* Support for maximum size for Output of checks

This PR allows users to limit the size of output produced by checks at the agent 
and check level.

When set at the agent level, it will limit the output for all checks monitored
by the agent.

When set at the check level, it can override the agent max for a specific check but
only if it is lower than the agent max.

Default value is 4k, and input must be at least 1.
2019-06-26 09:43:25 -06:00
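
A sketch of the agent-level knob from #5233 above; check_output_max_size is the agent configuration key (the per-check override mentioned in the commit is not shown here):

    # agent configuration
    check_output_max_size = 2048    # bytes of check output retained; default is 4096
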
Matt Keeler 43c5ba0304
New Cache Types (#5995)
* Add a cache type for the Catalog.ListServices endpoint

* Add a cache type for the Catalog.ListDatacenters endpoint
2019-06-24 14:11:34 -04:00
Aestek b839f52195 kv: do not trigger watches when setting the same value (#5885)
If a KVSet is performed but does not update the entry, do not trigger
watches for this key.
This avoids releasing blocking queries for KV values that did not
actually change.
2019-06-18 15:06:29 +02:00
Matt Keeler f3d9b999ee
Add tagged addresses for services (#5965)
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
2019-06-17 10:51:50 -04:00
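
A sketch of a service registration using the new service-level tagged addresses from #5965 above (addresses are hypothetical); with `translate_wan_addrs` enabled the wan entry is served to remote datacenters:

    service {
      name = "web"
      port = 8080
      tagged_addresses {
        lan {
          address = "10.0.0.10"        # address advertised inside the local datacenter
          port    = 8080
        }
        wan {
          address = "198.51.100.20"    # address advertised to remote datacenters
          port    = 8080
        }
      }
    }
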
R.B. Boyer 40336fd353
agent: fix several data races and bugs related to node-local alias checks (#5876)
The observed bug was that a full restart of a consul datacenter (servers
and clients) in conjunction with a restart of a connect-flavored
application with bring-your-own-service-registration logic would very
frequently cause the envoy sidecar service check to never reflect the
aliased service.

Over the course of investigation several bugs and unfortunate
interactions were corrected:

(1)

local.CheckState objects were only shallow copied, but the key piece of
data that gets read and updated is one of the things not copied (the
underlying Check with a Status field). When the stock code was run with
the race detector enabled this highly-relevant-to-the-test-scenario field
was found to be racy.

Changes:

 a) update the existing Clone method to include the Check field
 b) copy-on-write when those fields need to change rather than
    incrementally updating them in place.

This made the observed behavior occur slightly less often.

(2)

If anything about how the runLocal method for node-local alias check
logic was ever flawed, there was no fallback option. Those checks are
purely edge-triggered and failure to properly notice a single edge
transition would leave the alias check incorrect until the next flap of
the aliased check.

The change was to introduce a fallback timer to act as a control loop to
double check the alias check matches the aliased check every minute
(borrowing the duration from the non-local alias check logic body).

This made the observed behavior eventually go away when it did occur.

(3)

Originally I thought there were two main actions involved in the data race:

A. The act of adding the original check (from disk recovery) and its
   first health evaluation.

B. The act of the HTTP API requests coming in and resetting the local
   state when re-registering the same services and checks.

It took a while for me to realize that there's a third action at work:

C. The goroutines associated with the original check and the later
   checks.

The actual sequence of actions that was causing the bad behavior was
that the API actions result in the original check to be removed and
re-added _without waiting for the original goroutine to terminate_. This
means for brief windows of time during check definition edits there are
two goroutines that can be sending updates for the alias check status.

In extremely unlikely scenarios the original goroutine sees the aliased
check start up in `critical` before being removed but does not get the
notification about the nearly immediate update of that check to
`passing`.

This is interlaced with the new goroutine coming up, initializing its
base case to `passing` from the current state and then listening for new
notifications of edge triggers.

If the original goroutine "finishes" its update, it then commits one
more write into the local state of `critical` and exits leaving the
alias check no longer reflecting the underlying check.

The correction here is to enforce that the old goroutines must terminate
before spawning the new one for alias checks.
2019-05-24 13:36:56 -05:00
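
For context on the alias checks fixed in #5876 above, a node-local alias check is declared roughly like this (hypothetical names); the fix concerns the goroutines that keep such a check in sync with the aliased service:

    check {
      name          = "sidecar-alias"    # hypothetical check name
      alias_service = "web"              # mirrors the health of the local "web" service
    }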