* Proxy Config Manager
This component watches for local state changes on the agent and ensures that each service registered locally with Kind == connect-proxy has its state actively populated in the cache.
This serves two purposes:
1. For the built-in proxy, it ensures that the state needed to accept connections is available in RAM shortly after registration and likely before the proxy actually starts accepting traffic.
2. For (future - next PR) xDS server and other possible future proxies that require _push_ based config discovery, this provides a mechanism to subscribe and be notified about updates to a proxy instance's config including upstream service discovery results.
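A minimal sketch of the subscribe/notify shape described above, with hypothetical names (`Manager`, `Watch`, `ConfigSnapshot`); the real component's API may differ:

```go
package proxycfg

import "sync"

// ConfigSnapshot is an immutable view of everything a proxy instance needs
// to serve traffic (certs, roots, upstream discovery results, ...).
type ConfigSnapshot struct {
	ProxyID   string
	Upstreams map[string][]string // upstream name -> resolved addresses
}

// Manager fans out config snapshots to any watchers of a proxy instance.
type Manager struct {
	mu       sync.Mutex
	latest   map[string]*ConfigSnapshot
	watchers map[string][]chan *ConfigSnapshot
}

func NewManager() *Manager {
	return &Manager{
		latest:   make(map[string]*ConfigSnapshot),
		watchers: make(map[string][]chan *ConfigSnapshot),
	}
}

// Watch returns a channel that immediately delivers the latest snapshot (if
// any) and then every subsequent update for the given proxy service ID.
func (m *Manager) Watch(proxyID string) <-chan *ConfigSnapshot {
	ch := make(chan *ConfigSnapshot, 1)
	m.mu.Lock()
	defer m.mu.Unlock()
	if snap, ok := m.latest[proxyID]; ok {
		ch <- snap
	}
	m.watchers[proxyID] = append(m.watchers[proxyID], ch)
	return ch
}

// Notify records a new snapshot and delivers it to watchers, replacing any
// undelivered stale snapshot so slow consumers always see the latest state.
func (m *Manager) Notify(snap *ConfigSnapshot) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.latest[snap.ProxyID] = snap
	for _, ch := range m.watchers[snap.ProxyID] {
		select {
		case <-ch: // discard the undelivered stale snapshot
		default:
		}
		ch <- snap
	}
}
```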
* Address review comments
* Better comments; Better delivery of latest snapshot for slow watchers; Embed Config
* Comment typos
* Add upstream Stringer for funsies
- A new endpoint `/v1/agent/service/:service_id` which is a generic way to look up the service for a single instance. The primary value here is that it:
- **supports hash-based blocking** and so;
- **replaces `/agent/connect/proxy/:proxy_id`** as the mechanism the built-in proxy uses to read its config.
- It's not proxy specific and so works for any service.
- It has a temporary shim to call through to the existing endpoint to preserve current managed proxy config defaulting behaviour until that is removed entirely (tested).
- The built-in proxy now uses the new endpoint exclusively for its config (see the hash-blocking read sketch after this list).
- The built-in proxy now has a `-sidecar-for` flag that allows the service ID of the _target_ service to be specified, on the condition that there is exactly one "sidecar" proxy (that is one that has `Proxy.DestinationServiceID` set) for the service registered.
- Several fixes for edge cases for SidecarService
- A fix for `Alias` checks - when running locally they didn't update their state until some external thing updated the target. If the target service has no checks registered as below, then the alias never made it past critical.
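As a rough illustration of the hash-blocking read loop the built-in proxy can now run against `/v1/agent/service/:service_id`; the `hash`/`wait` query parameters and the `X-Consul-ContentHash` response header are my assumptions about the endpoint's blocking mechanics:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// watchService long-polls the agent's service endpoint using hash-based
// blocking: pass the last seen hash back and the agent holds the request
// until the service definition changes (or the wait time elapses).
func watchService(serviceID string) error {
	hash := ""
	for {
		url := fmt.Sprintf("http://127.0.0.1:8500/v1/agent/service/%s?hash=%s&wait=10m", serviceID, hash)
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(time.Second) // back off briefly on transport errors
			continue
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		newHash := resp.Header.Get("X-Consul-ContentHash") // assumed header name
		if newHash != hash {
			fmt.Printf("service %s changed (%d bytes of JSON)\n", serviceID, len(body))
			hash = newHash
		}
	}
}

func main() {
	if err := watchService("web-sidecar-proxy"); err != nil {
		fmt.Println("watch failed:", err)
	}
}
```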
* Added new Config for SidecarService in ServiceDefinitions.
* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.
* Test coverage for the new sidecarServiceFromNodeService method.
* Test API registration with SidecarService
* Recursive Key Translation 🤦
* Add tests for nested sidecar definition arrays to ensure they are translated correctly
* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.
Add tests for deregistration.
* API struct for agent register. No other endpoint should be affected yet.
* Additional test cases to cover updates to API registrations
* Refactor Service Definition ProxyDestination.
This includes:
- Refactoring all internal structs used
- Updated tests for both deprecated and new input for:
- Agent Services endpoint response
- Agent Service endpoint response
- Agent Register endpoint
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Register
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Services endpoint response
- Catalog Node endpoint response
- Catalog Service endpoint response
- Updated API tests for all of the above too (both deprecated and new forms of register)
TODO:
- config package changes for on-disk service definitions
- proxy config endpoint
- built-in proxy support for new fields
* Agent proxy config endpoint updated with upstreams
* Config file changes for upstreams.
* Add upstream opaque config and update all tests to ensure it works everywhere.
* Built in proxy working with new Upstreams config
* Command fixes and deprecations
* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end-to-end test scripts...
TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in Managed proxy struct.
* Fix translated keys in API registration.
* Fixes from docs
- omit some empty undocumented fields in API
- Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.
* Documentation updates for Upstreams in service definition
* Fixes for tests broken by many refactors.
* Enable travis on f-connect branch in this branch too.
* Add consistent Deprecation comments to ProxyDestination uses
* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs
* Rename agent/proxy package to reflect that it is limited to managed proxy processes
Rationale: we have several other components of the agent that relate to Connect proxies, for example the ProxyConfigManager component needed for Envoy work. Those things are pretty separate from the focus of this package so far, which is only concerned with managing external proxy processes, so it's not a good fit to put code for that in here; yet there is a naming clash if we have other packages related to proxy functionality that are not in the `agent/proxy` package.
Happy to bikeshed the name. I started by calling it `managedproxy` but `managedproxy.Manager` is especially unpleasant. `proxyprocess` seems good in that it's more specific about purpose but less clearly connected with the concept of "managed proxies". The names in use are cleaner though e.g. `proxyprocess.Manager`.
This rename was completed automatically using golang.org/x/tools/cmd/gomvpkg.
Depends on #4541
* Fix missed windows tagged files
* Add cache types for catalog/services and health/services and basic test that caching works
* Support non-blocking cache types with Cache-Control semantics.
* Update API docs to include caching info for every endpoint.
* Comment updates per PR feedback.
* Add note on caching to the 10,000 foot view on the architecture page to make the new data path more clear.
* Document prepared query staleness quirk and force all background requests to AllowStale so we can spread service discovery load across servers.
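A hedged example of what the Cache-Control semantics look like from a client's point of view; the `?cached` query parameter, the `max-age`/`stale-if-error` directives and the `X-Cache`/`Age` response headers are how I understand the feature, so treat the exact names as assumptions:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"http://127.0.0.1:8500/v1/catalog/service/web?cached", nil)
	if err != nil {
		panic(err)
	}
	// Accept a cached result up to 30s old, and tolerate results up to
	// 5 minutes stale if the servers are unreachable.
	req.Header.Set("Cache-Control", "max-age=30, stale-if-error=300")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// X-Cache reports whether the agent answered from its local cache.
	fmt.Println("status:", resp.Status)
	fmt.Println("served from cache:", resp.Header.Get("X-Cache"))
	fmt.Println("age of result:", resp.Header.Get("Age"))
}
```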
Don't set the value of TLSConfig.Address explicitly.
This will make sure env vars like CONSUL_TLS_SERVER_NAME are taken into account for the connection. Fixes #4718.
In a real agent the `cache` instance is alive until the agent shuts down so this is not a real leak in production. However, in our test suite, every TestAgent that is started and stopped leaks goroutines that never get cleaned up, which accumulate consuming CPU and memory through subsequent tests in the `agent` package and doesn't help our test flakiness.
This adds a Close method that doesn't invalidate or clean up the cache, and still allows concurrent blocking queries to run (for up to 10 mins which might still affect tests). But at least it doesn't maintain them forever with background refresh and an expiry watcher routine.
It would be nice to cancel any outstanding blocking requests as well when we close but that requires much more invasive surgery right into our RPC protocol since we don't have a way to cancel requests currently.
Unscientifically this seems to make tests pass a bit quicker and more reliably locally but I can't really be sure of that!
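A minimal sketch of the kind of Close method described, assuming a simplified cache with one background-refresh goroutine per entry (names like `stopCh` are illustrative, not the actual implementation):

```go
package cache

import "sync"

type Cache struct {
	mu      sync.Mutex
	stopped bool
	stopCh  chan struct{} // closed exactly once to stop background routines
}

func New() *Cache {
	return &Cache{stopCh: make(chan struct{})}
}

// Close stops background refresh and expiry goroutines. It does not
// invalidate entries, and in-flight blocking queries are allowed to finish.
func (c *Cache) Close() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.stopped {
		c.stopped = true
		close(c.stopCh)
	}
	return nil
}

// refreshLoop is run per cache entry; it exits as soon as Close is called
// instead of refreshing forever in the background.
func (c *Cache) refreshLoop(refresh func()) {
	for {
		select {
		case <-c.stopCh:
			return
		default:
			refresh()
		}
	}
}
```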
cli: forward SIGTERM to child process of 'lock' and 'watch' subcommands on unix
This also removes the signal handler for SIGKILL as it's impossible to receive these signals.
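A small sketch of the forwarding behaviour on unix using the standard library; the real subcommands wire this into their existing child-process handling rather than a standalone program:

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	cmd := exec.Command("sleep", "60")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Forward SIGTERM and SIGINT to the child. SIGKILL cannot be caught,
	// so there is no point registering a handler for it.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		for sig := range sigCh {
			_ = cmd.Process.Signal(sig)
		}
	}()

	_ = cmd.Wait()
}
```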
* Fix CA pruning when CA config uses string durations.
The tl;dr here is:
- Configuring LeafCertTTL with a string like "72h" is how we do it by default and should be supported
- Most of our tests managed to escape this by defining them as time.Duration directly
- Our actual default value is a string
- Since this is stored in a map[string]interface{} config, when it is written to Raft it goes through a msgpack encode/decode cycle (even though it's written from server not over RPC).
- msgpack decode leaves the string as a `[]uint8`
- Some of our parsers required string and failed
- So after 1 hour, a default configured server would throw an error about pruning old CAs
- If a new CA was configured that set LeafCertTTL as a time.Duration, things might be OK after that, but if a new CA was just configured from a config file, initialization would cause the same issue and always fail, so it would never prune the old CA.
- Mostly this is just a janky error that got past tests due to many levels of complicated encoding/decoding.
tl;dr of the tl;dr: Yay for type safety. Map[string]interface{} combined with msgpack always goes wrong but we somehow get bitten every time in a new way :D
We already fixed this once! The main CA config had the same problem so @kyhavlov already wrote the mapstructure DecodeHook that fixes it. It wasn't used in several places it needed to be, and one of those is now in `structs`, which caused a dependency cycle, so I've moved them.
This adds a whole new test that explicitly tests the case that broke here. It also adds tests that would have failed in other places before (the Consul and Vault provider parsing functions). I'm not sure if they would ever be affected as it is now, as we've not seen things broken with them, but it seems better to explicitly test that and support it so we're not bitten a third time!
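The shape of the decode hook fix, roughly; this is a hedged sketch rather than the exact hook @kyhavlov wrote, handling both the `string` form and the `[]uint8` form that msgpack decoding produces:

```go
package structs

import (
	"reflect"
	"time"

	"github.com/mitchellh/mapstructure"
)

// ParseDurationHook converts "72h"-style strings (and the []uint8 form that
// msgpack decoding produces) into time.Duration during mapstructure decoding.
func ParseDurationHook() mapstructure.DecodeHookFunc {
	return func(from reflect.Type, to reflect.Type, data interface{}) (interface{}, error) {
		if to != reflect.TypeOf(time.Duration(0)) {
			return data, nil
		}
		switch v := data.(type) {
		case string:
			return time.ParseDuration(v)
		case []uint8:
			// msgpack round-trips strings in map[string]interface{} as []uint8.
			return time.ParseDuration(string(v))
		default:
			return data, nil
		}
	}
}
```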
* Typo fix
* Fix bad Uint8 usage
If you provide an invalid HTTP configuration, Consul will still start instead of failing. But if you do so, the built-in proxy, which you might need for Connect, won't be able to start.
This implements parts of RFC 7871 where Consul is acting as an authoritative name server (or forwarding resolver when recursors are configured)
If the ECS opt is present in the request we will mirror it back and return a response with a scope of 0 (global) or with the same prefix length as the request (indicating it is valid specifically for that subnet).
We only mirror the prefix-length (non-global) for prepared queries as those could potentially use nearness checks that could be affected by the subnet. In the future we could get more sophisticated with determining the scope bits and allow for better caching of prepared queries that don’t rely on nearness checks.
The other thing this does not do is implement the part of the ECS RFC related to originating ECS headers when acting as an intermediate DNS server (forwarding resolver). That would take quite a bit more effort and in general provide very little value. Consul will currently forward the ECS headers between recursors and the clients transparently, we just don't originate them for non-ECS clients to get potentially more accurate "location aware" results.
Fixes: #4578
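A rough sketch of the mirroring behaviour with github.com/miekg/dns; the request/response plumbing around it is simplified and the function name is mine:

```go
package dnsutil

import "github.com/miekg/dns"

// mirrorECS copies the EDNS Client Subnet option from the request into the
// response, setting the scope prefix length: 0 (global) for normal lookups,
// or the request's source prefix length for prepared queries whose results
// may depend on the client subnet (e.g. nearness checks).
func mirrorECS(req, resp *dns.Msg, preparedQuery bool) {
	opt := req.IsEdns0()
	if opt == nil {
		return
	}
	for _, o := range opt.Option {
		subnet, ok := o.(*dns.EDNS0_SUBNET)
		if !ok {
			continue
		}
		ecs := &dns.EDNS0_SUBNET{
			Code:          dns.EDNS0SUBNET,
			Family:        subnet.Family,
			Address:       subnet.Address,
			SourceNetmask: subnet.SourceNetmask,
			SourceScope:   0, // global scope: answer does not depend on subnet
		}
		if preparedQuery {
			ecs.SourceScope = subnet.SourceNetmask
		}
		respOpt := new(dns.OPT)
		respOpt.Hdr.Name = "."
		respOpt.Hdr.Rrtype = dns.TypeOPT
		respOpt.Option = append(respOpt.Option, ecs)
		resp.Extra = append(resp.Extra, respOpt)
		return
	}
}
```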
Prior to this fix if there was an error binding to ports for the DNS servers the error would be swallowed by the gated log writer and never output. This fix propagates the DNS server errors back to the shell with a multierror.
* Implementation of Weights Data structures
Adding this data structure will allow us to resolve the
issues #1088 and #4198
This new structure defaults to values:
```
{ Passing: 1, Warning: 0 }
```
This means a weight of 0 is used for a service in warning state,
while a weight of 1 is used for a healthy service.
Thus it remains compatible with previous Consul versions.
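For illustration, the data structure and the state-to-weight mapping sketched in Go (a simplification; the helper name is mine, and unknown check states fall back to 1 as noted further down):

```go
package structs

// Weights configures how DNS SRV records weight a service instance
// depending on the state of its checks.
type Weights struct {
	Passing int
	Warning int
}

// weightForState returns the SRV weight to use for an instance whose worst
// check is in the given state. Critical instances are normally filtered out
// before this point; unknown states fall back to a weight of 1.
func weightForState(state string, w Weights) int {
	switch state {
	case "passing":
		return w.Passing
	case "warning":
		return w.Warning
	default:
		return 1
	}
}
```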
* Implemented weights for DNS SRV Records
* DNS properly supports agents with weight support while the server does not (backwards compatibility)
* Use a Warning weight value of 1 by default
When using the DNS interface with only_passing = false, all nodes
with a non-critical health check used to have a weight value of 1.
Having weight.Warning = 0 as the default value is probably
a bad idea as it breaks backwards compatibility.
Thus, we use a default value of 1 to be consistent with existing behaviour.
* Added documentation for new weight field in service description
* Better documentation about weights as suggested by @banks
* Return weight = 1 for unknown Check states as suggested by @banks
* Fixed typo (of -> or) in error message as requested by @mkeeler
* Fixed unstable unit test TestRetryJoin
* Fixed unstable tests
* Fixed wrong Fatalf format in `testrpc/wait.go`
* Added notes regarding DNS SRV lookup limitations regarding number of instances
* Documentation fixes and clarification regarding SRV records with weights as requested by @banks
* Rephrase docs
* Added log-file flag to capture Consul logs in a user specified file
* Refactored code.
* Refactored code. Added flags to rotate logs based on bytes and duration
* Added the flags for log file and log rotation on the webpage
* Fixed TestSanitize failing due to the addition of 3 flags
* Introduced changes : mutex, data-dir log writes, rotation logic
* Added test for logfile and updated the default log destination for docs
* Log name now uses UnixNano
* TestLogFile is now uses t.Parallel()
* Removed unnecessary int64Val function
* Updated docs to reflect default log name for log-file
* No longer writes to data-dir and adds .log if the filename has no extension
Fixes #4515
This just slightly refactors the logic to only attempt to set the serf wan reconnect timeout when the rest of the serf wan settings are configured - thus avoiding a segfault.
* Display more information about a check not being properly added when it fails
It follows an incident where we got lots of error messages:
[WARN] consul.fsm: EnsureRegistration failed: failed inserting check: Missing service registration
That seems related to Consul failing to restart on the respective agents.
Having Node information as well as service information would help diagnose the issue.
* Renamed ensureCheckIfNodeMatches() as requested by @banks
- Add WaitForTestAgent to tests flaky due to missing serfHealth registration
- Fix bug in retries calling Fatalf with *testing.T
- Convert TestLockCommand_ChildExitCode to table driven test
* Allow renaming nodes with IDs, will fix #3974 and #4413
This change allows any well-behaved recent agent with an ID to be
renamed safely, i.e. without taking the name of another one
(using case-insensitive comparison). A rough sketch of the conflict
check follows this item.
Deprecated behaviour warning
----------------------------
For backwards compatibility, it is however still possible to
"take" the name of another node by not providing any ID.
Note that when not providing any ID, it is possible to have 2 nodes
with similar names differing only in case, e.g. myNode and mynode,
which might lead to DB corruption on the Consul server side and
prevent the server from restarting properly.
See #3983 and #4399 for Context about this change.
Disabling registration of nodes without IDs as specified in #4414
should probably be the way to go eventually.
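The conflict check amounts to something like the following sketch (hypothetical names; the real transaction logic lives in the state store):

```go
package state

import (
	"fmt"
	"strings"
)

type Node struct {
	ID   string
	Name string
}

// ensureNoNodeWithSimilarName returns an error if another node (with a
// different ID) already uses the requested name, compared case-insensitively,
// so a re-registration cannot silently "steal" an existing node's name.
func ensureNoNodeWithSimilarName(existing []*Node, id, name string) error {
	for _, n := range existing {
		if strings.EqualFold(n.Name, name) && n.ID != id {
			return fmt.Errorf("node name %q conflicts with existing node %q (ID %s)",
				name, n.Name, n.ID)
		}
	}
	return nil
}
```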
* Removed the case-insensitive search when adding a node within the else
block since it breaks the test TestAgentAntiEntropy_Services
While the else case is probably legit, it will be fixed with #4414 in
a later release.
* Added the check back in the else branch to avoid duplicated names, but
enforce this check only for nodes having IDs.
Thus most tests without any ID will work, and allows us fixing
* Added more tests regarding request with/without IDs.
`TestStateStore_EnsureNode` now tests registration and renaming with IDs
`TestStateStore_EnsureNodeDeprecated` tests registration without IDs
and tests removing an ID from a node as well as updating a node
without its ID (deprecated behaviour kept for backwards compatibility)
* Do not allow renaming in case of conflict, including when other node has no ID
* Fixed function GetNodeID that was not working due to a wrong type when searching for a node by its ID
Thus, all tests about renaming were not working properly.
Added the full test case that allowed me to detect it.
* Better error messages, more tests when nodeID is not a valid UUID in GetNodeID()
* Added separate TestStateStore_GetNodeID to test GetNodeID.
More complete test coverage for GetNodeID
* Added new unit test `TestStateStore_ensureNoNodeWithSimilarNameTxn`
Also fixed comments to be clearer after remarks from @banks
* Fixed error message in unit test to match test case
* Use uuid.ParseUUID to parse Node.ID as requested by @mkeeler
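A sketch of the UUID validation on the lookup path, using `github.com/hashicorp/go-uuid` as the bullet above mentions; the surrounding lookup is simplified:

```go
package state

import (
	"fmt"

	"github.com/hashicorp/go-uuid"
)

// getNodeName looks up a node name by node ID, rejecting anything that isn't
// a valid UUID up front so callers get a clear error instead of a silent miss.
func getNodeName(nodesByID map[string]string, id string) (string, error) {
	if _, err := uuid.ParseUUID(id); err != nil {
		return "", fmt.Errorf("node lookup by ID requires a valid UUID, got %q: %v", id, err)
	}
	return nodesByID[id], nil
}
```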
* Fixes TestAgent_IndexChurn
* Fixes TestPreparedQuery_Wrapper
* Increased sleep in agent_test for IndexChurn to 500ms
* Made the comment about joinWAN operation much less of a cliffhanger
* Revert "BUGFIX: Unit test relying on WaitForLeader() did not work due to wrong test (#4472)"
This reverts commit cec5d72396.
* Revert "CA initialization while boostrapping and TestLeader_ChangeServerID fix. (#4493)"
This reverts commit 589b589b53.
- Improve resilience of testrpc.WaitForLeader()
- Add additional retry to CI
- Increase "go test" timeout to 8m
- Add wait for cluster leader to several tests in the agent package
- Add retry to some tests in the api and command packages
* Fixes the DNS recursor so it properly resolves requests
* Added a test case for the recursor bug
* Refactored code && added a test case for all failing recursors
* Inner indentation moved into else if check
Fixes: #4441
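A sketch of recursing through the configured recursors in order until one answers (hedged; the real handler also deals with compression settings, truncation, etc.):

```go
package dnsutil

import (
	"errors"

	"github.com/miekg/dns"
)

// recurse forwards the query to each recursor in turn and returns the first
// successful response, rather than giving up after the first failure.
func recurse(req *dns.Msg, recursors []string) (*dns.Msg, error) {
	c := new(dns.Client)
	for _, addr := range recursors {
		resp, _, err := c.Exchange(req, addr)
		if err == nil && resp != nil {
			return resp, nil
		}
	}
	return nil, errors.New("all recursors failed to respond")
}
```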
This fixes the issue with Connect Managed Proxies + ACLs being broken.
The underlying problem was that the token parsed for most HTTP endpoints was sent untouched to the servers via the RPC request. These changes make it so that, when parsing the token at the HTTP endpoint, we additionally attempt to convert potential proxy tokens into regular tokens before sending them to the RPC endpoint. Proxy tokens are only valid on the agent with the managed proxy, so the resolution has to happen before the request gets forwarded anywhere.
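Conceptually the resolution step looks like this sketch (hypothetical names; the real agent keeps managed-proxy state in its local state struct):

```go
package agent

// resolveProxyToken translates an agent-local managed proxy token into the
// real ACL token the proxy was registered with. If the token isn't a proxy
// token it is returned unchanged, so the HTTP layer can always call this
// before forwarding a request to the servers via RPC.
func resolveProxyToken(proxyTokens map[string]string, token string) string {
	if aclToken, ok := proxyTokens[token]; ok {
		return aclToken
	}
	return token
}
```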
* New Providers added and updated vendoring for go-discover
* Vendor.json formatted using make vendorfmt
* Docs/Agent/auto-join: Added documentation for the new providers introduced in this PR
* Updated the golang.org/x/sys/unix in the vendor directory
* Agent: TestGoDiscoverRegistration updated to reflect the addition of new providers
* Deleted terraform.tfstate from vendor.
* Deleted terraform.tfstate.backup
Deleted terraform state file artifacts from unknown runs.
* Updated x/sys/windows vendor for Windows binary compilation
* Fix theoretical cache collision bug if/when we use more cache types with same result type
* Generalized fix for blocking query handling when state store methods return zero index
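The generalized fix boils down to never exposing an index below 1 to blocking queries; a sketch under that assumption:

```go
package agent

// ensureValidIndex guards blocking queries against state store methods that
// return a zero index (e.g. when no data exists yet). Blocking on index 0
// means "return immediately", which would turn the watch into a busy loop,
// so we report at least 1.
func ensureValidIndex(idx uint64) uint64 {
	if idx < 1 {
		return 1
	}
	return idx
}
```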
* Refactor test retry to only affect CI
* Undo make file merge
* Add hint to error message returned to end-user requests if Connect is not enabled when they try to request cert
* Explicit error for Roots endpoint if connect is disabled
* Fix tests that were asserting old behaviour
Also change how loadProxies works. Now it will load all persisted proxies into a map, then, when loading config file proxies, it will look up the previous proxy token in that map.
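Roughly, the new loadProxies flow described above (a sketch with simplified types; the real code deals with persisted proxy state files and config-file definitions):

```go
package agent

type persistedProxy struct {
	ProxyID    string
	ProxyToken string
}

type configProxy struct {
	ProxyID string
}

// loadProxies first indexes every persisted proxy by ID, then restores
// config-file proxies, reusing the previously issued proxy token from the
// map when one exists instead of generating a new one.
func loadProxies(persisted []persistedProxy, fromConfig []configProxy) map[string]string {
	prevTokens := make(map[string]string, len(persisted))
	for _, p := range persisted {
		prevTokens[p.ProxyID] = p.ProxyToken
	}

	tokens := make(map[string]string, len(fromConfig))
	for _, p := range fromConfig {
		if tok, ok := prevTokens[p.ProxyID]; ok {
			tokens[p.ProxyID] = tok // keep the existing proxy token
		} else {
			tokens[p.ProxyID] = newProxyToken()
		}
	}
	return tokens
}

// newProxyToken is a stand-in for however the agent mints proxy tokens.
func newProxyToken() string { return "generated-proxy-token" }
```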