This implements a solution for #7863
It does:
Add a new config option cache.entry_fetch_rate to limit the number of calls per second for a given cache entry (default value = rate.Inf)
Add cache.entry_fetch_max_burst, the burst size of the rate limit (default value = 2)
The new configuration supports the following syntax, for instance to allow one query every 3 seconds:
On the command line (HCL): `-hcl 'cache = { entry_fetch_rate = 0.333 }'`
In JSON:
{
"cache": {
"entry_fetch_rate": 0.333
}
}
On the servers, a certificate is required.
On the clients, they just have to set verify_outgoing to true to attempt TLS connections for RPCs.
Eventually we may relax these restrictions, but right now all of the settings we push down (ACL tokens, ACL-related settings, certificates, gossip key) are sensitive and shouldn't be transmitted over an unencrypted connection. Our guides and docs should recommend verify_server_hostname on the clients as well.
Another reason to do this is that weird things happen when making an insecure RPC when TLS is not enabled: basically it tries TLS anyway. We should probably fix that to make it clearer what is going on.
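A minimal sketch of the client-side settings described above, using only the keys named in this section (it assumes a client agent that already trusts the cluster CA):
```
# Client agent: attempt TLS for outgoing RPCs and verify the server's identity.
verify_outgoing        = true
verify_server_hostname = true
```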
The envisioned changes would allow extra settings to enable dynamically defined auth methods to be used instead of or in addition to the statically defined one in the configuration.
There are a couple of things in here.
First, just like auto encrypt, any Cluster.AutoConfig RPC will implicitly use the less secure RPC mechanism.
This drastically modifies how the Consul Agent starts up and moves most of the responsibilities (other than signal handling) from the cli command and into the Agent.
All commands which read config (agent, services, and validate) will now
print warnings when one of the config files is skipped because it did
not match an expected format.
Also ensures that config validate prints all warnings.
Previously the logic for reading ConfigFiles and producing Sources was split
between NewBuilder and Build. This commit moves all of the logic into NewBuilder
so that Build() can operate entirely on Sources.
This change is in preparation for logging warnings when files have an
unsupported extension.
It also reduces the scope of BuilderOpts, and gets us very close to removing
Builder.options.
The nil value was never used. We can avoid a bunch of complications by
making the field a string value instead of a pointer.
This change is in preparation for fixing a silent config failure.
Flags is an overloaded term in this context. It generally is used to
refer to command line flags. This struct, however, is a data object
used as input to the construction.
It happens to be partially populated by command line flags, but
otherwise has very little to do with them.
Renaming this struct should make the actual responsibility of this struct
more obvious, and remove the possibility that it is confused with
command line flags.
This change is in preparation for adding additional fields to
BuilderOpts.
This field was populated for one reason, to test that it was empty.
Of all the callers, only a single one used this functionality. The rest
constructed a `Flags{}` struct which did not set Args.
I think this shows that the logic was in the wrong place. Only the agent
command needs to care about validating the args.
This commit removes the field, and moves the logic to the one caller
that cares.
Also fix some comments.
Currently opaque config blocks (config entries, and CA provider config) are
modified by PatchSliceOfMaps, making it impossible for these opaque
config sections to contain slices of maps.
In order to fix this problem, any lazy-decoding of these blocks needs to support
weak decoding of []map[string]interface{} to a struct type before
PatchSliceOfMaps is replaced. This is necessary because these config
blobs are persisted, and during an upgrade an older version of Consul
could read one of the new configuration values, which would cause an error.
To support the upgrade path, this commit first introduces the new hooks
for weak decoding of []map[string]interface{} and uses them only in the
lazy-decode paths. That way, in a future release, new style
configuration will be supported by the older version of Consul.
This decode hook has a number of advantages:
1. It no longer panics. It allows mapstructure to report the error.
2. It no longer requires the user to declare which fields are slices of
structs. It can deduce that information from the 'to' value.
3. It will make it possible to preserve opaque configuration, allowing
for structured opaque config.
This allows the operator to disable agent caching for the HTTP endpoint.
It is on by default for backwards compatibility and, if disabled, the
URL parameter `cached` will be ignored.
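As a hedged sketch of turning the behaviour off; the `http_config.use_cache` key name is an assumption here, only the behaviour (on by default, `cached` ignored when disabled) comes from the text above:
```
http_config {
  # Assumed key name: when false, the agent ignores the `cached` URL parameter.
  use_cache = false
}
```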
* testing: replace most goe/verify.Values with require.Equal
One difference between these two comparisons is that goe/verify considers
nil slices/maps to be equal to empty slices/maps, whereas testify/require
does not, and does not appear to provide any way to enable that behaviour.
Because of this difference some expected values were changed from empty
slices to nil slices, and some calls to verify.Values were left.
* Remove github.com/pascaldekloe/goe/verify
Reduce the number of assertion packages we use from 2 to 1
Three of the checks are temporarily disabled to limit the size of the
diff, and allow us to enable all the other checks in CI.
In a follow up we can fix the issues reported by the other checks one
at a time, and enable them.
Based on work done in https://github.com/hashicorp/memberlist/pull/196,
this allows restricting the IP ranges that can join a given Serf cluster
and be a member of the cluster.
Restrictions on IPs can be applied separately to LAN and WAN Serf using
two new flags and config options.
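A sketch of what the new options could look like; the `serf_lan_allowed_cidrs` and `serf_wan_allowed_cidrs` key names are assumptions, while the separate LAN/WAN restriction comes from the description above:
```
# Assumed option names: restrict which IP ranges may join each Serf pool.
serf_lan_allowed_cidrs = ["10.0.0.0/8"]
serf_wan_allowed_cidrs = ["198.51.100.0/24"]
```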
This will emit warnings about the configs not doing anything but still allow them to be parsed.
This also adds the warnings for enterprise fields that we already had in OSS, but doesn't change their enforcement behavior; for example, attempting to use a network segment will still cause a hard error in OSS.
* Implements a simple, tcp ingress gateway workflow
This adds a new type of gateway for allowing Ingress traffic into Connect from external services.
Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
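For illustration, a minimal TCP ingress gateway could be described with a config entry along these lines (a sketch; the listener fields shown are assumptions beyond the ingress gateway concept named above):
```
Kind = "ingress-gateway"
Name = "ingress-service"

Listeners = [
  {
    Port     = 8080
    Protocol = "tcp"
    Services = [
      { Name = "api" }
    ]
  }
]
```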
On recent macOS versions the ulimit defaults to 256, but many
systems (e.g. some Linux distributions) often limit this value to 1024.
On validation of configuration, Consul now validates that the number of
allowed file descriptors is bigger than http_max_conns_per_client.
This made some unit tests fail on macOS.
Use a smaller value in unit tests, so tests run well by default
on macOS without needing to tune the OS.
I spent some time today on my local Mac to figure out why Consul 1.6.3+
was not accepting limits.http_max_conns_per_client.
This adds an explicit check on the number of file descriptors to be sure
it might work (this is no guarantee, as if many clients are reaching
the agent, it might consume even more file descriptors).
Anyway, many users are fighting with RLIMIT_NOFILE, and having a clear
message would allow them to figure out what to fix.
Example of message (reload or start):
```
2020-03-11T16:38:37.062+0100 [ERROR] agent: Error starting agent: error="system allows a max of 512 file descriptors, but limits.http_max_conns_per_client: 8192 needs at least 8212"
```
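The limit being validated is the one from the error message above; as a sketch, it is configured like this (key taken from the message, value illustrative):
```
limits {
  # Each connection consumes a file descriptor, so the OS limit must be higher.
  http_max_conns_per_client = 8192
}
```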
When run with `-dev` in DevMode, it is not possible to replace
the embedded UI with another one because `-dev` implies `-ui`.
This commit allows this and slightly changes the error message
about Consul 0.7.0, which is very old and does not apply to
current versions anyway.
This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which make it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch:
There are several distinct chunks of code that are affected:
* new flags and config options for the server (see the config sketch after this list)
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that
operation for primary mesh gateways are needed in the *server* there are
some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur
at multiple layers of abstraction just to pass the data down to the right
layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the
node name of the destination)
* several new RPCs for manipulating a FederationState object:
`FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs for debugging use to invoke those RPCs from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the
Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot
their local mesh gateway `CheckServiceNodes` and sync them into their upstream
FederationState in the primary (this works in conjunction with the
replication to distribute addresses for all mesh gateways in all DCs to all
other DCs)
* a "gateway locator" convenience object to make use of this data to choose
the addresses of gateways to use for any given RPC or gossip operation to a
remote DC. This gets data from the "retry join" logic in the agent and also
directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to
determine if it's actually doing native TLS. If so it checks the ALPN header
for protocol determination (just like how the existing system uses the
type-byte marker).
* 2 new kinds of protocols are exclusively decoded via this native TLS
mechanism: one for ferrying "packet" operations (udp-like) from the gossip
layer and one for "stream" operations (tcp-like). The packet operations
re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
overhead.
* the server instances specially wrap the `memberlist.NetTransport` when running
with gateway federation enabled (in a `wanfed.Transport`). The general gist is
that if it tries to dial a node in the SAME datacenter (deduced by looking
at the suffix of the node name) there is no change. If dialing a DIFFERENT
datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh
gateways to eventually end up in a server's :8300 port.
* a new flag when launching a mesh gateway via `consul connect envoy` to
indicate that the servers are to be exposed. This sets a special service
meta when registering the gateway into the catalog.
* `proxycfg/xds` notice this metadata blob to activate additional watches for
the FederationState objects as well as the location of all of the consul
servers in that datacenter.
* `xds:` if the extra metadata is in place additional clusters are defined in a
DC to bulk sink all traffic to another DC's gateways. For the current
datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
balances all servers as well as one mini-cluster per node
(`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create
an additional SAN in certs that can be used with this flavor of federation.
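As referenced in the first bullet, a hedged sketch of the server-side configuration for this federation mode; the `primary_gateways` and `connect.enable_mesh_gateway_wan_federation` key names are assumptions, while the concepts (discovering the primary's mesh gateways from a secondary, gating the feature) come from the list above:
```
# Secondary datacenter server (assumed key names).
datacenter         = "dc2"
primary_datacenter = "dc1"

connect {
  enable_mesh_gateway_wan_federation = true
}

# Addresses of the primary DC's mesh gateways, used instead of WAN gossip join.
primary_gateways = ["203.0.113.10:443"]
```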
Fixes #7231. Previously an agent would always emit a warning when there is
an encrypt key in the configuration and an existing keyring stored,
which happens on restart.
Now it only emits that warning when the encrypt key from the
configuration is not part of the keyring.
Something similar already happens inside of the server
(agent/consul/server.go) but by doing it in the general config parsing
for the agent we can have agent-level code rely on the PrimaryDatacenter
field, too.
Currently when using the built-in CA provider for Connect, root certificates are valid for 10 years, however secondary DCs get intermediates that are valid for only 1 year. There is no mechanism currently short of rotating the root in the primary that will cause the secondary DCs to renew their intermediates.
This PR adds a check that renews the cert if it is halfway through its validity period.
In order to be able to test these changes, a new configuration option was added: IntermediateCertTTL which is set extremely low in the tests.
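A sketch of how the new option could be set low for tests; that it lives in the built-in CA provider's `ca_config` block under the `intermediate_cert_ttl` key and takes a duration string is an assumption:
```
connect {
  enabled     = true
  ca_provider = "consul"
  ca_config {
    # Assumed key for the new IntermediateCertTTL option.
    intermediate_cert_ttl = "72h"
  }
}
```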
* Add CreateCSRWithSAN
* Use CreateCSRWithSAN in auto_encrypt and cache
* Copy DNSNames and IPAddresses to cert
* Verify auto_encrypt.sign returns cert with SAN
* provide configuration options for auto_encrypt dnssan and ipsan (see the sketch after this list)
* rename CreateCSRWithSAN to CreateCSR
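As referenced above, a sketch of the new client-side options; the `dns_san`/`ip_san` key names are assumed spellings of the dnssan/ipsan options named in the list:
```
auto_encrypt {
  tls = true
  # Assumed key names: extra SANs requested for the client certificate.
  dns_san = ["client.mydomain.example"]
  ip_san  = ["198.51.100.7"]
}
```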
* Use consts for well-known tagged address keys
* Add ipv4 and ipv6 tagged addresses for node lan and wan
* Add ipv4 and ipv6 tagged addresses for service lan and wan
* Use IPv4 and IPv6 address in DNS
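A sketch of a service registration using the tagged addresses described above; the `lan_ipv4`/`wan_ipv4` names are assumed spellings of the well-known keys mentioned in the first bullet:
```
service {
  name = "web"
  port = 8080

  tagged_addresses {
    # Assumed well-known keys for LAN/WAN, IPv4/IPv6 addresses.
    lan_ipv4 {
      address = "10.0.0.10"
      port    = 8080
    }
    wan_ipv4 {
      address = "198.51.100.10"
      port    = 8080
    }
  }
}
```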
* relax requirements for auto_encrypt on server
* better error message when auto_encrypt and verify_incoming on
* docs: explain verify_incoming on Consul clients.
* Update AWS SDK to use PCA features.
* Add AWS PCA provider (see the config sketch after this list)
* Add plumbing for config, config validation tests, add test for inheriting existing CA resources created by user
* Unparallel the tests so we don't exhaust PCA limits
* Merge updates
* More aggressive polling; rate limit pass through on sign; Timeout on Sign and CA create
* Add AWS PCA docs
* Fix Vault doc typo too
* Doc typo
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Doc fixes; tests for erroring if State is modified via API
* More review cleanup
* Uncomment tests!
* Minor suggested clean ups
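As referenced above, a hedged sketch of selecting the new provider; the `aws-pca` provider name comes from the list, while the `existing_arn` key for reusing user-created PCA resources is an assumption:
```
connect {
  enabled     = true
  ca_provider = "aws-pca"
  ca_config {
    # Assumed key: reuse an existing Private CA instead of creating a new one.
    existing_arn = "arn:aws:acm-pca:REGION:ACCOUNT:certificate-authority/ID"
  }
}
```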
A check may be set to become passing/critical only if a specified number of successive
checks return passing/critical in a row. The status stays the same as before until
the threshold is reached.
This feature is available for HTTP, TCP, gRPC, Docker & Monitor checks.
Fixes: #5396
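A sketch of a check definition using this feature; the `success_before_passing` and `failures_before_critical` field names are assumptions, while the threshold behaviour is from the description above:
```
check {
  name     = "api-health"
  http     = "http://localhost:8080/health"
  interval = "10s"

  # Assumed field names: require 3 consecutive results before flipping status.
  success_before_passing   = 3
  failures_before_critical = 3
}
```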
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.
Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
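A sketch of the stanza described above, inside a sidecar proxy definition; `checks` and `paths` are named in the text, while the per-path field names shown are assumptions:
```
proxy {
  expose {
    # Automatically expose all registered HTTP and gRPC check paths.
    checks = true

    paths = [
      {
        # Assumed per-path fields: expose /metrics for Prometheus scraping.
        path            = "/metrics"
        local_path_port = 9090
        listener_port   = 21500
        protocol        = "http"
      }
    ]
  }
}
```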
Previously `verify_incoming` was required when turning on `auto_encrypt.allow_tls`, but that doesn't work together with HTTPS UI in some scenarios. Adding `verify_incoming_rpc` to the allowed configurations.
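A sketch of the now-allowed combination on a server, using only the keys named above:
```
# Server agent: require client certs on RPC only, leaving HTTPS UI access alone.
verify_incoming_rpc = true

auto_encrypt {
  allow_tls = true
}
```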
Compiling this will set an optional SNI field on each DiscoveryTarget.
When set this value should be used for TLS connections to the instances
of the target. If not set the default should be used.
Setting ExternalSNI will disable mesh gateway use for that target. It also
disables several service-resolver features that do not make sense for an
external service.
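A sketch of setting this on a service-defaults config entry; `ExternalSNI` is named above, while the surrounding entry shape follows the usual service-defaults form and should be read as illustrative:
```
Kind     = "service-defaults"
Name     = "payments"
Protocol = "tcp"

# TLS connections to this target should use this SNI; mesh gateways are bypassed.
ExternalSNI = "payments.external.example.com"
```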
* Allow setting the mesh gateway mode for an upstream in config files
* Add envoy integration test for mesh gateways
This necessitated many supporting changes in most of the other test cases.
Add remote mode mesh gateways integration test
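A sketch of setting the mode on an upstream in a service definition file; the `mesh_gateway { mode = ... }` shape is an assumption, with the local/remote modes taken from the integration tests above:
```
upstreams = [
  {
    destination_name = "billing"
    local_bind_port  = 9191

    mesh_gateway {
      # "local" or "remote", matching the integration tests above.
      mode = "local"
    }
  }
]
```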
* Display Networks (CIDR) nicely in runtime configuration
The CIDR mask is displayed as base64-encoded binary in the runtime configuration.
This adds support for displaying CIDRs nicely in the runtime configuration.
Currently, if a configuration contains the following lines:
"http_config": {
"allow_write_http_from": [
"127.0.0.0/8",
"::1/128"
]
}
A call to `/v1/agent/self?pretty` would display
"AllowWriteHTTPFrom": [
{
"IP": "127.0.0.0",
"Mask": "/wAAAA=="
},
{
"IP": "::1",
"Mask": "/////////////////////w=="
}
]
This PR fixes it and it will now display:
"AllowWriteHTTPFrom": [ "127.0.0.0/8", "::1/128" ]
* Added test for cidr nice rendering in `TestSanitize()`.
This fixes pathological cases where the write throughput and snapshot size are both so large that more than 10k log entries are written in the time it takes to restore the snapshot from disk. In this case followers that restart can never catch up with leader replication again and enter a loop of constantly downloading a full snapshot and restoring it only to find that snapshot is already out of date and the leader has truncated its logs so a new snapshot is sent etc.
In general if you need to adjust this, you are probably abusing Consul for purposes outside its design envelope and should reconsider your usage to reduce data size and/or write volume.
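The text does not name the knob, so this is only a loosely hedged sketch: if the tunable is exposed as a server config option, it could look like this, with `raft_trailing_logs` being an assumed key name:
```
# Assumed key name: raise the number of Raft logs retained after a snapshot
# so slow followers can catch up without re-downloading full snapshots.
raft_trailing_logs = 50000
```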
Both 'consul config write' and server bootstrap config entries take a
decoding detour through mapstructure on the way from HCL to an actual
struct. They both may take in snake_case or CamelCase (for consistency)
so need very similar handling.
Unfortunately since they are operating on mirror universes of structs
(api.* vs structs.*) the code cannot be identical, so try to share the
kind-configuration and duplicate the rest for now.
* Make cluster names SNI always
* Update some tests
* Ensure we check for prepared query types
* Use sni for route cluster names
* Proper mesh gateway mode defaulting when the discovery chain is used
* Ignore service splits from PatchSliceOfMaps
* Update some xds golden files for proper test output
* Allow for grpc/http listeners/cluster configs with the disco chain
* Update stats expectation
* Add ui-content-path flag (see the config sketch at the end of this list)
* tests complete, regex validator on string, index.html updated
* cleaning up debugging stuff
* ui: Enable ember environment configuration to be set via the go binary at runtime (#5934)
* ui: Only inject {{.ContentPath}} if we are making a prod build...
...otherwise we just use the current rootURL
This gets injected into a <base /> node which solves the assets path
problem but not the ember problem
* ui: Pull out the <base href=""> value and inject it into ember env
See previous commit:
The <base href=""> value is 'sometimes' injected from go at index
serve time. We pass this value down to ember by overwriting the ember
config that is injected via a <meta> tag. This has to be done before
ember bootup.
Sometimes (during testing and development, basically not production)
this is injected with the already existing value, in which case this
essentially changes nothing.
The code here is slightly abstracted away from our specific usage to
make it easier for anyone else to use, and also make sure we can cope
with using this same method to pass variables down from the CLI through
to ember in the future.
* ui: We can't use <base /> move everything to javascript (#5941)
Unfortunately we can't seem to be able to use <base> and rootURL
together as URL paths will get doubled up (`ui/ui/`).
This moves all the things that we need to interpolate with .ContentPath
to the `startup` javascript so we can conditionally print out
`{{.ContentPath}}` in lots of places (now we can't use base)
* fixed when we serve index.html
* ui: For writing a ContentPath, we also need to cope with testing... (#5945)
...and potentially more environments
Testing has additional things in a separate index.html in `tests/`
This makes the entire thing a little saner and uses just javascript
template literals instead of a pseudo-handlebars syntax for our
templating of these files.
Instead of just templating the entire file this way, we still only
template `{{content-for 'head'}}` and `{{content-for 'body'}}`
in this way to ensure we support other plugins/addons
* build: Loosen up the regex for retrieving the CONSUL_VERSION (#5946)
* build: Loosen up the regex for retrieving the CONSUL_VERSION
1. Previously the `sed` replacement was searching for the CONSUL_VERSION
comment at the start of a line; it no longer does this, to allow for
indentation.
2. Both `grep` and `sed` were looking for the comment at the end of the
line. We've removed this restriction here. We don't need to remove it
right now, but if we ever put the comment followed by something here the
searching would break.
3. Added `xargs` for trimming the resulting version string. We aren't
using this already in the rest of the scripts, but we are pretty sure
this is available on most systems.
* ui: Fix erroneous variable, and also force an ember cache clean on build
1. We referenced a variable incorrectly here, this fixes that.
2. We also made sure that every `make` target clears ember's `tmp` cache
to ensure that it's not using any caches that have since been edited
every time we call a `make` target.
* added docs, fixed encoding
* fixed go fmt
* Update agent/config/config.go
Co-Authored-By: R.B. Boyer <public@richardboyer.net>
* Completed Suggestions
* run gofmt on http.go
* fix testsanitize
* fix fullconfig/hcl by setting correct 'want'
* ran gofmt on agent/config/runtime_test.go
* Update website/source/docs/agent/options.html.md
Co-Authored-By: Hans Hasselberg <me@hans.io>
* Update website/source/docs/agent/options.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* remove contentpath from redirectFS struct
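As referenced in the first bullet of this list, a sketch of the new setting; `-ui-content-path` is the flag named above, while the equivalent `ui_content_path` config key is an assumption:
```
# Assumed config key mirroring the -ui-content-path flag.
ui_content_path = "/consul/"
```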
* Support for maximum size for Output of checks
This PR allows users to limit the size of output produced by checks at the agent
and check level.
When set at the agent level, it will limit the output for all checks monitored
by the agent.
When set at the check level, it can override the agent max for a specific check but
only if it is lower than the agent max.
The default value is 4k, and the input must be at least 1.
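A sketch of the agent-level setting; the `check_output_max_size` key name is an assumption, while the 4k default and the lower-only per-check override rule come from the text above:
```
# Assumed key name: cap stored check output at 2KB instead of the 4KB default.
# Individual checks may only lower this further, never raise it.
check_output_max_size = 2048
```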
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
Roles are named and can express the same bundle of permissions that can
currently be assigned to a Token (lists of Policies and Service
Identities). The difference with a Role is that it is not itself a bearer
token, but just another entity that can be tied to a Token.
This lets an operator potentially curate a set of smaller reusable
Policies and compose them together into reusable Roles, rather than
always exploding that same list of Policies on any Token that needs
similar permissions.
This also refactors the acl replication code to be semi-generic to avoid
3x copypasta.
* First conversion
* Use serf 0.8.2 tag and associated updated deps
* Move freeport and testutil into internal/
* Make internal/ its own module
* Update imports
* Add replace statements so API and normal Consul code are
self-referencing for ease of development
* Adapt to newer goe/values
* Bump to new cleanhttp
* Fix ban nonprintable chars test
* Update lock bad args test
The error message when the duration cannot be parsed changed in Go 1.12
(ae0c435877d3aacb9af5e706c40f9dddde5d3e67). This updates that test.
* Update another test as well
* Bump travis
* Bump circleci
* Bump go-discover and godo to get rid of launchpad dep
* Bump dockerfile go version
* fix tar command
* Bump go-cleanhttp
This PR introduces reloading of the TLS configuration. Consul will now be able to reload the TLS configuration, which previously required a restart. It is not yet possible to turn TLS on or off with these changes. The configuration can only be reloaded when TLS is already turned on; most importantly, the certificates and CAs.
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
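A sketch of opting in, using the key named above:
```
acl {
  enabled = true
  # Tokens set via v1/agent/token are persisted to disk and reloaded on start.
  enable_token_persistence = true
}
```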
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like: `v1/agent/token/acl_replication_token` the path can now be just `v1/agent/token/replication`. The old paths remain to be valid.
* Added a couple new API functions to set tokens via the new paths. Deprecated the old ones and pointed to the new names. The names are also generally better and don't imply that what you are setting is for ACLs but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token as, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.
This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.
This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
Adds two new configuration parameters "dns_config.use_cache" and
"dns_config.cache_max_age" controlling how DNS requests use the agent
cache when querying servers.
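A sketch using the two parameters named above (the duration string format for `cache_max_age` is an assumption):
```
dns_config {
  # Serve DNS answers from the agent cache, accepting results up to 10s old.
  use_cache     = true
  cache_max_age = "10s"
}
```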
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in client
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs (see the config sketch after this list)
* Add State storage and LastResult argument into Cache so that cache.Types can safely store additional data that is eventually expired.
* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the Root change jitter across blocking requests, test concurrent fetches for different leaves interact nicely with rootsWatcher.
* Add multi-client and delayed rotation tests.
* Typos and cleanup error handling in roots watch
* Add comment about how the FetchResult can be used and change ca leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
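As referenced above, a sketch of the new CA configs; the `csr_max_per_second` and `csr_max_concurrent` key names are assumptions for the rate and concurrency limits described in the first bullet:
```
connect {
  ca_config {
    # Assumed key names: limit CSR signing load on the servers.
    csr_max_per_second = 50
    csr_max_concurrent = 0
  }
}
```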
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here but most of it is related to docs, testing and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description
At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.
On the surface and with a new cluster the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.
Nomad requires token and policy replication where Consul only requires policy replication with token replication being opt-in. In Consul local tokens only work with token replication being enabled though.
All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
A server running the new system must still support other clients using the legacy system.
A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system but has a legacy mode built in for when the cluster isn't ready for new ACLs. Also detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
* Add -enable-local-script-checks options
These options allow for finer control over when script checks are enabled by
giving the option to only allow them when they are declared from the local
file system (see the config sketch after this list).
* Add documentation for the new option
* Nitpick doc wording
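As referenced above, a sketch of the agent setting; the config key is assumed to mirror the new flag name:
```
# Only allow script checks defined in local config files, not via the HTTP API.
enable_local_script_checks = true
```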
* Added new Config for SidecarService in ServiceDefinitions (see the registration sketch at the end of this list).
* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.
* Test coverage for the new sidecarServiceFromNodeService method.
* Test API registration with SidecarService
* Recursive Key Translation 🤦
* Add tests for nested sidecar definition arrays to ensure they are translated correctly
* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.
Add tests for deregistration.
* API struct for agent register. No other endpoint should be affected yet.
* Additional test cases to cover updates to API registrations
* Refactor Service Definition ProxyDestination.
This includes:
- Refactoring all internal structs used
- Updated tests for both deprecated and new input for:
- Agent Services endpoint response
- Agent Service endpoint response
- Agent Register endpoint
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Register
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Services endpoint response
- Catalog Node endpoint response
- Catalog Service endpoint response
- Updated API tests for all of the above too (both deprecated and new forms of register)
TODO:
- config package changes for on-disk service definitions
- proxy config endpoint
- built-in proxy support for new fields
* Agent proxy config endpoint updated with upstreams
* Config file changes for upstreams.
* Add upstream opaque config and update all tests to ensure it works everywhere.
* Built in proxy working with new Upstreams config
* Command fixes and deprecations
* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end-to-end test scripts...
TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in the Managed proxy struct.
* Fix translated keys in API registration.
* Fixes from docs
- omit some empty undocumented fields in API
- Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.
* Documentation updates for Upstreams in service definition
* Fixes for tests broken by many refactors.
* Enable travis on f-connect branch in this branch too.
* Add consistent Deprecation comments to ProxyDestination uses
* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs
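As referenced at the top of this list, a sketch of a service definition using the new SidecarService config with upstreams; the field names follow the registration forms discussed above and should be read as illustrative:
```
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "db"
            local_bind_port  = 9191
          }
        ]
      }
    }
  }
}
```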
Don't set the value of TLSConfig.Address explicitly.
This will make sure env vars like CONSUL_TLS_SERVER_NAME are taken into account for the connection. Fixes #4718.
* Implementation of Weights Data structures
Adding this data structure will allow us to resolve
issues #1088 and #4198
This new structure defaults to values:
```
{ Passing: 1, Warning: 0 }
```
This means a weight of 0 is used for a service in warning state,
while a weight of 1 is used for a healthy service.
Thus it remains compatible with previous Consul versions (see the registration sketch at the end of this list).
* Implemented weights for DNS SRV Records
* DNS properly support agents with weight support while server does not (backwards compatibility)
* Use a Warning weight value of 1 by default
When using the DNS interface with only_passing = false, all nodes
with a non-critical healthcheck used to have a weight value of 1.
Having weight.Warning = 0 as the default value is probably
a bad idea as it breaks backward compatibility.
Thus, we put a default value of 1 to be consistent with existing behaviour.
* Added documentation for new weight field in service description
* Better documentation about weights as suggested by @banks
* Return weight = 1 for unknown Check states as suggested by @banks
* Fixed typo (of -> or) in error message as requested by @mkeeler
* Fixed unstable unit test TestRetryJoin
* Fixed unstable tests
* Fixed wrong Fatalf format in `testrpc/wait.go`
* Added notes regarding DNS SRV lookup limitations regarding number of instances
* Documentation fixes and clarification regarding SRV records with weights as requested by @banks
* Rephrase docs
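As referenced above, a sketch of the new weight field in a service description, mirroring the Passing/Warning defaults shown in this list:
```
service {
  name = "web"
  port = 8080

  # DNS SRV weights: healthy instances get 10, warning instances get 1.
  weights {
    passing = 10
    warning = 1
  }
}
```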
* Added log-file flag to capture Consul logs in a user specified file
* Refactored code.
* Refactored code. Added flags to rotate logs based on bytes and duration (see the config sketch after this list)
* Added the flags for log file and log rotation on the webpage
* Fixed TestSanitize failing due to the addition of 3 flags
* Introduced changes : mutex, data-dir log writes, rotation logic
* Added test for logfile and updated the default log destination for docs
* Log name now uses UnixNano
* TestLogFile now uses t.Parallel()
* Removed unnecessary int64Val function
* Updated docs to reflect default log name for log-file
* No longer writes to data-dir and adds .log if the filename has no extension
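As referenced above, a sketch of the file logging options in config form; the `log_file`, `log_rotate_bytes` and `log_rotate_duration` key names are assumed equivalents of the new flags:
```
# Assumed key names mirroring the new -log-file and rotation flags.
log_file            = "/var/log/consul/"  # directory path: the file name gets a UnixNano suffix
log_rotate_bytes    = 10485760            # rotate after roughly 10 MiB
log_rotate_duration = "24h"               # or at least once a day
```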
It fixes the following warnings:
agent/config/builder.go:1201: Errorf format %q has arg s of wrong type *string
agent/config/builder.go:1240: Errorf format %q has arg s of wrong type *string
agent/config will turn [{}] into {} (single element maps into a single
map) to work around HCL issues. These are resolved in HCL2 which I'm
sure Consul will switch to eventually.
This breaks the connect proxy configuration in service definition FILES
since we call this patch function. For now, let's just special-case skip
this. In the future maybe Consul will adopt HCL2 and fix it, or we
can do something else if we want. This works and is tested.
This enables `consul agent -dev` to begin using Connect features with
the built-in CA. I think this is expected behavior since you can imagine
that new users would want to try it.
There is no real downside since we're just using the built-in CA.
There are also a lot of small bug fixes found when testing lots of things end-to-end for the first time and some cleanup now it's integrated with real CA code.
The list of cipher suites included in this commit is consistent with
the values and precedence in the [Golang TLS documentation](https://golang.org/src/crypto/tls/cipher_suites.go) (see the config sketch after the note below).
> **Note:** Cipher suites with RC4 are still included within the list
> of accepted values for compatibility, but **these cipher suites are
> not safe to use** and should be deprecated with warnings and
> subsequently removed. Support for RC4 ciphers has already been
> removed or disabled by default in many prominent browsers and tools,
> including Golang.
>
> **References:**
>
> * [RC4 on Wikipedia](https://en.wikipedia.org/wiki/RC4)
> * [Mozilla Security Blog](https://blog.mozilla.org/security/2015/09/11/deprecating-the-rc4-cipher/)
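As referenced before the note, a sketch of restricting the accepted suites; the `tls_cipher_suites` key name is an assumption, and the suite identifiers follow the Golang constants linked above:
```
# Assumed key name: accept only modern AEAD suites, excluding RC4 entirely.
tls_cipher_suites = "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
```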
Docker/Openshift/Kubernetes mount the config file as a symbolic link and
IsDir returns true if the file is a symlink. Before calling IsDir, the
symlink should be resolved to determine if it points at a file or
directory.
Fixes #3753
* config: refactor ReadPath(s) methods without side-effects
Return the sources instead of modifying the state.
* config: clean data dir before every test
* config: add tests for config-file and config-dir
* config: add -config-format option
Starting with Consul 1.0 all config files must have a '.json' or '.hcl'
extension to make it unambiguous how the data should be parsed. Some
automation tools generate temporary files by appending a random string
to the generated file which obfuscates the extension and prevents the
file type detection.
This patch adds a -config-format option which can be used to override
the auto-detection behavior by forcing all config files or all files
within a config directory independent of their extension to be
interpreted as of this format.
Fixes #3620
The `consul agent` command was ignoring extra command line arguments
which can lead to confusion when the user has for example forgotten to
add a dash in front of an argument or is not using an `=` when setting
boolean flags to `true`. `-bootstrap true` is not the same as
`-bootstrap=true`, for example.
Since all command line flags are known and we don't expect unparsed
arguments we can return an error. However, this may make it slightly
more difficult in the future if we ever wanted to have these kinds of
arguments.
Fixes #3397
DNS recursors can be added through go-sockaddr templates. Entries
are deduplicated while the order is maintained.
Originally proposed by @taylorchu
See #2932
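A sketch of mixing a go-sockaddr template with a plain address; `GetPrivateIP` is a standard sockaddr template function (e.g. for a local resolver bound to the host's private address), and duplicates produced by the template are removed while order is kept, as described above:
```
# Template entries are expanded, then the list is deduplicated in order.
recursors = [
  "{{ GetPrivateIP }}",
  "8.8.8.8",
]
```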
* agent: add option to discard health output
In highly volatile environments consul will have checks with "noisy"
output which changes every time even though the status does not change.
Since the output is stored in the raft log, every health check update
unblocks a blocking call on health checks since the raft index has
changed, even though the status of the health checks may not have changed
at all. By discarding the output of the health checks, users can
choose a different tradeoff: less visibility into why a check failed in
exchange for a reduced change rate on the raft log (see the config sketch after this list).
* agent: discard output also when adding a check
* agent: add test for discard check output
* agent: update docs
* go vet
* Adds discard_check_output to reloadable config table.
* Updates the change log.
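As referenced above, a sketch of the agent-level switch, using the key named in the reloadable config bullet:
```
# Trade visibility of check output for fewer Raft writes on noisy checks.
discard_check_output = true
```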
* doc: document discrepancy between id and CheckID
* doc: document enable_tag_override change
* config: add TranslateKeys helper
TranslateKeys makes it easier to map between different representations
of internal structures. It allows recursively mapping alias keys to
canonical keys in structured maps.
* config: use TranslateKeys for config file
This also adds support for 'enabletagoverride' and removes
the need for a separate CheckID alias field.
* config: remove dead code
* agent: use TranslateKeys for FixupCheckType
* agent: translate enable_tag_override during service registration
* doc: add '.hcl' as valid extension
* config: map ScriptArgs to args
* config: add comment for TranslateKeys
* Adds client-side retry for no leader errors.
This paves over the case where the client was connected to the leader
when it loses leadership.
* Adds a configurable server RPC drain time and a fail-fast path for RPCs.
When a server leaves it gets removed from the Raft configuration, so it will
never know who the new leader server ends up being. Without this we'd be
doomed to wait out the RPC hold timeout and then fail. This makes things fail
a little quicker while a server is draining, and since we added a client retry
AND since the server doing this has already shut down and left the Serf LAN,
clients should retry against some other server.
* Makes the RPC hold timeout configurable (see the config sketch after this list).
* Reorders struct members.
* Sets the RPC hold timeout default for test servers.
* Bumps the leave drain time up to 5 seconds.
* Robustifies retries with a simpler client-side RPC hold.
* Reverts an unintended delete.
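As referenced above, a sketch of the tunables; placing them under `performance` and the exact key names `rpc_hold_timeout` and `leave_drain_time` are assumptions:
```
performance {
  # Assumed key names for the newly configurable timeouts.
  rpc_hold_timeout = "7s"   # how long clients wait out "no leader" before failing
  leave_drain_time = "5s"   # how long a leaving server keeps answering RPCs
}
```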
* Clean up handling of subprocesses and make using a shell optional
* Update docs for subprocess changes
* Fix tests for new subprocess behavior
* More cleanup of subprocesses
* Minor adjustments and cleanup for subprocess logic
* Makes the watch handler reload test use the new path.
* Adds check tests for new args path, and updates existing tests to use new path.
* Adds support for script args in Docker checks.
* Fixes the sanitize unit test.
* Adds panic for unknown watch type, and reverts back to Run().
* Adds shell option back to consul lock command.
* Adds shell option back to consul exec command.
* Adds shell back into consul watch command.
* Refactors signal forwarding and makes Windows-friendly.
* Adds a clarifying comment.
* Changes error wording to a warning.
* Scopes signals to interrupt and kill.
This avoids us trying to send SIGCHLD to the dead process.
* Adds an error for shell=false for consul exec.
* Adds notes about the deprecated script and handler fields.
* De-nests an if statement.
* vendor: add github.com/sergi/go-diff/diffmatchpatch for diff'ing test output
* config: refactor Sanitize to recursively clean runtime config and format complex fields
* Removes an extra int cast.
* Adds a top-level check test case for sanitization.
* metrics: replace statsite_prefix with service_prefix
The metrics prefix isn't statsite specific and is in fact used
for all metrics providers. Since we are deprecating fields
anyway we should fix this one as well.
Fixes#3293
* Updates docs and sorts telemetry section.
* Renames to "metrics_prefix" to disambiguate with Consul services.
* Updates the change log.
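As referenced above, a sketch of the renamed setting inside the telemetry block:
```
telemetry {
  # Previously statsite_prefix; applies to all metrics providers.
  metrics_prefix = "consul"
}
```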
* new config parser for agent
This patch implements a new config parser for the consul agent which
makes the following changes to the previous implementation:
* add HCL support
* all configuration fragments in tests and for default config are
expressed as HCL fragments
* HCL fragments can be provided on the command line so that they
can eventually replace the command line flags (see the fragment sketch after this description).
* HCL/JSON fragments are parsed into a temporary Config structure
which can be merged using reflection (all values are pointers).
The existing merge logic of overwrite for values and append
for slices has been preserved.
* A single builder process generates a typed runtime configuration
for the agent.
The new implementation is more strict and fails in the builder process
if no valid runtime configuration can be generated. Therefore,
additional validations in other parts of the code should be removed.
The builder also pre-computes all required network addresses so that no
address/port magic should be required where the configuration is used
and should therefore be removed.
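As referenced in the bullet about command-line fragments, a sketch of an HCL fragment that could equally live in a config file or be passed via `-hcl` (key choices illustrative):
```
datacenter = "dc1"

ports {
  http = 8500
}
```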
* Upgrade github.com/hashicorp/hcl to support int64
* improve error messages
* fix directory permission test
* Fix rtt test
* Fix ForceLeave test
* Skip performance test for now until we know what to do
* Update github.com/hashicorp/memberlist to update log prefix
* Make memberlist use the default logger
* improve config error handling
* do not fail on non-existing data-dir
* experiment with non-uniform timeouts to get a handle on stalled leader elections
* Run tests for packages separately to eliminate the spurious port conflicts
* refactor private address detection and unify approach for ipv4 and ipv6.
Fixes#2825
* do not allow unix sockets for DNS
* improve bind and advertise addr error handling
* go through builder using test coverage
* minimal update to the docs
* more coverage tests fixed
* more tests
* fix makefile
* cleanup
* fix port conflicts with external port server 'porter'
* stop test server on error
* do not run api test that change global ENV concurrently with the other tests
* Run remaining api tests concurrently
* no need for retry with the port number service
* monkey patch race condition in go-sockaddr until we understand why that fails
* monkey patch hcl decoder race condition until we understand why that fails
* monkey patch spurious errors in strings.EqualFold from here
* add test for hcl decoder race condition. Run with go test -parallel 128
* Increase timeout again
* cleanup
* don't log port allocations by default
* use base command arg parsing to format help output properly
* handle -dc deprecation case in Build
* switch autopilot.max_trailing_logs to int
* remove duplicate test case
* remove unused methods
* remove comments about flag/config value inconsistencies
* switch got and want around since the error message was misleading.
* Removes a stray debug log.
* Removes a stray newline in imports.
* Fixes TestACL_Version8.
* Runs go fmt.
* Adds a default case for unknown address types.
* Reorders and reformats some imports.
* Adds some comments and fixes typos.
* Reorders imports.
* add unix socket support for dns later
* drop all deprecated flags and arguments
* fix wrong field name
* remove stray node-id file
* drop unnecessary patch section in test
* drop duplicate test
* add test for LeaveOnTerm and SkipLeaveOnInt in client mode
* drop "bla" and add clarifying comment for the test
* split up tests to support enterprise/non-enterprise tests
* drop raft multiplier and derive values during build phase
* sanitize runtime config reflectively and add test
* detect invalid config fields
* fix tests with invalid config fields
* use different values for wan sanitization test
* drop recursor in favor of recursors
* allow dns_config.udp_answer_limit to be zero
* make sure tests run on machines with multiple ips
* Fix failing tests in a few more places by providing a bind address in the test
* Gets rid of skipped TestAgent_CheckPerformanceSettings and adds case for builder.
* Add porter to server_test.go to make tests there less flaky
* go fmt