In 858b05fc31 (diff-46ef88aa04507fb9b039344277531584)
we removed the encoding of values in pathnames as we thought they were
eventually being encoded by `ember`. It turns out this isn't always the
case: sometimes they are encoded and sometimes they aren't. It's complicated.
For the details, please refer to PR https://github.com/hashicorp/consul/pull/5206.
It's related to the difference between `dynamic` routes and `wildcard` routes.
Partly related to this is a decision on whether we urlencode the slashes within service names or not. Whilst historically we haven't done this, we feel it's a good time to change this behaviour, so we'll also be changing services to use dynamic routes instead of wildcard routes. Service links will then look like /ui/dc-1/services/application%2Fservice rather than /ui/dc-1/services/application/service.
Here, we define our routes in a declarative format (for the moment at least, JSON) outside of `Router.map`, and loop through this within `Router.map` to set all our routes using the standard `this.route` method. We essentially configure our Router from the outside. As this configuration is now done declaratively outside of `Router.map`, we can also make this data available to `href-to` and `paramsFor`, allowing us to detect wildcard routes and therefore apply urlencoding/decoding.
Where I mention 'conditionally' below, this detection is what is used for the decision.
We conditionally add url encoding to the `{{href-to}}` helper/addon. The
reasoning here is that if we are asking for a href/url, then whatever we
receive back should always be urlencoded. We've done this by reusing as
much code from the original `ember-href-to` addon as possible; after this
change every call to the `{{href-to}}` helper will be urlencoded.
As all links using `{{href-to}}` are now properly urlencoded, we also
need to decode them in the correct place 'on the other end'. To do this,
we override the default `Route.paramsFor` method to conditionally decode all
params before passing them to the `Route.model` hook.
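For illustration only (the UI itself is Ember/JavaScript, so this is not the actual implementation), here is a small Go sketch of the encode/decode round trip described above, using the standard library's `url.PathEscape`/`url.PathUnescape`: the link-generation side turns a service name containing a slash into a single percent-encoded path segment, and the params-decoding side turns it back before it is used.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// What the link-generation side does conceptually: a service name
	// containing a slash becomes a single percent-encoded path segment.
	slug := "application/service"
	encoded := url.PathEscape(slug)
	fmt.Println("/ui/dc-1/services/" + encoded) // /ui/dc-1/services/application%2Fservice

	// What the params-decoding side does conceptually: the segment is
	// decoded back to the original service name before it is used.
	decoded, err := url.PathUnescape(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // application/service
}
```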
Lastly (the revert), as we almost consistently use url params to
construct API calls, we make sure we re-encode any slugs that have been
passed in by the user/developer. The original API for the `createURL`
function was to allow you to pass values that didn't need encoding,
values that **did** need encoding, followed by query params (which again
require url encoding).
All in all this should make the entire ember app url encode/decode safe.
* Fix 2 remote ACL policy resolution issues
1. Use the right method to fire async not-found errors when the ACL.PolicyResolve RPC returns that error. This was previously accidentally firing a token result instead of a policy result, which would have effectively done nothing (unless there happened to be a token with a secret ID equal to the policy ID being resolved).
2. When concurrent policy resolution is being done we single-flight the requests. The bug before was that a policy resolution that was going to piggy-back on another's RPC results wasn't waiting long enough for those results to come back, due to looping with the wrong variable (see the sketch below).
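The following is not Consul's actual resolver code, just a minimal Go sketch of the single-flighting described in item 2 (with an invented `resolver` type and a pretend RPC): the first caller for a policy ID performs the fetch, and every concurrent caller waits on the same in-flight result rather than returning early.

```go
package main

import (
	"fmt"
	"sync"
)

// Policy stands in for a resolved ACL policy; the real type lives in Consul.
type Policy struct{ ID string }

type result struct {
	policy *Policy
	err    error
}

// flight represents one in-progress remote resolution that other callers
// can piggy-back on.
type flight struct {
	done chan struct{}
	res  result
}

type resolver struct {
	mu       sync.Mutex
	inFlight map[string]*flight
}

// resolve single-flights concurrent requests for the same policy ID: the
// first caller performs the RPC, later callers block on the same flight's
// done channel until the shared result is actually available.
func (r *resolver) resolve(id string, rpc func(string) (*Policy, error)) (*Policy, error) {
	r.mu.Lock()
	if f, ok := r.inFlight[id]; ok {
		r.mu.Unlock()
		<-f.done // wait for the in-flight RPC to finish
		return f.res.policy, f.res.err
	}
	f := &flight{done: make(chan struct{})}
	r.inFlight[id] = f
	r.mu.Unlock()

	p, err := rpc(id)
	f.res = result{policy: p, err: err}

	r.mu.Lock()
	delete(r.inFlight, id)
	r.mu.Unlock()
	close(f.done)

	return p, err
}

func main() {
	r := &resolver{inFlight: make(map[string]*flight)}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p, _ := r.resolve("policy-1", func(id string) (*Policy, error) {
				return &Policy{ID: id}, nil // pretend RPC
			})
			fmt.Println("resolved", p.ID)
		}()
	}
	wg.Wait()
}
```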
* Fix a handful of other edge case ACL scenarios
The main issue was that token-specific failures (not being able to access a particular policy, or the token being deleted after the initial fetch) were poisoning the policy cache.
A second issue was that for concurrent token resolutions, the first resolution to get started would go fetch all the policies. If a second resolution request came in before the policies were retrieved, the new request would register watchers for those policies but then never block waiting for them to complete. This resulted in the default policy being used when it shouldn't have been.
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in the client (see the sketch below)
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs
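As a rough illustration of the first bullet above (not the actual Consul code or configuration, and with invented names such as `signCSR` and `signWithBackoff`), the server side combines a token-bucket rate limit with a concurrency cap, while the client retries rate-limited CSRs with jittered exponential backoff:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"

	"golang.org/x/time/rate"
)

var errRateLimited = errors.New("rate limit exceeded for CSR signing")

// server bundles the two limits described above: a token-bucket rate limit
// and a cap on concurrently processed CSRs.
type server struct {
	limiter *rate.Limiter
	sem     chan struct{} // concurrency limit
}

func (s *server) signCSR(csr string) (string, error) {
	if !s.limiter.Allow() {
		return "", errRateLimited
	}
	s.sem <- struct{}{}        // acquire a concurrency slot
	defer func() { <-s.sem }() // release it when done

	time.Sleep(10 * time.Millisecond) // stand-in for the actual signing work
	return "cert-for-" + csr, nil
}

// signWithBackoff sketches the client side: on a rate-limit error it retries
// with jittered exponential backoff instead of hammering the servers, which
// is what keeps CA rotations graceful.
func signWithBackoff(s *server, csr string) (string, error) {
	backoff := 100 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		cert, err := s.signCSR(csr)
		if err == nil {
			return cert, nil
		}
		if !errors.Is(err, errRateLimited) {
			return "", err
		}
		jitter := time.Duration(rand.Int63n(int64(backoff)))
		time.Sleep(backoff + jitter)
		backoff *= 2
	}
	return "", errRateLimited
}

func main() {
	s := &server{
		limiter: rate.NewLimiter(rate.Limit(50), 5), // ~50 CSRs/s, burst of 5
		sem:     make(chan struct{}, 2),             // at most 2 in flight
	}
	cert, err := signWithBackoff(s, "web-sidecar")
	fmt.Println(cert, err)
}
```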
For established xDS gRPC streams, recheck ACLs for each DiscoveryRequest
or DiscoveryResponse. If more than 5 minutes have elapsed since the last
ACL check, recheck even without an incoming DiscoveryRequest or
DiscoveryResponse. ACL failures will terminate the stream.
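A minimal sketch of that recheck loop, using placeholder types rather than the real xDS and ACL plumbing: ACLs are checked for every incoming request, a timer forces a check after 5 minutes of idleness, and any failure terminates the stream.

```go
package main

import (
	"fmt"
	"io"
	"time"
)

// Placeholder types standing in for the real xDS stream and ACL machinery.
type discoveryRequest struct{}

type stream interface {
	Recv() (*discoveryRequest, error)
}

// handleStream rechecks ACLs on every incoming DiscoveryRequest; when the
// stream is idle, the timer still forces a check every 5 minutes. Any ACL
// failure ends the stream.
func handleStream(s stream, checkACL func() error) error {
	const aclCheckInterval = 5 * time.Minute

	reqCh := make(chan *discoveryRequest)
	errCh := make(chan error, 1)
	go func() {
		for {
			req, err := s.Recv()
			if err != nil {
				errCh <- err
				return
			}
			reqCh <- req
		}
	}()

	timer := time.NewTimer(aclCheckInterval)
	defer timer.Stop()

	for {
		select {
		case <-reqCh:
			if err := checkACL(); err != nil {
				return fmt.Errorf("terminating stream: %w", err)
			}
			// ... process the request and send a DiscoveryResponse here ...
		case <-timer.C:
			if err := checkACL(); err != nil {
				return fmt.Errorf("terminating stream: %w", err)
			}
		case err := <-errCh:
			return err
		}
		timer.Reset(aclCheckInterval)
	}
}

// fakeStream delivers a few requests and then ends, just to make this runnable.
type fakeStream struct{ n int }

func (f *fakeStream) Recv() (*discoveryRequest, error) {
	if f.n == 0 {
		return nil, io.EOF
	}
	f.n--
	return &discoveryRequest{}, nil
}

func main() {
	err := handleStream(&fakeStream{n: 3}, func() error { return nil })
	fmt.Println("stream ended:", err)
}
```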
Fixes #4969
This implements non-blocking request polling at the cache layer which is currently only used for prepared queries. Additionally this enables the proxycfg manager to poll prepared queries for use in envoy proxy upstreams.
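Roughly, such a polling cache type behaves like the sketch below (invented names, not the real cache-types API): it issues a plain, non-blocking fetch on an interval and only pushes an update downstream when the result actually changed.

```go
package main

import (
	"context"
	"fmt"
	"reflect"
	"time"
)

// poll refetches at a fixed interval (a non-blocking request each time, since
// prepared queries have no blocking endpoint to long-poll on) and only pushes
// a result downstream when it changed. This is roughly what a polling cache
// type gives the proxycfg manager for prepared-query upstreams.
func poll(ctx context.Context, interval time.Duration,
	fetch func(context.Context) (interface{}, error), updates chan<- interface{}) {

	var last interface{}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		result, err := fetch(ctx)
		if err == nil && !reflect.DeepEqual(result, last) {
			last = result
			updates <- result
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	n := 0
	updates := make(chan interface{})
	go poll(ctx, 50*time.Millisecond, func(context.Context) (interface{}, error) {
		n++ // pretend every fetch returns something new
		return fmt.Sprintf("result-%d", n), nil
	}, updates)

	for i := 0; i < 3; i++ {
		fmt.Println("update:", <-updates)
	}
}
```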
* Started updating links for the learn migration
* Language change cluster -> datacenter (#5212)
* Updating the language from cluster to datacenter in the backup guide to be consistent and more accurate.
* missed some clusters
* updated three broken links for the sidebar nav
Adds info about `k8stag` and `nodePortSyncType` options that were
added in consul-helm v0.5.0.
Additionally moves the `k8sprefix` to match the order in the Helm chart
values file, while also clarifying that it only affects one sync
direction.
If a user installs Consul with the default Helm chart on a Kubernetes
cluster that is open to the internet, it lacks some important
security configurations.
* Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs
This ensures that future certificate signings will have a strictly greater ModifyIndex than any previous certs signed.
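Conceptually (this is not the real struct or FSM code), assigning the raft index of each signing operation to the returned cert is what gives the strictly-increasing guarantee:

```go
package main

import "fmt"

// issuedCert mirrors the two fields discussed above; the real struct in
// Consul has many more.
type issuedCert struct {
	CreateIndex uint64
	ModifyIndex uint64
}

// signer hands out raft-style indexes: because each signing goes through raft
// and bumps the index, every new cert's ModifyIndex is strictly greater than
// that of any cert signed before it.
type signer struct{ raftIndex uint64 }

func (s *signer) sign() issuedCert {
	s.raftIndex++ // stand-in for the index assigned by the raft apply
	return issuedCert{CreateIndex: s.raftIndex, ModifyIndex: s.raftIndex}
}

func main() {
	s := &signer{raftIndex: 41}
	a, b := s.sign(), s.sign()
	fmt.Println(a.ModifyIndex < b.ModifyIndex) // true: strictly increasing
}
```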
## Background
When making a blocking query on a missing service (was never registered, or is not registered anymore) the query returns as soon as any service is updated.
On clusters with frequent updates (5~10 updates/s in our DCs) these queries virtually do not block, and clients with no protection against this waste resources on the agent and server side. Clients that do protect against this get updates later than they should because of the backoff time they implement between requests.
## Implementation
While reducing the number of unnecessary updates we still want:
* Clients to be notified as soon as the last instance of a service disappears.
* Clients to be notified whenever there is an update for the service.
* Clients to be notified as soon as the first instance of the requested service is added.
To reduce the number of unnecessary updates we need to block when a request for a missing service is made. However, consider the following case:
1. Client `client1` makes a query for service `foo`, gets back a node and X-Consul-Index 42
2. `foo` is unregistered
3. `client1` makes a query for `foo` with `index=42` -> `foo` does not exist, the query blocks and `client1` is not notified of the change on `foo`
We could store, for each service, the last raft index at which it was alive in order to know whether we should block on the incoming query or not, but that list could grow indefinitely.
We instead store the last raft index at which a service was unregistered and use it when a query targets a service that does not exist.
When a service `srv` is unregistered, this "missing service index" is always greater than any X-Consul-Index held by the clients while `srv` was up, allowing us to notify them immediately (see the sketch after the walkthrough below):
1. Client `client1` makes a query for service `foo`, gets back a node and `X-Consul-Index: 42`
2. `foo` is unregistered, we set the "missing service index" to 43
3. `client1` makes a blocking query for `foo` with `index=42` -> `foo` does not exist, we check against the "missing service index" and return immediately with `X-Consul-Index: 43`
4. `client1` makes a blocking query for `foo` with `index=43` -> we block
5. Other changes happen in the cluster, but `foo` still doesn't exist and the "missing service index" hasn't changed, so the query stays blocked
6. `foo` is registered again on index 62 -> `foo` exists and its index is greater than 43, we unblock the query
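The toy Go sketch below (invented names, not the real state store) captures the rule the walkthrough exercises: a query for a live service uses that service's index, a query for a missing service uses the shared "missing service index", and a blocking query only blocks while the reported index is not newer than the one the client sent.

```go
package main

import "fmt"

// state is a toy stand-in for the catalog state store: per-service indexes
// plus the single "missing service index" described above.
type state struct {
	serviceIndex map[string]uint64 // last index at which each live service changed
	missingIndex uint64            // raft index of the most recent deregistration
}

// queryIndex returns the X-Consul-Index a blocking query on svc should use.
func (s *state) queryIndex(svc string) uint64 {
	if idx, ok := s.serviceIndex[svc]; ok {
		return idx
	}
	// Service does not exist: report the missing-service index so clients
	// holding an older index are released immediately, while clients that
	// have already seen this index keep blocking.
	return s.missingIndex
}

// shouldBlock mirrors the usual blocking-query rule: block while the current
// index is not newer than the index the client sent.
func shouldBlock(clientIndex, currentIndex uint64) bool {
	return currentIndex <= clientIndex
}

func main() {
	s := &state{serviceIndex: map[string]uint64{"foo": 42}}
	fmt.Println(shouldBlock(42, s.queryIndex("foo"))) // true: nothing new yet

	// foo is deregistered at index 43.
	delete(s.serviceIndex, "foo")
	s.missingIndex = 43
	fmt.Println(shouldBlock(42, s.queryIndex("foo"))) // false: return right away
	fmt.Println(shouldBlock(43, s.queryIndex("foo"))) // true: block until foo returns
}
```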
This adds a MaxQueryTime field to the connect ca leaf cache request type and populates it via the wait query param. The cache will then do the right thing and timeout the operation as expected if no new leaf cert is available within that time.
Fixes #4462
The reproduction scenario in the original issue now times out appropriately.
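A small sketch of the idea with hypothetical helper names (not Consul's actual HTTP or cache code): the `wait` query parameter is parsed into a duration, and the leaf-cert lookup is bounded by that duration instead of waiting indefinitely.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// parseWait mirrors how a ?wait=30s style query parameter becomes a
// MaxQueryTime value, falling back to a default when it is absent.
func parseWait(raw string, def time.Duration) (time.Duration, error) {
	if raw == "" {
		return def, nil
	}
	return time.ParseDuration(raw)
}

// getLeaf is a stand-in for the cache lookup: it waits for a newer leaf cert
// but gives up once MaxQueryTime elapses, instead of hanging forever.
func getLeaf(ctx context.Context, maxQueryTime time.Duration, newLeaf <-chan string) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, maxQueryTime)
	defer cancel()

	select {
	case leaf := <-newLeaf:
		return leaf, nil
	case <-ctx.Done():
		return "", errors.New("timeout waiting for new leaf cert")
	}
}

func main() {
	wait, err := parseWait("200ms", 10*time.Minute)
	if err != nil {
		panic(err)
	}

	// No new cert ever arrives, so the call returns once the wait elapses
	// rather than blocking indefinitely (the behaviour the fix restores).
	_, err = getLeaf(context.Background(), wait, make(chan string))
	fmt.Println(err)
}
```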