This adds a fix to properly verify the gateway mode before creating a watch specific to mesh gateways. This watch has a high performance cost and is unnecessary when mesh gateways are not in use.
This also adds an optimization to only return the nodes when watching the Internal.ServiceDump RPC, avoiding unnecessary disco chain compilation, since the proxy config watches only need the nodes.
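A minimal sketch of the two changes, with hypothetical names standing in for the real proxycfg types:

```go
package proxycfgsketch

import "context"

// Hypothetical stand-ins; the real types live in agent/proxycfg and differ.
type watchOpts struct {
	// NodesOnly mirrors the optimization above: ask Internal.ServiceDump
	// for the nodes only, skipping discovery-chain compilation.
	NodesOnly bool
}

type state struct {
	proxyKind        string
	startServiceDump func(ctx context.Context, opts watchOpts) error
}

// maybeWatchMeshGateways only sets up the expensive watch when the proxy is
// actually a mesh gateway; every other proxy kind skips it entirely.
func (s *state) maybeWatchMeshGateways(ctx context.Context) error {
	if s.proxyKind != "mesh-gateway" {
		return nil
	}
	return s.startServiceDump(ctx, watchOpts{NodesOnly: true})
}
```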
* Option to set HCP client at runtime
Allows us to initially set a nil HCP client for the
telemetry provider and update it later.
* Set telemetry provider HCP client in HCP manager
Set the telemetry provider as a dependency and pass it to
the manager. Update the telemetry provider's HCP client
when the HCP manager starts.
* Add a provider interface for the metrics client
This provider will allow us to configure and reconfigure the
retryable HTTP client and the headers for the metrics client.
* Move HTTP retryable client to separate file
Copied directly from the metrics client.
* Abstract HCP specific values in HTTP client
Remove HCP specific references and instead initiate with
a generic TLS configuration and authentication source.
* Set up HTTP client and headers in the provider
Move setup from the metrics client to the HCP telemetry
provider.
* Update the telemetry provider in the HCP manager
Initialize the provider without the HCP configs and then update
it in the HCP manager to enable it.
* Improve test assertion, fix method comment
* Move client provider to metrics client
* Stop the manager on setup error
* Add separate lock for http configuration
* Start telemetry provider in HCP manager
* Update HCP client and config as part of Run
* Remove option to set config at initialization
* Simplify and clean up setting HCP configs
* Add test for telemetry provider Run method
* Fix race condition
* Use clone of HTTP headers
* Only allow initial update and run once
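A sketch of the pattern these commits land on: a provider whose HTTP configuration is guarded by its own lock and can be swapped in once the HCP manager starts. Names are hypothetical, not the real agent/hcp types.

```go
package telemetrysketch

import (
	"net/http"
	"sync"
)

type hcpProviderSketch struct {
	// rw guards only the HTTP configuration, separate from other provider
	// state, mirroring the "separate lock for http configuration" commit.
	rw      sync.RWMutex
	client  *http.Client
	headers http.Header
}

// UpdateClient swaps in a configured client once the HCP manager starts.
func (p *hcpProviderSketch) UpdateClient(c *http.Client, h http.Header) {
	p.rw.Lock()
	defer p.rw.Unlock()
	p.client = c
	p.headers = h.Clone() // clone to avoid racing on the caller's map
}

// httpConfig hands back the current client and a clone of the headers so
// callers can't mutate shared state.
func (p *hcpProviderSketch) httpConfig() (*http.Client, http.Header) {
	p.rw.RLock()
	defer p.rw.RUnlock()
	return p.client, p.headers.Clone()
}
```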
* Implement In-Process gRPC for use by controller caching/indexing
This replaces the pipe-based listener implementation we were previously using. The new style can avoid cloning resources, which our controller caching/indexing takes advantage of to avoid duplicating resource objects in memory.
To keep controllers safe, and to allow them to modify the data they get back from the cache and the resource service, the client presented in their runtime is wrapped with an autogenerated client that clones request and response messages as they pass through.
Another sizable change in this PR is to consolidate how server-specific gRPC services get registered and managed. Before, this was spread across a bunch of different methods and it was difficult to track down how gRPC services were registered. Now it's all in one place.
* Fix race in tests
* Ensure the resource service is registered to the multiplexed handler for forwarding from client agents
* Expose peer streaming on the internal handler
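A hand-written analogue (one method shown) of the autogenerated cloning wrapper described above; the pbresource types are real, the wrapper itself is a sketch:

```go
package inprocsketch

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// cloningResourceClient wraps the in-process client. Because the in-process
// transport can hand the server's own objects to the caller, the wrapper
// clones on both sides so controllers can mutate results safely.
type cloningResourceClient struct {
	inner pbresource.ResourceServiceClient
}

func (c cloningResourceClient) Read(
	ctx context.Context,
	in *pbresource.ReadRequest,
	opts ...grpc.CallOption,
) (*pbresource.ReadResponse, error) {
	// Clone the request so the server never holds caller-owned memory.
	out, err := c.inner.Read(ctx, proto.Clone(in).(*pbresource.ReadRequest), opts...)
	if err != nil {
		return nil, err
	}
	// Clone the response so the caller never mutates cache-owned memory.
	return proto.Clone(out).(*pbresource.ReadResponse), nil
}
```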
* Add HCCLink resource type
* Register HCCLink resource type with basic validation
* Add validation for required fields
* Add test for default ACLs
* Add no-op controller for HCCLink
* Add resource-apis semantic validation check in hcclink controller
* Add copyright headers
* Rename HCCLink to Link
* Add hcp_cluster_url to link proto
* Update 'disabled' reason with more detail
* Update link status name to consul.io/hcp/link
* Change link version from v1 to v2
* Use feature flag/experiment to enable v2 resources with HCP
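A sketch of the required-field validation hook, assuming the v2 pbhcp path after the version bump; the field names follow these commits (client id/secret, resource id, hcp_cluster_url), not necessarily the shipped code:

```go
package hcpsketch

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"

	pbhcp "github.com/hashicorp/consul/proto-public/pbhcp/v2"
	"github.com/hashicorp/consul/proto-public/pbresource"
)

// validateLink checks the required Link fields; the exact shape here is an
// assumption drawn from the commit messages above.
func validateLink(res *pbresource.Resource) error {
	var link pbhcp.Link
	if err := res.Data.UnmarshalTo(&link); err != nil {
		return fmt.Errorf("failed to unmarshal link data: %w", err)
	}
	var merr error
	if link.ClientId == "" {
		merr = multierror.Append(merr, errors.New("client_id is required"))
	}
	if link.ClientSecret == "" {
		merr = multierror.Append(merr, errors.New("client_secret is required"))
	}
	if link.ResourceId == "" {
		merr = multierror.Append(merr, errors.New("resource_id is required"))
	}
	return merr
}
```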
* Update SCADA provider version
Also update mocks for SCADA provider.
* Create SCADA provider w/o HCP config, then update
Adds a placeholder config option to allow us to initialize a SCADA provider
without the HCP configuration. Also adds an update method to then add the
HCP configuration. We need this to be able to eventually always register a
SCADA listener at startup before the HCP config values are known.
* Pass cloud configuration to HCP manager
Save the entire cloud configuration and pass it to the HCP
manager.
* Update and start SCADA provider in HCP manager
Move config updating and starting to the HCP manager. The HCP manager
will eventually be responsible for all processes that contribute
to linking to HCP.
* NET-6426 Create ProxyStateTemplate when reconciling MeshGateway resource
* Add TODO for switching fetch method based on gateway type
* Use gateway-kind in workload metadata instead of owner reference
* Create ProxyStateTemplate builder for gatewayproxy controller
* Update to use new controller interface
* Add copyright headers
* Set correct name for ProxyStateTemplate identity reference
* Generate empty ProxyStateTemplate by fetching MeshGateway
This cheats and looks up the MeshGateway directly. In the future, we will need a Workload => xGateway mapper
* Specify owner reference when writing ProxyStateTemplate
* Update dependency mapper to account for multiple controllers per resource type
* Regenerate v2 resource dependencies map
* Add helpful trace logs, tag TODOs with ticket identifiers
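A sketch of the owner-reference write described above; the controller runtime is elided and pstType stands in for the real ProxyStateTemplate type pointer:

```go
package gatewayproxysketch

import (
	"context"

	"google.golang.org/protobuf/types/known/anypb"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// writeProxyStateTemplate writes the generated ProxyStateTemplate with the
// workload as its owner, so it shares the workload's lifecycle.
func writeProxyStateTemplate(
	ctx context.Context,
	client pbresource.ResourceServiceClient,
	pstType *pbresource.Type,
	workloadID *pbresource.ID,
	data *anypb.Any,
) error {
	_, err := client.Write(ctx, &pbresource.WriteRequest{
		Resource: &pbresource.Resource{
			Id: &pbresource.ID{
				Type:    pstType,
				Name:    workloadID.Name, // one PST per workload, same name
				Tenancy: workloadID.Tenancy,
			},
			Owner: workloadID, // the owner reference from the commit above
			Data:  data,
		},
	})
	return err
}
```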
This commit fixes an issue where the partition was not properly set
on the peering query failover target created from sameness-groups.
Before this change, it was always empty, meaning that the data
would always be queried with respect to the default partition. This
resulted in a situation where a PQ that was attempting to use a
sameness-group for failover would select peers from the default
partition, rather than the partition of the sameness-group itself.
* add a hash to config entries when normalizing
* add GetHash and implement comparing hashes
* only update if the Hash is different
* only update if the Hash is different and not 0
* fix proto to include the Hash
* fix proto gen
* buf format
* add SetHash and fix tests
* fix config load tests
* fix state test and config test
* recalculate hash when restoring config entries
* fix snapshot restore test
* add changelog
* fix missing normalize, fix proto indexes and add normalize test
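The compare-before-write idea in these commits, sketched; the real code hangs GetHash/SetHash off each config entry type and hashes during Normalize, and hashstructure here is illustrative rather than necessarily what the state store uses:

```go
package configentrysketch

import "github.com/mitchellh/hashstructure"

type hashed interface{ GetHash() uint64 }

// computeHash would run at Normalize time so every stored entry carries a
// content hash.
func computeHash(entry any) (uint64, error) {
	return hashstructure.Hash(entry, nil)
}

// shouldUpdate skips the write when the incoming entry hashes the same as
// the stored one. A zero hash (e.g. an entry restored from a pre-change
// snapshot that was never re-normalized) always writes.
func shouldUpdate(existing, incoming hashed) bool {
	h := incoming.GetHash()
	return h == 0 || h != existing.GetHash()
}
```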
* Add CE version of gateway-upstream-disambiguation
* Use NamespaceOrDefault and PartitionOrDefault
* Add Changelog entry
* Remove the unneeded reassignment
* Use c.ID()
The client.rpc metric now excludes internal retries for consistency
with client.rpc.exceeded and client.rpc.failed. All of these metrics
now increment at most once per RPC method call, allowing for
accurate calculation of failure / rate limit application occurrence.
Additionally, if an RPC fails because no servers are present,
client.rpc.failed is now incremented.
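A sketch of this accounting, using the go-metrics counters named above; the client internals are stand-ins:

```go
package rpcclientsketch

import (
	"errors"

	metrics "github.com/hashicorp/go-metrics"
)

var errNoServers = errors.New("no servers available")

// Hypothetical stand-ins; Consul's real client lives in agent/consul.
type client struct {
	pick    func() bool               // reports whether a server is known
	forward func(method string) error // performs the RPC, retrying internally
}

// rpc increments client.rpc once per method call regardless of internal
// retries, and client.rpc.failed now also covers the no-servers case.
func (c *client) rpc(method string) error {
	metrics.IncrCounter([]string{"client", "rpc"}, 1)
	if !c.pick() {
		metrics.IncrCounter([]string{"client", "rpc", "failed"}, 1)
		return errNoServers
	}
	if err := c.forward(method); err != nil {
		metrics.IncrCounter([]string{"client", "rpc", "failed"}, 1)
		return err
	}
	return nil
}
```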
* Add a make target to run lint-consul-retry on all the modules
* Cleanup sdk/testutil/retry
* Fix a bunch of retry.Run* usage to not use the outer testing.T
* Fix some more recent retry lint issues and pin to v1.4.0 of lint-consul-retry
* Fix codegen copywrite lint issues
* Don’t perform cleanup after each retry attempt by default.
* Use the common testutil.TestingTB interface in test-integ/tenancy
* Fix retry tests
* Update otel access logging extension test to perform requests within the retry block
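The class of fix lint-consul-retry enforces, sketched: assertions inside retry.Run must go through the inner *retry.R rather than the outer *testing.T, so a failed attempt is retried instead of failing the whole test.

```go
package retrysketch

import (
	"testing"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

func ping() error { return nil } // hypothetical operation under test

func TestExample(t *testing.T) {
	retry.Run(t, func(r *retry.R) {
		if err := ping(); err != nil {
			r.Fatalf("ping failed: %v", err) // r, not t, lets the attempt retry
		}
	})
}
```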
* Upgrade hcp-sdk-go to latest version v0.73
Changes:
- go get github.com/hashicorp/hcp-sdk-go
- go mod tidy
* From upgrade: regenerate protobufs for upgrade from 1.30 to 1.31
Ran: `make proto`
Slack: https://hashicorp.slack.com/archives/C0253EQ5B40/p1701105418579429
* From upgrade: fix mock interface implementation
After upgrading, there is the following compile error:
cannot use &mockHCPCfg{} (value of type *mockHCPCfg) as "github.com/hashicorp/hcp-sdk-go/config".HCPConfig value in return statement: *mockHCPCfg does not implement "github.com/hashicorp/hcp-sdk-go/config".HCPConfig (missing method Logout)
Solution: update the mock to have the missing Logout method
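The fix, sketched; this assumes the upgraded interface wants `Logout() error`, so match the signature from the compiler error if it differs:

```go
package hcpmocksketch

// mockHCPCfg stands in for the test's mock; the one-line fix is giving it a
// no-op Logout to satisfy the upgraded config.HCPConfig interface.
type mockHCPCfg struct{}

func (m *mockHCPCfg) Logout() error { return nil }
```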
* From upgrade: Lint: remove usage of deprecated req.ServerState.TLS
Due to upgrade, linting is erroring due to usage of a newly deprecated field
22:47:56 [consul]: make lint
--> Running golangci-lint (.)
agent/hcp/testing.go:157:24: SA1019: req.ServerState.TLS is deprecated: use server_tls.internal_rpc instead. (staticcheck)
time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
^
* From upgrade: adjust oidc error message
From the upgrade, this test started failing:
=== FAIL: internal/go-sso/oidcauth TestOIDC_ClaimsFromAuthCode/failed_code_exchange (re-run 2) (0.01s)
oidc_test.go:393: unexpected error: Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
Prior to the upgrade, the error returned was:
```
Provider login failed: Error exchanging oidc code: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"unexpected auth code\"}\n
```
Now the error returned is as below and does not contain "cannot fetch token"
```
Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
* Update AgentPushServerState structs with new fields
HCP-side changes for the new fields are in:
https://github.com/hashicorp/cloud-global-network-manager-service/pull/1195/files
* Minor refactor for hcpServerStatus to abstract tlsInfo into struct
This will make it easier to set the same tls-info information to both
- status.TLS (deprecated field)
- status.ServerTLSMetadata (new field to use instead)
* Update hcpServerStatus to parse out information for new fields
Changes:
- Improve error message and handling (encountered some issues and was confused)
- Set new field TLSInfo.CertIssuer
- Collect certificate authority metadata and set on TLSInfo.CertificateAuthorities
- Set TLSInfo on both server.TLS and server.ServerTLSMetadata.InternalRPC
* Update serverStatusToHCP to convert new fields to GNM rpc
* Add changelog
* Feedback: connect.ParseCert, caCerts
* Feedback: refactor and unit test server status
* Feedback: test to use expected struct
* Feedback: certificate with intermediate
* Feedback: catch no leaf, remove expectedErr
* Feedback: update todos with jira ticket
* Feedback: mock tlsConfigurator
* Update catalog and ui endpoints to show APIGateway in gateway service
topology view
* Added initial implementation for service view
* updated ui
* Fix topology view for gateways
* Adding tests for gw controller
* remove unused args
* Undo formatting changes
* Fix call sites for upstream/downstream gw changes
* Add config entry tests
* Fix function calls again
* Move from ServiceKey to ServiceName, cleanup from PR review
* Add additional check for length of services in bound apigateway for
IsSame comparison
* fix formatting for proto
* gofmt
* Add DeepCopy for retrieved BoundAPIGateway
* gofmt
* gofmt
* Rename function to be more consistent
* Add meshconfiguration/controller
* Add MeshConfiguration Registration function
* Fix the TODOs on the RegisterMeshGateway function
* Call RegisterMeshConfiguration
* Add comment to MeshConfigurationRegistration
* Add a test for Reconcile and some comments
* Generate resource_types for MeshGateway by specifying spec option
* Register MeshGateway type w/ TODOs for hooks
* Initialize controller for MeshGateway resources
* Add meshgateway to list of v2 resource dependencies for golden test
* Scope MeshGateway resource to partition
* init
* computed exported service
* make proto
* exported services resource
* exported services test
* added some tests and namespace exported service
* partition exported services
* computed service
* computed services tests
* register types
* fix comment
* make proto lint
* fix proto format; make proto
* make codegen
* Update proto-public/pbmulticluster/v1alpha1/computed_exported_services.proto
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* Update internal/multicluster/internal/types/computed_exported_services.go
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* using different way of resource creation in tests
* make proto
* fix computed exported services test
* fix tests
* different validation for computed services for ent and ce
* Acls for exported services
* added validations for enterprise features in ce
* fix error
* fix acls test
* Update internal/multicluster/internal/types/validation_exported_services_ee.go
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* removed the create method
* update proto
* removed namespace
* created separate function for ce and ent
* test files updated and validations fixed
* added nil checks
* fix tests
* added comments
* removed tenancy check
* added mutation function
* fix mutation method
* fix list permissions in test
* fix pr comments
* fix tests
* license
* busl license
* Update internal/multicluster/internal/types/helpers_ce.go
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* Update internal/multicluster/internal/types/helpers_ce.go
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* Update internal/multicluster/internal/types/helpers_ce.go
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* make proto
* some pr comments addressed
* some pr comments addressed
* acls helper
* some comment changes
* removed unused files
* fixes
* fix function in file
* caps
* some positioning
* added test for validation error
* fix names
* made valid a function
* removed patch
* removed mutations
* v2 beta1
* v2beta1
* removed v1alpha1
* validate error
* merge ent
* some nits
* removed dup func
* removed nil check
---------
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
Prior to the introduction of this configuration, grpc keepalive messages were
sent after 2 hours of inactivity on the stream. This posed issues in various
scenarios where the server-side xds connection balancing was unaware that envoy
instances were uncleanly killed / force-closed, since the connections would
only be cleaned up after ~5 minutes of TCP timeouts had occurred. Setting this
config to a 30 second interval with a 20 second timeout ensures that it should
take at most 50 seconds for a dead xds connection to be closed.
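Expressed with grpc-go's keepalive package, the server-side settings look like this sketch:

```go
package xdssketch

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newXDSServerSketch pings idle streams every 30s and drops the connection
// if no ack arrives within 20s, so a dead peer is detected in at most ~50s
// instead of after hours of inactivity plus TCP timeouts.
func newXDSServerSketch() *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		Time:    30 * time.Second, // send a keepalive ping after 30s idle
		Timeout: 20 * time.Second, // close the connection if no ack within 20s
	}))
}
```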
When the v2 catalog experiment is enabled the old v1 catalog apis will be
forcibly disabled at both the API (json) layer and the RPC (msgpack) layer.
This will also disable anti-entropy as it uses the v1 api.
This includes all of /v1/catalog/*, /v1/health/*, most of /v1/agent/*,
/v1/config/*, and most of /v1/internal/*.
* add namespace proto and registration
* fix proto generation
* add missing copywrite headers
* fix proto linter errors
* fix exports and Type export
* add mutate hook and more validation
* add more validation rules and tests
* Apply suggestions from code review
Co-authored-by: Semir Patel <semir.patel@hashicorp.com>
* fix owner error and add test
* remove ACL for now
* add tests around space suffix/prefix.
* only fail when ns and ap are default, add test for it
---------
Co-authored-by: Semir Patel <semir.patel@hashicorp.com>
The renaming of files from oss -> ce caused incorrect snapshots
to be created due to ce writes now happening prior to ent writes.
When this happens, various entities will attempt to be restored
from the snapshot before the partition exists, causing a panic.
* Refactors the leafcert package to not have a dependency on agent/consul and agent/cache to avoid import cycles. This way the xds controller can just import the leafcert package to use the leafcert manager.
The leaf cert logic in the controller:
* Sets up watches for leaf certs that are referenced in the ProxyStateTemplate (which generates the leaf certs too).
* Gets the leaf cert from the leaf cert cache
* Stores the leaf cert in the ProxyState that's pushed to xds
* For the cert watches, this PR also uses a bimapper + a thin wrapper to map leaf cert events to related ProxyStateTemplates
Since bimapper uses a resource.Reference or resource.ID to map between two resource types, I've created an internal type for a leaf certificate to use for the resource.Reference, since it's not a v2 resource.
The wrapper allows mapping events to resources (as opposed to mapping resources to resources)
The controller tests:
Unit: Ensure that we resolve leaf cert references
Lifecycle: Ensure that when the CA is updated, the leaf cert is as well
Also adds a new spiffe id type, and adds workload identity and workload identity URI to leaf certs. This is so certs are generated with the new workload identity based SPIFFE id.
* Pulls out some leaf cert test helpers into a helpers file so it
can be used in the xds controller tests.
* Wires up leaf cert manager dependency
* Support getting token from proxytracker
* Add workload identity spiffe id type to the authorize and sign functions
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
* mesh-controller: handle L4 protocols for a proxy without upstreams
* sidecar-controller: Support explicit destinations for L4 protocols and single ports.
* This controller generates and saves ProxyStateTemplate for sidecar proxies.
* It currently supports single-port L4 ports only.
* It keeps a cache of all destinations to make it easier to compute and retrieve destinations.
* It will update the status of the pbmesh.Upstreams resource if anything is invalid.
* endpoints-controller: add workload identity to the service endpoints resource
* small fixes
* review comments
* Make sure endpoint refs route to mesh port instead of an app port
* Address PR comments
* fixing copyright
* tidy imports
* sidecar-proxy controller: Add support for transparent proxy
This currently does not support inferring destinations from intentions.
* tidy imports
* add copyright headers
* Prefix sidecar proxy test files with source and destination.
* Update controller_test.go
* NET-5132 - Configure multiport routing for connect proxies in TProxy mode
* formatting golden files
* reverting golden files and adding changes in manually; build implicit destinations still has some issues.
* fixing files that were incorrectly repeating the outbound listener
* PR comments
* extract AlpnProtocol naming convention to getAlpnProtocolFromPortName(portName)
* removing address level filtering.
* adding license to resources_test.go
---------
Co-authored-by: Iryna Shustava <iryna@hashicorp.com>
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
This commit adds support for transparent proxy to the sidecar proxy controller. As we do not yet support inferring destinations from intentions, this assumes that all services in the cluster are destinations.
* Add response header filters to http-route config entry definitions
* Map response header filters from config entry when constructing route destination
* Support response header modifiers at the service level as well
* Update protobuf definitions
* Update existing unit tests
* Add response filters to route consolidation logic
* Make existing unit tests more robust
* Add missing docstring
* Add changelog entry
* Add response filter modifiers to existing integration test
* Add more robust testing for response header modifiers in the discovery chain
* Add more robust testing for request header modifiers in the discovery chain
* Modify test to verify that service filter modifiers take precedence over rule filter modifiers
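A sketch of the new surface on an http-route rule, mirroring the request-side filter shape; the ResponseFilters field names are assumptions drawn from this change:

```go
package gatewaysketch

import "github.com/hashicorp/consul/api"

// exampleRoute sets a response header modifier on a rule; per the last
// commit above, a service-level filter would take precedence over this one.
func exampleRoute() *api.HTTPRouteConfigEntry {
	return &api.HTTPRouteConfigEntry{
		Kind: api.HTTPRoute,
		Name: "example-route",
		Rules: []api.HTTPRouteRule{{
			ResponseFilters: api.HTTPResponseFilters{
				Headers: []api.HTTPHeaderFilter{{
					Set:    map[string]string{"X-Frame-Options": "DENY"},
					Remove: []string{"Server"}, // strip before returning to the client
				}},
			},
		}},
	}
}
```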
* [NET-5325] ACL templated policies support in tokens and roles
- Add API support for creating tokens/roles with templated-policies
- Add CLI support for creating tokens/roles with templated-policies
* adding changelog
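A sketch of the API-side usage: creating a token that carries a templated policy rather than a concrete policy link. The TemplatedPolicies field names follow this change and should be treated as assumptions:

```go
package aclsketch

import "github.com/hashicorp/consul/api"

func createServiceToken(client *api.Client) (*api.ACLToken, error) {
	token, _, err := client.ACL().TokenCreate(&api.ACLToken{
		Description: "token for service web",
		TemplatedPolicies: []*api.ACLTemplatedPolicy{{
			TemplateName:      "builtin/service",
			TemplateVariables: &api.ACLTemplatedPolicyVariables{Name: "web"},
		}},
	}, nil)
	return token, err
}
```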
* Fixes issues in setting status
* Update golden files for changes to xds generation to not use deprecated
methods
* Fixed default for validation of JWT for route
* This commit also changes service endpoints to include workload identity. This made the implementation a bit easier as we don't need to look up as many workloads and instead rely on endpoints data.
This PR enables the GetEnvoyBootstrapParams endpoint to construct envoy bootstrap parameters from v2 catalog and mesh resources.
* Make bootstrap request and response parameters less specific to services so that we can re-use them for workloads or service instances.
* Remove ServiceKind from bootstrap params response. This value was unused previously and is not needed for V2.
* Make access logs generation generic so that we can generate them using v1 or v2 resources.
Add support for querying tokens by service name
The consul-k8s endpoints controller has a workflow where it fetches all tokens.
This is not performant for large clusters, where there may be a sizable number
of tokens. This commit attempts to alleviate that problem and introduces a new
way to query by the token's service name.
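A sketch of the new query path; the query parameter name is this sketch's assumption, so check the endpoint docs for the exact spelling in your version:

```go
package aclsketch

import (
	"fmt"
	"io"
	"net/http"
)

// listTokensByService asks the token list endpoint to filter server-side
// instead of fetching every token and filtering in the client.
func listTokensByService(service string) (string, error) {
	resp, err := http.Get("http://127.0.0.1:8500/v1/acl/tokens?servicename=" + service)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}
```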
* Add the plumbing for APIGW JWT work
* Remove unneeded import
* Add deep equal function for HTTPMatch
* Added plumbing for status conditions
* Remove unneeded comment
* Fix comments
* Add calls in xds listener for apigateway to setup listener jwt auth
* Adding explicit MPL license for sub-package
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at <Blog URL>, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
* [CC-5719] Add support for builtin global-read-only policy
* Add changelog
* Add read-only to docs
* Fix some minor issues.
* Change from ReplaceAll to Sprintf
* Change IsValidPolicy name to return an error instead of bool
* Fix PolicyList test
* Fix other tests
* Apply suggestions from code review
Co-authored-by: Paul Glass <pglass@hashicorp.com>
* Fix state store test for policy list.
* Fix naming issues
* Update acl/validation.go
Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com>
* Update agent/consul/acl_endpoint.go
---------
Co-authored-by: Paul Glass <pglass@hashicorp.com>
Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com>
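The builtin policy can be referenced by name like any other; a small sketch reading it back via the api package ("builtin/global-read-only" follows the PR title):

```go
package aclsketch

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func showReadOnlyPolicy(client *api.Client) error {
	policy, _, err := client.ACL().PolicyReadByName("builtin/global-read-only", nil)
	if err != nil {
		return err
	}
	fmt.Println(policy.ID, policy.Name) // builtins carry well-known static IDs
	return nil
}
```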
* Fix topology intention with mixed connect-native/normal services.
If a service is registered twice, once with connect-native and once
without, the topology views would prune the existing intentions. This
change brings the code more in line with the transparent proxy behavior.
* Dedupe nodes in the ServiceTopology ui endpoint (like done with tags).
* Consider a service connect-native as soon as one instance is.
Updating RootPKIPath but not IntermediatePKIPath would not update
leaf signing certs with the new root. Unsure if this happens in practice
but manual testing showed it is a bug that would break mesh and agent
connections once the old root is pruned.
### Description
Dan had already started on this
[task](https://github.com/hashicorp/consul/pull/17849) which is needed
to start building the HTTP APIs. This just needed some cleanup to get it
ready for review.
Overview:
- Rename `internalResourceServiceClient` to
`insecureResourceServiceClient` for name consistency
- Configure a `secureResourceServiceClient` with auth enabled
* update UINodes and UINodeInfo response with consul-version info added as NodeMeta, fetched from serf members
* update test cases TestUINodes, TestUINodeInfo
* added nil check for map
* add consul-version in local agent node metadata
* get consul version from serf member and add this as node meta in catalog register request
* updated ui mock response to include consul versions as node meta
* updated ui translations and added version as query param to node list route
* updates in ui templates to display consul version with filter and sorts
* updates in ui - model class, serializers, comparators, predicates for consul version feature
* added change log for Consul Version Feature
* updated to get version from consul service, if for some reason not available from serf
* updated changelog text
* updated dependent testcases
* multiselection version filter
* Update agent/consul/state/catalog.go
comments updated
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
---------
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
This PR fixes a bug that was introduced in:
https://github.com/hashicorp/consul/pull/16021
A user setting a protocol in proxy-defaults would cause tproxy implicit
upstreams to not honor the upstream service's protocol set in its
`ServiceDefaults.Protocol` field, and would instead always use the
proxy-defaults value.
Due to the fact that upstreams configured with "tcp" can successfully contact
upstream "http" services, this issue was not recognized until recently (a
proxy-defaults with "tcp" and a listening service with "http" would make
successful requests, but not the opposite).
As a temporary work-around, users experiencing this issue can explicitly set
the protocol on the `ServiceDefaults.UpstreamConfig.Overrides`, which should
take precedence.
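The workaround, sketched as a config entry write via the api package (service and upstream names are illustrative):

```go
package tproxysketch

import "github.com/hashicorp/consul/api"

// pinUpstreamProtocol pins the upstream's protocol explicitly so it takes
// precedence over the proxy-defaults value during discovery-chain
// compilation.
func pinUpstreamProtocol(client *api.Client) error {
	entry := &api.ServiceConfigEntry{
		Kind: api.ServiceDefaults,
		Name: "web", // the downstream service
		UpstreamConfig: &api.UpstreamConfiguration{
			Overrides: []*api.UpstreamConfig{{
				Name:     "billing", // the "http" upstream reached implicitly
				Protocol: "http",
			}},
		},
	}
	_, _, err := client.ConfigEntries().Set(entry, nil)
	return err
}
```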
The fix in this PR removes the proxy-defaults protocol from the wildcard
upstream that tproxy uses to configure implicit upstreams. When the protocol
was included, it would always overwrite the value during discovery chain
compilation, which was not correct. The discovery chain compiler also consumes
proxy defaults to determine the protocol, so simply excluding it from the
wildcard upstream config map resolves the issue.
Fix issue with streaming service health watches.
This commit fixes an issue where the health streams were unaware of service
export changes. Whenever an exported-services config entry is modified, it is
effectively an ACL change.
The bug would be triggered by the following situation:
- no services are exported
- an upstream watch to service X is spawned
- the streaming backend filters out data for service X (due to lack of exports)
- service X is finally exported
In the situation above, the streaming backend does not trigger a refresh of its
data. This means that any events that were supposed to have been received prior
to the export are NOT backfilled, and the watches never see service X spawning.
We currently have decided to not trigger a stream refresh in this situation due
to the potential for a thundering herd effect (touching exports would cause a
re-fetch of all watches for that partition, potentially). Therefore, a local
blocking-query approach was added by this commit for agentless.
It's also worth noting that the streaming subscription is currently bypassed
most of the time with agentful, because proxycfg has a `req.Source.Node != ""`
which prevents the `streamingEnabled` check from passing. This means that while
agents should technically have this same issue, they don't experience it with
mesh health watches.
Note that this is a temporary fix that solves the issue for proxycfg, but not
service-discovery use cases.
* agent: remove agent cache dependency from service mesh leaf certificate management
This extracts the leaf cert management from within the agent cache.
This code was produced by the following process:
1. All tests in agent/cache, agent/cache-types, agent/auto-config,
agent/consul/servercert were run at each stage.
- The tests in agent matching .*Leaf were run at each stage.
- The tests in agent/leafcert were run at each stage after they
existed.
2. The former leaf cert Fetch implementation was extracted into a new
package behind a "fake RPC" endpoint to make it look almost like all
other cache type internals.
3. The old cache type was shimmed to use the fake RPC endpoint and
generally cleaned up.
4. I selectively duplicated all of Get/Notify/NotifyCallback/Prepopulate
from the agent/cache.Cache implementation over into the new package.
This was renamed as leafcert.Manager.
- Code that was irrelevant to the leaf cert type was deleted
(inlining blocking=true, refresh=false)
5. Everything that used the leaf cert cache type (including proxycfg
stuff) was shifted to use the leafcert.Manager instead.
6. agent/cache-types tests were moved and gently replumbed to execute
as-is against a leafcert.Manager.
7. Inspired by some of the locking changes from Derek's branch, I split
the fat lock into N+1 locks.
8. The waiter chan struct{} was eventually replaced with a
singleflight.Group around cache updates, which was likely the biggest
net structural change.
9. The awkward two layers of logic produced as a byproduct of marrying
the agent cache management code with the leaf cert type code was
slowly coalesced and flattened to remove confusion.
10. The .*Leaf tests from the agent package were copied and made to work
directly against a leafcert.Manager to increase direct coverage.
I have done a best effort attempt to port the previous leaf-cert cache
type's tests over in spirit, as well as to take the e2e-ish tests in the
agent package with Leaf in the test name and copy those into the
agent/leafcert package to get more direct coverage, rather than coverage
tangled up in the agent logic.
There is no net-new test coverage, just coverage that was pushed around
from elsewhere.
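Step 8's singleflight shape, as a sketch (a hypothetical manager, not the real leafcert.Manager internals):

```go
package leafcertsketch

import "golang.org/x/sync/singleflight"

// manager shares one in-flight fetch per cert key instead of parking
// waiters on a chan struct{}.
type manager struct {
	group singleflight.Group
}

func (m *manager) getCert(key string) (string, error) {
	v, err, _ := m.group.Do(key, func() (any, error) {
		// Exactly one goroutine per key runs the expensive sign/fetch;
		// concurrent callers block here and share the result.
		return m.fetchAndSign(key)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func (m *manager) fetchAndSign(key string) (string, error) {
	return "-----BEGIN CERTIFICATE-----...", nil // placeholder for the RPC
}
```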
This includes prioritizing by locality on disco chain targets rather than
resolvers, allowing different targets within the same partition to have
different policies.