Wire the ComputedImplicitDestinations resource into the sidecar controller, replacing the inline version already present.
Also:
- Rewrite the controller to use the controller cache
- Rewrite it to no longer depend on ServiceEndpoints
- Remove the fetcher and (local) cache abstraction
Create a new controller that generates ComputedImplicitDestinations resources by
composing ComputedRoutes, Services, and ComputedTrafficPermissions to
infer all ParentRef services that could possibly send some portion of traffic to a
Service that has at least one accessible Workload Identity (a sketch of the
composition appears below). A follow-up PR will rewire the sidecar controller to
make use of this new resource.
Because this is a performance optimization rather than a security feature, the
following aspects of traffic permissions have been ignored:
- DENY rules
- port rules (all ports are allowed)
Also:
- Add some v2 TestController machinery to help test complex dependency mappers.
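In rough shape, the composition works like this (a minimal sketch with illustrative stand-in types, not the actual controller code; per the note above, only ALLOW rules are consulted and ports are ignored):
```go
package implicitdest

// Illustrative stand-ins for the real pbmesh/pbauth resource types.
type ComputedRoutes struct {
	ParentService string // the Service this ComputedRoutes is name-aligned with
	TargetService string // a backend that routes under this parent can reach
}

type ComputedTrafficPermissions struct {
	AllowedIdentities []string // DENY and port rules intentionally ignored
}

// inferImplicitDestinations returns every ParentRef service whose routes
// could deliver some portion of traffic from the given source identity to a
// target service with at least one accessible workload identity.
func inferImplicitDestinations(
	routes []ComputedRoutes,
	permsByTarget map[string]ComputedTrafficPermissions,
	sourceIdentity string,
) []string {
	var dests []string
	for _, cr := range routes {
		perms, ok := permsByTarget[cr.TargetService]
		if !ok {
			continue
		}
		for _, id := range perms.AllowedIdentities {
			if id == sourceIdentity {
				dests = append(dests, cr.ParentService)
				break
			}
		}
	}
	return dests
}
```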
* Use a full EndpointRef on ComputedRoutes targets instead of just the ID
Today, the `ComputedRoutes` targets have the appropriate ID set for their `ServiceEndpoints` reference; however, the `MeshPort` and `RoutePort` are assumed to be that of the target when adding the endpoints reference in the sidecar's `ProxyStateTemplate`.
This is problematic when the target lives behind a `MeshGateway` and the `Mesh/RoutePort` used in the sidecar's `ProxyStateTemplate` should be that of the `MeshGateway` instead of the target.
Instead of assuming the `MeshPort` and `RoutePort` when building the `ProxyStateTemplate` for the sidecar, let's just add the full `EndpointRef` -- including the ID and the ports -- when hydrating the computed destinations.
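The shape of the change is roughly this (field names are illustrative, not the exact generated proto types):
```go
package mesh

// Before, only the ID was stored and the Mesh/RoutePort were assumed to be
// the target's own ports. Carrying the full ref lets a MeshGateway
// substitute its ports when it fronts the target.
type EndpointRef struct {
	ID        string // reference to the ServiceEndpoints resource
	MeshPort  string
	RoutePort string
}

type ComputedDestination struct {
	// Populated in full -- ID and ports -- when hydrating computed
	// destinations, instead of re-deriving ports at ProxyStateTemplate
	// build time.
	Endpoints EndpointRef
}
```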
* Make sure the UID from the existing ServiceEndpoints makes it onto ComputedRoutes
* Update test assertions
* Undo confusing whitespace change
* Remove one-line function wrapper
* Use plural name for endpoints ref
* Add constants for gateway name, kind and port names
[OG Author: michael.zalimeni@hashicorp.com, rebase needed a separate PR]
* v2: support virtual port in Service port references
In addition to Service target port references, allow users to specify a
port by stringified virtual port value. This is useful in environments
such as Kubernetes where typical configuration is written in terms of
Service virtual ports rather than workload (pod) target port names.
Retaining the option of referencing target ports by name supports VMs,
Nomad, and other use cases where virtual ports are not used by default.
To support both use cases at once, we strictly interpret port
references based on whether the value is numeric; a sketch of the check
follows. See the updated `ServicePort` docs for more details.
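The strict interpretation boils down to a numeric check; a minimal sketch (the helper name is illustrative):
```go
package catalog

import "strconv"

// isVirtualPortReference reports whether a Service port reference should be
// read as a stringified virtual port number rather than a target port name.
func isVirtualPortReference(ref string) bool {
	p, err := strconv.ParseUint(ref, 10, 32)
	return err == nil && p > 0 && p <= 65535
}
```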
* v2: update service ref docs for virtual port support
Update proto and generated .go files with docs reflecting virtual port
reference support.
* v2: add virtual port references to L7 topo test
Add coverage for mixed virtual and target port references to existing
test.
* update failover policy controller tests to work with ComputedFailoverPolicy and assert error conditions against both FailoverPolicy and ComputedFailoverPolicy resources
* accumulate services; don't overwrite them in enterprise
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
* If a workload does not implement a port, it should not be included in the list of endpoints for the Envoy cluster for that port.
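The filtering rule amounts to something like this (illustrative types, not the actual pbcatalog/pbproxystate structures):
```go
package xds

// Endpoint is a stand-in for a workload address plus the ports it serves.
type Endpoint struct {
	Address string
	Ports   map[string]uint32 // only ports the workload actually implements
}

// filterEndpointsByPort drops endpoints whose workload does not implement
// the port being routed to, so the Envoy cluster for that port never
// contains hosts that cannot serve it.
func filterEndpointsByPort(endpoints []Endpoint, portName string) []Endpoint {
	var out []Endpoint
	for _, ep := range endpoints {
		if _, ok := ep.Ports[portName]; ok {
			out = append(out, ep)
		}
	}
	return out
}
```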
* Adds tenancy tests for xds controller and xdsv2 resource generation, and adds all those files.
* The original change in this PR was to filter the list of endpoints by the port being routed to (bullet 1). While updating the sidecarproxycontroller golden files, I noticed some of them had become unused because of the tenancy changes; deleting those broke the xds controller tests, which weren't using tenancy correctly. Fixing that in turn broke the xdsv2 tests, so tenancy support was added there too. Now the whole chain -- sidecarproxy controller -> xds controller -> xdsv2 -- has tenancy support, and all the golden files line up.
* API Gateway proto
* fix lint issue
* new line
* run make proto format
* regened with comment
* lint
* utilize existing TLS struct
* Update proto-public/pbmesh/v2beta1/api_gateway.proto
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* generated file
* Update proto-public/pbmesh/v2beta1/api_gateway.proto
* regen with comment
* format the comment
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* NET-6945 - Replace usage of deprecated Envoy field envoy.config.core.v3.HeaderValueOption.append
* update proto for v2 and then update xds v2 logic
* add changelog
* Update 20078.txt to be consistent with existing changelog entries
* swap enum values to match Envoy.
* Upgrade hcp-sdk-go to latest version v0.73
Changes:
- go get github.com/hashicorp/hcp-sdk-go
- go mod tidy
* From upgrade: regenerate protobufs for upgrade from 1.30 to 1.31
Ran: `make proto`
Slack: https://hashicorp.slack.com/archives/C0253EQ5B40/p1701105418579429
* From upgrade: fix mock interface implementation
After upgrading, there is the following compile error:
```
cannot use &mockHCPCfg{} (value of type *mockHCPCfg) as "github.com/hashicorp/hcp-sdk-go/config".HCPConfig value in return statement: *mockHCPCfg does not implement "github.com/hashicorp/hcp-sdk-go/config".HCPConfig (missing method Logout)
```
Solution: update the mock to have the missing Logout method
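The addition is a single method (assuming, from the compile error above, that the new interface method is `Logout() error`):
```go
// mockHCPCfg already satisfies the rest of config.HCPConfig elsewhere in
// the test file; only the newly required method is shown here.
func (m *mockHCPCfg) Logout() error {
	// The mock holds no session state, so logging out always succeeds.
	return nil
}
```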
* From upgrade: Lint: remove usage of deprecated req.ServerState.TLS
After the upgrade, linting errors on usage of a newly deprecated field:
```
22:47:56 [consul]: make lint
--> Running golangci-lint (.)
agent/hcp/testing.go:157:24: SA1019: req.ServerState.TLS is deprecated: use server_tls.internal_rpc instead. (staticcheck)
	time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
	                     ^
```
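The fix swaps in the replacement field; a hedged sketch where the generated Go field names are assumed from the proto path `server_tls.internal_rpc` in the deprecation notice:
```go
// Before (deprecated):
//   time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours() / 24
// After (field names assumed from the proto path):
daysUntilExpiry := time.Until(
	time.Time(req.ServerState.ServerTLS.InternalRpc.CertExpiry),
).Hours() / 24
```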
* From upgrade: adjust oidc error message
From the upgrade, this test started failing:
```
=== FAIL: internal/go-sso/oidcauth TestOIDC_ClaimsFromAuthCode/failed_code_exchange (re-run 2) (0.01s)
    oidc_test.go:393: unexpected error: Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
Prior to the upgrade, the error returned was:
```
Provider login failed: Error exchanging oidc code: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"unexpected auth code\"}\n
```
Now the error returned is as below and does not contain "cannot fetch token"
```
Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
* Update AgentPushServerState structs with new fields
HCP-side changes for the new fields are in:
https://github.com/hashicorp/cloud-global-network-manager-service/pull/1195/files
* Minor refactor for hcpServerStatus to abstract tlsInfo into struct
This will make it easier to set the same TLS info on both:
- status.TLS (deprecated field)
- status.ServerTLSMetadata (new field to use instead)
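A rough sketch of the refactor (names illustrative):
```go
package hcp

import "time"

// serverTLSInfo gathers the TLS details once so the same values can be
// assigned to both the deprecated and the replacement field.
type serverTLSInfo struct {
	CertExpiry             time.Time
	CertName               string
	CertSerial             string
	CertIssuer             string
	CertificateAuthorities []string
}

type hcpServerStatus struct {
	TLS               serverTLSInfo // deprecated
	ServerTLSMetadata struct {
		InternalRPC serverTLSInfo // replacement
	}
}

func applyTLSInfo(status *hcpServerStatus, info serverTLSInfo) {
	status.TLS = info
	status.ServerTLSMetadata.InternalRPC = info
}
```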
* Update hcpServerStatus to parse out information for new fields
Changes:
- Improve error message and handling (encountered some issues and was confused)
- Set new field TLSInfo.CertIssuer
- Collect certificate authority metadata and set on TLSInfo.CertificateAuthorities
- Set TLSInfo on both server.TLS and server.ServerTLSMetadata.InternalRPC
* Update serverStatusToHCP to convert new fields to GNM rpc
* Add changelog
* Feedback: connect.ParseCert, caCerts
* Feedback: refactor and unit test server status
* Feedback: test to use expected struct
* Feedback: certificate with intermediate
* Feedback: catch no leaf, remove expectedErr
* Feedback: update todos with jira ticket
* Feedback: mock tlsConfigurator
* add build tags/import k8s specific proto packages
* fix generated import paths
* fix gomod linting issue
* mod tidy every go mod file
* revert protobuf version; will take care of it in a different PR
* cleaned up new lines
* added newline to end of file
* Generate resource_types for MeshGateway by specifying spec option
* Register MeshGateway type w/ TODOs for hooks
* Initialize controller for MeshGateway resources
* Add meshgateway to list of v2 resource dependencies for golden test
* Scope MeshGateway resource to partition
* xds: Ensure v2 route match is populated for gRPC
Similar to HTTP, ensure that route match config (which is required by
Envoy) is populated when default values are used.
Because the default matches generated for gRPC contain a single empty
`GRPCRouteMatch`, and that proto does not directly support prefix-based
config, an interpretation of the empty struct is needed to generate the
same output that the `HTTPRouteMatch` is explicitly configured to
provide in internal/mesh/internal/controllers/routes/generate.go.
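A hedged sketch of that interpretation, using simplified local types instead of the real protos:
```go
package routes

// Simplified stand-ins for the pbmesh route-match protos.
type GRPCRouteMatch struct {
	Service string
	Method  string
}

type RouteMatch struct {
	PathPrefix string
	PathExact  string
}

// matchFromGRPC treats an empty GRPCRouteMatch as "match everything",
// yielding the same prefix match on "/" that the default HTTPRouteMatch
// is explicitly configured to provide.
func matchFromGRPC(m GRPCRouteMatch) RouteMatch {
	if m.Service == "" && m.Method == "" {
		return RouteMatch{PathPrefix: "/"}
	}
	// gRPC request paths have the form "/<service>/<method>".
	return RouteMatch{PathExact: "/" + m.Service + "/" + m.Method}
}
```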
* xds: Ensure protocol set for gRPC resources
Add explicit protocol in `ProxyStateTemplate` builders and validate it
is always set on clusters. This ensures that HTTP filters and
`http2_protocol_options` are populated in all the necessary places for
gRPC traffic and prevents future unintended omissions of non-TCP
protocols.
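The validation amounts to a guard like this (illustrative, not the actual builder code):
```go
package xdsv2

import "fmt"

type protocol int

const (
	protocolUnspecified protocol = iota
	protocolTCP
	protocolHTTP
	protocolHTTP2
	protocolGRPC
)

// validateClusterProtocol refuses to emit a cluster without an explicit
// protocol, so HTTP filters and http2_protocol_options cannot be silently
// omitted for gRPC (or other non-TCP) traffic.
func validateClusterProtocol(name string, p protocol) error {
	if p == protocolUnspecified {
		return fmt.Errorf("cluster %q: protocol must be set", name)
	}
	return nil
}
```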
Co-authored-by: John Murret <john.murret@hashicorp.com>
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
* xdsv2: support L7 by adding XFCC policy/headers, tweaking routes, and making a bunch of L7 listener tests pass
* sidecarproxycontroller: add l7 local app support
* trafficpermissions: make l4 traffic permissions work on l7 workloads
* rename route name field for consistency with l4 cluster name field
* resolve conflicts and rebase
* fix: ensure the route name is used in the L7 destination route name as well. Previously it appeared only in the route names themselves; now the route name and the L7 destination route name line up.
This change builds on #19043 and #19067 and updates the sidecar controller to use those computed resources. This achieves several benefits:
* The cache is now simplified, which helps us fix previous bugs (such as multiple Upstreams/Destinations targeting the same service overwriting each other)
* We no longer need the proxy config cache
* We no longer need to merge proxy configs as part of the controller logic
* Controller watches are simplified because we no longer need complex mapping through the cache and can instead use the simple ReplaceType mapper (see the sketch after this list).
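The ReplaceType mapper just swaps the type on the incoming resource ID; a sketch with a simplified mapper signature (the real helper lives in the controller framework):
```go
package mapper

import "github.com/hashicorp/consul/proto-public/pbresource"

// DependencyMapper turns a watched resource into the IDs to reconcile;
// this signature is simplified relative to the framework's.
type DependencyMapper func(res *pbresource.Resource) []*pbresource.ID

// ReplaceType maps any watched resource to the identically named resource
// of another type, e.g. a Service event to its name-aligned ComputedRoutes.
func ReplaceType(desired *pbresource.Type) DependencyMapper {
	return func(res *pbresource.Resource) []*pbresource.ID {
		return []*pbresource.ID{{
			Type:    desired,
			Tenancy: res.Id.Tenancy,
			Name:    res.Id.Name,
		}}
	}
}
```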
It also makes several other improvements/refactors:
* Unifies all caches into one. Originally the caches were more independent, but now that they need to interact with each other it made sense to unify them; the sidecar proxy controller uses one cache with three bimappers.
* Unifies cache and mappers. The mapper already needed all the caches anyway, so now that the cache is unified it made sense to have the cache do the mapping as well.
* Gets rid of service endpoints watches. These were needed to get updates when a service's identities changed, so that we could update the proxy state template's SPIFFE IDs for those destinations. However, they would generate a lot of reconcile requests for this controller, because ServiceEndpoints objects change frequently (they contain workload health status). This is solved by adding a status to the Service object tracking its "bound identities" and having the service endpoints controller update it. Because the sidecar proxy controller already watches Service objects, the updated status gives it the signal it needs.
* Adds a watch for workloads. We need it to get updates when a workload's ports change; it also ensures that we update cached identities if a workload's identity changes.
This commit adds a new type ComputedDestinations that will contain all destinations from any Destinations resources and will be name-aligned with a workload. This also adds an explicit-destinations controller that computes these resources.
This is needed to simplify the tracking we need to do currently in the sidecar-proxy controller and makes it easier to query all explicit destinations that apply to a workload.
* Introduce a new type `ComputedProxyConfiguration` and add a controller for it. This is needed for two reasons. The first is that external integrations like Kubernetes may need to read the fully computed and sorted proxy configuration per workload. The second is that it makes the sidecar-proxy controller logic quite a bit simpler, as it no longer needs to do this computation itself.
* Generalize the workload selection mapper and fix a bug where it would delete IDs from the tree if only one was left after a removal.
Convert more of the xRoutes features that were skipped in an earlier PR into ComputedRoutes and make them work:
- DestinationPolicy defaults
- more timeouts
- load balancer policy
- request/response header mutations
- urlrewrite
- GRPCRoute matches
xRoute resource types contain a slice of parentRefs to services that they
manipulate traffic for. All xRoutes that have a parentRef to given Service
will be merged together to generate a ComputedRoutes resource
name-aligned with that Service.
This means that a write of an xRoute with 2 parent ref pointers will cause
at most 2 reconciles for ComputedRoutes.
If that xRoute's list of parentRefs were ever to be reduced, or otherwise
lose an item, that subsequent map event will only emit events for the current
set of refs. The removed ref will not cause the generated ComputedRoutes
related to that service to be re-reconciled to omit the influence of that xRoute.
To combat this, we will store on the ComputedRoutes resource a
BoundResources []*pbresource.Reference field with references to all
resources that were used to influence the generated output.
When the routes controller reconciles, it will use a bimapper to index this
influence, and the dependency mappers for the xRoutes will look
themselves up in that index to discover additional (former) ComputedRoutes
that need to be notified as well.
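A hedged sketch of that two-step mapping with simplified types:
```go
package routes

// Ref is a simplified stand-in for *pbresource.Reference.
type Ref struct{ Type, Name string }

// boundIndex is a simplified bimapper view: for each influencing resource
// (such as an xRoute), the ComputedRoutes it was bound into.
type boundIndex map[Ref][]Ref

// mapXRoute emits one ComputedRoutes request per current parentRef, plus
// requests for any ComputedRoutes this xRoute previously influenced, so a
// removed parentRef still triggers a reconcile that drops the stale route.
func mapXRoute(route Ref, parentRefs []Ref, idx boundIndex) []Ref {
	seen := make(map[Ref]bool)
	var out []Ref
	add := func(cr Ref) {
		if !seen[cr] {
			seen[cr] = true
			out = append(out, cr)
		}
	}
	for _, p := range parentRefs {
		add(Ref{Type: "ComputedRoutes", Name: p.Name})
	}
	for _, cr := range idx[route] {
		add(cr)
	}
	return out
}
```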
Ensure that configuring a FailoverPolicy for a service reachable via an xRoute or a direct upstream causes an Envoy aggregate cluster to be created for the original cluster name, with separate underlying clusters for each of the possible failover destinations.