* Upgrade Go to 1.21
* ci: detect Go backwards compatibility test version automatically
For our submodules and the other places where we choose to test against previous Go versions, detect that version automatically from the current one rather than hard-coding it.
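For illustration, a minimal Go sketch of deriving the previous minor version from the current one; the helper name and the version-string format are assumptions, not the actual CI code, which may read the version from go.mod instead.

```go
package ci

import (
	"fmt"
	"strconv"
	"strings"
)

// previousGoVersion derives the previous minor Go version from the current
// one, e.g. "go1.21" or "1.21.3" -> "1.20". This is a sketch of the idea only.
func previousGoVersion(current string) (string, error) {
	v := strings.TrimPrefix(current, "go")
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected Go version %q", current)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil || minor < 1 {
		return "", fmt.Errorf("unexpected minor version in %q", current)
	}
	return fmt.Sprintf("%s.%d", parts[0], minor-1), nil
}
```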
* NET-6426 Create ProxyStateTemplate when reconciling MeshGateway resource
* Add TODO for switching fetch method based on gateway type
* Use gateway-kind in workload metadata instead of owner reference
* Create ProxyStateTemplate builder for gatewayproxy controller
* Update to use new controller interface
* Add copyright headers
* Set correct name for ProxyStateTemplate identity reference
* Generate empty ProxyStateTemplate by fetching MeshGateway
This cheats and looks up the MeshGateway directly. In the future, we will need a Workload => xGateway mapper.
* Specify owner reference when writing ProxyStateTemplate
* Update dependency mapper to account for multiple controllers per resource type
* Regenerate v2 resource dependencies map
* Add helpful trace logs, tag TODOs with ticket identifiers
* NET-6899 Create name-aligned Service when reconciling MeshGateway resource
The Service has an owner reference added to it, indicating that it belongs to a MeshGateway.
* Specify port list when creating Service
* Use constants, add TODO w/ ticket reference
* Include gateway-kind in metadata of Service resource
* NET-6663 Modify sidecarproxy controller to skip xGateway resources
* Check workload metadata after nil-check for workload
* Add test asserting that workloads with meta gateway-kind are ignored
* Use more common pattern for map access to increase readability
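A minimal sketch of the skip check this implies, assuming gateway workloads are tagged with a "gateway-kind" metadata key as in the entries above; the helper name is hypothetical.

```go
package sidecarproxy

// isGatewayWorkload is a sketch of the skip logic: workloads tagged with the
// "gateway-kind" metadata key are handled by the gateway controllers, not the
// sidecar proxy controller. Call this only after nil-checking the workload.
func isGatewayWorkload(meta map[string]string) bool {
	kind, ok := meta["gateway-kind"]
	return ok && kind != ""
}
```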
* Add a make target to run lint-consul-retry on all the modules
* Clean up sdk/testutil/retry
* Fix a bunch of retry.Run* usages so they don't use the outer testing.T
* Fix some more recent retry lint issues and pin to v1.4.0 of lint-consul-retry
* Fix codegen copywrite lint issues
* Don’t perform cleanup after each retry attempt by default.
* Use the common testutil.TestingTB interface in test-integ/tenancy
* Fix retry tests
* Update otel access logging extension test to perform requests within the retry block
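For illustration, a sketch of the pattern these fixes converge on: assertions inside retry.Run go through the *retry.R, and the requests themselves happen inside the retry block. The test name and endpoint below are placeholders, not actual tests from the repo.

```go
package example

import (
	"net/http"
	"testing"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

// TestLeaderEventuallyElected shows the retry pattern: fail through r, never
// the outer t, so a failed attempt is retried instead of failing the test.
func TestLeaderEventuallyElected(t *testing.T) {
	retry.Run(t, func(r *retry.R) {
		// Perform the request inside the retry block so it is re-issued on
		// each attempt (illustrative endpoint only).
		resp, err := http.Get("http://127.0.0.1:8500/v1/status/leader")
		if err != nil {
			r.Fatalf("request failed: %v", err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			r.Fatalf("unexpected status code: %d", resp.StatusCode)
		}
	})
}
```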
* test: Address occasional flakes in sidecarproxy/controller_test.go
We've observed an occasional flake in this test where some state check
fails. Adding in some wait wrappers to these state checks will hopefully
address the issue, assuming it is a simple flake.
* Add meshconfiguration/controller
* Add MeshConfiguration Registration function
* Fix the TODOs on the RegisterMeshGateway function
* Call RegisterMeshConfiguration
* Add comment to MeshConfigurationRegistration
* Add a test for Reconcile and some comments
* [NET-6438] Add tenancy to xDS Tests
- Fixing imports
- Added cleanup post test run
- Using t.Cleanup instead of defer delete
- Rebased
* [NET-6356] Add tenancy to xDS Tests
- Added cleanup post test run
* Generate resource_types for MeshGateway by specifying spec option
* Register MeshGateway type w/ TODOs for hooks
* Initialize controller for MeshGateway resources
* Add meshgateway to list of v2 resource dependencies for golden test
* Scope MeshGateway resource to partition
* node health controller tenancy
* some progress
* some fixes
* revert
* pr comment resolved
* removed name
* Add namespace and tenancy in sidecar proxy controller test
* revert node health controller
* clean up data
* fix local
* copy from ENT
* removed duplicate code
* removed tenancy
* add test tenancies
* cover all protocols in local_app golden tests
* fix xds tests
* updating latest
* fix broken test
* add sorting of routers to TestBuildLocalApp to eliminate flakiness
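A minimal sketch of that determinism fix, with a hypothetical router type standing in for the real one.

```go
package builder

import "sort"

// router is a hypothetical stand-in for the routers built by TestBuildLocalApp.
type router struct{ Name string }

// sortRoutersByName orders routers by name before golden-file comparison so
// nondeterministic ordering cannot flake the test.
func sortRoutersByName(routers []router) {
	sort.Slice(routers, func(i, j int) bool {
		return routers[i].Name < routers[j].Name
	})
}
```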
* Add some generic type hook wrappers to first decode the data
There seems to be a pattern for Validation, Mutation, and Write Authorization hooks where they first need to decode the Any data before doing the domain-specific work.
This PR introduces three new functions that generate wrappers around the other hooks to pre-decode the data into a DecodedResource and pass that in instead of the original pbresource.Resource.
This PR also updates the various catalog data types to use the new hook generators.
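A hedged sketch of that wrapper pattern using stand-in types rather than the actual resource package; the Resource struct, DecodedResource, and DecodeAndValidate names here are illustrative assumptions.

```go
package hooks

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/anypb"
)

// Resource is a hypothetical stand-in for pbresource.Resource; like the real
// type, it carries its payload as a protobuf Any.
type Resource struct {
	Data *anypb.Any
}

// DecodedResource pairs the envelope with its decoded, strongly typed payload.
type DecodedResource[T proto.Message] struct {
	Resource *Resource
	Data     T
}

// DecodeAndValidate turns a typed validation hook into one that accepts the
// raw resource: it decodes the Any payload first and fails early if the
// payload holds the wrong message type.
func DecodeAndValidate[T proto.Message](fn func(*DecodedResource[T]) error) func(*Resource) error {
	return func(res *Resource) error {
		msg, err := res.Data.UnmarshalNew()
		if err != nil {
			return fmt.Errorf("failed decoding resource data: %w", err)
		}
		data, ok := msg.(T)
		if !ok {
			return fmt.Errorf("resource data has unexpected type %T", msg)
		}
		return fn(&DecodedResource[T]{Resource: res, Data: data})
	}
}
```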
* xds: Ensure v2 route match is populated for gRPC
Similar to HTTP, ensure that route match config (which is required by
Envoy) is populated when default values are used.
Because the default matches generated for gRPC contain a single empty
`GRPCRouteMatch`, and that proto does not directly support prefix-based
config, an interpretation of the empty struct is needed to generate the
same output that the `HTTPRouteMatch` is explicitly configured to
provide in internal/mesh/internal/controllers/routes/generate.go.
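A hedged sketch of that interpretation, with a simplified stand-in for the GRPCRouteMatch type; only the empty-match case from the description above is shown.

```go
package xdsv2

import (
	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
)

// grpcRouteMatch is a simplified, hypothetical stand-in for the computed
// GRPCRouteMatch; only the fields needed for the empty-match case are here.
type grpcRouteMatch struct {
	Method  string
	Headers map[string]string
}

// makeGRPCRouteMatch treats a single empty gRPC match as the catch-all
// prefix "/" match, mirroring what the HTTP route generation configures
// explicitly.
func makeGRPCRouteMatch(match grpcRouteMatch) *routev3.RouteMatch {
	if match.Method == "" && len(match.Headers) == 0 {
		return &routev3.RouteMatch{
			PathSpecifier: &routev3.RouteMatch_Prefix{Prefix: "/"},
		}
	}
	// Non-empty matches would translate the method and header matchers here;
	// omitted for brevity in this sketch.
	return nil
}
```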
* xds: Ensure protocol set for gRPC resources
Add explicit protocol in `ProxyStateTemplate` builders and validate it
is always set on clusters. This ensures that HTTP filters and
`http2_protocol_options` are populated in all the necessary places for
gRPC traffic and prevents future unintended omissions of non-TCP
protocols.
Co-authored-by: John Murret <john.murret@hashicorp.com>
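A minimal sketch of the cluster protocol validation described in the entry above, using hypothetical types; the real validation operates on the ProxyStateTemplate's clusters.

```go
package validate

import "fmt"

// cluster is a hypothetical stand-in for the cluster shape produced by the
// ProxyStateTemplate builders; only the fields needed here are included.
type cluster struct {
	Name     string
	Protocol string // e.g. "tcp", "http", "http2", "grpc"
}

// validateClusterProtocols enforces the invariant described above: every
// cluster must carry an explicit protocol so HTTP filters and
// http2_protocol_options can be derived for non-TCP traffic such as gRPC.
func validateClusterProtocols(clusters []cluster) error {
	for _, c := range clusters {
		if c.Protocol == "" {
			return fmt.Errorf("cluster %q has no protocol set", c.Name)
		}
	}
	return nil
}
```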
* NET-5397 - wire up golden tests from sidecar-proxy controller for xds controller and xdsv2
* WIP
* WIP
* everything matching except leafCerts. need to mock those
* single port destinations working except mixed destinations
* golden test input to xds controller tests for destinations
* proposed fix for failover group naming errors
* clean up test to use helper.
* clean up test to use helper.
* fix test file
* add docstring for test function.
* add docstring for test function.
* fix linting error
* fixing test after route fix merged into main
* first source test works
* WIP
* modify all source files
* source tests pass
* fixing tests after bug fix in main
* When testing adding HTTP probes to apps, I ran into some issues which are fixed here:
- The listener should be listening on the exposed listener port; updated that.
- The listener and route names were derived from the exposed path. In my test, the path was "/", resulting in an empty-string name. The path also may not be unique across exposed-path listeners, so I decided to use the path plus the exposed port as the unique identifier.
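A hypothetical sketch of that naming scheme; the function and the name format are illustrative, not the actual implementation.

```go
package builder

import (
	"fmt"
	"strings"
)

// exposedPathName combines the cleaned path with the exposed port so the
// listener/route name is non-empty for "/" and unique across exposed-path
// listeners.
func exposedPathName(path string, exposedPort uint32) string {
	cleaned := strings.ReplaceAll(strings.Trim(path, "/"), "/", "_")
	if cleaned == "" {
		cleaned = "root"
	}
	return fmt.Sprintf("exposed_path_%s_%d", cleaned, exposedPort)
}
```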
* This change adds ACL hooks to the remaining catalog and mesh resources, excluding any computed ones. Those will, for now, continue using the default operator:x permissions.
It refactors a lot of the common testing functions so that they can be re-used between resources.
It also adds ACL hooks to some types that we don't yet support (e.g. virtual IPs) for future-proofing.
* This implements the Filter field on pbcatalog.WorkloadSelector as a post-fetch, in-memory filter using the https://github.com/hashicorp/go-bexpr expression language to filter resources based on their envelope metadata fields. All existing usages of WorkloadSelector should be able to make use of the filter.
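For illustration, a minimal, self-contained example of how a go-bexpr expression evaluates against metadata; the envelope type and the expression are stand-ins, not the actual controller code.

```go
package main

import (
	"fmt"

	bexpr "github.com/hashicorp/go-bexpr"
)

// envelope is a hypothetical stand-in for the resource envelope fields the
// filter is evaluated against; the real code filters pbresource metadata.
type envelope struct {
	Metadata map[string]string
}

func main() {
	// Example filter expression as it might appear in WorkloadSelector.Filter.
	eval, err := bexpr.CreateEvaluator(`Metadata.env == "prod"`)
	if err != nil {
		panic(err)
	}

	match, err := eval.Evaluate(envelope{Metadata: map[string]string{"env": "prod"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(match) // true
}
```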
* xdsv2: support l7 by adding xfcc policy/headers, tweaking routes, and make a bunch of listeners l7 tests pass
* sidecarproxycontroller: add l7 local app support
* trafficpermissions: make l4 traffic permissions work on l7 workloads
* rename route name field for consistency with l4 cluster name field
* resolve conflicts and rebase
* fix: ensure the route name is also used in the l7 destination route name. Previously it appeared only in the route names themselves; now the route name and the l7 destination route name line up.
* Sometimes workloads can come with unspecified protocols, such as when running on Kubernetes. Currently, if this is the case, we just default to the tcp protocol.
However, to make the sidecar-proxy controller work with l7 protocols, we should instead inherit the protocol from the service. This change adds tracking for the services that a workload is part of and inherits the protocol whenever those services don't have conflicting protocols.
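A hedged sketch of that inheritance rule; the function name and the exact conflict handling are assumptions.

```go
package sidecarproxy

// inheritedProtocol sketches the rule described above: a workload port with
// no explicit protocol inherits the protocol of the services it is part of,
// but only when those services agree; otherwise it keeps the tcp default.
func inheritedProtocol(serviceProtocols []string) string {
	inherited := ""
	for _, p := range serviceProtocols {
		if p == "" {
			continue
		}
		if inherited != "" && inherited != p {
			// Conflicting protocols across services: keep the tcp default.
			return "tcp"
		}
		inherited = p
	}
	if inherited == "" {
		return "tcp"
	}
	return inherited
}
```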
* This change builds on #19043 and #19067 and updates the sidecar controller to use those computed resources. This achieves several benefits:
* The cache is now simplified, which helps us solve previous bugs (such as multiple Upstreams/Destinations targeting the same service overwriting each other).
* We no longer need the proxy config cache.
* We no longer need to merge proxy configs as part of the controller logic.
* Controller watches are simplified because we no longer need complex mapping using the cache and can instead use the simple ReplaceType mapper.
It also makes several other improvements/refactors:
* Unifies all caches into one (see the bimapper sketch after this list). Originally the caches were more independent, but now that they need to interact with each other it made sense to unify them, so the sidecar proxy controller uses one cache with three bimappers.
* Unifies the cache and mappers. The mapper already needed all the caches anyway, so now that the cache is unified it made sense to have the cache do the mapping as well.
* Gets rid of service endpoints watches. These were needed to get updates when a service's identities changed so that we could update the proxy state template's SPIFFE IDs for those destinations. However, watching service endpoints would generate a lot of reconcile requests for this controller because service endpoints objects change frequently (they contain workload health status). This is solved by adding a status to the service object tracking "bound identities" and having the service endpoints controller update it. Updating the service's status lets the sidecar proxy controller pick up the change because it is already watching service objects.
* Adds a watch for workloads. We need it so that we get updates if a workload's ports change. This also ensures that we update cached identities if a workload's identity changes.
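For illustration, a minimal, hypothetical bimapper in the spirit of the unified cache referenced above; the real implementation tracks typed resource IDs rather than strings.

```go
package cache

// bimapper tracks many-to-many links between an "item" ID (e.g. a workload's
// ProxyStateTemplate) and the IDs it depends on (e.g. destination services),
// so a change on either side can be mapped back to the items to reconcile.
type bimapper struct {
	itemToLinks map[string]map[string]struct{}
	linkToItems map[string]map[string]struct{}
}

func newBimapper() *bimapper {
	return &bimapper{
		itemToLinks: make(map[string]map[string]struct{}),
		linkToItems: make(map[string]map[string]struct{}),
	}
}

// TrackItem replaces the set of links recorded for item.
func (m *bimapper) TrackItem(item string, links []string) {
	m.UntrackItem(item)
	set := make(map[string]struct{}, len(links))
	for _, l := range links {
		set[l] = struct{}{}
		if m.linkToItems[l] == nil {
			m.linkToItems[l] = make(map[string]struct{})
		}
		m.linkToItems[l][item] = struct{}{}
	}
	m.itemToLinks[item] = set
}

// UntrackItem removes item and all of its links.
func (m *bimapper) UntrackItem(item string) {
	for l := range m.itemToLinks[item] {
		delete(m.linkToItems[l], item)
		if len(m.linkToItems[l]) == 0 {
			delete(m.linkToItems, l)
		}
	}
	delete(m.itemToLinks, item)
}

// ItemsForLink answers "which items must be reconciled when link changes?".
func (m *bimapper) ItemsForLink(link string) []string {
	out := make([]string, 0, len(m.linkToItems[link]))
	for item := range m.linkToItems[link] {
		out = append(out, item)
	}
	return out
}
```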