* Include secret type when building resources from config snapshot
* First pass at generating envoy secrets from api-gateway snapshot
* Update comments for xDS update order
* Add secret type + corresponding golden files to existing tests
* Initialize test helpers for testing api-gateway resource generation
* Generate golden files for new api-gateway xDS resource test
* Support ADS for TLS certificates on api-gateway
* Configure TLS on api-gateway listeners
* Inline TLS cert code
* update tests
* Add SNI support so we can have multiple certificates
* Remove commented out section from helper
* regen deep-copy
* Add tcp tls test
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Simple API Gateway e2e test for tcp routes
* Drop DNSSans since we don't front the Gateway with a leaf cert
* WIP listener tests for api-gateway
* Return early if no routes
* Add back in leaf cert to testing
* Fix merge conflicts
* Re-add kind to setup
* Fix iteration over listener upstreams
* New tcp listener test
* Add tests for API Gateway with TCP and HTTP routes
* Move zero-route check back
* Drop generateIngressDNSSANs
* Check for chains not routes
---------
Co-authored-by: Andrew Stucki <andrew.stucki@hashicorp.com>
* draft
* expose internal admin port and add proxy test
* update tests
* move comment
* add failure case, fix lint issues
* cleanup
* handle error
* revert changes to service interface
* address review comments
* fix merge conflict
* merge the tests so cluster is created once
* fix other test
* [API Gateway] Add integration test for conflicted TCP listeners
* [API Gateway] Update simple test to leverage intentions and multiple listeners
* Fix broken unit test
* [API Gateway] Add integration test for HTTP routes
* PR suggestions
Prior to this commit, secondary datacenters could not be initialized
as peering acceptors if ACLs were enabled, because internal
server-to-server API calls would fail when the management token had not
been generated. This PR makes it so that both primary and secondary
datacenters generate their own management token whenever a leader is
elected in their respective clusters.
1. Upgraded agent can inherit the persisted token and join the cluster
2. Agent token prior to upgrade is still valid after upgrade
3. Enable ACL in the agent configuration
* rate limit test
* Have tests for the 3 modes
* added assertions for logs and metrics
* add comments to test sections
* add check for rate limit exceeded text in log assertion section.
* fix linting error
* updating test to use KV get and put. move log assertion to last.
* Adding logging for blocking messages in enforcing mode. Refactoring tests.
* modified test description
* formatting
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* Update test/integration/consul-container/test/ratelimit/ratelimit_test.go
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
* expand log checking so that it ensures both logs are there when they are supposed to be and not there when they are not expected to be.
* add retry on test
* Warn once when the rate limit is exceeded, regardless of enforcing vs permissive mode.
* Update test/integration/consul-container/test/ratelimit/ratelimit_test.go
Co-authored-by: Dan Upton <daniel@floppy.co>
Co-authored-by: Dan Upton <daniel@floppy.co>
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
- remove dep on consul main module
- use 'consul tls' subcommands instead of tlsutil
- use direct json config construction instead of agent/config structs (see the sketch below)
- merge libcluster and libagent packages together
- more widely use BuildContext
- get the OSS/ENT runner stuff working properly
- reduce some flakiness
- fix some correctness related to http/https API
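As a rough illustration of the direct JSON construction, here is a minimal sketch; the keys shown are standard agent config fields, and the values are placeholders:
```
// Minimal sketch of building agent config as plain JSON instead of
// going through the agent/config structs. Values are placeholders.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"node_name":        "test-server-1",
		"datacenter":       "dc1",
		"server":           true,
		"bootstrap_expect": 1,
		"ports": map[string]interface{}{
			"http": 8500,
			"grpc": 8502,
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```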
* Protobuf Modernization
Remove direct usage of golang/protobuf in favor of google.golang.org/protobuf
Marshallers (protobuf and json) needed some changes to account for different APIs.
Moved to using google.golang.org/protobuf/types/known/* for the well-known types, including replacing some custom Struct manipulation with what's available in the structpb well-known type package (see the sketch below).
This also updates our devtools script to install protoc-gen-go from the right location so that files it generates conform to the correct interfaces.
* Fix go-mod-tidy make target to work on all modules
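For the structpb change mentioned above, a minimal sketch of the pattern (the map contents are placeholders): structpb.NewStruct builds a *structpb.Struct directly from a Go map instead of assembling Value fields by hand.
```
package main

import (
	"fmt"

	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// Build a protobuf Struct from a plain Go map; contents are placeholders.
	s, err := structpb.NewStruct(map[string]interface{}{
		"name":    "web",
		"port":    8080,
		"enabled": true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Fields["name"].GetStringValue()) // "web"
}
```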
* Refactoring the peering integ test to accommodate upcoming changes for other upgrade scenarios.
- Add a utils package under test that contains methods to set up various test scenarios.
- Deduplication: have a single CreatingPeeringClusterAndSetup replace
CreatingAcceptingClusterAndSetup and CreateDialingClusterAndSetup.
- Separate peering cluster creation and server registration.
* Apply suggestions from code review
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Adds automation for generating the map of `gRPC Method Name → Rate Limit Type`
used by the middleware introduced in #15550, ensuring we don't forget
to add new endpoints.
Engineers must annotate their RPCs in the proto file like so:
```
rpc Foo(FooRequest) returns (FooResponse) {
  option (hashicorp.consul.internal.ratelimit.spec) = {
    operation_type: READ,
  };
}
```
When they run `make proto` a protoc plugin `protoc-gen-consul-rate-limit` will
be installed that writes rate-limit specs as a JSON array to a file called
`.ratelimit.tmp` (one per protobuf package/directory).
After running Buf, `make proto` will execute a post-process script that will
ingest all of the `.ratelimit.tmp` files and generate a Go file containing the
mappings in the `agent/grpc-middleware` package. In the enterprise repository,
it will write an additional file with the enterprise-only endpoints.
If an engineer forgets to add the annotation to a new RPC, the plugin will
return an error like so:
```
RPC Foo is missing rate-limit specification, fix it with:

import "proto-public/annotations/ratelimit/ratelimit.proto";

service Bar {
  rpc Foo(...) returns (...) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_READ | OPERATION_WRITE | OPERATION_EXEMPT,
    };
  }
}
```
In the future, this annotation can be extended to support rate-limit
category (e.g. KV vs Catalog) and to determine the retry policy.
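For illustration, the generated mapping might look roughly like this; the identifiers and method path below are hypothetical, not the actual generated code:
```
// Hypothetical sketch of the generated file in agent/grpc-middleware;
// names and method paths are illustrative only.
package middleware

type RateLimitType int

const (
	RateLimitTypeRead RateLimitType = iota
	RateLimitTypeWrite
	RateLimitTypeExempt
)

// Full gRPC method name -> rate limit type, one entry per annotated RPC.
var rpcRateLimitSpecs = map[string]RateLimitType{
	"/hashicorp.consul.internal.example.Bar/Foo": RateLimitTypeRead,
}
```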
* feat(ingress-gateway): support outlier detection of upstream service for ingress gateway
* changelog
Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>
* integ-test: fix flaky test - case-cfg-splitter-peering-ingress-gateways
* add retry peering to all peering cases
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* integ-test: test consul upgrade from the snapshot of a running cluster
* use Target version as default
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* update go version to 1.18 for api and sdk, go mod tidy
* removes ioutil usage everywhere; it was deprecated in go1.16 in favour of the io and os packages. Also introduces a lint rule which forbids use of ioutil going forward.
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
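The replacements are mechanical; for example:
```
// Deprecated ioutil calls map directly onto io/os equivalents
// (available since go1.16):
//
//	ioutil.ReadFile  -> os.ReadFile
//	ioutil.WriteFile -> os.WriteFile
//	ioutil.ReadAll   -> io.ReadAll
//	ioutil.TempDir   -> os.MkdirTemp
//	ioutil.TempFile  -> os.CreateTemp
//	ioutil.Discard   -> io.Discard
package main

import (
	"io"
	"os"
	"strings"
)

func main() {
	data, err := io.ReadAll(strings.NewReader("hello"))
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("out.txt", data, 0o644); err != nil {
		panic(err)
	}
}
```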
This is instead of the current behavior where we feed the config entries in using the config_entries.bootstrap configuration, which oddly races against other setup code in some circumstances.
I converted ALL tests to explicitly create config entries.
The integration test TestEnvoy/case-ingress-gateway-multiple-services is flaky,
and this possibly reduces the flakiness by explicitly waiting for services to show
up as healthy in the catalog before waiting for them to show up as healthy in envoy,
which gives it just a bit more time to sync.
* updating to serf v0.10.1 and memberlist v0.5.0 to get memberlist size metrics and memberlist broadcast queue depth metric
* update changelog
* update changelog
* correcting changelog
* adding "QueueCheckInterval" for memberlist to test
* updating integration test containers to grab latest api
Make the bats verification output more readable in CI.
```
Running primary verification step for case-ingress-gateway-multiple-services...
\e[34;1mverify.bats
\e[0m\e[1G ingress proxy admin is up on :20000\e[K\e[75G 1/12\e[2G\e[1G ✓ ingress proxy admin is up on :20000\e[K
\e[0m\e[1G s1 proxy admin is up on :19000\e[K\e[75G 2/12\e[2G\e[1G ✓ s1 proxy admin is up on :19000\e[K
\e[0m\e[1G s2 proxy admin is up on :19001\e[K\e[75G 3/12\e[2G\e[1G ✓ s2 proxy admin is up on :19001\e[K
\e[0m\e[1G s1 proxy listener should be up and have right cert\e[K\e[75G 4/12\e[2G\e[1G ✓ s1 proxy listener should be up and have right cert\e[K
\e[0m\e[1G s2 proxy listener should be up and have right cert\e[K\e[75G 5/12\e[2G\e[1G ✓ s2 proxy listener should be up and have right cert\e[K
\e[0m\e[1G ingress-gateway should have healthy endpoints for s1\e[K\e[75G 6/12\e[2G\e[31;1m\e[1G ✗ ingress-gateway should have healthy endpoints for s1\e[K
\e[0m\e[31;22m (from function `assert_upstream_has_endpoints_in_status' in file /workdir/primary/bats/helpers.bash, line 385,
```
versus
```
Running primary verification step for case-ingress-gateway-multiple-services...
1..12
ok 1 ingress proxy admin is up on :20000
ok 2 s1 proxy admin is up on :19000
ok 3 s2 proxy admin is up on :19001
ok 4 s1 proxy listener should be up and have right cert
ok 5 s2 proxy listener should be up and have right cert
not ok 6 ingress-gateway should have healthy endpoints for s1
not ok 7 s1 proxy should have been configured with max_connections in services
ok 8 ingress-gateway should have healthy endpoints for s2
```
* feat(ingress-gateway): support configuring limits in ingress-gateway config entry
- a new Defaults field with max_connections, max_pending_connections, and max_requests
is added to the ingress gateway config entry (see the sketch after this list)
- new fields max_connections, max_pending_connections, and max_requests on
individual services override the value in Defaults
- added unit test and integration test
- updated doc
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
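For illustration, a minimal sketch of applying such a config entry over the HTTP API; the CamelCase field names are assumptions inferred from the snake_case names above and may differ from the final implementation:
```
// Hypothetical sketch: the Defaults field names are inferred from the
// commit description and may not match the final implementation.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	entry := map[string]interface{}{
		"Kind": "ingress-gateway",
		"Name": "ingress-service",
		// Defaults apply to all services unless a service sets its own limits.
		"Defaults": map[string]interface{}{
			"MaxConnections":        4096, // assumed name for max_connections
			"MaxPendingConnections": 512,  // assumed name for max_pending_connections
			"MaxRequests":           1024, // assumed name for max_requests
		},
	}
	body, err := json.Marshal(entry)
	if err != nil {
		log.Fatal(err)
	}
	// Config entries are applied with PUT /v1/config.
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/config", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("applied config entry:", resp.Status)
}
```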
Without this change, you'd see this error:
```
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable
```
Locally, always run integration tests using amd64, even if running
on an arm mac. This ensures the architecture locally always matches
the CI/CD environment.
In addition:
* Use consul:local for envoy integration and upgrade tests. Previously,
consul:local was used for upgrade tests and consul-dev for integration
tests. I didn't see a reason to use separate images as it's more
confusing.
* By default, disable the requirement that AWS credentials are set.
These are only needed for the lambda tests, but requiring them meant
you couldn't run any tests locally unless you were running the
lambda tests. Now the lambda tests only run if the LAMBDA_TESTS_ENABLED
env var is set.
* Split out the building of the Docker image for integration
tests into its own target from `dev-docker`. This allows us to always
use an amd64 image without messing up the `dev-docker` target.
* Add support for passing GO_TEST_FLAGS to the `test-envoy-integ` target.
* Add a wait_for_leader function because tests were failing locally
without it.
* defaulting to false because peering will be released as beta
* Ignore peering disabled error in bundles cachetype
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: freddygv <freddy@hashicorp.com>
Co-authored-by: Matt Keeler <mjkeeler7@gmail.com>
Now that peered upstreams can generate envoy resources (#13758), we need a way to disambiguate local from peered resources in our metrics. The key difference is that datacenter and partition will be replaced with peer, since in the context of peered resources partition is ambiguous (could refer to the partition in a remote cluster or one that exists locally). The partition and datacenter of the proxy will always be that of the source service.
Regexes were updated to make emitting datacenter and partition labels mutually exclusive with peer labels.
Listener filter names were updated to better match the existing regex.
Cluster names assigned to peered upstreams were updated to be synthesized from local peer name (it previously used the externally provided primary SNI, which contained the peer name from the other side of the peering). Integration tests were updated to assert for the new peer labels.
Ensure that the peer stream replication rpc can successfully be used with TLS activated.
Also:
- If key material is configured for the gRPC port but HTTPS is not
enabled, TLS will now still be activated for the gRPC port.
- The peer stream replication stream opened by the establishing side now
ignores grpc.WithBlock so that TLS errors bubble up instead of
being awkwardly delayed or suppressed (see the sketch below).
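A minimal sketch of the dial pattern (addr and tlsCfg are placeholders): without grpc.WithBlock the dial returns immediately, so a failed TLS handshake surfaces as an error on the first RPC rather than as a silent wait or a generic dial timeout.
```
// Minimal sketch; addr and tlsCfg are placeholders.
package peering

import (
	"context"
	"crypto/tls"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func dial(addr string, tlsCfg *tls.Config) (*grpc.ClientConn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// No grpc.WithBlock here: the connection is established lazily and
	// any TLS handshake error bubbles up on the first RPC instead of
	// being masked behind a blocking dial.
	return grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(credentials.NewTLS(tlsCfg)),
	)
}
```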
Peer replication is intended to be between separate Consul installs and
effectively should be considered "external". This PR moves the peer
stream replication bidirectional RPC endpoint to the external gRPC
server and ensures that things continue to function.
- fix sg: need remote access to test server
- Give the load generator a name
- Update loadtest hcl filename in readme
- Add terraform init
- Disable access to the server machine by default
Require use of mesh gateways in order for service mesh data plane
traffic to flow between peers.
This also adds plumbing for envoy integration tests involving peers, and
one starter peering test.
* tidy code and add some doc strings
* add doc strings to tests
* add partitions tests, need to adapt to run in both oss and ent
* split oss and enterprise versions
* remove parallel tests
* add error
* fix queryBackend in test
* revert unneeded change
* fix failing tests
* add a sample
* Consul cluster test
* add build dockerfile
* add tests to cover mixed versions tests
* use flag to pass docker image name
* remove default config and rely on flags to inject the right image to test
* add cluster abstraction
* fix imports and remove old files
* fix dockerIgnore
* make a `Node interface` and encapsulate ConsulContainer
* fix a test bug where we only check the leader against a single node.
* add upgrade tests to CI
* fix yaml alignment
* fix alignment take 2
* fix flag naming
* fix image to build
* fix test run and go mod tidy
* add a debug command
* run without RYUK
* fix parallel run
* add skip reaper code
* make tempdir in local dir
* chmod the temp dir to 0777
* chmod the right dir name
* change executor to use machine instead of docker
* add docker layer caching
* remove setup docker
* add gotestsum
* install go version
* use variable for GO installed version
* add environment
* add environment in the right place
* do not disable RYUK in CI
* add service check to tests
* assertions outside routines
* add queryBackend to the api query meta (see the sketch after this list).
* check if we are using the right backend for those tests (streaming)
* change the tested endpoint to use one that has streaming.
* refactor to test multiple scenarios for streaming
* Fix dockerfile
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
* rename Clients to clients
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
* check if cluster has 0 nodes
* use uuid instead of random string
* add a changelog
* fix for api backend query
* add missing require
* fix q.QueryBackend
* Revert "fix q.QueryBackend"
This reverts commit cd0e5f7b1a.
* fix circle ci config
* tidy go mod after merging main
* rename package and fix test scenario
* update go download url
* address review comments
* rename flag in CI
* add readme to the upgrade tests
* fix golang download url
* fix golang arch downloaded
* fix AddNodes to handle an empty cluster case
* use `parseBool`
* rename circle job and add comment
* update testcontainer to 0.13
* fix circle ci config
* remove build docker file and use `make dev-docker` instead
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* fix a typo
Co-authored-by: FFMMM <FFMMM@users.noreply.github.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
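For reference, a minimal sketch of the backend check from the API client side; QueryBackend on api.QueryMeta reports which backend served the query (the service name here is a placeholder):
```
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}
	// Health queries are among the endpoints that can be served by the
	// streaming backend; QueryMeta reports which backend answered.
	_, meta, err := client.Health().Service("web", "", false, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("query served by backend:", meta.QueryBackend)
}
```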
* Add partition fields to targets like service route destinations
* Update validation to prevent cross-DC + cross-partition references
* Handle partitions when reading config entries for disco chain
* Encode partition in compiled targets
The secondary DC now takes longer to populate the MGW snapshot because
it needs to wait for the secondary CA to be initialized before it can
receive roots and generate xDS config.
Previously MGWs could receive empty roots before the CA was
initialized. This wasn't necessarily a problem since the cluster ID in
the trust domain isn't verified.
We launch one container as part of the test with --pid=host, but apparently
within that container a copy of "tini" is launched as a
process supervisor that prefers to be PID 1.
Because it's not PID 1, it logs a warning message to the envoy
integration test logs, which can lead to thinking a test
failure is somehow related when in fact it's completely unrelated.
Adding this environment variable avoids the warning.