* Adding explicit MPL license for sub-package
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at <Blog URL>, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
* Added oss config entries for Policy and JWT on APIGW
* Updated structs for config entry
* Updated comments, ran deep-copy
* Move JWT configuration into OSS file
* Add in the config entry OSS file for jwts
* Added changelog
* fixing proto spacing
* Moved to using manually written deep copy method
* Use pointers for override/default fields in apigw config entries
* Run gen scripts for changed types
* Add logging to locality policy application
In OSS, this is currently a no-op.
* Inherit locality when registering sidecars
When sidecar locality is not explicitly configured, inherit locality
from the proxied service.
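A minimal Go sketch of that inheritance rule, with illustrative types and names (not Consul's exact code):
```
package agent

// Locality mirrors the region/zone fields on a service registration.
type Locality struct {
	Region string
	Zone   string
}

// sidecarLocality applies the rule above: use the sidecar's own
// locality when explicitly configured, otherwise inherit the locality
// of the service it proxies.
func sidecarLocality(sidecar, proxied *Locality) *Locality {
	if sidecar != nil {
		return sidecar
	}
	return proxied
}
```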
* bump testcontainers-go from 0.22.0 and remove pinned go version in integ test
* go mod tidy
* Replace deprecated target.Authority with target.URL.Host
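For reference, a small sketch of that migration against the `google.golang.org/grpc/resolver` package (the target value below is illustrative):
```
package main

import (
	"fmt"
	"net/url"

	"google.golang.org/grpc/resolver"
)

func main() {
	// resolver.Target exposes the parsed target as a url.URL; the
	// deprecated Authority field has been removed upstream.
	t := resolver.Target{URL: url.URL{Scheme: "consul", Host: "server.dc1"}}
	fmt.Println(t.URL.Host) // replaces the old t.Authority
}
```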
* OTELExporter now uses an EndpointProvider to discover the endpoint
* OTELSink uses a ConfigProvider to obtain filters and labels configuration
* improve tests for otel_sink
* Regex logic is moved into client for a method on the TelemetryConfig object
* Create a telemetry_config_provider and update deps to use it
* Fix conversion
* fix import newline
* Add logger to hcp client and move telemetry_config out of the client.go file
* Add a telemetry_config.go to refactor client.go
* Update deps
* update hcp deps test
* Modify telemetry_config_providers
* Check for nil filters
* PR review updates
* Fix comments and move around pieces
* Fix comments
* Remove context from client struct
* Moved ctx out of sink struct and fixed filters, added a test
* Remove named imports, use errors.New if not formatting
* Remove HCP dependencies in telemetry package
* Add success metric and move lock only to grab the t.cfgHash
* Update hash
* fix nits
* Create an equals method and add tests
* Improve telemetry_config_provider.go tests
* Add race test
* Add missing godoc
* Remove mock for MetricsClient
* Avoid goroutine test panics
* trying to kick CI lint issues by upgrading mod
* improve test code and add hasher for testing
* Use structured logging for filters, fix error constants, and default to an allow-all regex
* removed hashing and modified logic to simplify
* Improve race test and address PR feedback by removing hash equals; avoid testing the timer.Ticker logic and unit test instead
* Ran make go-mod-tidy
* Use errtypes in the test
* Add changelog
* add safety check for exporter endpoint
* remove require.Contains by using error types, fix structured logging, and fix success metric typo in exporter
* Fixed race test to have changing config values
* Send success metric before modifying config
* Avoid the defer and move the success metric under
* [CC-5719] Add support for builtin global-read-only policy
* Add changelog
* Add read-only to docs
* Fix some minor issues.
* Change from ReplaceAll to Sprintf
* Change IsValidPolicy name to return an error instead of bool
* Fix PolicyList test
* Fix other tests
* Apply suggestions from code review
Co-authored-by: Paul Glass <pglass@hashicorp.com>
* Fix state store test for policy list.
* Fix naming issues
* Update acl/validation.go
Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com>
* Update agent/consul/acl_endpoint.go
---------
Co-authored-by: Paul Glass <pglass@hashicorp.com>
Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com>
Prevent partial application of Envoy extensions
Ensure that non-required extensions do not change xDS resources before
exiting on failure by cloning proto messages prior to applying each
extension.
To support this change, also move `CanApply` checks up a layer and make
them prior to attempting extension application, such that we avoid
unnecessary copies where extensions can't be applied.
Last, ensure that we do not allow panics from `CanApply` or `Extend`
checks to escape the attempted extension application.
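A condensed sketch of that pattern, assuming a simplified `Extension` interface (Consul's real interfaces carry more context):
```
package extensions

import (
	"fmt"

	"google.golang.org/protobuf/proto"
)

// Extension is a stand-in for the Envoy extension interface, mirroring
// the CanApply/Extend checks described above.
type Extension interface {
	CanApply(proto.Message) bool
	Extend(proto.Message) (proto.Message, error)
}

// applyExtension clones the resource before patching so a failing or
// panicking extension cannot leave partially modified xDS resources.
func applyExtension(ext Extension, res proto.Message) (out proto.Message, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("extension panicked: %v", r)
		}
	}()
	if !ext.CanApply(res) {
		return res, nil // checked first to avoid an unnecessary copy
	}
	patched, err := ext.Extend(proto.Clone(res))
	if err != nil {
		return res, err // caller keeps the untouched original
	}
	return patched, nil
}
```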
* Fix topology intention with mixed connect-native/normal services.
If a service is registered twice, once with connect-native and once
without, the topology views would prune the existing intentions. This
change brings the code more in line with the transparent proxy behavior.
* Dedupe nodes in the ServiceTopology ui endpoint (like done with tags).
* Consider a service connect-native as soon as one instance is.
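The last point reduces to an any-instance check; a sketch with an illustrative type:
```
package topology

// ServiceInstance is a stand-in for a catalog service node.
type ServiceInstance struct {
	ConnectNative bool
}

// serviceIsConnectNative treats a service as connect-native as soon as
// any one of its registered instances is.
func serviceIsConnectNative(instances []ServiceInstance) bool {
	for _, inst := range instances {
		if inst.ConnectNative {
			return true
		}
	}
	return false
}
```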
* api-gateway: subscribe to bound-api-gateway only after receiving api-gateway
This fixes a race condition due to our dependency on having the listener(s) from the api-gateway config entry in order to fully and properly process the resources on the bound-api-gateway config entry.
* Apply suggestions from code review
* Add changelog entry
### Description
- Currently the jwt-auth filter doesn't take the service identity into
account when validating JWTs; it only considers the path and the JWT
provider during validation. This causes issues when multiple source
intentions restrict access to an endpoint with different JWT providers.
- To fix these issues, rather than using the JWT auth filter for
validation, we run it in metadata mode and let it forward the
successfully validated JWT token payload to the RBAC filter, which
makes the decisions.
This PR ensures requests with and without JWT tokens successfully pass
through the jwt-authn filter. The filter, however, only forwards the
data for successful/valid tokens. At the RBAC filter level, we check
the payload for claims and the token issuer, in addition to the
existing RBAC rules.
### Testing & Reproduction steps
- This test covers multi-level JWT requirements (requirements at both
the top level and the permissions level). It also assumes you have
Envoy running, a redis service and a sidecar proxy service registered,
and a way to generate a JWKS with a JWT. I mostly use
https://www.scottbrady91.com/tools/jwt for this.
- first write your proxy defaults
```
Kind = "proxy-defaults"
Name = "global"
Config {
  protocol = "http"
}
```
- Create two providers
```
Kind = "jwt-provider"
Name = "auth0"
Issuer = "https://ronald.local"
JSONWebKeySet = {
  Local = {
    JWKS = "eyJrZXlzIjog....."
  }
}
```
```
Kind = "jwt-provider"
Name = "okta"
Issuer = "https://ronald.local"
JSONWebKeySet = {
  Local = {
    JWKS = "eyJrZXlzIjogW3...."
  }
}
```
- add a service intention
```
Kind = "service-intentions"
Name = "redis"
JWT = {
  Providers = [
    {
      Name = "okta"
    },
  ]
}
Sources = [
  {
    Name = "*"
    Permissions = [
      {
        Action = "allow"
        HTTP = {
          PathPrefix = "/workspace"
        }
        JWT = {
          Providers = [
            {
              Name = "okta"
              VerifyClaims = [
                {
                  Path = ["aud"]
                  Value = "my_client_app"
                },
                {
                  Path = ["sub"]
                  Value = "5be86359073c434bad2da3932222dabe"
                }
              ]
            },
          ]
        }
      },
      {
        Action = "allow"
        HTTP = {
          PathPrefix = "/"
        }
        JWT = {
          Providers = [
            {
              Name = "auth0"
            },
          ]
        }
      }
    ]
  }
]
```
- generate 3 JWT tokens: one from the auth0 JWKS, one from the okta
JWKS with different claims than `/workspace` expects, and one with the
correct claims
- connect to your Envoy (change service and address as needed) to view
logs and potential errors. You can add `-- --log-level debug` to see
what data is being forwarded:
```
consul connect envoy -sidecar-for redis1 -grpc-addr 127.0.0.1:8502
```
- Make the following requests:
```
curl -s -H "Authorization: Bearer $Auth0_TOKEN" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v
# RBAC filter denied
curl -s -H "Authorization: Bearer $Okta_TOKEN_with_wrong_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v
# RBAC filter denied
curl -s -H "Authorization: Bearer $Okta_TOKEN_with_correct_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v
# Successful request
```
### TODO
* [x] Update test coverage
* [ ] update integration tests (follow-up PR)
* [x] appropriate backport labels added
### Description
The mock is used in the `http_ent_test` file, which caused lint
failures. For OSS->ENT parity, the same change is added here.
### Links
Identified in OSS->ENT [merge
PR](https://github.com/hashicorp/consul-enterprise/pull/6328)
### PR Checklist
* [ ] ~updated test coverage~
* [ ] ~external facing docs updated~
* [x] appropriate backport labels added
* [ ] ~not a security concern~
### Description
This corrects a code problem: the code assumed all segments, but in
Enterprise you can be in a partition other than the default partition,
in which case specifying all segments does not validate and fails. The
fix sets this filter's `AllSegments` to `true` only when in the
`default` partition.
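A sketch of the corrected condition with a hypothetical helper name (the real code operates on Consul's Enterprise metadata):
```
package agent

// allSegmentsFilter illustrates the fix: only request all segments
// when operating in the default partition; non-default (Enterprise)
// partitions reject that filter.
func allSegmentsFilter(partition string) bool {
	return partition == "" || partition == "default"
}
```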
### PR Checklist
* [ ] updated test coverage
* [ ] external facing docs updated
* [ ] appropriate backport labels added
* [ ] not a security concern
Updating RootPKIPath but not IntermediatePKIPath would not update
leaf signing certs with the new root. Unsure if this happens in practice
but manual testing showed it is a bug that would break mesh and agent
connections once the old root is pruned.
### Description
Dan had already started on this
[task](https://github.com/hashicorp/consul/pull/17849) which is needed
to start building the HTTP APIs. This just needed some cleanup to get it
ready for review.
Overview:
- Rename `internalResourceServiceClient` to
`insecureResourceServiceClient` for name consistency
- Configure a `secureResourceServiceClient` with auth enabled
### PR Checklist
* [ ] ~updated test coverage~
* [ ] ~external facing docs updated~
* [x] appropriate backport labels added
* [ ] ~not a security concern~
* update UINodes and UINodeInfo response with consul-version info added as NodeMeta, fetched from serf members
* update test cases TestUINodes, TestUINodeInfo
* added nil check for map
* add consul-version in local agent node metadata
* get consul version from serf member and add this as node meta in catalog register request
* updated ui mock response to include consul versions as node meta
* updated ui trans and added version as query param to node list route
* updates in ui templates to display consul version with filter and sorts
* updates in ui - model class, serializers, comparators, predicates for consul version feature
* added change log for Consul Version Feature
* updated to get version from consul service, if for some reason not available from serf
* updated changelog text
* updated dependent testcases
* multiselection version filter
* Update agent/consul/state/catalog.go
comments updated
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
---------
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
This PR fixes a bug that was introduced in:
https://github.com/hashicorp/consul/pull/16021
A user setting a protocol in proxy-defaults would cause tproxy implicit
upstreams to not honor the upstream service's protocol set in its
`ServiceDefaults.Protocol` field, and would instead always use the
proxy-defaults value.
Due to the fact that upstreams configured with "tcp" can successfully contact
upstream "http" services, this issue was not recognized until recently (a
proxy-defaults with "tcp" and a listening service with "http" would make
successful requests, but not the opposite).
As a temporary work-around, users experiencing this issue can explicitly set
the protocol on the `ServiceDefaults.UpstreamConfig.Overrides`, which should
take precedence.
The fix in this PR removes the proxy-defaults protocol from the wildcard
upstream that tproxy uses to configure implicit upstreams. When the protocol
was included, it would always overwrite the value during discovery chain
compilation, which was not correct. The discovery chain compiler also consumes
proxy defaults to determine the protocol, so simply excluding it from the
wildcard upstream config map resolves the issue.
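A sketch of that fix under illustrative names: copy the proxy-defaults config for the wildcard upstream, but drop the protocol key so discovery chain compilation resolves the protocol per service:
```
package proxycfg

// wildcardUpstreamConfig builds the config map for tproxy's wildcard
// upstream from proxy-defaults, omitting the protocol so that
// ServiceDefaults.Protocol wins during discovery chain compilation.
func wildcardUpstreamConfig(proxyDefaults map[string]interface{}) map[string]interface{} {
	cfg := make(map[string]interface{}, len(proxyDefaults))
	for k, v := range proxyDefaults {
		cfg[k] = v
	}
	delete(cfg, "protocol")
	return cfg
}
```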
* Combined squashed commits:
init without tests
change log
fix tests
fix tests
added tests
change log breaking change
removed breaking change
fix test
keeping the test behaviour same
made enable debug atomic bool
fix lint
fix test true enable debug
using enable debug in agent as atomic bool
test fixes
fix tests
fix tests
added update in correct location
fix tests
fix reloadable config enable debug
fix tests
fix init and acl 403
* revert commit
* Ensure RSA keys are at least 2048 bits in length
* Add changelog
* update key length check for FIPS compliance
* Fix "no new variables" error and failing to return when an error exists from validating
* clean up code for better readability
* actually return value
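A minimal sketch of such a check using only the standard library (the function name is illustrative):
```
package ca

import (
	"crypto/rsa"
	"fmt"
)

// validateRSAKey rejects RSA keys shorter than 2048 bits.
func validateRSAKey(key *rsa.PrivateKey) error {
	if bits := key.N.BitLen(); bits < 2048 {
		return fmt.Errorf("RSA key has %d bits, need at least 2048", bits)
	}
	return nil
}
```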
* Fix a bug that wrongly trims domains when there is an overlap with DC name
Before this change, when the DC name and the domain/alt-domain overlap, the domain name was incorrectly trimmed from the query.
Example:
Given: datacenter = dc-test, alt-domain = test.consul.
Querying for "test-node.node.dc-test.consul" would fail, because the
code was trimming "test.consul" instead of just ".consul"
This change fixes the issue by adding a dot (.) before trimming
* trimDomain: ensure the domain is trimmed without modifying the original domains (see the sketch below)
* update changelog
---------
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
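A sketch of the dot-anchored trimming described in the trimDomain change above (signature and details are illustrative):
```
package dnsutil

import "strings"

// trimDomain trims the longer matching suffix first, anchoring each
// candidate on a leading dot so an overlapping datacenter name (e.g.
// dc "dc-test" with alt-domain "test.consul.") can never cause a
// partial, mid-label trim.
func trimDomain(query, domain, altDomain string) string {
	longer := "." + strings.Trim(domain, ".")
	shorter := "." + strings.Trim(altDomain, ".")
	if len(shorter) > len(longer) {
		longer, shorter = shorter, longer
	}
	q := strings.TrimSuffix(query, ".")
	if strings.HasSuffix(q, longer) {
		return strings.TrimSuffix(q, longer)
	}
	return strings.TrimSuffix(q, shorter)
}
```
With datacenter `dc-test` and alt-domain `test.consul.`, the query `test-node.node.dc-test.consul` now trims only `.consul`.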
For consistency, resource type names must follow these rules:
- `Group` must be snake case, and in most cases a single word.
- `GroupVersion` must be lowercase, start with a "v" and end with a number.
- `Kind` must be pascal case.
These were chosen because they map to our protobuf type naming
conventions.
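These rules translate naturally into regular-expression validators; a hedged sketch (the exact patterns enforced may differ slightly):
```
package resource

import "regexp"

var (
	groupRegexp        = regexp.MustCompile(`^[a-z][a-z\d_]*$`)   // snake case
	groupVersionRegexp = regexp.MustCompile(`^v([a-z\d]+)?\d$`)   // "v" prefix, ends in a number
	kindRegexp         = regexp.MustCompile(`^[A-Z][A-Za-z\d]*$`) // pascal case
)

// validTypeName reports whether a resource type name follows the
// conventions listed above.
func validTypeName(group, groupVersion, kind string) bool {
	return groupRegexp.MatchString(group) &&
		groupVersionRegexp.MatchString(groupVersion) &&
		kindRegexp.MatchString(kind)
}
```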
Update CA provider docs
Clarify that providers can differ between
primary and secondary datacenters
Provide a comparison chart for consul vs
vault CA providers
Loosen Vault CA provider validation for RootPKIPath
Update Vault CA provider documentation
* Reject inbound Prop Override patch with Services
Services filtering is only supported for outbound TrafficDirection patches.
* Improve Prop Override unexpected type validation
- Guard against additional invalid parent and target types
- Add specific error handling for Any fields (unsupported)
Fix issue with streaming service health watches.
This commit fixes an issue where the health streams were unaware of service
export changes. Whenever an exported-services config entry is modified, it is
effectively an ACL change.
The bug would be triggered by the following situation:
- no services are exported
- an upstream watch to service X is spawned
- the streaming backend filters out data for service X (due to lack of exports)
- service X is finally exported
In the situation above, the streaming backend does not trigger a refresh of its
data. This means that any events that were supposed to have been received prior
to the export are NOT backfilled, and the watches never see service X spawning.
We currently have decided to not trigger a stream refresh in this situation due
to the potential for a thundering herd effect (touching exports would cause a
re-fetch of all watches for that partition, potentially). Therefore, a local
blocking-query approach was added by this commit for agentless.
It's also worth noting that the streaming subscription is currently bypassed
most of the time with agentful, because proxycfg has a `req.Source.Node != ""`
which prevents the `streamingEnabled` check from passing. This means that while
agents should technically have this same issue, they don't experience it with
mesh health watches.
Note that this is a temporary fix that solves the issue for proxycfg, but not
service-discovery use cases.
* agent: remove agent cache dependency from service mesh leaf certificate management
This extracts the leaf cert management from within the agent cache.
This code was produced by the following process:
1. All tests in agent/cache, agent/cache-types, agent/auto-config,
agent/consul/servercert were run at each stage.
- The tests in agent matching .*Leaf were run at each stage.
- The tests in agent/leafcert were run at each stage after they
existed.
2. The former leaf cert Fetch implementation was extracted into a new
package behind a "fake RPC" endpoint to make it look almost like all
other cache type internals.
3. The old cache type was shimmed to use the fake RPC endpoint and
generally cleaned up.
4. I selectively duplicated all of Get/Notify/NotifyCallback/Prepopulate
from the agent/cache.Cache implementation over into the new package.
This was renamed as leafcert.Manager.
- Code that was irrelevant to the leaf cert type was deleted
(inlining blocking=true, refresh=false)
5. Everything that used the leaf cert cache type (including proxycfg
stuff) was shifted to use the leafcert.Manager instead.
6. agent/cache-types tests were moved and gently replumbed to execute
as-is against a leafcert.Manager.
7. Inspired by some of the locking changes from derek's branch I split
the fat lock into N+1 locks.
8. The waiter chan struct{} was eventually replaced with a
singleflight.Group around cache updates, which was likely the biggest
net structural change (see the sketch after this section).
9. The awkward two layers or logic produced as a byproduct of marrying
the agent cache management code with the leaf cert type code was
slowly coalesced and flattened to remove confusion.
10. The .*Leaf tests from the agent package were copied and made to work
directly against a leafcert.Manager to increase direct coverage.
I have done a best effort attempt to port the previous leaf-cert cache
type's tests over in spirit, as well as to take the e2e-ish tests in the
agent package with Leaf in the test name and copy those into the
agent/leafcert package to get more direct coverage, rather than coverage
tangled up in the agent logic.
There is no net-new test coverage, just coverage that was pushed around
from elsewhere.
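A sketch of the singleflight change from item 8, with illustrative names:
```
package leafcert

import "golang.org/x/sync/singleflight"

// Manager coalesces concurrent requests for the same leaf certificate
// key into a single fetch, replacing the old waiter chan struct{}.
type Manager struct {
	fetches singleflight.Group
}

func (m *Manager) getCert(key string, fetch func() (interface{}, error)) (interface{}, error) {
	v, err, _ := m.fetches.Do(key, fetch)
	return v, err
}
```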
This includes prioritize by localities on disco chain targets rather than
resolvers, allowing different targets within the same partition to have
different policies.
* Add header filter to api-gateway xDS golden test
* Stop adding all header filters to virtual host when generating xDS for api-gateway
* Regenerate xDS golden file for api-gateway w/ header filter
Ensure that the embedded api struct is properly parsed when
deserializing config containing a set ResourceFilter.Services field.
Also enhance existing integration test to guard against bugs and
exercise this field.
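A toy illustration of the embedded-struct decoding concern (types and the decoding mechanism are illustrative; the extension's real config decoding differs in detail):
```
package propertyoverride

import "encoding/json"

// CompoundServiceName stands in for the embedded api struct.
type CompoundServiceName struct {
	Name      string
	Namespace string
	Partition string
}

// ServiceName embeds the api struct; decoding must populate the
// promoted fields rather than silently dropping them.
type ServiceName struct {
	CompoundServiceName
}

func parseServiceFilter(raw []byte) (ServiceName, error) {
	var sn ServiceName
	err := json.Unmarshal(raw, &sn)
	return sn, err
}
```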
TL;DR: with many modules, the versions included in each diverged quite a bit. Attempting to use Go Workspaces produced a bunch of errors.
This commit:
1. Fixes envoy-library-references.sh to work again
2. Ensures we are pulling in go-control-plane@v0.11.0 everywhere (previously it was at that version in some modules and others were much older)
3. Remove one usage of golang/protobuf that caused us to have a direct dependency on it.
4. Remove deprecated usage of the Endpoint field in the grpc resolver.Target struct. The current version of grpc (v1.55.0) has removed that field and recommended replacement with URL.Opaque and calls to the Endpoint() func when needing to consume the previous field.
5. `go work init <all the paths to go.mod files>` && `go work sync`. This synchronized versions of dependencies from the main workspace/root module to all submodules
6. Updated .gitignore to ignore the go.work and go.work.sum files. This seems to be standard practice at the moment.
7. Update doc comments in protoc-gen-consul-rate-limit to be go fmt compatible
8. Upgraded makefile infra to perform linting, testing and go mod tidy on all modules in a flexible manner.
9. Updated linter rules to prevent usage of golang/protobuf
10. Updated a leader peering test to account for an extra colon in a grpc error message.
When UpstreamEnvoyExtender was introduced, some code was left duplicated
between it and BasicEnvoyExtender. One path in that code panics when a
TProxy listener patch is attempted due to no upstream data in
RuntimeConfig matching the local service (which would only happen in
rare cases).
Instead, we can remove the special handling of upstream VIPs from
BasicEnvoyExtender entirely, greatly simplifying the listener filter
patch code and avoiding the panic. UpstreamEnvoyExtender, which needs
this code to function, is modified to ensure a panic does not occur.
This also fixes a second regression in which the Lua extension was not
applied to TProxy outbound listeners.
Sameness groups with default-for-failover enabled did not function properly with
tproxy whenever all instances of the service disappeared from the local cluster.
This occurred because there were no corresponding resolvers (due to the implicit
failover policy) which caused VIPs to be deallocated.
This ticket expands upon the VIP allocations so that both service-defaults and
service-intentions (without destination wildcards) will ensure that the virtual
IP exists.
This commit only contains the OSS PR (datacenter query param support).
A separate enterprise PR adds support for ap and namespace query params.
Resources in Consul can exists within scopes such as datacenters, cluster
peers, admin partitions, and namespaces. You can refer to those resources from
interfaces such as the CLI, HTTP API, DNS, and configuration files.
Some scope levels have consistent naming: cluster peers are always referred to
as "peer".
Other scope levels use a short-hand in DNS lookups...
- "ns" for namespace
- "ap" for admin partition
- "dc" for datacenter
...But use long-hand in CLI commands:
- "namespace" for namespace
- "partition" for admin partition
- and "datacenter"
However, HTTP API query parameters do not follow a consistent pattern,
supporting short-hand for some scopes but long-hand for others:
- "ns" for namespace
- "partition" for admin partition
- and "dc" for datacenter.
This inconsistency is confusing, especially for users who have been exposed to
providing scope names through another interface such as CLI or DNS queries.
This commit improves UX by consistently supporting both short-hand and
long-hand forms of the namespace, partition, and datacenter scopes in HTTP API
query parameters.
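A sketch of what consistent support can look like at parse time, using a hypothetical helper:
```
package agent

import "net/url"

// scopeParam returns the first non-empty value among the accepted
// spellings of a scope query parameter.
func scopeParam(q url.Values, names ...string) string {
	for _, n := range names {
		if v := q.Get(n); v != "" {
			return v
		}
	}
	return ""
}

// Usage: ns := scopeParam(req.URL.Query(), "ns", "namespace")
```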
* add upstream service targeting to property override extension
* Also add baseline goldens for service specific property override extension.
* Refactor the extension framework to put more logic into the templates.
* fix up the golden tests
* Move hcp client to subpackage hcpclient (#16800)
* [HCP Observability] New MetricsClient (#17100)
* Client configured with TLS using HCP config and retry/throttle
* Add tests and godoc for metrics client
* close body after request
* run go mod tidy
* Remove one abstraction to use the config from deps
* Address PR feedback
* remove clone
* Extract CloudConfig and mock for future PR
* Switch to hclog.FromContext
* [HCP Observability] OTELExporter (#17128)
* Create new OTELExporter which uses the MetricsClient
Add transform because the conversion is in an /internal package
* Fix lint error
* early return when there are no metrics
* Add NewOTELExporter() function
* Downgrade to metrics SDK version: v1.15.0-rc.1
* Fix imports
* fix small nits with comments and url.URL
* Fix tests by asserting actual error for context cancellation, fix parallel, and make mock more versatile
* Cleanup error handling and clarify empty metrics case
* Fix input/expected naming in otel_transform_test.go
* add comment for metric tracking
* Add a general isEmpty method
* Add clear error types
* update to latest version 1.15.0 of OTEL
* [HCP Observability] OTELSink (#17159)
* Initialize OTELSink with sync.Map for all the instrument stores.
* Moved PeriodicReader init to NewOtelReader function. This allows us to use a ManualReader for tests.
* Switch to mutex instead of sync.Map to avoid type assertion
* Add gauge store
* Clarify comments
* return concrete sink type
* Fix lint errors
* Move gauge store to be within sink
* Use context.TODO, rebase and cleanup opts handling
* Rebase onto otel exporter to downgrade metrics API to v1.15.0-rc.1
* Fix imports
* Update to latest stable version by rebasing on cc-4933, fix import, remove mutex init, fix opts error messages and use logger from ctx
* Add lots of documentation to the OTELSink
* Fix gauge store comment and check ok
* Add select and ctx.Done() check to gauge callback
* use require.Equal for attributes
* Fixed import naming
* Remove float64 calls and add a NewGaugeStore method
* Change name Store to Set in gaugeStore, add concurrency tests in both OTELSink and gauge store
* Generate 100 gauge operations
* Separate the labels into goroutines in sink test
* Generate kv store for the test case keys to avoid using uuid
* Added a race test with 300 samples for OTELSink
* Do not pass in waitgroup and use error channel instead.
* Using SHA 7dea2225a218872e86d2f580e82c089b321617b0 to avoid build failures in otel
* Fix nits
* [HCP Observability] Init OTELSink in Telemetry (#17162)
* Added telemetry agent to client and init sink in deps
* Fixed client
* Initialize sink in deps
* init sink in telemetry library
* Init deps before telemetry
* Use concrete telemetry.OtelSink type
* add /v1/metrics
* Avoid returning err for telemetry init
* move sink init within the IsCloudEnabled()
* Use HCPSinkOpts in deps instead
* update golden test for configuration file
* Switch to using extra sinks in the telemetry library
* keep name MetricsConfig
* fix log in verifyCCMRegistration
* Set logger in context
* pass around MetricSink in deps
* Fix imports
* Rebased onto otel sink pr
* Fix URL in test
* pass extraSinks as function param instead
* Add default interval as package export
* remove verifyCCM func
* Add clusterID
* Fix import and add t.Parallel() for missing tests
* Kick Vercel CI
* Remove scheme from endpoint path, and fix error logging
* return metrics.MetricSink for sink method
* Update SDK
* [HCP Observability] Metrics filtering and Labels in Go Metrics sink (#17184)
* Add node_id and __replica__ default labels
* add function for default labels and set x-hcp-resource-id
* Fix labels tests
* Commit suggestion for getDefaultLabels
Co-authored-by: Joshua Timmons <joshua.timmons1@gmail.com>
* Fixed server.id, and t.Parallel()
* Make defaultLabels a method on the TelemetryConfig object
* Rename FilterList to lowercase filterList
* Cleanup filter implementation by combining regexes into a single one, and making the type lowercase
* Fix append
* use regex directly for filters
* Fix x-resource-id test to use mocked value
* Fix log.Error formats
* Forgot the len(opts.Label) optimization
* Use cfg.NodeID instead
---------
Co-authored-by: Joshua Timmons <joshua.timmons1@gmail.com>
* remove replica tag (#17484)
* [HCP Observability] Add custom metrics for OTEL sink, improve logging, upgrade modules and cleanup metrics client (#17455)
* Add custom metrics for Exporter and transform operations
* Improve deps logging
Run go mod tidy
* Upgrade SDK and OTEL
* Remove the partial success implementation and check for HTTP status code in metrics client
* Add x-channel
* cleanup logs in deps.go based on PR feedback
* Change to debug log and lowercase
* address test operation feedback
* use GetHumanVersion on version
* Fix error wrapping
* Fix metric names
* [HCP Observability] Turn off retries for now until dynamically configurable (#17496)
* Remove retries for now until dynamic configuration is possible
* Clarify comment
* Update changelog
* improve changelog
---------
Co-authored-by: Joshua Timmons <joshua.timmons1@gmail.com>
* Support Listener in Property Override
Add support for patching `Listener` resources via the builtin
`property-override` extension.
Refactor existing listener patch code in `BasicEnvoyExtender` to
simplify addition of resource support.
* Support ClusterLoadAssignment in Property Override
Add support for patching `ClusterLoadAssignment` resources via the
builtin `property-override` extension.
`property-override` is an extension that allows for arbitrarily
patching Envoy resources based on resource matching filters. Patch
operations resemble a subset of the JSON Patch spec with minor
differences to facilitate patching pre-defined (protobuf) schemas.
See Envoy Extension product documentation for more details.
Co-authored-by: Eric Haberkorn <eric.haberkorn@hashicorp.com>
Co-authored-by: Kyle Havlovitz <kyle@hashicorp.com>
* perf: Remove expensive reflection from raft/mesh hot path
Replaces a reflection-based copy of a struct in the mesh topology with a
deep-copy generated implementation.
This is in the hot-path of raft FSM updates, and the reflection overhead was a
substantial part of mesh registration times (~90%). This could manifest as raft
thread saturation, and resulting instability.
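An illustration of the shape of that change, with a toy type standing in for the real topology struct: a generated, type-specific DeepCopy replaces a generic reflection-based copy in the FSM hot path.
```
package state

// topologyEntry is illustrative only.
type topologyEntry struct {
	Upstream   string
	Downstream string
	Refs       map[string]struct{}
}

// DeepCopy is the generated-style replacement: a plain field-by-field
// copy with no reflection.
func (t *topologyEntry) DeepCopy() *topologyEntry {
	out := *t
	if t.Refs != nil {
		out.Refs = make(map[string]struct{}, len(t.Refs))
		for k, v := range t.Refs {
			out.Refs[k] = v
		}
	}
	return &out
}
```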
Co-authored-by: Joel Brandhorst <joel.brandhorst@gmail.com>
* add changelog
---------
Co-authored-by: Joel Brandhorst <joel.brandhorst@gmail.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
This will likely happen frequently with sameness groups. Relaxing this
constraint is harmless for failover because xds/endpoints excludes
cross-partition and peer endpoints.
* xds generation for routes api gateway
* Update gateway.go
* move buildHttpRoute into xds package
* Update agent/consul/discoverychain/gateway.go
* remove unneeded function
* convert http route code to only run for http protocol to future proof code path
* Update agent/consul/discoverychain/gateway.go
Co-authored-by: Mike Morris <mikemorris@users.noreply.github.com>
* fix tests, clean up http check logic
* clean up todo
* Fix casing in docstring
* Fix import block, adjust docstrings
* Rename func
* Consolidate docstring onto single line
* Remove ToIngress() conversion for APIGW, which generates its own xDS now
* update name and comment
* use constant value
* use constant
* rename readyUpstreams to readyListeners to better communicate what that function is doing
---------
Co-authored-by: Mike Morris <mikemorris@users.noreply.github.com>
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
Fix ACL check on health endpoint
Prior to this change, the service health API would not explicitly return an
error whenever a token with invalid permissions was given, and it would instead
return empty results. With this change, a "Permission denied" error is returned
whenever data is queried. This is done to better support the agent cache, which
performs a fetch backoff sleep whenever ACL errors are encountered. Affected
endpoints are: `/v1/health/connect/` and `/v1/health/ingress/`.
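Schematically, the change is from "silently filter" to "fail loudly"; a toy sketch (Consul's acl package supplies the actual permission-denied error):
```
package health

import "errors"

var errPermissionDenied = errors.New("Permission denied")

// authorizeServiceRead returns an explicit error on insufficient ACLs
// instead of an empty result, letting the agent cache back off.
func authorizeServiceRead(allowed bool) error {
	if !allowed {
		return errPermissionDenied
	}
	return nil
}
```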
* Fix namespaced peer service updates / deletes.
This change fixes a function so that namespaced services are
correctly queried when handling updates / deletes. Prior to this
change, some peered services would not correctly be un-exported.
* Add changelog.
To avoid unintended tampering with remote downstreams via service
config, refactor BasicEnvoyExtender and RuntimeConfig to disallow
typical Envoy extensions from being applied to non-local proxies.
Continue to allow this behavior for AWS Lambda and the read-only
Validate builtin extensions.
Addresses CVE-2023-2816.
* API Gateway XDS Primitives, endpoints and clusters (#17002)
* XDS primitive generation for endpoints and clusters
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* server_test
* deleted extra file
* add missing parents to test
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Routes for API Gateway (#17158)
* checkpoint
* delete extra file
* httproute flattening code
* linting issue
* so close on this, calling for tonight
* unit test passing
* add in header manip to virtual host
* upstream rebuild commented out
* Use consistent upstream name whether or not we're rebuilding
* Start working through route naming logic
* Fix typos in test descriptions
* Simplify route naming logic
* Simplify RebuildHTTPRouteUpstream
* Merge additional compiled discovery chains instead of overwriting
* Use correct chain for flattened route, clean up + add TODOs
* Remove empty conditional branch
* Restore previous variable declaration
Limit the scope of this PR
* Clean up, improve TODO
* add logging, clean up todos
* clean up function
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* checkpoint, skeleton, tests not passing
* checkpoint
* endpoints xds cluster configuration
* resources test fix
* fix reversion in resources_test
* checkpoint
* Update agent/proxycfg/api_gateway.go
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
* unit tests passing
* gofmt
* add deterministic sorting to appease the unit test gods
* remove panic
* Find ready upstream matching listener instead of first in list
* Clean up, improve TODO
* Modify getReadyUpstreams to filter upstreams by listener (#17410)
Each listener would previously have all upstreams from any route that bound to the listener. This is problematic when a route bound to one listener also binds to other listeners and so includes upstreams for multiple listeners. The list for a given listener would then wind up including upstreams for other listeners.
* clean up todos, references to api gateway in listeners_ingress
* merge in Nathan's fix
* Update agent/consul/discoverychain/gateway.go
* cleanup current todos, remove snapshot manipulation from generation code
* Update agent/structs/config_entry_gateways.go
Co-authored-by: Thomas Eckert <teckert@hashicorp.com>
* Update agent/consul/discoverychain/gateway.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Update agent/consul/discoverychain/gateway.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Update agent/proxycfg/snapshot.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* clarified header comment for FlattenHTTPRoute, changed RebuildHTTPRouteUpstream to BuildHTTPRouteUpstream
* simplify cert logic
* Delete scratch
* revert route related changes in listener PR
* Update agent/consul/discoverychain/gateway.go
* Update agent/proxycfg/snapshot.go
* clean up uneeded extra lines in endpoints
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
Co-authored-by: Thomas Eckert <teckert@hashicorp.com>
* endpoints xds cluster configuration
* clusters xds native generation
* resources test fix
* fix reversion in resources_test
* Update agent/proxycfg/api_gateway.go
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
* gofmt
* Modify getReadyUpstreams to filter upstreams by listener (#17410)
* Update agent/proxycfg/api_gateway.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Restore import blocking
* Undo removal of unrelated code
---------
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
This change enables workflows where you are reapplying a resource that should have an owner ref to publish modifications to the resource's data without performing a read to figure out the current owner resource incarnation's UID.
Basically we want workflows similar to `kubectl apply` or `consul config write` to work seamlessly even for owned resources.
In these cases the user's intention is to have the resource owned by the “current” incarnation of the owner resource.
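A sketch of those write semantics with stand-in types: an owner reference missing a UID adopts the current incarnation's UID instead of failing the write.
```
package resource

// Reference and Resource are stand-ins for the resource API types.
type Reference struct{ Uid string }
type Resource struct{ Uid string }

// adoptCurrentOwner binds an owner reference without a UID to the
// current incarnation of the owner, so kubectl-apply-style rewrites
// need not read the owner first.
func adoptCurrentOwner(owner *Reference, current *Resource) {
	if owner.Uid == "" {
		owner.Uid = current.Uid
	}
}
```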
* Skip to next route if route has no upstreams
* cleanup
* change set from bool to empty struct
---------
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* agent: configure server lastseen timestamp
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use correct config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use default age in test golden data
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add changelog
Signed-off-by: Dan Bond <danbond@protonmail.com>
* fix runtime test
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add server_metadata
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* correctly check if metadata file does not exist
Signed-off-by: Dan Bond <danbond@protonmail.com>
* follow instructions for adding new config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Update agent/agent.go
Co-authored-by: Dan Upton <daniel@floppy.co>
* agent/config: add validation for duration with min
Signed-off-by: Dan Bond <danbond@protonmail.com>
* docs: add new server_rejoin_age_max config definition
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add unit test for checking server last seen
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: log continually for 60s before erroring
Signed-off-by: Dan Bond <danbond@protonmail.com>
* pr comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* remove unneeded todo
* agent: fix error message
Signed-off-by: Dan Bond <danbond@protonmail.com>
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
* Add v1/internal/service-virtual-ip for manually setting service VIPs
* Attach service virtual IP info to compiled discovery chain
* Separate auto-assigned and manual VIPs in response
The grpc resolver implementation is fed from changes to the
router.Router. Within the router there is a map of various areas storing
the addressing information for servers in those areas. All map entries
are of the WAN variety except a single special entry for the LAN.
The addressing information in the LAN "area" consists of local addresses
intended for use when making client-to-server or server-to-server
requests.
The client agent correctly updates this LAN area when receiving lan serf
events, so by extension the grpc resolver works fine in that scenario.
The server agent only initially populates a single entry in the LAN area
(for itself) on startup, and then never mutates that area map again.
For normal RPCs a different structure is used for LAN routing.
Additionally, when selecting a server to contact in the local datacenter,
it will randomly select addresses from either the LAN- or WAN-addressed
entries in the map.
Unfortunately this means that the grpc resolver stack as it exists on
server agents is either broken or functions only accidentally, by having
servers dial each other over the WAN-accessible address. If the operator
disabled the serf WAN port completely, this incidental functioning would
likely break.
This PR enforces that local requests for servers (both for stale reads
or leader forwarded requests) exclusively use the LAN "area" information
and also fixes it so that servers keep that area up to date in the
router.
A test for the grpc resolver logic was added, as well as a higher level
full-stack test to ensure the externally perceived bug does not return.
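A rough sketch of the routing rule this PR enforces. The router internals are more involved; these types are illustrative only:

```go
package main

import "fmt"

// Illustrative stand-ins for the router's area map described above.
type server struct{ lanAddr, wanAddr string }

type router struct {
	areas map[string][]server // one special "lan" entry plus WAN areas
}

// localServerAddrs returns addresses for in-datacenter requests (stale
// reads, leader forwarding). Per this PR they come exclusively from the
// LAN "area", never from WAN-addressed entries.
func (r *router) localServerAddrs() []string {
	var addrs []string
	for _, s := range r.areas["lan"] {
		addrs = append(addrs, s.lanAddr)
	}
	return addrs
}

func main() {
	r := &router{areas: map[string][]server{
		"lan": {{lanAddr: "10.0.0.1:8300", wanAddr: "198.51.100.1:8300"}},
	}}
	fmt.Println(r.localServerAddrs()) // [10.0.0.1:8300]
}
```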
* snapshot: some improvements to the snapshot process
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
UNIX domain socket paths are limited to 104-108 characters, depending on
the OS. This limit was quite easy to exceed when testing the feature on
Kubernetes, due to how proxy IDs encode the Pod ID, e.g.:
metrics-collector-59467bcb9b-fkkzl-hcp-metrics-collector-sidecar-proxy
To ensure we stay under that character limit this commit makes a
couple changes:
- Use a b64 encoded SHA1 hash of the namespace + proxy ID to create a
short and deterministic socket file name.
- Add validation to proxy registrations and proxy-defaults to enforce a
limit on the socket directory length.
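The hashing step is small enough to sketch in full. This is a plausible reconstruction of the approach described above, not the exact implementation; the function name and ".sock" suffix are assumptions:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// shortSocketName derives a short, deterministic file name from the
// namespace and proxy ID, keeping the full path safely under the
// ~104-108 character UNIX socket limit.
func shortSocketName(namespace, proxyID string) string {
	sum := sha1.Sum([]byte(namespace + proxyID))
	return base64.RawURLEncoding.EncodeToString(sum[:]) + ".sock"
}

func main() {
	fmt.Println(shortSocketName("default",
		"metrics-collector-59467bcb9b-fkkzl-hcp-metrics-collector-sidecar-proxy"))
	// Always 27 base64 characters plus ".sock", regardless of input length.
}
```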
Fix multiple issues related to proxycfg health queries.
1. The datacenter was not being provided to a proxycfg query, which resulted in
bypassing agentless query optimizations and using the normal API instead.
2. The health rpc endpoint would return a zero index when insufficient ACLs were
detected. This would result in the agent cache performing an infinite loop of
queries in rapid succession without backoff.
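The second fix comes down to never returning a zero index. A tiny hedged sketch (names are illustrative):

```go
package main

import "fmt"

// Blocking-query consumers treat a zero index as "data changed, restart
// immediately", so an endpoint must return a positive index even when
// ACL filtering leaves nothing to report.
func sanitizeIndex(idx uint64) uint64 {
	if idx == 0 {
		return 1 // forces the agent cache to block instead of spinning
	}
	return idx
}

func main() { fmt.Println(sanitizeIndex(0)) } // 1
```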
Fix issue with peer stream node cleanup.
This commit encompasses a few problems that are closely related due to their
proximity in the code.
1. The peerstream utilizes node IDs in several locations to determine which
nodes / services / checks should be cleaned up or created. While VM deployments
with agents will likely always have a node ID, agentless uses synthetic nodes
and does not populate the field. This means that for consul-k8s deployments, all
services were likely bundled together into the same synthetic node in some code
paths (but not all), resulting in strange behavior. The Node.Node field should
be used instead as a unique identifier, as it should always be populated.
2. The peerstream cleanup process for unused nodes uses an incorrect query for
node deregistration. This query is NOT namespace aware and results in the node
(and corresponding services) being deregistered prematurely whenever it has zero
default-namespace services and 1+ non-default-namespace services registered on
it. This issue is tricky to find due to the incorrect logic mentioned in #1,
combined with the fact that the affected services must be co-located on the same
node as the currently deregistering service for this to be encountered.
3. The stream tracker did not understand differences between services in
different namespaces and could therefore report incorrect numbers. It was
updated to utilize the full service name to avoid conflicts and return proper
results.
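To make issue 1 concrete, a hedged sketch of keying by node name rather than node ID (stand-in types, not the real peerstream code):

```go
package main

import "fmt"

// Stand-in: agentless synthetic nodes have no node ID, so Node.Node
// (the node name) is the reliable unique key.
type CheckServiceNode struct {
	NodeID   string // may be empty for agentless deployments
	NodeName string // always populated
	Service  string
}

func groupByNode(instances []CheckServiceNode) map[string][]CheckServiceNode {
	out := make(map[string][]CheckServiceNode)
	for _, inst := range instances {
		// Keying by NodeID would bundle every agentless service under "".
		out[inst.NodeName] = append(out[inst.NodeName], inst)
	}
	return out
}

func main() {
	grouped := groupByNode([]CheckServiceNode{
		{NodeName: "pod-a", Service: "web"},
		{NodeName: "pod-b", Service: "api"},
	})
	fmt.Println(len(grouped)) // 2 distinct nodes despite both NodeIDs being empty
}
```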
When using Vault as a CA and generating the local signing cert, try to
enable the PKI endpoint's auto-tidy feature, configured to tidy expired
issuers.
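A best-effort sketch of what that call might look like with the Vault Go client. The path and field names follow Vault's PKI `/config/auto-tidy` endpoint, but treat them as assumptions rather than the exact call Consul makes:

```go
package ca

import (
	"log"

	vaultapi "github.com/hashicorp/vault/api"
)

// enableAutoTidy asks the PKI mount to tidy expired issuers automatically.
// Failure is non-fatal: older Vault versions lack the endpoint.
func enableAutoTidy(client *vaultapi.Client, mount string) {
	_, err := client.Logical().Write(mount+"/config/auto-tidy", map[string]interface{}{
		"enabled":              true,
		"tidy_expired_issuers": true,
	})
	if err != nil {
		log.Printf("could not enable auto-tidy on %q: %v", mount, err)
	}
}
```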
This adds filtering for service-defaults: `consul config list -filter 'MutualTLSMode == "permissive"'`.
It adds CLI warnings when the CLI writes a config entry and sees that either service-defaults or proxy-defaults contains MutualTLSMode=permissive, or sees that the mesh config entry contains AllowEnablingPermissiveMutualTLS=true.
Partitioned downstreams with peered upstreams could not properly merge central config info (i.e. proxy-defaults and service-defaults things like mesh gateway modes) if the upstream had an empty DestinationPartition field in Enterprise.
Due to data flow, if this setup is done using Consul client agents the field is never empty and thus does not experience the bug.
When a service is registered directly to the catalog, as is the case when using consul-dataplane, this field may be empty, and the internal machinery of the merging function doesn't handle this well.
This PR ensures the internal machinery of that function is referentially self-consistent.
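A minimal sketch of the kind of normalization involved, assuming "default" is the default partition name (illustrative, not the actual merge code):

```go
package main

import "fmt"

// normalizePartition treats an empty DestinationPartition as the default
// partition before merging central config, so catalog-registered
// (consul-dataplane) services merge the same way agent-registered ones do.
func normalizePartition(p string) string {
	if p == "" {
		return "default"
	}
	return p
}

func main() {
	fmt.Println(normalizePartition(""))        // default
	fmt.Println(normalizePartition("finance")) // finance
}
```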
* Persist HCP management token from server config
We want to move away from injecting an initial management token into
Consul clusters linked to HCP. The reasoning is that by using a separate
class of token we can have more flexibility in terms of allowing HCP's
token to co-exist with the user's management token.
Down the line we can also more easily adjust the permissions attached to
HCP's token to limit its scope.
With these changes, the cloud management token is like the initial
management token in that it has the same global management policy, and
if it is created it effectively bootstraps the ACL system.
* Update SDK and mock HCP server
The HCP management token will now be sent in a special field rather than
as Consul's "initial management" token configuration.
This commit also updates the mock HCP server to more accurately reflect
the behavior of the CCM backend.
* Refactor HCP bootstrapping logic and add tests
We want to allow users to link Consul clusters that already exist to
HCP. Existing clusters need care when bootstrapped by HCP, since we do
not want to do things like change ACL/TLS settings for a running
cluster.
Additional changes:
* Deconstruct MaybeBootstrap so that it can be tested. The HCP Go SDK
requires HTTPS to fetch a token from the Auth URL, even if the backend
server is mocked. By pulling the hcp.Client creation out we can modify
its TLS configuration in tests while keeping the secure behavior in
production code.
* Add light validation for data received/loaded.
* Sanitize initial_management token from received config, since HCP will
only ever use the CloudConfig.ManagementToken.
* Add changelog entry
* Move status condition for invalid certificate to reference the listener
that is using the certificate
* Fix where we set the condition status for listeners and certificate
refs, added tests
* Add changelog
* Add MaxEjectionPercent to config entry
* Add BaseEjectionTime to config entry
* Add MaxEjectionPercent and BaseEjectionTime to protobufs
* Add MaxEjectionPercent and BaseEjectionTime to api (rough shape sketched below)
* Fix integration test breakage
* Verify MaxEjectionPercent and BaseEjectionTime in integration test upstream configs
* Website docs for MaxEjectionPercent and BaseEjectionTime
* Add `make docs` to browse docs at http://localhost:3000
* Changelog entry
* so that is the difference between consul-docker and dev-docker
* blah
* update proto funcs
* update proto
---------
Co-authored-by: Maliz <maliheh.monshizadeh@hashicorp.com>
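The rough shape of the api additions referenced above; pointer types let "unset" be distinguished from zero values. Check the real api package for the authoritative definitions:

```go
package api

import "time"

// PassiveHealthCheck sketches the outlier-detection settings; only the
// two new fields are of interest here, the rest is abbreviated.
type PassiveHealthCheck struct {
	Interval    time.Duration
	MaxFailures uint32
	// New in this change:
	MaxEjectionPercent *uint32        // cap on % of upstream hosts ejected
	BaseEjectionTime   *time.Duration // base duration a host stays ejected
}
```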
* Fix straggler from renaming Register->RegisterTypes
* somehow a lint failure got through previously
* Fix lint-consul-retry errors
* add fix for success jobs getting skipped (#17132)
* Temporarily disable inmem backend conformance test to get green pipeline
* Another test needs disabling
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
* normalize status conditions for gateways and routes
* Added tests for checking condition status and panic conditions for
validating combinations, added dummy code for fsm store
* get rid of unneeded gateway condition generator struct
* Remove unused file
* run go mod tidy
* Update tests, add conflicted gateway status
* put back removed status for test
* Fix linting violation, remove custom conflicted status
* Update fsm commands oss
* Fix incorrect combination of type/condition/status
* cleaning up from PR review
* Change "invalidCertificate" to be of accepted status
* Move status condition enums into api package
* Update gateways controller and generated code
* Update conditions in fsm oss tests
* run go mod tidy on consul-container module to fix linting
* Fix type for gateway endpoint test
* go mod tidy from changes to api
* go mod tidy on troubleshoot
* Fix route conflicted reason
* fix route conflict reason rename
* Fix text for gateway conflicted status
* Add valid certificate ref condition setting
* Revert change to resolved refs to be handled in future PR
* added method for converting SamenessGroupConfigEntry
- added new method `ToQueryFailoverTargets` for converting a SamenessGroupConfigEntry's members to a list of QueryFailoverTargets
- renamed `ToFailoverTargets` to `ToServiceResolverFailoverTargets` to distinguish it from `ToQueryFailoverTargets`
* Added SamenessGroup to PreparedQuery
- exposed Service.Partition to API when defining a prepared query
- added a method for determining if a QueryFailoverOptions is empty
- This will be useful for validation
- added unit tests
* added method for retrieving a SamenessGroup to state store
* added logic for using PQ with SamenessGroup
- added branching path for SamenessGroup handling in execute. It will be handled separately from the normal PQ case (see the sketch after this list)
- added a new interface so that `GetSamenessGroupFailoverTargets` can be properly tested
- separated the execute logic into a `targetSelector` function so that it can be used for both failover and sameness group PQs
- split OSS only methods into new PQ OSS files
- added validation that `samenessGroup` is an enterprise only feature
* added documentation for PQ SamenessGroup
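The branching described above, as a hedged sketch with stand-in types (the real execute path threads through much more state):

```go
package main

import "fmt"

// Illustrative stand-ins for the PQ failover structures.
type Target struct{ Peer, Partition string }

type SamenessGroup struct{ Members []Target }

func (sg SamenessGroup) ToQueryFailoverTargets() []Target { return sg.Members }

type QueryFailover struct {
	SamenessGroup string
	Targets       []Target
}

// selectTargets mirrors the targetSelector split: a sameness-group PQ
// derives targets from group membership; otherwise the explicit failover
// targets are used.
func selectTargets(f QueryFailover, lookup func(string) SamenessGroup) []Target {
	if f.SamenessGroup != "" {
		return lookup(f.SamenessGroup).ToQueryFailoverTargets()
	}
	return f.Targets
}

func main() {
	groups := map[string]SamenessGroup{
		"sg1": {Members: []Target{{Partition: "p1"}, {Peer: "dc2-peer"}}},
	}
	targets := selectTargets(QueryFailover{SamenessGroup: "sg1"},
		func(name string) SamenessGroup { return groups[name] })
	fmt.Println(len(targets)) // 2
}
```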
Before this change, we were not fetching service resolvers (and therefore
service defaults) configuration entries for services on members of sameness
groups.
This implements permissive mTLS, which allows toggling services into "permissive" mTLS mode.
Permissive mTLS mode allows incoming non-Consul-mTLS traffic to be forwarded unmodified to the application.
* Update service-defaults and proxy-defaults config entries with a MutualTLSMode field
* Update the mesh config entry with an AllowEnablingPermissiveMutualTLS field and implement the necessary validation (see the sketch below). AllowEnablingPermissiveMutualTLS must be true to allow changing to MutualTLSMode=permissive, but this does not require that all proxy-defaults and service-defaults are currently in strict mode.
* Update xDS listener config to add a "permissive filter chain" when MutualTLSMode=permissive for a particular service. The permissive filter chain matches incoming traffic by destination port. If the destination port matches the service port from the catalog, then no mTLS is required and the traffic is forwarded unmodified to the application.
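The validation gate in the second bullet can be sketched briefly. Types and messages here approximate the real config entries and are assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

type MutualTLSMode string

const MutualTLSModePermissive MutualTLSMode = "permissive"

type MeshConfigEntry struct{ AllowEnablingPermissiveMutualTLS bool }

// validateMode rejects a switch to permissive mode unless the mesh entry
// has opted in; entries already in permissive mode elsewhere don't have
// to change first.
func validateMode(mesh *MeshConfigEntry, next MutualTLSMode) error {
	if next == MutualTLSModePermissive &&
		(mesh == nil || !mesh.AllowEnablingPermissiveMutualTLS) {
		return errors.New("set AllowEnablingPermissiveMutualTLS=true in the mesh config entry first")
	}
	return nil
}

func main() {
	fmt.Println(validateMode(nil, MutualTLSModePermissive)) // error
	ok := &MeshConfigEntry{AllowEnablingPermissiveMutualTLS: true}
	fmt.Println(validateMode(ok, MutualTLSModePermissive)) // <nil>
}
```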
This commit adds the PrioritizeByLocality field to both proxy-config
and service-resolver config entries for locality-aware routing. The
field is currently intended for enterprise only, and will be used to
enable prioritization of service-mesh connections to services based
on geographical region / zone.
- added Sameness Group to config entries
- added Sameness Group to subscriptions
* generated proto files
* added Sameness Group events to the state store
- added test cases
* Refactored health RPC Client
- moved code that is common to rpcclient under rpcclient/common.go. This will help set us up to support future RPC clients
* Refactored proxycfg glue views
- Moved views to rpcclient config entry. This will allow us to reuse this code for a config entry client
* added config entry RPC Client
- Copied most of the testing code from rpcclient/health
* hooked up new rpcclient in agent
* fixed documentation and comments for clarity
* Add a test to reproduce the race condition
* Fix race condition by publishing the event after the commit and adding a lock to prevent out-of-order events (sketched below).
* split publish to generate the list of events before committing the transaction.
* add changelog
* remove extra func
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* add comment to explain test
---------
Co-authored-by: Dan Upton <daniel@floppy.co>
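The commit-then-publish ordering is the heart of the fix; a hedged sketch with hypothetical transaction and publisher types:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-ins for the state-store transaction and publisher.
type Event struct{ Topic string }

type Txn struct{ pending []Event }

func (t *Txn) Commit() error { return nil }

type Publisher struct{ mu sync.Mutex }

func (p *Publisher) Publish(evs []Event) { /* deliver to subscribers */ }

// commitAndPublish builds the event list from the transaction's own view,
// then commits and publishes under one lock so subscribers never observe
// uncommitted state or out-of-order events.
func commitAndPublish(tx *Txn, pub *Publisher) error {
	events := tx.pending // generated before committing
	pub.mu.Lock()
	defer pub.mu.Unlock()
	if err := tx.Commit(); err != nil {
		return err // an aborted txn publishes nothing
	}
	pub.Publish(events)
	return nil
}

func main() {
	err := commitAndPublish(&Txn{pending: []Event{{Topic: "service-health"}}}, &Publisher{})
	fmt.Println(err) // <nil>
}
```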
Prior to this change, peer services would be targeted by service-default
overrides as long as the new `peer` field was not found in the config entry.
This commit removes that deprecated backwards-compatibility behavior. Now
it is necessary to specify the `peer` field in order for upstream overrides
to apply to a peer upstream.
The old setting of 24 hours was not enough time to deal with an expiring certificate. This change ups it to 28 days OR 40% of the full cert duration, whichever is shorter. It also adds details to the log message to indicate which certificate it is logging about and a suggested action.
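The new window is easy to state in code; a small sketch of the arithmetic (function name is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// rotationWindow returns the lead time before expiry at which rotation
// activity should begin: 28 days, or 40% of the certificate lifetime,
// whichever is shorter.
func rotationWindow(certLifetime time.Duration) time.Duration {
	maxWindow := 28 * 24 * time.Hour
	if pct := time.Duration(float64(certLifetime) * 0.4); pct < maxWindow {
		return pct
	}
	return maxWindow
}

func main() {
	fmt.Println(rotationWindow(365 * 24 * time.Hour)) // 672h0m0s (28 days)
	fmt.Println(rotationWindow(72 * time.Hour))       // 28h48m0s (40% of 3 days)
}
```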
Currently, if an acceptor peer deletes a peering the dialer's peering
will eventually get to a "terminated" state. If the two clusters need to
be re-peered the acceptor will re-generate the token but the dialer will
encounter this error on the call to establish:
"failed to get addresses to dial peer: failed to refresh peer server
addresses, will continue to use initial addresses: there is no active
peering for "<<<ID>>>""
This is because in `exchangeSecret().GetDialAddresses()` we will get an
error when fetching addresses for an inactive peering. The peering shows
up as inactive at this point because of the existing terminated state.
Rather than checking whether a peering is active we can instead check
whether it was deleted. This way users do not need to delete terminated
peerings in the dialing cluster before re-establishing them.
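The check itself reduces to "deleted, not inactive"; a hedged sketch with a stand-in peering record:

```go
package main

import (
	"fmt"
	"time"
)

// Peering is a minimal stand-in for the real peering record.
type Peering struct {
	State     string     // e.g. "TERMINATED"
	DeletedAt *time.Time // set only while the peering is being deleted
}

// canRefreshAddresses reflects the relaxed rule: a terminated peering may
// still refresh server addresses on establish; only a deleted one may not.
func canRefreshAddresses(p *Peering) bool {
	return p != nil && p.DeletedAt == nil
}

func main() {
	fmt.Println(canRefreshAddresses(&Peering{State: "TERMINATED"})) // true
}
```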
* Rename Intermediate cert references to LeafSigningCert
Within the Consul CA subsystem, the term "Intermediate"
is confusing because the meaning changes depending on
provider and datacenter (primary vs secondary). For
example, when using the Consul CA the "ActiveIntermediate"
may return the root certificate in a primary datacenter.
At a high level, we are interested in knowing which
CA is responsible for signing leaf certs, regardless of
its position in a certificate chain. This rename makes
the intent clearer.
* Move provider state check earlier
* Remove calls to GenerateLeafSigningCert
GenerateLeafSigningCert (formerly known
as GenerateIntermediate) is vestigial in
non-Vault providers, as it simply returns
the root certificate in primary
datacenters.
By folding Vault's intermediate cert logic
into `GenerateRoot` we can encapsulate
the intermediate cert handling within
`newCARoot`.
* Move GenerateLeafSigningCert out of PrimaryProvider
Now that the Vault Provider calls
GenerateLeafSigningCert within
GenerateRoot, we can remove the method
from all other providers that never
used it in a meaningful way.
* Add test for IntermediatePEM
* Rename GenerateRoot to GenerateCAChain
"Root" was being overloaded in the Consul CA
context, as different providers and configs
resulted in a single root certificate or
a chain originating from an external trusted
CA. Since the Vault provider also generates
intermediates, it seems more accurate to
call this a CAChain.
This PR adds the sameness-group field to exported-service
config entries, which allows for services to be exported
to multiple destination partitions / peers easily.
* Use merge of enterprise metas rather than new custom method
* Add merge logic for tcp routes
* Add changelog
* Normalize certificate refs on gateways
* Fix infinite call loop
* Explicitly call enterprise meta