mirror of https://github.com/hashicorp/consul
Merge branch 'main' into feat/vault-token-env
commit ba402bbfdd
@@ -0,0 +1,3 @@
```release-note:bug
connect: Fix incorrect protocol config merging for transparent proxy implicit upstreams.
```

@@ -0,0 +1,4 @@
```release-note:improvement
http: GET API `operator/usage` endpoint now returns node count
cli: `consul operator usage` command now returns node count
```

@@ -0,0 +1,3 @@
```release-note:improvement
mesh: Expose remote jwks cluster configuration through jwt-provider config entry
```

@@ -0,0 +1,4 @@
```release-note:bug
connect: Removes the default health check from the `consul connect envoy` command when starting an API Gateway.
This health check would always fail.
```
CHANGELOG.md (73 changed lines)
@@ -1,3 +1,76 @@
## 1.16.0 (June 26, 2023)

BREAKING CHANGES:

* api: The `/v1/health/connect/` and `/v1/health/ingress/` endpoints now immediately return 403 "Permission Denied" errors whenever a token with insufficient `service:read` permissions is provided. Prior to this change, the endpoints returned a success code with an empty result list when a token with insufficient permissions was provided. [[GH-17424](https://github.com/hashicorp/consul/issues/17424)]
* peering: Removed deprecated backward-compatibility behavior. Upstream overrides in service-defaults will now only apply to peer upstreams when the `peer` field is provided. Visit the 1.16.x [upgrade instructions](https://developer.hashicorp.com/consul/docs/upgrading/upgrade-specific) for more information. [[GH-16957](https://github.com/hashicorp/consul/issues/16957)]

SECURITY:

* Bump Dockerfile base image to `alpine:3.18`. [[GH-17719](https://github.com/hashicorp/consul/issues/17719)]
* audit-logging: **(Enterprise only)** Limit the `v1/operator/audit-hash` endpoint to ACL tokens with `operator:read` privileges.

FEATURES:

* api: **(Enterprise only)** Add `POST /v1/operator/audit-hash` endpoint to calculate the hash of the data used by the audit log hash function and salt.
* cli: **(Enterprise only)** Add a new `consul operator audit hash` command to retrieve and compare the hash of the data used by the audit log hash function and salt.
* cli: Adds a new command, `consul services export`, for exporting a service to a peer or partition. [[GH-15654](https://github.com/hashicorp/consul/issues/15654)]
* connect: **(Consul Enterprise only)** Implement order-by-locality failover.
* mesh: Add new permissive mTLS mode that allows sidecar proxies to forward incoming traffic unmodified to the application. This adds the `AllowEnablingPermissiveMutualTLS` setting to the mesh config entry and the `MutualTLSMode` setting to proxy-defaults and service-defaults. [[GH-17035](https://github.com/hashicorp/consul/issues/17035)]
* mesh: Support configuring JWT authentication in Envoy. [[GH-17452](https://github.com/hashicorp/consul/issues/17452)]
* server: **(Enterprise only)** Added a server-side, IP-based read/write rate limiter for RPC requests. [[GH-4633](https://github.com/hashicorp/consul/issues/4633)]
* server: **(Enterprise only)** Allow automatic license utilization reporting. [[GH-5102](https://github.com/hashicorp/consul/issues/5102)]
* server: Added a server-side global read/write rate limiter for RPC requests. [[GH-16292](https://github.com/hashicorp/consul/issues/16292)]
* xds: Add `property-override` built-in Envoy extension that directly patches Envoy resources. [[GH-17487](https://github.com/hashicorp/consul/issues/17487)]
* xds: Add a built-in Envoy extension that inserts External Authorization (ext_authz) network and HTTP filters. [[GH-17495](https://github.com/hashicorp/consul/issues/17495)]
* xds: Add a built-in Envoy extension that inserts Wasm HTTP filters. [[GH-16877](https://github.com/hashicorp/consul/issues/16877)]
* xds: Add a built-in Envoy extension that inserts Wasm network filters. [[GH-17505](https://github.com/hashicorp/consul/issues/17505)]

IMPROVEMENTS:

* api: Support filtering for config entries. [[GH-17183](https://github.com/hashicorp/consul/issues/17183)]
* cli: Add `-filter` option to `consul config list` for filtering config entries. [[GH-17183](https://github.com/hashicorp/consul/issues/17183)]
* agent: Remove agent cache dependency from service mesh leaf certificate management. [[GH-17075](https://github.com/hashicorp/consul/issues/17075)]
* api: Enable setting query options on the agent force-leave endpoint. [[GH-15987](https://github.com/hashicorp/consul/issues/15987)]
* audit-logging: **(Enterprise only)** Enable error response and request body logging.
* ca: Automatically set up Vault's auto-tidy setting for `tidy_expired_issuers` when using Vault as a CA provider. [[GH-17138](https://github.com/hashicorp/consul/issues/17138)]
* ca: Support Vault agent auto-auth config for the Vault CA provider using AliCloud authentication. [[GH-16224](https://github.com/hashicorp/consul/issues/16224)]
* ca: Support Vault agent auto-auth config for the Vault CA provider using AppRole authentication. [[GH-16259](https://github.com/hashicorp/consul/issues/16259)]
* ca: Support Vault agent auto-auth config for the Vault CA provider using Azure MSI authentication. [[GH-16298](https://github.com/hashicorp/consul/issues/16298)]
* ca: Support Vault agent auto-auth config for the Vault CA provider using JWT authentication. [[GH-16266](https://github.com/hashicorp/consul/issues/16266)]
* ca: Support Vault agent auto-auth config for the Vault CA provider using Kubernetes authentication. [[GH-16262](https://github.com/hashicorp/consul/issues/16262)]
* command: Adds "ACL enabled" to the status output on agent startup. [[GH-17086](https://github.com/hashicorp/consul/issues/17086)]
* command: Allow creating ACL tokens with a TTL greater than 24 hours via the `-expires-ttl` flag. [[GH-17066](https://github.com/hashicorp/consul/issues/17066)]
* connect: **(Enterprise only)** Add support for specifying "Partition" and "Namespace" in prepared query failover rules.
* connect: Update supported Envoy versions to 1.23.10, 1.24.8, 1.25.7, 1.26.2. [[GH-17546](https://github.com/hashicorp/consul/issues/17546)]
* connect: Update supported Envoy versions to 1.23.8, 1.24.6, 1.25.4, 1.26.0. [[GH-5200](https://github.com/hashicorp/consul/issues/5200)]
* Fix metric names in /docs/agent/telemetry. [[GH-17577](https://github.com/hashicorp/consul/issues/17577)]
* gateway: Change the status condition reason for an invalid certificate on a listener from "Accepted" to "ResolvedRefs". [[GH-17115](https://github.com/hashicorp/consul/issues/17115)]
* http: Accept query parameters `datacenter`, `ap` (Enterprise only), and `namespace` (Enterprise only). Both short-hand and long-hand forms of these query params are now supported via the HTTP API (dc/datacenter, ap/partition, ns/namespace). [[GH-17525](https://github.com/hashicorp/consul/issues/17525)]
* systemd: Set service type to notify. [[GH-16845](https://github.com/hashicorp/consul/issues/16845)]
* ui: Update alerts to the Hds::Alert component. [[GH-16412](https://github.com/hashicorp/consul/issues/16412)]
* ui: Update to use the Hds::Toast component to show notifications. [[GH-16519](https://github.com/hashicorp/consul/issues/16519)]
* ui: Update from `<button>` and `<a>` to the design-system button component `<Hds::Button>`. [[GH-16251](https://github.com/hashicorp/consul/issues/16251)]
* ui: Update typography to styles from HDS. [[GH-16577](https://github.com/hashicorp/consul/issues/16577)]

BUG FIXES:

* Fix a race condition where an event is published before the associated data is committed to memdb. [[GH-16871](https://github.com/hashicorp/consul/issues/16871)]
* connect: Fix an issue where changes to service exports were not reflected in proxies. [[GH-17775](https://github.com/hashicorp/consul/issues/17775)]
* gateways: **(Enterprise only)** Fixed a bug in API gateways where gateway configuration objects in non-default partitions did not reconcile properly. [[GH-17581](https://github.com/hashicorp/consul/issues/17581)]
* gateways: Fixed a bug in API gateways where binding a route that only targets a service imported from a peer results in the programmed gateway having no routes. [[GH-17609](https://github.com/hashicorp/consul/issues/17609)]
* gateways: Fixed a bug where API gateways were not being taken into account in determining xDS rate limits. [[GH-17631](https://github.com/hashicorp/consul/issues/17631)]
* namespaces: **(Enterprise only)** Fixes a bug where agent health checks stop syncing for all services on a node if the namespace of any service has been removed from the server.
* namespaces: **(Enterprise only)** Fixes a bug where namespaces are stuck in a deferred deletion state indefinitely under some conditions. Also fixes the Consul query metadata present in the HTTP headers of the namespace read and list endpoints.
* peering: Fix a bug that caused server agents to continue cleaning up peering resources even after loss of leadership. [[GH-17483](https://github.com/hashicorp/consul/issues/17483)]
* peering: Fixes a bug where the importing partition was not added to peered failover targets, which causes issues when the importing partition is a non-default partition. [[GH-16673](https://github.com/hashicorp/consul/issues/16673)]
* ui: Fixes UI tests run on CI. [[GH-16428](https://github.com/hashicorp/consul/issues/16428)]
* xds: Fixed a bug where modifying ACLs on a token being actively used for an xDS connection caused all xDS updates to fail. [[GH-17566](https://github.com/hashicorp/consul/issues/17566)]

## 1.15.4 (June 26, 2023)

FEATURES:
@@ -36,6 +36,7 @@ func ComputeResolvedServiceConfig(
	// blocking query, this function will be rerun and these state store lookups will both be current.
	// We use the default enterprise meta to look up the global proxy defaults because they are not namespaced.

	var proxyConfGlobalProtocol string
	proxyConf := entries.GetProxyDefaults(args.PartitionOrDefault())
	if proxyConf != nil {
		// Apply the proxy defaults to the sidecar's proxy config
@@ -63,9 +64,30 @@
	if !proxyConf.MeshGateway.IsZero() {
		wildcardUpstreamDefaults["mesh_gateway"] = proxyConf.MeshGateway
	}
	if protocol, ok := thisReply.ProxyConfig["protocol"]; ok {
		wildcardUpstreamDefaults["protocol"] = protocol

		// We explicitly DO NOT merge the protocol from proxy-defaults into the wildcard upstream here.
		// TProxy will try to use the data from the `wildcardUpstreamDefaults` as a source of truth, which is
		// normally correct to inherit from proxy-defaults. However, it is NOT correct for protocol.
		//
		// This edge-case is different for `protocol` from other fields, since the protocol can be
		// set on both the local `ServiceDefaults.UpstreamOverrides` and upstream `ServiceDefaults.Protocol`.
		// This means that when proxy-defaults is set, it would always be treated as an explicit override,
		// and take precedence over the protocol that is set on the discovery chain (which comes from the
		// service's preference in its service-defaults), which is wrong.
		//
		// When the upstream is not explicitly defined, we should only get the protocol from one of these locations:
		// 1. For tproxy non-peering services, it can be fetched via the discovery chain.
		//    The chain compiler merges the proxy-defaults protocol with the upstream's preferred service-defaults protocol.
		// 2. For tproxy non-peering services with default upstream overrides, it will come from the wildcard upstream overrides.
		// 3. For tproxy non-peering services with specific upstream overrides, it will come from the specific upstream override defined.
		// 4. For tproxy peering services, they do not honor the proxy-defaults, since they reside in a different cluster.
		//    The data will come from a separate peerMeta field.
		// In all of these cases, it is not necessary for the proxy-defaults to exist in the wildcard upstream.
		parsed, err := structs.ParseUpstreamConfigNoDefaults(mapCopy.(map[string]interface{}))
		if err != nil {
			return nil, fmt.Errorf("failed to parse upstream config map for proxy-defaults: %v", err)
		}
		proxyConfGlobalProtocol = parsed.Protocol
	}

	serviceConf := entries.GetServiceDefaults(

@@ -210,6 +232,10 @@
	// 2. Protocol for upstream service defined in its service-defaults (how the upstream wants to be addressed)
	// 3. Protocol defined for the upstream in the service-defaults.(upstream_config.defaults|upstream_config.overrides) of the downstream
	//    (how the downstream wants to address it)
	if proxyConfGlobalProtocol != "" {
		resolvedCfg["protocol"] = proxyConfGlobalProtocol
	}

	if err := mergo.MergeWithOverwrite(&resolvedCfg, wildcardUpstreamDefaults); err != nil {
		return nil, fmt.Errorf("failed to merge wildcard defaults into upstream: %v", err)
	}
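The comment block in the hunk above explains why the proxy-defaults protocol must remain the weakest setting rather than being merged into the wildcard upstream as an explicit override. A minimal, self-contained Go sketch of that precedence; the function and parameter names here are illustrative, not Consul's actual API:

```go
package main

import "fmt"

// resolveProtocol mirrors the precedence described in the diff comment,
// from weakest to strongest: proxy-defaults, the upstream's own preference
// (delivered via the discovery chain), then an explicit upstream override
// configured on the downstream service.
func resolveProtocol(proxyDefaults, chainProtocol, upstreamOverride string) string {
	protocol := proxyDefaults
	if chainProtocol != "" {
		protocol = chainProtocol
	}
	if upstreamOverride != "" {
		protocol = upstreamOverride
	}
	return protocol
}

func main() {
	// proxy-defaults says tcp but the upstream's service-defaults says grpc:
	// the service's own preference must win, which is why proxy-defaults is
	// NOT treated as an explicit wildcard override.
	fmt.Println(resolveProtocol("tcp", "grpc", "")) // grpc
	fmt.Println(resolveProtocol("tcp", "", ""))     // tcp
	fmt.Println(resolveProtocol("tcp", "grpc", "http"))
}
```

The fix in this hunk implements exactly this ordering by keeping the proxy-defaults protocol out of `wildcardUpstreamDefaults` and applying it only when nothing stronger is set.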
@@ -1444,16 +1444,6 @@ func TestConfigEntry_ResolveServiceConfig_Upstreams(t *testing.T) {
	"protocol": "grpc",
},
UpstreamConfigs: structs.OpaqueUpstreamConfigs{
	{
		Upstream: structs.PeeredServiceName{
			ServiceName: structs.NewServiceName(
				structs.WildcardSpecifier,
				acl.DefaultEnterpriseMeta().WithWildcardNamespace()),
		},
		Config: map[string]interface{}{
			"protocol": "grpc",
		},
	},
	{
		Upstream: cache,
		Config: map[string]interface{}{
@@ -1510,12 +1500,6 @@ func TestConfigEntry_ResolveServiceConfig_Upstreams(t *testing.T) {
	"protocol": "grpc",
},
UpstreamConfigs: structs.OpaqueUpstreamConfigs{
	{
		Upstream: wildcard,
		Config: map[string]interface{}{
			"protocol": "grpc",
		},
	},
	{
		Upstream: cache,
		Config: map[string]interface{}{
@@ -2267,17 +2251,6 @@ func TestConfigEntry_ResolveServiceConfig_UpstreamProxyDefaultsProtocol(t *testi
require.NoError(t, msgpackrpc.CallWithCodec(codec, "ConfigEntry.ResolveServiceConfig", &args, &out))

expected := structs.OpaqueUpstreamConfigs{
	{
		Upstream: structs.PeeredServiceName{
			ServiceName: structs.NewServiceName(
				structs.WildcardSpecifier,
				acl.DefaultEnterpriseMeta().WithWildcardNamespace(),
			),
		},
		Config: map[string]interface{}{
			"protocol": "http",
		},
	},
	{
		Upstream: id("bar"),
		Config: map[string]interface{}{
@@ -2346,16 +2319,6 @@ func TestConfigEntry_ResolveServiceConfig_ProxyDefaultsProtocol_UsedForAllUpstre
	"protocol": "http",
},
UpstreamConfigs: structs.OpaqueUpstreamConfigs{
	{
		Upstream: structs.PeeredServiceName{
			ServiceName: structs.NewServiceName(
				structs.WildcardSpecifier,
				acl.DefaultEnterpriseMeta().WithWildcardNamespace()),
		},
		Config: map[string]interface{}{
			"protocol": "http",
		},
	},
	{
		Upstream: psn,
		Config: map[string]interface{}{
@@ -424,6 +424,11 @@ func (s *Store) ServiceUsage(ws memdb.WatchSet) (uint64, structs.ServiceUsage, e
	return 0, structs.ServiceUsage{}, fmt.Errorf("failed services lookup: %s", err)
}

nodes, err := firstUsageEntry(ws, tx, tableNodes)
if err != nil {
	return 0, structs.ServiceUsage{}, fmt.Errorf("failed nodes lookup: %s", err)
}

serviceKindInstances := make(map[string]int)
for _, kind := range allConnectKind {
	usage, err := firstUsageEntry(ws, tx, connectUsageTableName(kind))

@@ -443,6 +448,7 @@ func (s *Store) ServiceUsage(ws memdb.WatchSet) (uint64, structs.ServiceUsage, e
	Services:                 services.Count,
	ConnectServiceInstances:  serviceKindInstances,
	BillableServiceInstances: billableServiceInstances.Count,
	Nodes:                    nodes.Count,
}
results, err := compileEnterpriseServiceUsage(ws, tx, usage)
if err != nil {
@@ -65,6 +65,7 @@ func TestOperator_Usage(t *testing.T) {
		},
		// 4 = 6 total service instances - 1 connect proxy - 1 consul service
		BillableServiceInstances: 4,
		Nodes:                    2,
	},
}
require.Equal(t, expected, raw.(structs.Usage).Usage)
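The usage test above expects 4 billable instances out of 6 total (subtracting 1 connect proxy and 1 built-in consul service instance) plus the new node count. A self-contained sketch of that bookkeeping; the types mirror the diff but the computation itself is an assumption, not Consul's exact code:

```go
package main

import "fmt"

// Usage is an illustrative stand-in for Consul's structs.ServiceUsage;
// the field names follow the diff above.
type Usage struct {
	ServiceInstances         int
	ConnectServiceInstances  map[string]int
	BillableServiceInstances int
	Nodes                    int
}

// computeUsage derives billable instances the way the test comment above
// describes: total instances minus connect proxies minus the built-in
// "consul" service instances.
func computeUsage(total int, connectByKind map[string]int, consulInstances, nodes int) Usage {
	billable := total - consulInstances
	for _, n := range connectByKind {
		billable -= n
	}
	return Usage{
		ServiceInstances:         total,
		ConnectServiceInstances:  connectByKind,
		BillableServiceInstances: billable,
		Nodes:                    nodes,
	}
}

func main() {
	// Mirrors the test case: 6 total instances, 1 connect proxy,
	// 1 consul service instance, 2 nodes.
	u := computeUsage(6, map[string]int{"connect-proxy": 1}, 1, 2)
	fmt.Println(u.BillableServiceInstances, u.Nodes) // 4 2
}
```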
@@ -58,6 +58,26 @@ func (o *ConfigSnapshot) DeepCopy() *ConfigSnapshot {
			*cp_JWTProviders_v2.JSONWebKeySet.Remote.RetryPolicy.RetryPolicyBackOff = *v2.JSONWebKeySet.Remote.RetryPolicy.RetryPolicyBackOff
		}
	}
	if v2.JSONWebKeySet.Remote.JWKSCluster != nil {
		cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster = new(structs.JWKSCluster)
		*cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster = *v2.JSONWebKeySet.Remote.JWKSCluster
		if v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates != nil {
			cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates = new(structs.JWKSTLSCertificate)
			*cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates = *v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates
			if v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.CaCertificateProviderInstance != nil {
				cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.CaCertificateProviderInstance = new(structs.JWKSTLSCertProviderInstance)
				*cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.CaCertificateProviderInstance = *v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.CaCertificateProviderInstance
			}
			if v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA != nil {
				cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA = new(structs.JWKSTLSCertTrustedCA)
				*cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA = *v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA
				if v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA.InlineBytes != nil {
					cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA.InlineBytes = make([]byte, len(v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA.InlineBytes))
					copy(cp_JWTProviders_v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA.InlineBytes, v2.JSONWebKeySet.Remote.JWKSCluster.TLSCertificates.TrustedCA.InlineBytes)
				}
			}
		}
	}
	if v2.Audiences != nil {
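The generated deep-copy code above follows one pattern throughout: every non-nil pointer field gets a fresh allocation followed by a value copy, and byte slices are duplicated with `make` plus `copy` so the clone shares no memory with the original. A minimal sketch of the same pattern with illustrative (non-Consul) types:

```go
package main

import "fmt"

// TrustedCA and Cluster are simplified stand-ins for the nested
// JWKSCluster/JWKSTLSCertTrustedCA types copied in the diff above.
type TrustedCA struct {
	InlineBytes []byte
}

type Cluster struct {
	TrustedCA *TrustedCA
}

// DeepCopy clones the struct, re-allocating every pointer field and
// duplicating the byte slice so no memory is shared with the receiver.
func (c *Cluster) DeepCopy() *Cluster {
	cp := *c
	if c.TrustedCA != nil {
		cp.TrustedCA = new(TrustedCA)
		*cp.TrustedCA = *c.TrustedCA
		if c.TrustedCA.InlineBytes != nil {
			cp.TrustedCA.InlineBytes = make([]byte, len(c.TrustedCA.InlineBytes))
			copy(cp.TrustedCA.InlineBytes, c.TrustedCA.InlineBytes)
		}
	}
	return &cp
}

func main() {
	orig := &Cluster{TrustedCA: &TrustedCA{InlineBytes: []byte("ca-pem")}}
	clone := orig.DeepCopy()
	clone.TrustedCA.InlineBytes[0] = 'X' // mutating the clone...
	fmt.Println(string(orig.TrustedCA.InlineBytes)) // ...leaves the original intact: ca-pem
}
```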
@@ -15,6 +15,12 @@ import (

const (
	DefaultClockSkewSeconds = 30

	DiscoveryTypeStrictDNS   ClusterDiscoveryType = "STRICT_DNS"
	DiscoveryTypeStatic      ClusterDiscoveryType = "STATIC"
	DiscoveryTypeLogicalDNS  ClusterDiscoveryType = "LOGICAL_DNS"
	DiscoveryTypeEDS         ClusterDiscoveryType = "EDS"
	DiscoveryTypeOriginalDST ClusterDiscoveryType = "ORIGINAL_DST"
)

type JWTProviderConfigEntry struct {
@@ -97,7 +103,7 @@ func (location *JWTLocation) Validate() error {
	hasCookie := location.Cookie != nil

	if countTrue(hasHeader, hasQueryParam, hasCookie) != 1 {
		return fmt.Errorf("Must set exactly one of: JWT location header, query param or cookie")
		return fmt.Errorf("must set exactly one of: JWT location header, query param or cookie")
	}

	if hasHeader {
@@ -205,7 +211,7 @@ func (ks *LocalJWKS) Validate() error {
	hasJWKS := ks.JWKS != ""

	if countTrue(hasFilename, hasJWKS) != 1 {
		return fmt.Errorf("Must specify exactly one of String or filename for local keyset")
		return fmt.Errorf("must specify exactly one of String or filename for local keyset")
	}

	if hasJWKS {
@@ -245,6 +251,9 @@ type RemoteJWKS struct {
	//
	// There is no retry by default.
	RetryPolicy *JWKSRetryPolicy `json:",omitempty" alias:"retry_policy"`

	// JWKSCluster defines how the specified Remote JWKS URI is to be fetched.
	JWKSCluster *JWKSCluster `json:",omitempty" alias:"jwks_cluster"`
}

func (ks *RemoteJWKS) Validate() error {
@@ -257,9 +266,127 @@
	}

	if ks.RetryPolicy != nil && ks.RetryPolicy.RetryPolicyBackOff != nil {
		return ks.RetryPolicy.RetryPolicyBackOff.Validate()
		err := ks.RetryPolicy.RetryPolicyBackOff.Validate()
		if err != nil {
			return err
		}
	}

	if ks.JWKSCluster != nil {
		return ks.JWKSCluster.Validate()
	}

	return nil
}

type JWKSCluster struct {
	// DiscoveryType refers to the service discovery type to use for resolving the cluster.
	//
	// This defaults to STRICT_DNS.
	// Other options include STATIC, LOGICAL_DNS, EDS or ORIGINAL_DST.
	DiscoveryType ClusterDiscoveryType `json:",omitempty" alias:"discovery_type"`

	// TLSCertificates refers to the data containing certificate authority certificates to use
	// in verifying a presented peer certificate.
	// If not specified and a peer certificate is presented, it will not be verified.
	//
	// Must be either CaCertificateProviderInstance or TrustedCA.
	TLSCertificates *JWKSTLSCertificate `json:",omitempty" alias:"tls_certificates"`

	// ConnectTimeout is the timeout for new network connections to hosts in the cluster.
	// If not set, a default value of 5s will be used.
	ConnectTimeout time.Duration `json:",omitempty" alias:"connect_timeout"`
}

type ClusterDiscoveryType string

func (d ClusterDiscoveryType) Validate() error {
	switch d {
	case DiscoveryTypeStatic, DiscoveryTypeStrictDNS, DiscoveryTypeLogicalDNS, DiscoveryTypeEDS, DiscoveryTypeOriginalDST:
		return nil
	default:
		return fmt.Errorf("unsupported jwks cluster discovery type: %q", d)
	}
}
func (c *JWKSCluster) Validate() error {
	if c.DiscoveryType != "" {
		err := c.DiscoveryType.Validate()
		if err != nil {
			return err
		}
	}

	if c.TLSCertificates != nil {
		return c.TLSCertificates.Validate()
	}
	return nil
}

// JWKSTLSCertificate refers to the data containing certificate authority certificates to use
// in verifying a presented peer certificate.
// If not specified and a peer certificate is presented, it will not be verified.
//
// Must be either CaCertificateProviderInstance or TrustedCA.
type JWKSTLSCertificate struct {
	// CaCertificateProviderInstance is the certificate provider instance for fetching TLS certificates.
	CaCertificateProviderInstance *JWKSTLSCertProviderInstance `json:",omitempty" alias:"ca_certificate_provider_instance"`

	// TrustedCA defines TLS certificate data containing certificate authority certificates
	// to use in verifying a presented peer certificate.
	//
	// Exactly one of Filename, EnvironmentVariable, InlineString or InlineBytes must be specified.
	TrustedCA *JWKSTLSCertTrustedCA `json:",omitempty" alias:"trusted_ca"`
}

func (c *JWKSTLSCertificate) Validate() error {
	hasProviderInstance := c.CaCertificateProviderInstance != nil
	hasTrustedCA := c.TrustedCA != nil

	if countTrue(hasProviderInstance, hasTrustedCA) != 1 {
		return fmt.Errorf("must specify exactly one of: CaCertificateProviderInstance or TrustedCA for JKWS' TLSCertificates")
	}

	if c.TrustedCA != nil {
		return c.TrustedCA.Validate()
	}
	return nil
}

type JWKSTLSCertProviderInstance struct {
	// InstanceName refers to the certificate provider instance name.
	//
	// The default value is "default".
	InstanceName string `json:",omitempty" alias:"instance_name"`

	// CertificateName is used to specify certificate instances or types. For example, "ROOTCA" to specify
	// a root-certificate (validation context) or "example.com" to specify a certificate for a
	// particular domain.
	//
	// The default value is the empty string.
	CertificateName string `json:",omitempty" alias:"certificate_name"`
}

// JWKSTLSCertTrustedCA defines TLS certificate data containing certificate authority certificates
// to use in verifying a presented peer certificate.
//
// Exactly one of Filename, EnvironmentVariable, InlineString or InlineBytes must be specified.
type JWKSTLSCertTrustedCA struct {
	Filename            string `json:",omitempty" alias:"filename"`
	EnvironmentVariable string `json:",omitempty" alias:"environment_variable"`
	InlineString        string `json:",omitempty" alias:"inline_string"`
	InlineBytes         []byte `json:",omitempty" alias:"inline_bytes"`
}

func (c *JWKSTLSCertTrustedCA) Validate() error {
	hasFilename := c.Filename != ""
	hasEnv := c.EnvironmentVariable != ""
	hasInlineBytes := len(c.InlineBytes) > 0
	hasInlineString := c.InlineString != ""

	if countTrue(hasFilename, hasEnv, hasInlineString, hasInlineBytes) != 1 {
		return fmt.Errorf("must specify exactly one of: Filename, EnvironmentVariable, InlineString or InlineBytes for JWKS' TrustedCA")
	}
	return nil
}
@@ -293,7 +420,7 @@ type RetryPolicyBackOff struct {
func (r *RetryPolicyBackOff) Validate() error {

	if (r.MaxInterval != 0) && (r.BaseInterval > r.MaxInterval) {
		return fmt.Errorf("Retry policy backoff's MaxInterval should be greater or equal to BaseInterval")
		return fmt.Errorf("retry policy backoff's MaxInterval should be greater or equal to BaseInterval")
	}

	return nil
@@ -339,7 +466,7 @@ func (jwks *JSONWebKeySet) Validate() error {
	hasRemoteKeySet := jwks.Remote != nil

	if countTrue(hasLocalKeySet, hasRemoteKeySet) != 1 {
		return fmt.Errorf("Must specify exactly one of Local or Remote JSON Web key set")
		return fmt.Errorf("must specify exactly one of Local or Remote JSON Web key set")
	}

	if hasRemoteKeySet {
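Several validators in this file enforce an "exactly one of" rule through a `countTrue` helper whose body is not shown in the diff. A self-contained sketch of that pattern; the helper implementation and the simplified `TrustedCA` type here are assumptions for illustration:

```go
package main

import "fmt"

// countTrue returns how many of the given flags are set. The real helper's
// body is not part of this diff; this is an assumed, straightforward version.
func countTrue(vals ...bool) int {
	n := 0
	for _, v := range vals {
		if v {
			n++
		}
	}
	return n
}

// TrustedCA mirrors JWKSTLSCertTrustedCA's "exactly one source" rule.
type TrustedCA struct {
	Filename, EnvironmentVariable, InlineString string
	InlineBytes                                 []byte
}

func (c *TrustedCA) Validate() error {
	if countTrue(c.Filename != "", c.EnvironmentVariable != "", c.InlineString != "", len(c.InlineBytes) > 0) != 1 {
		return fmt.Errorf("must specify exactly one of: Filename, EnvironmentVariable, InlineString or InlineBytes")
	}
	return nil
}

func main() {
	fmt.Println((&TrustedCA{Filename: "ca.pem"}).Validate())                               // <nil>
	fmt.Println((&TrustedCA{Filename: "ca.pem", InlineString: "*****"}).Validate() != nil) // true
}
```

The same shape covers the header/query-param/cookie check on `JWTLocation` and the Local/Remote check on `JSONWebKeySet`: zero sources and two-or-more sources both fail.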
@@ -22,6 +22,7 @@ func newTestAuthz(t *testing.T, src string) acl.Authorizer {

var tenSeconds time.Duration = 10 * time.Second
var hundredSeconds time.Duration = 100 * time.Second
var connectTimeout = time.Duration(5) * time.Second

func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
	defaultMeta := DefaultEnterpriseMetaInDefaultPartition()

@@ -113,7 +114,7 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
		Name:          "okta",
		JSONWebKeySet: &JSONWebKeySet{},
	},
	validateErr: "Must specify exactly one of Local or Remote JSON Web key set",
	validateErr: "must specify exactly one of Local or Remote JSON Web key set",
},
"invalid jwt-provider - local jwks with non-encoded base64 jwks": {
	entry: &JWTProviderConfigEntry{

@@ -138,7 +139,7 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
			Remote: &RemoteJWKS{},
		},
	},
	validateErr: "Must specify exactly one of Local or Remote JSON Web key set",
	validateErr: "must specify exactly one of Local or Remote JSON Web key set",
},
"invalid jwt-provider - local jwks string and filename both set": {
	entry: &JWTProviderConfigEntry{

@@ -151,7 +152,7 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
			},
		},
	},
	validateErr: "Must specify exactly one of String or filename for local keyset",
	validateErr: "must specify exactly one of String or filename for local keyset",
},
"invalid jwt-provider - remote jwks missing uri": {
	entry: &JWTProviderConfigEntry{

@@ -202,7 +203,7 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
			},
		},
	},
	validateErr: "Must set exactly one of: JWT location header, query param or cookie",
	validateErr: "must set exactly one of: JWT location header, query param or cookie",
},
"invalid jwt-provider - Remote JWKS retry policy maxinterval < baseInterval": {
	entry: &JWTProviderConfigEntry{

@@ -221,7 +222,63 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
			},
		},
	},
	validateErr: "Retry policy backoff's MaxInterval should be greater or equal to BaseInterval",
	validateErr: "retry policy backoff's MaxInterval should be greater or equal to BaseInterval",
},
"invalid jwt-provider - Remote JWKS cluster wrong discovery type": {
	entry: &JWTProviderConfigEntry{
		Kind: JWTProvider,
		Name: "okta",
		JSONWebKeySet: &JSONWebKeySet{
			Remote: &RemoteJWKS{
				FetchAsynchronously: true,
				URI:                 "https://example.com/.well-known/jwks.json",
				JWKSCluster: &JWKSCluster{
					DiscoveryType: "FAKE",
				},
			},
		},
	},
	validateErr: "unsupported jwks cluster discovery type: \"FAKE\"",
},
"invalid jwt-provider - Remote JWKS cluster with both trustedCa and provider instance": {
	entry: &JWTProviderConfigEntry{
		Kind: JWTProvider,
		Name: "okta",
		JSONWebKeySet: &JSONWebKeySet{
			Remote: &RemoteJWKS{
				FetchAsynchronously: true,
				URI:                 "https://example.com/.well-known/jwks.json",
				JWKSCluster: &JWKSCluster{
					TLSCertificates: &JWKSTLSCertificate{
						TrustedCA:                     &JWKSTLSCertTrustedCA{},
						CaCertificateProviderInstance: &JWKSTLSCertProviderInstance{},
					},
				},
			},
		},
	},
	validateErr: "must specify exactly one of: CaCertificateProviderInstance or TrustedCA for JKWS' TLSCertificates",
},
"invalid jwt-provider - Remote JWKS cluster with multiple trustedCa options": {
	entry: &JWTProviderConfigEntry{
		Kind: JWTProvider,
		Name: "okta",
		JSONWebKeySet: &JSONWebKeySet{
			Remote: &RemoteJWKS{
				FetchAsynchronously: true,
				URI:                 "https://example.com/.well-known/jwks.json",
				JWKSCluster: &JWKSCluster{
					TLSCertificates: &JWKSTLSCertificate{
						TrustedCA: &JWKSTLSCertTrustedCA{
							Filename:     "myfile.cert",
							InlineString: "*****",
						},
					},
				},
			},
		},
	},
	validateErr: "must specify exactly one of: Filename, EnvironmentVariable, InlineString or InlineBytes for JWKS' TrustedCA",
},
"invalid jwt-provider - JWT location with 2 fields": {
	entry: &JWTProviderConfigEntry{

@@ -244,7 +301,7 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
			},
		},
	},
	validateErr: "Must set exactly one of: JWT location header, query param or cookie",
	validateErr: "must set exactly one of: JWT location header, query param or cookie",
},
"valid jwt-provider - with all possible fields": {
	entry: &JWTProviderConfigEntry{

@@ -265,6 +322,15 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
				MaxInterval: hundredSeconds,
			},
		},
		JWKSCluster: &JWKSCluster{
			DiscoveryType:  "STATIC",
			ConnectTimeout: connectTimeout,
			TLSCertificates: &JWKSTLSCertificate{
				TrustedCA: &JWKSTLSCertTrustedCA{
					Filename: "myfile.cert",
				},
			},
		},
	},
},
Forwarding: &JWTForwardingConfig{

@@ -297,6 +363,15 @@ func TestJWTProviderConfigEntry_ValidateAndNormalize(t *testing.T) {
				MaxInterval: hundredSeconds,
			},
		},
		JWKSCluster: &JWKSCluster{
			DiscoveryType:  "STATIC",
			ConnectTimeout: connectTimeout,
			TLSCertificates: &JWKSTLSCertificate{
				TrustedCA: &JWKSTLSCertTrustedCA{
					Filename: "myfile.cert",
				},
			},
		},
	},
},
Forwarding: &JWTForwardingConfig{
@@ -2324,6 +2324,7 @@ type ServiceUsage struct {
	ServiceInstances         int
	ConnectServiceInstances  map[string]int
	BillableServiceInstances int
+	Nodes                    int
	EnterpriseServiceUsage
}
@@ -211,13 +211,9 @@ func makeJWTProviderCluster(p *structs.JWTProviderConfigEntry) (*envoy_cluster_v
		return nil, err
	}

-	// TODO: expose additional fields: eg. ConnectTimeout, through
-	// JWTProviderConfigEntry to allow user to configure cluster
	cluster := &envoy_cluster_v3.Cluster{
-		Name: makeJWKSClusterName(p.Name),
-		ClusterDiscoveryType: &envoy_cluster_v3.Cluster_Type{
-			Type: envoy_cluster_v3.Cluster_STRICT_DNS,
-		},
+		Name:                 makeJWKSClusterName(p.Name),
+		ClusterDiscoveryType: makeJWKSDiscoveryClusterType(p.JSONWebKeySet.Remote),
		LoadAssignment: &envoy_endpoint_v3.ClusterLoadAssignment{
			ClusterName: makeJWKSClusterName(p.Name),
			Endpoints: []*envoy_endpoint_v3.LocalityLbEndpoints{

@@ -230,14 +226,19 @@ func makeJWTProviderCluster(p *structs.JWTProviderConfigEntry) (*envoy_cluster_v
		},
	}

+	if c := p.JSONWebKeySet.Remote.JWKSCluster; c != nil {
+		connectTimeout := int64(c.ConnectTimeout / time.Second)
+		if connectTimeout > 0 {
+			cluster.ConnectTimeout = &durationpb.Duration{Seconds: connectTimeout}
+		}
+	}
+
	if scheme == "https" {
-		// TODO: expose this configuration through JWTProviderConfigEntry to allow
-		// user to configure certs
		jwksTLSContext, err := makeUpstreamTLSTransportSocket(
			&envoy_tls_v3.UpstreamTlsContext{
				CommonTlsContext: &envoy_tls_v3.CommonTlsContext{
					ValidationContextType: &envoy_tls_v3.CommonTlsContext_ValidationContext{
-						ValidationContext: &envoy_tls_v3.CertificateValidationContext{},
+						ValidationContext: makeJWTCertValidationContext(p.JSONWebKeySet.Remote.JWKSCluster),
					},
				},
			},
@@ -251,6 +252,76 @@ func makeJWTProviderCluster(p *structs.JWTProviderConfigEntry) (*envoy_cluster_v
	return cluster, nil
}

func makeJWKSDiscoveryClusterType(r *structs.RemoteJWKS) *envoy_cluster_v3.Cluster_Type {
	ct := &envoy_cluster_v3.Cluster_Type{}
	if r == nil || r.JWKSCluster == nil {
		return ct
	}

	switch r.JWKSCluster.DiscoveryType {
	case structs.DiscoveryTypeStatic:
		ct.Type = envoy_cluster_v3.Cluster_STATIC
	case structs.DiscoveryTypeLogicalDNS:
		ct.Type = envoy_cluster_v3.Cluster_LOGICAL_DNS
	case structs.DiscoveryTypeEDS:
		ct.Type = envoy_cluster_v3.Cluster_EDS
	case structs.DiscoveryTypeOriginalDST:
		ct.Type = envoy_cluster_v3.Cluster_ORIGINAL_DST
	case structs.DiscoveryTypeStrictDNS:
		fallthrough // default case so uses the default option
	default:
		ct.Type = envoy_cluster_v3.Cluster_STRICT_DNS
	}
	return ct
}

func makeJWTCertValidationContext(p *structs.JWKSCluster) *envoy_tls_v3.CertificateValidationContext {
	vc := &envoy_tls_v3.CertificateValidationContext{}
	if p == nil || p.TLSCertificates == nil {
		return vc
	}

	if tc := p.TLSCertificates.TrustedCA; tc != nil {
		vc.TrustedCa = &envoy_core_v3.DataSource{}
		if tc.Filename != "" {
			vc.TrustedCa.Specifier = &envoy_core_v3.DataSource_Filename{
				Filename: tc.Filename,
			}
		}

		if tc.EnvironmentVariable != "" {
			vc.TrustedCa.Specifier = &envoy_core_v3.DataSource_EnvironmentVariable{
				EnvironmentVariable: tc.EnvironmentVariable,
			}
		}

		if tc.InlineString != "" {
			vc.TrustedCa.Specifier = &envoy_core_v3.DataSource_InlineString{
				InlineString: tc.InlineString,
			}
		}

		if len(tc.InlineBytes) > 0 {
			vc.TrustedCa.Specifier = &envoy_core_v3.DataSource_InlineBytes{
				InlineBytes: tc.InlineBytes,
			}
		}
	}

	if pi := p.TLSCertificates.CaCertificateProviderInstance; pi != nil {
		vc.CaCertificateProviderInstance = &envoy_tls_v3.CertificateProviderPluginInstance{}
		if pi.InstanceName != "" {
			vc.CaCertificateProviderInstance.InstanceName = pi.InstanceName
		}

		if pi.CertificateName != "" {
			vc.CaCertificateProviderInstance.CertificateName = pi.CertificateName
		}
	}

	return vc
}

// parseJWTRemoteURL splits the URI into domain, scheme and port.
// It will default to port 80 for http and 443 for https for any
// URI that does not specify a port.
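The body of parseJWTRemoteURL is cut off in this hunk, but the behavior its doc comment describes (split into scheme, host, and port, defaulting to 80 for http and 443 for https) can be sketched with the standard library. `splitJWKSURI` below is a hypothetical stand-in, not Consul's implementation:

```go
package main

import (
	"fmt"
	"net/url"
)

// splitJWKSURI splits a JWKS URI into scheme, host, and port,
// defaulting the port to 80 for http and 443 for https when the
// URI does not specify one, as parseJWTRemoteURL's comment describes.
func splitJWKSURI(uri string) (scheme, host, port string, err error) {
	u, err := url.Parse(uri)
	if err != nil {
		return "", "", "", err
	}
	port = u.Port()
	if port == "" {
		switch u.Scheme {
		case "http":
			port = "80"
		case "https":
			port = "443"
		}
	}
	return u.Scheme, u.Hostname(), port, nil
}

func main() {
	s, h, p, _ := splitJWKSURI("https://example.com/.well-known/jwks.json")
	fmt.Println(s, h, p) // https example.com 443
}
```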
@@ -1037,11 +1037,104 @@ func makeTestProviderWithJWKS(uri string) *structs.JWTProviderConfigEntry {
				RequestTimeoutMs:    1000,
				FetchAsynchronously: true,
				URI:                 uri,
+				JWKSCluster: &structs.JWKSCluster{
+					DiscoveryType:  structs.DiscoveryTypeStatic,
+					ConnectTimeout: time.Duration(5) * time.Second,
+					TLSCertificates: &structs.JWKSTLSCertificate{
+						TrustedCA: &structs.JWKSTLSCertTrustedCA{
+							Filename: "mycert.crt",
+						},
+					},
+				},
			},
		},
	}
}

func TestMakeJWKSDiscoveryClusterType(t *testing.T) {
	tests := map[string]struct {
		remoteJWKS          *structs.RemoteJWKS
		expectedClusterType *envoy_cluster_v3.Cluster_Type
	}{
		"nil remote jwks": {
			remoteJWKS:          nil,
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{},
		},
		"nil jwks cluster": {
			remoteJWKS:          &structs.RemoteJWKS{},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{},
		},
		"jwks cluster defaults to Strict DNS": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_STRICT_DNS,
			},
		},
		"jwks with cluster EDS": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{
					DiscoveryType: "EDS",
				},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_EDS,
			},
		},
		"jwks with static dns": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{
					DiscoveryType: "STATIC",
				},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_STATIC,
			},
		},
		"jwks with original dst": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{
					DiscoveryType: "ORIGINAL_DST",
				},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_ORIGINAL_DST,
			},
		},
		"jwks with strict dns": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{
					DiscoveryType: "STRICT_DNS",
				},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_STRICT_DNS,
			},
		},
		"jwks with logical dns": {
			remoteJWKS: &structs.RemoteJWKS{
				JWKSCluster: &structs.JWKSCluster{
					DiscoveryType: "LOGICAL_DNS",
				},
			},
			expectedClusterType: &envoy_cluster_v3.Cluster_Type{
				Type: envoy_cluster_v3.Cluster_LOGICAL_DNS,
			},
		},
	}

	for name, tt := range tests {
		tt := tt
		t.Run(name, func(t *testing.T) {
			clusterType := makeJWKSDiscoveryClusterType(tt.remoteJWKS)

			require.Equal(t, tt.expectedClusterType, clusterType)
		})
	}
}

func TestParseJWTRemoteURL(t *testing.T) {
	tests := map[string]struct {
		uri string
@@ -1381,12 +1381,11 @@ func (s *ResourceGenerator) makeInboundListener(cfgSnap *proxycfg.ConfigSnapshot
	if err != nil {
		return nil, err
	}

-	filterOpts.httpAuthzFilters = []*envoy_http_v3.HttpFilter{rbacFilter}
+	filterOpts.httpAuthzFilters = []*envoy_http_v3.HttpFilter{}
+	if jwtFilter != nil {
+		filterOpts.httpAuthzFilters = append(filterOpts.httpAuthzFilters, jwtFilter)
+	}
+	filterOpts.httpAuthzFilters = append(filterOpts.httpAuthzFilters, rbacFilter)

	meshConfig := cfgSnap.MeshConfig()
	includeXFCC := meshConfig == nil || meshConfig.HTTP == nil || !meshConfig.HTTP.SanitizeXForwardedClientCert
@@ -19,5 +19,6 @@
    ]
  },
  "name": "jwks_cluster_okta",
- "type": "STRICT_DNS"
+ "connectTimeout": "5s",
+ "type": "STATIC"
}

@@ -19,5 +19,6 @@
    ]
  },
  "name": "jwks_cluster_okta",
- "type": "STRICT_DNS"
+ "connectTimeout": "5s",
+ "type": "STATIC"
}

@@ -19,5 +19,6 @@
    ]
  },
  "name": "jwks_cluster_okta",
- "type": "STRICT_DNS"
+ "connectTimeout": "5s",
+ "type": "STATIC"
}

@@ -19,5 +19,6 @@
    ]
  },
  "name": "jwks_cluster_okta",
- "type": "STRICT_DNS"
+ "connectTimeout": "5s",
+ "type": "STATIC"
}

@@ -24,9 +24,14 @@
      "typedConfig": {
        "@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
        "commonTlsContext": {
-         "validationContext": {}
+         "validationContext": {
+           "trustedCa": {
+             "filename": "mycert.crt"
+           }
+         }
        }
      }
    },
-   "type": "STRICT_DNS"
+   "connectTimeout": "5s",
+   "type": "STATIC"
}

@@ -24,9 +24,14 @@
      "typedConfig": {
        "@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
        "commonTlsContext": {
-         "validationContext": {}
+         "validationContext": {
+           "trustedCa": {
+             "filename": "mycert.crt"
+           }
+         }
        }
      }
    },
-   "type": "STRICT_DNS"
+   "connectTimeout": "5s",
+   "type": "STATIC"
}

@@ -24,9 +24,14 @@
      "typedConfig": {
        "@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
        "commonTlsContext": {
-         "validationContext": {}
+         "validationContext": {
+           "trustedCa": {
+             "filename": "mycert.crt"
+           }
+         }
        }
      }
    },
-   "type": "STRICT_DNS"
+   "connectTimeout": "5s",
+   "type": "STATIC"
}

@@ -24,9 +24,14 @@
      "typedConfig": {
        "@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
        "commonTlsContext": {
-         "validationContext": {}
+         "validationContext": {
+           "trustedCa": {
+             "filename": "mycert.crt"
+           }
+         }
        }
      }
    },
-   "type": "STRICT_DNS"
+   "connectTimeout": "5s",
+   "type": "STATIC"
}
@@ -7,6 +7,14 @@ import (
	"time"
)

+const (
+	DiscoveryTypeStrictDNS   ClusterDiscoveryType = "STRICT_DNS"
+	DiscoveryTypeStatic      ClusterDiscoveryType = "STATIC"
+	DiscoveryTypeLogicalDNS  ClusterDiscoveryType = "LOGICAL_DNS"
+	DiscoveryTypeEDS         ClusterDiscoveryType = "EDS"
+	DiscoveryTypeOriginalDST ClusterDiscoveryType = "ORIGINAL_DST"
+)
+
type JWTProviderConfigEntry struct {
	// Kind is the kind of configuration entry and must be "jwt-provider".
	Kind string `json:",omitempty"`

@@ -188,6 +196,71 @@ type RemoteJWKS struct {
	//
	// There is no retry by default.
	RetryPolicy *JWKSRetryPolicy `json:",omitempty" alias:"retry_policy"`

	// JWKSCluster defines how the specified Remote JWKS URI is to be fetched.
	JWKSCluster *JWKSCluster `json:",omitempty" alias:"jwks_cluster"`
}

type JWKSCluster struct {
	// DiscoveryType refers to the service discovery type to use for resolving the cluster.
	//
	// This defaults to STRICT_DNS.
	// Other options include STATIC, LOGICAL_DNS, EDS or ORIGINAL_DST.
	DiscoveryType ClusterDiscoveryType `json:",omitempty" alias:"discovery_type"`

	// TLSCertificates refers to the data containing certificate authority certificates to use
	// in verifying a presented peer certificate.
	// If not specified and a peer certificate is presented it will not be verified.
	//
	// Must be either CaCertificateProviderInstance or TrustedCA.
	TLSCertificates *JWKSTLSCertificate `json:",omitempty" alias:"tls_certificates"`

	// The timeout for new network connections to hosts in the cluster.
	// If not set, a default value of 5s will be used.
	ConnectTimeout time.Duration `json:",omitempty" alias:"connect_timeout"`
}

type ClusterDiscoveryType string

// JWKSTLSCertificate refers to the data containing certificate authority certificates to use
// in verifying a presented peer certificate.
// If not specified and a peer certificate is presented it will not be verified.
//
// Must be either CaCertificateProviderInstance or TrustedCA.
type JWKSTLSCertificate struct {
	// CaCertificateProviderInstance Certificate provider instance for fetching TLS certificates.
	CaCertificateProviderInstance *JWKSTLSCertProviderInstance `json:",omitempty" alias:"ca_certificate_provider_instance"`

	// TrustedCA defines TLS certificate data containing certificate authority certificates
	// to use in verifying a presented peer certificate.
	//
	// Exactly one of Filename, EnvironmentVariable, InlineString or InlineBytes must be specified.
	TrustedCA *JWKSTLSCertTrustedCA `json:",omitempty" alias:"trusted_ca"`
}

// JWKSTLSCertTrustedCA defines TLS certificate data containing certificate authority certificates
// to use in verifying a presented peer certificate.
//
// Exactly one of Filename, EnvironmentVariable, InlineString or InlineBytes must be specified.
type JWKSTLSCertTrustedCA struct {
	Filename            string `json:",omitempty" alias:"filename"`
	EnvironmentVariable string `json:",omitempty" alias:"environment_variable"`
	InlineString        string `json:",omitempty" alias:"inline_string"`
	InlineBytes         []byte `json:",omitempty" alias:"inline_bytes"`
}

type JWKSTLSCertProviderInstance struct {
	// InstanceName refers to the certificate provider instance name
	//
	// The default value is "default".
	InstanceName string `json:",omitempty" alias:"instance_name"`

	// CertificateName is used to specify certificate instances or types. For example, "ROOTCA" to specify
	// a root-certificate (validation context) or "example.com" to specify a certificate for a
	// particular domain.
	//
	// The default value is the empty string.
	CertificateName string `json:",omitempty" alias:"certificate_name"`
}

type JWKSRetryPolicy struct {
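The "exactly one of" constraints documented above (a single TrustedCA source; TrustedCA versus CaCertificateProviderInstance) boil down to counting non-empty fields. A hypothetical standalone sketch of the TrustedCA rule, not the actual config-entry validator:

```go
package main

import (
	"errors"
	"fmt"
)

// trustedCA mirrors the JWKSTLSCertTrustedCA fields described above.
type trustedCA struct {
	Filename            string
	EnvironmentVariable string
	InlineString        string
	InlineBytes         []byte
}

// validate enforces "exactly one of Filename, EnvironmentVariable,
// InlineString or InlineBytes must be specified" by counting the
// fields that are set.
func (c trustedCA) validate() error {
	set := 0
	if c.Filename != "" {
		set++
	}
	if c.EnvironmentVariable != "" {
		set++
	}
	if c.InlineString != "" {
		set++
	}
	if len(c.InlineBytes) > 0 {
		set++
	}
	if set != 1 {
		return errors.New("must specify exactly one of: Filename, EnvironmentVariable, InlineString or InlineBytes for JWKS' TrustedCA")
	}
	return nil
}

func main() {
	fmt.Println(trustedCA{Filename: "ca.pem"}.validate())                              // <nil>
	fmt.Println(trustedCA{Filename: "ca.pem", InlineString: "---"}.validate() != nil) // true
}
```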
@@ -4,6 +4,7 @@ package api

import (
	"testing"
+	"time"

	"github.com/hashicorp/consul/sdk/testutil"
	"github.com/stretchr/testify/require"

@@ -17,12 +18,23 @@ func TestAPI_ConfigEntries_JWTProvider(t *testing.T) {
	entries := c.ConfigEntries()

	testutil.RunStep(t, "set and get", func(t *testing.T) {
+		connectTimeout := time.Duration(5) * time.Second
		jwtProvider := &JWTProviderConfigEntry{
			Name: "okta",
			Kind: JWTProvider,
			JSONWebKeySet: &JSONWebKeySet{
-				Local: &LocalJWKS{
-					Filename: "test.txt",
+				Remote: &RemoteJWKS{
+					FetchAsynchronously: true,
+					URI:                 "https://example.com/.well-known/jwks.json",
+					JWKSCluster: &JWKSCluster{
+						DiscoveryType:  "STATIC",
+						ConnectTimeout: connectTimeout,
+						TLSCertificates: &JWKSTLSCertificate{
+							TrustedCA: &JWKSTLSCertTrustedCA{
+								Filename: "myfile.cert",
+							},
+						},
+					},
				},
			},
			Meta: map[string]string{
@@ -10,6 +10,7 @@ type Usage struct {

// ServiceUsage contains information about the number of services and service instances for a datacenter.
type ServiceUsage struct {
+	Nodes                   int
	Services                int
	ServiceInstances        int
	ConnectServiceInstances map[string]int
@@ -440,6 +440,18 @@ func (c *cmd) run(args []string) int {
		meta = map[string]string{structs.MetaWANFederationKey: "1"}
	}

+	// API gateways do not have a default listener or ready endpoint,
+	// so adding any check to the registration will fail
+	var check *api.AgentServiceCheck
+	if c.gatewayKind != api.ServiceKindAPIGateway {
+		check = &api.AgentServiceCheck{
+			Name:                           fmt.Sprintf("%s listening", c.gatewayKind),
+			TCP:                            ipaddr.FormatAddressPort(tcpCheckAddr, lanAddr.Port),
+			Interval:                       "10s",
+			DeregisterCriticalServiceAfter: c.deregAfterCritical,
+		}
+	}
+
	svc := api.AgentServiceRegistration{
		Kind: c.gatewayKind,
		Name: c.gatewaySvcName,

@@ -449,12 +461,7 @@ func (c *cmd) run(args []string) int {
		Meta:            meta,
		TaggedAddresses: taggedAddrs,
		Proxy:           proxyConf,
-		Check: &api.AgentServiceCheck{
-			Name:                           fmt.Sprintf("%s listening", c.gatewayKind),
-			TCP:                            ipaddr.FormatAddressPort(tcpCheckAddr, lanAddr.Port),
-			Interval:                       "10s",
-			DeregisterCriticalServiceAfter: c.deregAfterCritical,
-		},
+		Check: check,
	}

	if err := c.client.Agent().ServiceRegister(&svc); err != nil {
@@ -99,6 +99,14 @@ func (c *cmd) Run(args []string) int {
			return 1
		}
		c.UI.Output(billableOutput + "\n")

+		c.UI.Output("\nNodes")
+		nodesOutput, err := formatNodesCounts(usage.Usage)
+		if err != nil {
+			c.UI.Error(err.Error())
+			return 1
+		}
+		c.UI.Output(nodesOutput + "\n\n")
	}

	// Output Connect service counts

@@ -115,6 +123,34 @@ func (c *cmd) Run(args []string) int {
	return 0
}

func formatNodesCounts(usageStats map[string]api.ServiceUsage) (string, error) {
	var output bytes.Buffer
	tw := tabwriter.NewWriter(&output, 0, 2, 6, ' ', 0)

	nodesTotal := 0

	fmt.Fprintf(tw, "Datacenter\t")
	fmt.Fprintf(tw, "Count\t")
	fmt.Fprint(tw, "\t\n")

	for dc, usage := range usageStats {
		nodesTotal += usage.Nodes
		fmt.Fprintf(tw, "%s\t%d\n", dc, usage.Nodes)
	}

	fmt.Fprint(tw, "\t\n")
	fmt.Fprintf(tw, "Total")
	fmt.Fprintf(tw, "\t%d", nodesTotal)

	if err := tw.Flush(); err != nil {
		return "", fmt.Errorf("Error flushing tabwriter: %s", err)
	}
	return strings.TrimSpace(output.String()), nil
}

func formatServiceCounts(usageStats map[string]api.ServiceUsage, billable, showDatacenter bool) (string, error) {
	var output bytes.Buffer
	tw := tabwriter.NewWriter(&output, 0, 2, 6, ' ', 0)
@@ -117,3 +117,54 @@ Total 45`,
		})
	}
}

func TestUsageInstances_formatNodesCounts(t *testing.T) {
	usageBasic := map[string]api.ServiceUsage{
		"dc1": {
			Nodes: 10,
		},
	}

	usageMultiDC := map[string]api.ServiceUsage{
		"dc1": {
			Nodes: 10,
		},
		"dc2": {
			Nodes: 11,
		},
	}

	cases := []struct {
		name          string
		usageStats    map[string]api.ServiceUsage
		expectedNodes string
	}{
		{
			name:       "basic",
			usageStats: usageBasic,
			expectedNodes: `
Datacenter      Count
dc1             10

Total           10`,
		},
		{
			name:       "multi-datacenter",
			usageStats: usageMultiDC,
			expectedNodes: `
Datacenter      Count
dc1             10
dc2             11

Total           21`,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			nodesOutput, err := formatNodesCounts(tc.usageStats)
			require.NoError(t, err)
			require.Equal(t, strings.TrimSpace(tc.expectedNodes), nodesOutput)
		})
	}
}
@@ -1082,6 +1082,30 @@ func JSONWebKeySetFromStructs(t *structs.JSONWebKeySet, s *JSONWebKeySet) {
		s.Remote = &x
	}
}
func JWKSClusterToStructs(s *JWKSCluster, t *structs.JWKSCluster) {
	if s == nil {
		return
	}
	t.DiscoveryType = structs.ClusterDiscoveryType(s.DiscoveryType)
	if s.TLSCertificates != nil {
		var x structs.JWKSTLSCertificate
		JWKSTLSCertificateToStructs(s.TLSCertificates, &x)
		t.TLSCertificates = &x
	}
	t.ConnectTimeout = structs.DurationFromProto(s.ConnectTimeout)
}
func JWKSClusterFromStructs(t *structs.JWKSCluster, s *JWKSCluster) {
	if s == nil {
		return
	}
	s.DiscoveryType = string(t.DiscoveryType)
	if t.TLSCertificates != nil {
		var x JWKSTLSCertificate
		JWKSTLSCertificateFromStructs(t.TLSCertificates, &x)
		s.TLSCertificates = &x
	}
	s.ConnectTimeout = structs.DurationToProto(t.ConnectTimeout)
}
func JWKSRetryPolicyToStructs(s *JWKSRetryPolicy, t *structs.JWKSRetryPolicy) {
	if s == nil {
		return

@@ -1104,6 +1128,68 @@ func JWKSRetryPolicyFromStructs(t *structs.JWKSRetryPolicy, s *JWKSRetryPolicy)
		s.RetryPolicyBackOff = &x
	}
}
func JWKSTLSCertProviderInstanceToStructs(s *JWKSTLSCertProviderInstance, t *structs.JWKSTLSCertProviderInstance) {
	if s == nil {
		return
	}
	t.InstanceName = s.InstanceName
	t.CertificateName = s.CertificateName
}
func JWKSTLSCertProviderInstanceFromStructs(t *structs.JWKSTLSCertProviderInstance, s *JWKSTLSCertProviderInstance) {
	if s == nil {
		return
	}
	s.InstanceName = t.InstanceName
	s.CertificateName = t.CertificateName
}
func JWKSTLSCertTrustedCAToStructs(s *JWKSTLSCertTrustedCA, t *structs.JWKSTLSCertTrustedCA) {
	if s == nil {
		return
	}
	t.Filename = s.Filename
	t.EnvironmentVariable = s.EnvironmentVariable
	t.InlineString = s.InlineString
	t.InlineBytes = s.InlineBytes
}
func JWKSTLSCertTrustedCAFromStructs(t *structs.JWKSTLSCertTrustedCA, s *JWKSTLSCertTrustedCA) {
	if s == nil {
		return
	}
	s.Filename = t.Filename
	s.EnvironmentVariable = t.EnvironmentVariable
	s.InlineString = t.InlineString
	s.InlineBytes = t.InlineBytes
}
func JWKSTLSCertificateToStructs(s *JWKSTLSCertificate, t *structs.JWKSTLSCertificate) {
	if s == nil {
		return
	}
	if s.CaCertificateProviderInstance != nil {
		var x structs.JWKSTLSCertProviderInstance
		JWKSTLSCertProviderInstanceToStructs(s.CaCertificateProviderInstance, &x)
		t.CaCertificateProviderInstance = &x
	}
	if s.TrustedCA != nil {
		var x structs.JWKSTLSCertTrustedCA
		JWKSTLSCertTrustedCAToStructs(s.TrustedCA, &x)
		t.TrustedCA = &x
	}
}
func JWKSTLSCertificateFromStructs(t *structs.JWKSTLSCertificate, s *JWKSTLSCertificate) {
	if s == nil {
		return
	}
	if t.CaCertificateProviderInstance != nil {
		var x JWKSTLSCertProviderInstance
		JWKSTLSCertProviderInstanceFromStructs(t.CaCertificateProviderInstance, &x)
		s.CaCertificateProviderInstance = &x
	}
	if t.TrustedCA != nil {
		var x JWKSTLSCertTrustedCA
		JWKSTLSCertTrustedCAFromStructs(t.TrustedCA, &x)
		s.TrustedCA = &x
	}
}
func JWTCacheConfigToStructs(s *JWTCacheConfig, t *structs.JWTCacheConfig) {
	if s == nil {
		return

@@ -1521,6 +1607,11 @@ func RemoteJWKSToStructs(s *RemoteJWKS, t *structs.RemoteJWKS) {
		JWKSRetryPolicyToStructs(s.RetryPolicy, &x)
		t.RetryPolicy = &x
	}
+	if s.JWKSCluster != nil {
+		var x structs.JWKSCluster
+		JWKSClusterToStructs(s.JWKSCluster, &x)
+		t.JWKSCluster = &x
+	}
}
func RemoteJWKSFromStructs(t *structs.RemoteJWKS, s *RemoteJWKS) {
	if s == nil {

@@ -1535,6 +1626,11 @@ func RemoteJWKSFromStructs(t *structs.RemoteJWKS, s *RemoteJWKS)
		JWKSRetryPolicyFromStructs(t.RetryPolicy, &x)
		s.RetryPolicy = &x
	}
+	if t.JWKSCluster != nil {
+		var x JWKSCluster
+		JWKSClusterFromStructs(t.JWKSCluster, &x)
+		s.JWKSCluster = &x
+	}
}
func ResourceReferenceToStructs(s *ResourceReference, t *structs.ResourceReference) {
	if s == nil {
@@ -727,6 +727,46 @@ func (msg *RemoteJWKS) UnmarshalBinary(b []byte) error {
	return proto.Unmarshal(b, msg)
}

// MarshalBinary implements encoding.BinaryMarshaler
func (msg *JWKSCluster) MarshalBinary() ([]byte, error) {
	return proto.Marshal(msg)
}

// UnmarshalBinary implements encoding.BinaryUnmarshaler
func (msg *JWKSCluster) UnmarshalBinary(b []byte) error {
	return proto.Unmarshal(b, msg)
}

// MarshalBinary implements encoding.BinaryMarshaler
func (msg *JWKSTLSCertificate) MarshalBinary() ([]byte, error) {
	return proto.Marshal(msg)
}

// UnmarshalBinary implements encoding.BinaryUnmarshaler
func (msg *JWKSTLSCertificate) UnmarshalBinary(b []byte) error {
	return proto.Unmarshal(b, msg)
}

// MarshalBinary implements encoding.BinaryMarshaler
func (msg *JWKSTLSCertProviderInstance) MarshalBinary() ([]byte, error) {
	return proto.Marshal(msg)
}

// UnmarshalBinary implements encoding.BinaryUnmarshaler
func (msg *JWKSTLSCertProviderInstance) UnmarshalBinary(b []byte) error {
	return proto.Unmarshal(b, msg)
}

// MarshalBinary implements encoding.BinaryMarshaler
func (msg *JWKSTLSCertTrustedCA) MarshalBinary() ([]byte, error) {
	return proto.Marshal(msg)
}

// UnmarshalBinary implements encoding.BinaryUnmarshaler
func (msg *JWKSTLSCertTrustedCA) UnmarshalBinary(b []byte) error {
	return proto.Unmarshal(b, msg)
}

// MarshalBinary implements encoding.BinaryMarshaler
func (msg *JWKSRetryPolicy) MarshalBinary() ([]byte, error) {
	return proto.Marshal(msg)
File diff suppressed because it is too large
@@ -1021,6 +1021,51 @@ message RemoteJWKS {
  google.protobuf.Duration CacheDuration = 3;
  bool FetchAsynchronously = 4;
  JWKSRetryPolicy RetryPolicy = 5;
+  JWKSCluster JWKSCluster = 6;
}

// mog annotation:
//
// target=github.com/hashicorp/consul/agent/structs.JWKSCluster
// output=config_entry.gen.go
// name=Structs
message JWKSCluster {
  string DiscoveryType = 1;
  JWKSTLSCertificate TLSCertificates = 2;
  // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto
  google.protobuf.Duration ConnectTimeout = 3;
}

// mog annotation:
//
// target=github.com/hashicorp/consul/agent/structs.JWKSTLSCertificate
// output=config_entry.gen.go
// name=Structs
message JWKSTLSCertificate {
  JWKSTLSCertProviderInstance CaCertificateProviderInstance = 1;
  JWKSTLSCertTrustedCA TrustedCA = 2;
}

// mog annotation:
//
// target=github.com/hashicorp/consul/agent/structs.JWKSTLSCertProviderInstance
// output=config_entry.gen.go
// name=Structs
message JWKSTLSCertProviderInstance {
  string InstanceName = 1;
  string CertificateName = 2;
}

// mog annotation:
//
// target=github.com/hashicorp/consul/agent/structs.JWKSTLSCertTrustedCA
// output=config_entry.gen.go
// name=Structs
message JWKSTLSCertTrustedCA {
  string Filename = 1;
  string EnvironmentVariable = 2;
  string InlineString = 3;
  bytes InlineBytes = 4;
}

// mog annotation:
@@ -7,6 +7,7 @@ require (
	github.com/avast/retry-go v3.0.0+incompatible
	github.com/docker/docker v23.0.6+incompatible
	github.com/docker/go-connections v0.4.0
+	github.com/go-jose/go-jose/v3 v3.0.0
	github.com/hashicorp/consul v0.0.0-00010101000000-000000000000
	github.com/hashicorp/consul/api v1.22.0-rc1
	github.com/hashicorp/consul/envoyextensions v0.3.0-rc1

@@ -83,6 +84,7 @@ require (
	github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect
	github.com/sirupsen/logrus v1.9.0 // indirect
	github.com/stretchr/objx v0.5.0 // indirect
+	golang.org/x/crypto v0.1.0 // indirect
	golang.org/x/exp v0.0.0-20230321023759-10a507213a29 // indirect
	golang.org/x/net v0.10.0 // indirect
	golang.org/x/sync v0.2.0 // indirect
@ -79,6 +79,8 @@ github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL
|
|||
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
|
||||
github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
|
||||
github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg=
|
||||
github.com/go-jose/go-jose/v3 v3.0.0 h1:s6rrhirfEP/CGIoc6p+PZAeogN2SxKav6Wp7+dyMWVo=
|
||||
github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
|
@ -101,6 +103,7 @@ github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9
|
|||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
|
@@ -286,6 +289,7 @@ github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpE
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
@@ -303,10 +307,12 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20230321023759-10a507213a29 h1:ooxPy7fPvB4kwsA2h+iBNHkAbp/4JxTSwCmvdjEYmug=
golang.org/x/exp v0.0.0-20230321023759-10a507213a29/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
@@ -13,12 +13,11 @@ import (
	"time"

	"github.com/hashicorp/go-cleanhttp"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/sdk/testutil/retry"
	"github.com/stretchr/testify/assert"

	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
)
@@ -54,7 +53,7 @@ func CatalogServiceHasInstanceCount(t *testing.T, c *api.Client, svc string, cou
	})
}

// CatalogServiceExists verifies the node name exists in the Consul catalog
// CatalogNodeExists verifies the node name exists in the Consul catalog
func CatalogNodeExists(t *testing.T, c *api.Client, nodeName string) {
	retry.Run(t, func(r *retry.R) {
		node, _, err := c.Catalog().Node(nodeName, nil)
@@ -67,26 +66,55 @@ func CatalogNodeExists(t *testing.T, c *api.Client, nodeName string) {
	})
}

func HTTPServiceEchoes(t *testing.T, ip string, port int, path string) {
	doHTTPServiceEchoes(t, ip, port, path, nil)
// CatalogServiceIsHealthy verifies the service name exists and all instances pass healthchecks
func CatalogServiceIsHealthy(t *testing.T, c *api.Client, svc string, opts *api.QueryOptions) {
	CatalogServiceExists(t, c, svc, opts)

	retry.Run(t, func(r *retry.R) {
		services, _, err := c.Health().Service(svc, "", false, opts)
		if err != nil {
			r.Fatal("error reading service health data")
		}
		if len(services) == 0 {
			r.Fatal("did not find catalog entry for ", svc)
		}

		for _, svc := range services {
			for _, check := range svc.Checks {
				if check.Status != api.HealthPassing {
					r.Fatal("at least one check is not PASSING for service", svc.Service.Service)
				}
			}
		}

	})
}

func HTTPServiceEchoes(t *testing.T, ip string, port int, path string) {
	doHTTPServiceEchoes(t, ip, port, path, nil, nil)
}

func HTTPServiceEchoesWithHeaders(t *testing.T, ip string, port int, path string, headers map[string]string) {
	doHTTPServiceEchoes(t, ip, port, path, headers, nil)
}

func HTTPServiceEchoesWithClient(t *testing.T, client *http.Client, addr string, path string) {
	doHTTPServiceEchoesWithClient(t, client, addr, path, nil)
	doHTTPServiceEchoesWithClient(t, client, addr, path, nil, nil)
}

func HTTPServiceEchoesResHeader(t *testing.T, ip string, port int, path string, expectedResHeader map[string]string) {
	doHTTPServiceEchoes(t, ip, port, path, expectedResHeader)
	doHTTPServiceEchoes(t, ip, port, path, nil, expectedResHeader)
}
func HTTPServiceEchoesResHeaderWithClient(t *testing.T, client *http.Client, addr string, path string, expectedResHeader map[string]string) {
	doHTTPServiceEchoesWithClient(t, client, addr, path, expectedResHeader)
	doHTTPServiceEchoesWithClient(t, client, addr, path, nil, expectedResHeader)
}

// HTTPServiceEchoes verifies that a post to the given ip/port combination returns the data
// in the response body. Optional path can be provided to differentiate requests.
func doHTTPServiceEchoes(t *testing.T, ip string, port int, path string, expectedResHeader map[string]string) {
func doHTTPServiceEchoes(t *testing.T, ip string, port int, path string, requestHeaders map[string]string, expectedResHeader map[string]string) {
	client := cleanhttp.DefaultClient()
	addr := fmt.Sprintf("%s:%d", ip, port)
	doHTTPServiceEchoesWithClient(t, client, addr, path, expectedResHeader)
	doHTTPServiceEchoesWithClient(t, client, addr, path, requestHeaders, expectedResHeader)
}

func doHTTPServiceEchoesWithClient(
@@ -94,6 +122,7 @@ func doHTTPServiceEchoesWithClient(
	client *http.Client,
	addr string,
	path string,
	requestHeaders map[string]string,
	expectedResHeader map[string]string,
) {
	const phrase = "hello"
@@ -110,8 +139,20 @@ func doHTTPServiceEchoesWithClient(

	retry.RunWith(failer(), t, func(r *retry.R) {
		t.Logf("making call to %s", url)

		reader := strings.NewReader(phrase)
		res, err := client.Post(url, "text/plain", reader)
		req, err := http.NewRequest("POST", url, reader)
		require.NoError(t, err, "could not construct request")

		for k, v := range requestHeaders {
			req.Header.Add(k, v)

			if k == "Host" {
				req.Host = v
			}
		}

		res, err := client.Do(req)
		if err != nil {
			r.Fatal("could not make call to service ", url)
		}
@@ -46,11 +46,11 @@ type Agent interface {
type Config struct {
	// NodeName is set for the consul agent name and container name
	// Equivalent to the -node command-line flag.
	// If empty, a randam name will be generated
	// If empty, a random name will be generated
	NodeName string
	// NodeID is used to configure node_id in agent config file
	// Equivalent to the -node-id command-line flag.
	// If empty, a randam name will be generated
	// If empty, a random name will be generated
	NodeID string

	// ExternalDataDir is data directory to copy consul data from, if set.
@@ -83,10 +83,7 @@ func (c *Config) DockerImage() string {
func (c Config) Clone() Config {
	c2 := c
	if c.Cmd != nil {
		c2.Cmd = make([]string, len(c.Cmd))
		for i, v := range c.Cmd {
			c2.Cmd[i] = v
		}
		copy(c2.Cmd, c.Cmd)
	}
	return c2
}
@@ -187,7 +187,11 @@ type ClusterConfig struct {
	BuildOpts *libcluster.BuildOptions
	Cmd string
	LogConsumer *TestLogConsumer
	Ports []int

	// Exposed Ports are available on the cluster's pause container for the purposes
	// of adding external communication to the cluster. An example would be a listener
	// on a gateway.
	ExposedPorts []int
}

// NewCluster creates a cluster with peering enabled. It also creates
@@ -234,8 +238,8 @@ func NewCluster(
		serverConf.Cmd = append(serverConf.Cmd, config.Cmd)
	}

	if config.Ports != nil {
		cluster, err = libcluster.New(t, []libcluster.Config{*serverConf}, config.Ports...)
	if config.ExposedPorts != nil {
		cluster, err = libcluster.New(t, []libcluster.Config{*serverConf}, config.ExposedPorts...)
	} else {
		cluster, err = libcluster.NewN(t, *serverConf, config.NumServers)
	}
@@ -4,6 +4,15 @@
package utils

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"github.com/go-jose/go-jose/v3"
	"github.com/go-jose/go-jose/v3/jwt"
	"github.com/hashicorp/consul/api"
)
@@ -18,3 +27,98 @@ func ApplyDefaultProxySettings(c *api.Client) (bool, error) {
	ok, _, err := c.ConfigEntries().Set(req, &api.WriteOptions{})
	return ok, err
}

// GenerateKey generates a private and public key pair for signing
// JWTs.
func GenerateKey() (pub, priv string, err error) {
	privateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	if err != nil {
		return "", "", fmt.Errorf("error generating private key: %w", err)
	}

	{
		derBytes, err := x509.MarshalECPrivateKey(privateKey)
		if err != nil {
			return "", "", fmt.Errorf("error marshaling private key: %w", err)
		}
		priv = string(pem.EncodeToMemory(&pem.Block{
			Type:  "EC PRIVATE KEY",
			Bytes: derBytes,
		}))
	}
	{
		derBytes, err := x509.MarshalPKIXPublicKey(privateKey.Public())
		if err != nil {
			return "", "", fmt.Errorf("error marshaling public key: %w", err)
		}
		pub = string(pem.EncodeToMemory(&pem.Block{
			Type:  "PUBLIC KEY",
			Bytes: derBytes,
		}))
	}

	return pub, priv, nil
}

// SignJWT will bundle the provided claims into a signed JWT. The provided key
// is assumed to be ECDSA.
//
// If no private key is provided, it will generate a private key. These can
// be retrieved via the SigningKeys() method.
func SignJWT(privKey string, claims jwt.Claims, privateClaims interface{}) (string, error) {
	var err error
	if privKey == "" {
		_, privKey, err = GenerateKey()
		if err != nil {
			return "", err
		}
	}
	var key *ecdsa.PrivateKey
	block, _ := pem.Decode([]byte(privKey))
	if block != nil {
		key, err = x509.ParseECPrivateKey(block.Bytes)
		if err != nil {
			return "", err
		}
	}

	sig, err := jose.NewSigner(
		jose.SigningKey{Algorithm: jose.ES256, Key: key},
		(&jose.SignerOptions{}).WithType("JWT"),
	)
	if err != nil {
		return "", err
	}

	raw, err := jwt.Signed(sig).
		Claims(claims).
		Claims(privateClaims).
		CompactSerialize()
	if err != nil {
		return "", err
	}

	return raw, nil
}

// NewJWKS converts a pem-encoded public key into JWKS data suitable for a
// verification endpoint response
func NewJWKS(pubKey string) (*jose.JSONWebKeySet, error) {
	block, _ := pem.Decode([]byte(pubKey))
	if block == nil || block.Type != "PUBLIC KEY" {
		return nil, fmt.Errorf("unable to decode public key")
	}

	pub, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	return &jose.JSONWebKeySet{
		Keys: []jose.JSONWebKey{
			{
				Key: pub,
			},
		},
	}, nil
}
@@ -7,11 +7,8 @@ import (
	"fmt"
	"testing"

	"github.com/stretchr/testify/require"

	libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
	libcluster "github.com/hashicorp/consul/test/integration/consul-container/libs/cluster"
	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
	"github.com/hashicorp/consul/test/integration/consul-container/libs/topology"
)
@@ -40,7 +37,7 @@ func TestBasicConnectService(t *testing.T) {
		},
	})

	clientService := createServices(t, cluster)
	_, clientService := topology.CreateServices(t, cluster)
	_, port := clientService.GetAddr()
	_, adminPort := clientService.GetAdminAddr()
@@ -51,30 +48,3 @@ func TestBasicConnectService(t *testing.T) {
	libassert.HTTPServiceEchoes(t, "localhost", port, "")
	libassert.AssertFortioName(t, fmt.Sprintf("http://localhost:%d", port), "static-server", "")
}

func createServices(t *testing.T, cluster *libcluster.Cluster) libservice.Service {
	node := cluster.Agents[0]
	client := node.GetClient()
	// Create a service and proxy instance
	serviceOpts := &libservice.ServiceOpts{
		Name:     libservice.StaticServerServiceName,
		ID:       "static-server",
		HTTPPort: 8080,
		GRPCPort: 8079,
	}

	// Create a service and proxy instance
	_, _, err := libservice.CreateAndRegisterStaticServerAndSidecar(node, serviceOpts)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-server-sidecar-proxy", nil)
	libassert.CatalogServiceExists(t, client, libservice.StaticServerServiceName, nil)

	// Create a client proxy instance with the server as an upstream
	clientConnectProxy, err := libservice.CreateAndRegisterStaticClientSidecar(node, "", false, false)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-client-sidecar-proxy", nil)

	return clientConnectProxy
}
@@ -0,0 +1,170 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package envoyextensions

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/sdk/testutil/retry"
	libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
	libcluster "github.com/hashicorp/consul/test/integration/consul-container/libs/cluster"
	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
	"github.com/hashicorp/consul/test/integration/consul-container/libs/topology"
	"github.com/hashicorp/go-cleanhttp"
)

// TestExtAuthzLocal Summary
// This test makes sure two services in the same datacenter have connectivity.
// A simulated client (a direct HTTP call) talks to its upstream proxy through the mesh.
// The upstream (static-server) is configured with a `builtin/ext-authz` extension that
// calls an OPA external authorization service to authorize incoming HTTP requests.
// The external authorization service is deployed as a container on the local network.
//
// Steps:
// - Create a single agent cluster.
// - Create the example static-server and sidecar containers, then register them both with Consul
// - Create an example static-client sidecar, then register both the service and sidecar with Consul
// - Create an OPA external authorization container on the local network, this doesn't need to be registered with Consul.
// - Configure the static-server service with a `builtin/ext-authz` EnvoyExtension targeting the OPA ext-authz service.
// - Make sure a call to the client sidecar local bind port returns the expected response from the upstream static-server:
//   - A call to `/allow` returns 200 OK.
//   - A call to any other endpoint returns 403 Forbidden.
func TestExtAuthzLocal(t *testing.T) {
	t.Parallel()

	cluster, _, _ := topology.NewCluster(t, &topology.ClusterConfig{
		NumServers:                1,
		NumClients:                1,
		ApplyDefaultProxySettings: true,
		BuildOpts: &libcluster.BuildOptions{
			Datacenter:             "dc1",
			InjectAutoEncryption:   true,
			InjectGossipEncryption: true,
		},
	})

	createLocalAuthzService(t, cluster)

	clientService := createServices(t, cluster)
	_, port := clientService.GetAddr()
	_, adminPort := clientService.GetAdminAddr()

	libassert.AssertUpstreamEndpointStatus(t, adminPort, "static-server.default", "HEALTHY", 1)
	libassert.GetEnvoyListenerTCPFilters(t, adminPort)

	libassert.AssertContainerState(t, clientService, "running")
	libassert.AssertFortioName(t, fmt.Sprintf("http://localhost:%d", port), "static-server", "")

	// Wire up the ext-authz envoy extension for the static-server
	consul := cluster.APIClient(0)
	defaults := api.ServiceConfigEntry{
		Kind:     api.ServiceDefaults,
		Name:     "static-server",
		Protocol: "http",
		EnvoyExtensions: []api.EnvoyExtension{{
			Name: "builtin/ext-authz",
			Arguments: map[string]any{
				"Config": map[string]any{
					"GrpcService": map[string]any{
						"Target": map[string]any{"URI": "127.0.0.1:9191"},
					},
				},
			},
		}},
	}
	consul.ConfigEntries().Set(&defaults, nil)

	// Make requests to the static-server. We expect that all requests are rejected with 403 Forbidden
	// unless they are to the /allow path.
	baseURL := fmt.Sprintf("http://localhost:%d", port)
	doRequest(t, baseURL, http.StatusForbidden)
	doRequest(t, baseURL+"/allow", http.StatusOK)
}

func createServices(t *testing.T, cluster *libcluster.Cluster) libservice.Service {
	node := cluster.Agents[0]
	client := node.GetClient()
	// Create a service and proxy instance
	serviceOpts := &libservice.ServiceOpts{
		Name:     libservice.StaticServerServiceName,
		ID:       "static-server",
		HTTPPort: 8080,
		GRPCPort: 8079,
	}

	// Create a service and proxy instance
	_, _, err := libservice.CreateAndRegisterStaticServerAndSidecar(node, serviceOpts)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-server-sidecar-proxy", nil)
	libassert.CatalogServiceExists(t, client, libservice.StaticServerServiceName, nil)

	// Create a client proxy instance with the server as an upstream
	clientConnectProxy, err := libservice.CreateAndRegisterStaticClientSidecar(node, "", false, false)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-client-sidecar-proxy", nil)

	return clientConnectProxy
}

func createLocalAuthzService(t *testing.T, cluster *libcluster.Cluster) {
	node := cluster.Agents[0]

	cwd, err := os.Getwd()
	if err != nil {
		t.Fatal(err)
	}

	req := testcontainers.ContainerRequest{
		Image:      "openpolicyagent/opa:0.53.0-envoy-3",
		AutoRemove: true,
		Name:       "ext-authz",
		Env:        make(map[string]string),
		Cmd: []string{
			"run",
			"--server",
			"--addr=localhost:8181",
			"--diagnostic-addr=0.0.0.0:8282",
			"--set=plugins.envoy_ext_authz_grpc.addr=:9191",
			"--set=plugins.envoy_ext_authz_grpc.path=envoy/authz/allow",
			"--set=decision_logs.console=true",
			"--set=status.console=true",
			"--ignore=.*",
			"/testdata/policies/policy.rego",
		},
		Mounts: []testcontainers.ContainerMount{{
			Source: testcontainers.DockerBindMountSource{
				HostPath: fmt.Sprintf("%s/testdata", cwd),
			},
			Target:   "/testdata",
			ReadOnly: true,
		}},
	}

	ctx := context.Background()

	exposedPorts := []string{}
	_, err = libcluster.LaunchContainerOnNode(ctx, node, req, exposedPorts)
	if err != nil {
		t.Fatal(err)
	}
}

func doRequest(t *testing.T, url string, expStatus int) {
	retry.RunWith(&retry.Timer{Timeout: 5 * time.Second, Wait: time.Second}, t, func(r *retry.R) {
		resp, err := cleanhttp.DefaultClient().Get(url)
		require.NoError(r, err)
		require.Equal(r, expStatus, resp.StatusCode)
	})
}
test/integration/consul-container/test/envoy_extensions/testdata/policies/policy.rego
@@ -0,0 +1,12 @@
package envoy.authz

import future.keywords

import input.attributes.request.http as http_request

default allow := false

allow if {
	http_request.method == "GET"
	glob.match("/allow", ["/"], http_request.path)
}
@@ -12,11 +12,10 @@ import (
	"testing"
	"time"

	"github.com/hashicorp/go-cleanhttp"
	"github.com/stretchr/testify/require"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/go-cleanhttp"

	libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
	libcluster "github.com/hashicorp/consul/test/integration/consul-container/libs/cluster"
	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
@@ -47,7 +46,7 @@ func TestAPIGatewayCreate(t *testing.T) {
			InjectGossipEncryption: true,
			AllowHTTPAnyway:        true,
		},
		Ports: []int{
		ExposedPorts: []int{
			listenerPortOne,
			serviceHTTPPort,
			serviceGRPCPort,
@@ -59,6 +58,21 @@ func TestAPIGatewayCreate(t *testing.T) {

	namespace := getOrCreateNamespace(t, client)

	// Create a gateway
	// We intentionally do this before creating the config entries
	gatewayService, err := libservice.NewGatewayService(context.Background(), libservice.GatewayConfig{
		Kind:      "api",
		Namespace: namespace,
		Name:      gatewayName,
	}, cluster.Agents[0], listenerPortOne)
	require.NoError(t, err)

	// We check this is healthy here because in the case of bringing up a new kube cluster,
	// it is not possible to create the config entry in advance.
	// The health checks must pass so the pod can start up.
	// For API gateways, this should always pass, because there is no default listener for health in Envoy
	libassert.CatalogServiceIsHealthy(t, client, gatewayName, &api.QueryOptions{Namespace: namespace})

	// add api gateway config
	apiGateway := &api.APIGatewayConfigEntry{
		Kind: api.APIGateway,
@@ -75,7 +89,7 @@ func TestAPIGatewayCreate(t *testing.T) {

	require.NoError(t, cluster.ConfigEntryWrite(apiGateway))

	_, _, err := libservice.CreateAndRegisterStaticServerAndSidecar(cluster.Agents[0], &libservice.ServiceOpts{
	_, _, err = libservice.CreateAndRegisterStaticServerAndSidecar(cluster.Agents[0], &libservice.ServiceOpts{
		ID:        serviceName,
		Name:      serviceName,
		Namespace: namespace,
@@ -105,14 +119,6 @@ func TestAPIGatewayCreate(t *testing.T) {

	require.NoError(t, cluster.ConfigEntryWrite(tcpRoute))

	// Create a gateway
	gatewayService, err := libservice.NewGatewayService(context.Background(), libservice.GatewayConfig{
		Kind:      "api",
		Namespace: namespace,
		Name:      gatewayName,
	}, cluster.Agents[0], listenerPortOne)
	require.NoError(t, err)

	// make sure the gateway/route come online
	// make sure config entries have been properly created
	checkGatewayConfigEntry(t, client, gatewayName, &api.QueryOptions{Namespace: namespace})
@@ -70,7 +70,7 @@ func TestHTTPRouteFlattening(t *testing.T) {
			InjectGossipEncryption: true,
			AllowHTTPAnyway:        true,
		},
		Ports: []int{
		ExposedPorts: []int{
			listenerPort,
			serviceOneHTTPPort,
			serviceOneGRPCPort,
@@ -298,7 +298,7 @@ func TestHTTPRoutePathRewrite(t *testing.T) {
			InjectGossipEncryption: true,
			AllowHTTPAnyway:        true,
		},
		Ports: []int{
		ExposedPorts: []int{
			listenerPort,
			fooHTTPPort,
			fooGRPCPort,
@@ -525,7 +525,7 @@ func TestHTTPRouteParentRefChange(t *testing.T) {
			InjectGossipEncryption: true,
			AllowHTTPAnyway:        true,
		},
		Ports: []int{
		ExposedPorts: []int{
			listenerOnePort,
			listenerTwoPort,
			serviceHTTPPort,
@@ -0,0 +1,129 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package gateways

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/docker/go-connections/nat"
	"github.com/stretchr/testify/require"

	"github.com/hashicorp/consul/api"
	libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
	libcluster "github.com/hashicorp/consul/test/integration/consul-container/libs/cluster"
	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
	"github.com/hashicorp/consul/test/integration/consul-container/libs/topology"
)

// TestIngressGateway Summary
// This test makes sure a cluster service can be reached via an ingress gateway.
//
// Steps:
// - Create a cluster (1 server and 1 client).
// - Create the example static-server and sidecar containers, then register them both with Consul
// - Create an ingress gateway and register it with Consul on the client agent
// - Create a config entry that binds static-server to a new listener on the ingress gateway
// - Verify that static-service is accessible through the ingress gateway port
func TestIngressGateway(t *testing.T) {
	t.Parallel()

	// Ingress gateways must have a listener other than 8443, which is used for health checks.
	// 9999 is already exposed from consul agents
	gatewayListenerPort := 9999

	cluster, _, _ := topology.NewCluster(t, &topology.ClusterConfig{
		NumServers:                1,
		NumClients:                1,
		ApplyDefaultProxySettings: true,
		BuildOpts: &libcluster.BuildOptions{
			Datacenter:             "dc1",
			InjectAutoEncryption:   true,
			InjectGossipEncryption: true,
			// TODO(rb): fix the test to not need the service/envoy stack to use :8500
			AllowHTTPAnyway: true,
		},
	})
	apiClient := cluster.APIClient(0)
	clientNode := cluster.Clients()[0]

	// Set up the "static-server" backend
	serverService, _ := topology.CreateServices(t, cluster)

	// Create the ingress gateway service
	// We expose this on the client node, which already has port 9999 exposed as part of its pause "pod"
	gwCfg := libservice.GatewayConfig{
		Name: api.IngressGateway,
		Kind: "ingress",
	}
	ingressService, err := libservice.NewGatewayService(context.Background(), gwCfg, clientNode)
	require.NoError(t, err)

	// this is deliberate
	// internally, ingress gateways have a 15s timeout before the /ready endpoint is available,
	// then we need to wait for the health check to re-execute and propagate.
	time.Sleep(45 * time.Second)

	// We check this is healthy here because in the case of bringing up a new kube cluster,
	// it is not possible to create the config entry in advance.
	// The health checks must pass so the pod can start up.
	libassert.CatalogServiceIsHealthy(t, apiClient, api.IngressGateway, nil)

	// Register a service to the ingress gateway
	// **NOTE**: We intentionally wait until after the gateway starts to create the config entry.
	// This was a regression that can cause errors when starting up consul-k8s before you have the resource defined.
	ingressGwConfig := &api.IngressGatewayConfigEntry{
		Kind: api.IngressGateway,
		Name: api.IngressGateway,
		Listeners: []api.IngressListener{
			{
				Port:     gatewayListenerPort,
				Protocol: "http",
				Services: []api.IngressService{
					{
						Name: libservice.StaticServerServiceName,
					},
				},
			},
		},
	}

	require.NoError(t, cluster.ConfigEntryWrite(ingressGwConfig))

	// Wait for the request to persist
	checkIngressConfigEntry(t, apiClient, api.IngressGateway, nil)

	_, adminPort := ingressService.GetAdminAddr()
	libassert.AssertUpstreamEndpointStatus(t, adminPort, "static-server.default", "HEALTHY", 1)
	//libassert.GetEnvoyListenerTCPFilters(t, adminPort) // This won't succeed because the dynamic listener is delayed

	libassert.AssertContainerState(t, ingressService, "running")
	libassert.AssertContainerState(t, serverService, "running")

	mappedPort, err := clientNode.GetPod().MappedPort(context.Background(), nat.Port(fmt.Sprintf("%d/tcp", gatewayListenerPort)))
	require.NoError(t, err)

	// by default, ingress routes are set per <service>.ingress.*
	headers := map[string]string{"Host": fmt.Sprintf("%s.ingress.com", libservice.StaticServerServiceName)}
	libassert.HTTPServiceEchoesWithHeaders(t, "localhost", mappedPort.Int(), "", headers)
}

func checkIngressConfigEntry(t *testing.T, client *api.Client, gatewayName string, opts *api.QueryOptions) {
	t.Helper()

	require.Eventually(t, func() bool {
		entry, _, err := client.ConfigEntries().Get(api.IngressGateway, gatewayName, opts)
		if err != nil {
			t.Log("error constructing request", err)
			return false
		}
		if entry == nil {
			t.Log("returned entry is nil")
			return false
		}
		return true
	}, time.Second*10, time.Second*1)
}
@ -0,0 +1,215 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package jwtauth

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"testing"
	"time"

	"github.com/go-jose/go-jose/v3/jwt"
	"github.com/hashicorp/go-cleanhttp"
	"github.com/stretchr/testify/require"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/sdk/testutil/retry"
	libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
	libcluster "github.com/hashicorp/consul/test/integration/consul-container/libs/cluster"
	libservice "github.com/hashicorp/consul/test/integration/consul-container/libs/service"
	libtopology "github.com/hashicorp/consul/test/integration/consul-container/libs/topology"
	libutils "github.com/hashicorp/consul/test/integration/consul-container/libs/utils"
)

// TestJWTAuthConnectService ensures that when an intention references a JWT
// provider, requests without a JWT Authorization header are denied, while
// requests carrying the correct JWT Authorization header succeed.
//
// Steps:
//   - Create a single agent cluster
//   - Create static-server and sidecar containers
//   - Register the static-server and sidecar with Consul
//   - Create static-client and sidecar containers
//   - Register the static-client and sidecar with Consul
//   - Ensure the client sidecar is running as expected
//   - Make a request without the JWT Authorization header and expect a 401 Unauthorized
//   - Make a request with the JWT Authorization header and expect a 200
func TestJWTAuthConnectService(t *testing.T) {
	t.Parallel()

	cluster, _, _ := libtopology.NewCluster(t, &libtopology.ClusterConfig{
		NumServers:                1,
		NumClients:                1,
		ApplyDefaultProxySettings: true,
		BuildOpts: &libcluster.BuildOptions{
			Datacenter:             "dc1",
			InjectAutoEncryption:   true,
			InjectGossipEncryption: true,
		},
	})

	clientService := createServices(t, cluster)
	_, clientPort := clientService.GetAddr()
	_, clientAdminPort := clientService.GetAdminAddr()

	libassert.AssertUpstreamEndpointStatus(t, clientAdminPort, "static-server.default", "HEALTHY", 1)
	libassert.AssertContainerState(t, clientService, "running")
	libassert.AssertFortioName(t, fmt.Sprintf("http://localhost:%d", clientPort), "static-server", "")

	claims := jwt.Claims{
		Subject:   "r3qXcK2bix9eFECzsU3Sbmh0K16fatW6@clients",
		Audience:  jwt.Audience{"https://consul.test"},
		Issuer:    "https://legit.issuer.internal/",
		NotBefore: jwt.NewNumericDate(time.Now().Add(-5 * time.Second)),
		Expiry:    jwt.NewNumericDate(time.Now().Add(60 * time.Minute)),
	}

	jwks, jwt := makeJWKSAndJWT(t, claims)

	// configure proxy-defaults, jwt-provider and intention
	configureProxyDefaults(t, cluster)
	configureJWTProvider(t, cluster, jwks, claims)
	configureIntentions(t, cluster)

	baseURL := fmt.Sprintf("http://localhost:%d", clientPort)
	// fails without jwt headers
	doRequest(t, baseURL, http.StatusUnauthorized, "")
	// succeeds with jwt
	doRequest(t, baseURL, http.StatusOK, jwt)
}

func createServices(t *testing.T, cluster *libcluster.Cluster) libservice.Service {
	node := cluster.Agents[0]
	client := node.GetClient()

	// Create a service and proxy instance
	serviceOpts := &libservice.ServiceOpts{
		Name:     libservice.StaticServerServiceName,
		ID:       "static-server",
		HTTPPort: 8080,
		GRPCPort: 8079,
	}
	_, _, err := libservice.CreateAndRegisterStaticServerAndSidecar(node, serviceOpts)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-server-sidecar-proxy", nil)
	libassert.CatalogServiceExists(t, client, libservice.StaticServerServiceName, nil)

	// Create a client proxy instance with the server as an upstream
	clientConnectProxy, err := libservice.CreateAndRegisterStaticClientSidecar(node, "", false, false)
	require.NoError(t, err)

	libassert.CatalogServiceExists(t, client, "static-client-sidecar-proxy", nil)

	return clientConnectProxy
}

// makeJWKSAndJWT creates a JWKS and a signed JWT that will be used for validation.
func makeJWKSAndJWT(t *testing.T, claims jwt.Claims) (string, string) {
	pub, priv, err := libutils.GenerateKey()
	require.NoError(t, err)

	jwks, err := libutils.NewJWKS(pub)
	require.NoError(t, err)

	jwksJson, err := json.Marshal(jwks)
	require.NoError(t, err)

	type orgs struct {
		Primary string `json:"primary"`
	}
	privateCl := struct {
		FirstName string   `json:"first_name"`
		Org       orgs     `json:"org"`
		Groups    []string `json:"groups"`
	}{
		FirstName: "jeff2",
		Org:       orgs{"engineering"},
		Groups:    []string{"foo", "bar"},
	}

	jwt, err := libutils.SignJWT(priv, claims, privateCl)
	require.NoError(t, err)
	return string(jwksJson), jwt
}

// configureProxyDefaults sets the protocol to http, which is required for jwt-auth.
func configureProxyDefaults(t *testing.T, cluster *libcluster.Cluster) {
	client := cluster.Agents[0].GetClient()

	ok, _, err := client.ConfigEntries().Set(&api.ProxyConfigEntry{
		Kind: api.ProxyDefaults,
		Name: api.ProxyConfigGlobal,
		Config: map[string]interface{}{
			"protocol": "http",
		},
	}, nil)
	require.NoError(t, err)
	require.True(t, ok)
}

// configureJWTProvider creates a JWT provider backed by a local JWKS.
func configureJWTProvider(t *testing.T, cluster *libcluster.Cluster, jwks string, claims jwt.Claims) {
	client := cluster.Agents[0].GetClient()

	ok, _, err := client.ConfigEntries().Set(&api.JWTProviderConfigEntry{
		Kind: api.JWTProvider,
		Name: "test-jwt",
		JSONWebKeySet: &api.JSONWebKeySet{
			Local: &api.LocalJWKS{
				JWKS: base64.StdEncoding.EncodeToString([]byte(jwks)),
			},
		},
		Issuer:    claims.Issuer,
		Audiences: claims.Audience,
	}, nil)
	require.NoError(t, err)
	require.True(t, ok)
}

// configureIntentions creates an intention referencing the jwt provider.
func configureIntentions(t *testing.T, cluster *libcluster.Cluster) {
	client := cluster.Agents[0].GetClient()

	ok, _, err := client.ConfigEntries().Set(&api.ServiceIntentionsConfigEntry{
		Kind: "service-intentions",
		Name: libservice.StaticServerServiceName,
		Sources: []*api.SourceIntention{
			{
				Name:   libservice.StaticClientServiceName,
				Action: api.IntentionActionAllow,
			},
		},
		JWT: &api.IntentionJWTRequirement{
			Providers: []*api.IntentionJWTProvider{
				{
					Name:         "test-jwt",
					VerifyClaims: []*api.IntentionJWTClaimVerification{},
				},
			},
		},
	}, nil)
	require.NoError(t, err)
	require.True(t, ok)
}

func doRequest(t *testing.T, url string, expStatus int, jwt string) {
	retry.RunWith(&retry.Timer{Timeout: 5 * time.Second, Wait: time.Second}, t, func(r *retry.R) {
		client := cleanhttp.DefaultClient()

		req, err := http.NewRequest("GET", url, nil)
		require.NoError(r, err)
		if jwt != "" {
			req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", jwt))
		}
		resp, err := client.Do(req)
		require.NoError(r, err)
		require.Equal(r, expStatus, resp.StatusCode)
	})
}


@ -64,7 +64,8 @@ $ curl \
  "mesh-gateway": 0,
  "terminating-gateway": 0
  },
- "BillableServiceInstances": 0
+ "BillableServiceInstances": 0,
+ "Nodes": 1
  }
},
"Index": 13,

@ -50,6 +50,12 @@ Billable Services
Services      Service instances
2             3

+ Nodes
+ Datacenter    Count
+ dc1           1
+
+ Total         1
+
Connect Services
Type              Service instances
connect-native    0

@ -74,6 +80,13 @@ dc2 1 1

Total    3    4

+ Nodes
+ Datacenter    Count
+ dc1           1
+ dc2           2
+
+ Total         3
+
Connect Services
Datacenter    Type              Service instances
dc1           connect-native    0

@ -19,9 +19,13 @@ You can set global limits on the rate of read and write requests that affect ind

1. Set arbitrary limits to begin understanding the upper boundary of RPC and gRPC loads in your network. Refer to [Initialize rate limit settings](/consul/docs/agent/limits/usage/init-rate-limits) for additional information.

- 1. Monitor the metrics and logs and readjust the initial configurations as necessary. Refer to [Monitor rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limit-data)
+ 1. Monitor the metrics and logs and readjust the initial configurations as necessary. Refer to [Monitor rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limits)

- 1. Define your final operational limits based on your observations. If you are defining global rate limits, refer to [Set global traffic rate limits](/consul/docs/agent/limits/usage/set-global-rate-limits) for additional information. For information about setting limits per source IP address, refer to [Limit traffic rates for a source IP](/consul/docs/agent/limits/usage/set-source-ip-rate-limits). Note that setting limits per source IP requires Consul Enterprise.
+ 1. Define your final operational limits based on your observations. If you are defining global rate limits, refer to [Set global traffic rate limits](/consul/docs/agent/limits/usage/set-global-traffic-rate-limits) for additional information. For information about setting limits per source IP address, refer to [Limit traffic rates for a source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips).
+
+ <EnterpriseAlert>
+ Setting limits per source IP requires Consul Enterprise.
+ </EnterpriseAlert>

### Order of operations

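The workflow described above can start from an agent configuration like the following sketch. The numeric values are arbitrary starting points for observation, not recommendations; `request_limits` is the global rate-limit block in recent Consul agent configuration:

```hcl
limits {
  request_limits {
    # "permissive" records metrics and logs without rejecting traffic,
    # which suits the observation phase described in the steps above.
    mode       = "permissive"
    read_rate  = 500
    write_rate = 200
  }
}
```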
@ -10,7 +10,7 @@ This topic describes how to designate groups of services as functionally identic

<Warning>

- Sameness groups are a beta feature in this version of Consul. Functionality is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may experience performance issues, scaling issues, and limited support.
+ Sameness groups are a beta feature for all Consul v1.16.x releases. Functionality is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may experience performance issues, scaling issues, and limited support.

</Warning>

@ -42,7 +42,7 @@ When every field is defined, a control plane request limit configuration entry h

```hcl
kind = "control-plane-request-limit"
mode = "permissive"
- name = "<destination service>"
+ name = "<name-for-the-entry>"
read_rate = 100
write_rate = 100
kv = {

@ -64,7 +64,7 @@ catalog = {

{
"kind": "control-plane-request-limit",
"mode": "permissive",
- "name": "<destination service>",
+ "name": "<name-for-the-entry>",
"read_rate": 100,
"write_rate": 100,
"kv": {

@ -85,7 +85,7 @@ catalog = {

```yaml
kind: control-plane-request-limit
mode: permissive
- name: <destination service>
+ name: <name-for-the-entry>
read_rate: 100
write_rate: 100
kv:

@ -182,6 +182,60 @@ spec:
]
```

</CodeTabs>
</Tab>

<Tab heading="Consul Enterprise (Sameness Group)">
<CodeTabs heading="Exported services configuration syntax" tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>

```hcl
Kind = "exported-services"
Partition = "<partition containing services to export>"
Name = "<partition containing services to export>"
Services = [
  {
    Name = "<name of service to export>"
    Namespace = "<namespace in the partition containing the service to export>"
    Consumers = [
      {
        SamenessGroup = "<name of the sameness group that dials the exported service>"
      }
    ]
  }
]
```

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: <partition containing services to export>
spec:
  services:
    - name: <name of service to export>
      namespace: <namespace in the partition containing the service to export>
      consumers:
        - samenessGroup: <name of the sameness group that dials the exported service>
```

```json
"Kind": "exported-services",
"Partition": "<partition containing services to export>",
"Name": "<partition containing services to export>",
"Services": [
  {
    "Name": "<name of service to export>",
    "Namespace": "<namespace in the partition containing the service to export>",
    "Consumers": [
      {
        "SamenessGroup": "<name of the sameness group that dials the exported service>"
      }
    ]
  }
]
```

</CodeTabs>
</Tab>
</Tabs>

@ -456,6 +510,57 @@ spec:
</Tab>
</Tabs>

### Exporting a service to a sameness group

The following example configures Consul to export a service named `api` to a defined group of partitions that belong to a separately defined [sameness group](/consul/docs/connect/config-entries/sameness-group) named `monitoring`.

<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>

```hcl
Kind = "exported-services"
Name = "default"

Services = [
  {
    Name = "api"
    Consumers = [
      {
        SamenessGroup = "monitoring"
      }
    ]
  }
]
```

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default
spec:
  services:
    - name: api
      consumers:
        - samenessGroup: monitoring
```

```json
"Kind": "exported-services",
"Name": "default",
"Services": [
  {
    "Name": "api",
    "Consumers": [
      {
        "SamenessGroup": "monitoring"
      }
    ]
  }
]
```

</CodeTabs>

### Exporting all services

<Tabs>

@ -12,7 +12,7 @@ To learn more about creating a sameness group, refer to [Create sameness groups]

<Warning>

- Sameness groups are a beta feature in this version of Consul. Functionality is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may experience performance issues, scaling issues, and limited support.
+ Sameness groups are a beta feature for all Consul v1.16.x releases. Functionality is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may experience performance issues, scaling issues, and limited support.

</Warning>

@ -7,9 +7,7 @@ description: >-

# Enabling Peering Control Plane Traffic

- In addition to [service-to-service traffic routing](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering),
- we recommend routing control plane traffic between cluster peers through mesh gateways
- to simplfy networking requirements.
+ This topic describes how to configure a mesh gateway to route control plane traffic between Consul clusters that share a peer connection. For information about routing service traffic between cluster peers through a mesh gateway, refer to [Enabling Service-to-service Traffic Across Admin Partitions](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions).

Control plane traffic between cluster peers includes
the initial secret handshake and the bi-directional stream replicating peering data.

@ -60,6 +58,7 @@ For Consul Enterprise clusters, mesh gateways must be registered in the "default
<Tab heading="Consul OSS">

+ In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings.

This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered datacenter.

<CodeTabs heading="Example ACL rules for Mesh Gateway Peering Control Plane Traffic in Consul OSS">

@ -81,6 +80,7 @@ peering = "read"
<Tab heading="Consul Enterprise">

+ In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings in all partitions.

This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered partition.

<CodeTabs heading="Example ACL rules for Mesh Gateway Peering Control Plane Traffic in Consul Enterprise">

@ -53,7 +53,7 @@ EnvoyExtensions = [
```
</CodeBlockConfig>
</Tab>
- <Tab heading="HCL" group="hcl">
+ <Tab heading="JSON" group="json">
<CodeBlockConfig filename="api-auth-service-defaults.json">

```json

@ -9,16 +9,21 @@ description: >-

## Overview

- Consul on Kubernetes provides a few options for customizing how connect-inject behavior should be configured.
+ Consul on Kubernetes provides a few options for customizing how connect-inject or service sync behavior should be configured.
This allows users to natively configure Consul on select Kubernetes resources (e.g., Pods and Services).

- - [Annotations](#annotations)
- - [Labels](#labels)
+ - [Consul Service Mesh](#consul-service-mesh)
+   - [Annotations](#annotations)
+   - [Labels](#labels)
+ - [Service Sync](#service-sync)
+   - [Annotations](#annotations-1)

The noun _connect_ is used throughout this documentation to refer to the connect
subsystem that provides Consul's service mesh capabilities.

- ## Annotations
+ ## Consul Service Mesh
+
+ ### Annotations

The following Kubernetes resource annotations can be used on a pod to control connect-inject behavior:

@ -76,7 +81,7 @@ The following Kubernetes resource annotations could be used on a pod to control
local port to listen for those connections. When transparent proxy is enabled,
this annotation is optional. This annotation can be either _labeled_ or _unlabeled_. We recommend the labeled format because it has a more consistent syntax and can be used to reference cluster peers as upstreams.

- - **Labeled** (requires Consul on Kubernetes v0.45.0+):
+ - **Labeled**:

  The labeled annotation format allows you to reference any service as an upstream. You can specify a Consul Enterprise namespace. You can also specify an admin partition in the same datacenter, a cluster peer, or a WAN-federated datacenter.

@ -133,7 +138,7 @@ The following Kubernetes resource annotations could be used on a pod to control
"consul.hashicorp.com/connect-service-upstreams":"[service-name]:[port]:[optional datacenter]"
```

- - Namespace (requires Consul Enterprise 1.7+): Upstream services may be running in a different namespace. Place
+ - Namespace: Upstream services may be running in a different namespace. Place
  the upstream namespace after the service name. For additional details about configuring the injector, refer to [Consul Enterprise namespaces](#consul-enterprise-namespaces).

  ```yaml

@ -144,7 +149,7 @@ The following Kubernetes resource annotations could be used on a pod to control
  If the namespace is not specified, the annotation defaults to the namespace of the source service.
  Consul Enterprise v1.7 and older interprets the value placed in the namespace position as part of the service name.

- - Admin partitions (requires Consul Enterprise 1.11+): Upstream services may be running in a different
+ - Admin partitions: Upstream services may be running in a different
  partition. When specifying a partition, you must also specify a namespace. Place the partition name after the namespace. If you specify the name of the datacenter, it must be the local datacenter. Communicating across partitions using this method is only supported within a
  datacenter. For cross partition communication across datacenters, [establish a cluster
  peering connection](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering) and set the upstream with a labeled annotation format.

@ -265,7 +270,7 @@ The following Kubernetes resource annotations could be used on a pod to control
"consul.hashicorp.com/consul-sidecar-user-volume-mount": "[{\"name\": \"secrets-store-mount\", \"mountPath\": \"/mnt/secrets-store\"}]"
```

- ## Labels
+ ### Labels

Resource labels can be used on a Kubernetes service to control connect-inject behavior.

@ -276,3 +281,45 @@ Resource labels could be used on a Kubernetes service to control connect-inject
registration to ignore all services except for the one which should be used for routing requests
using Consul.

## Service Sync

### Annotations

The following Kubernetes resource annotations can be used on a pod to configure [Service Sync](https://developer.hashicorp.com/consul/docs/k8s/service-sync) behavior:

- `consul.hashicorp.com/service-sync`: If this is set to `true`, then the Kubernetes service is explicitly configured to be synced to Consul.

  ```yaml
  annotations:
    'consul.hashicorp.com/service-sync': 'true'
  ```

- `consul.hashicorp.com/service-port`: Configures the port to register to the Consul Catalog for the Kubernetes service. The annotation value may be the name of a port (recommended) or an exact port value. Refer to [service ports](https://developer.hashicorp.com/consul/docs/k8s/service-sync#service-ports) for more information.

  ```yaml
  annotations:
    'consul.hashicorp.com/service-port': 'http'
  ```

- `consul.hashicorp.com/service-tags`: A comma-separated list of strings (without whitespace) to use as tags for the service registered in Consul. These custom tags automatically include the `k8s` tag, which can't be disabled.

  ```yaml
  annotations:
    'consul.hashicorp.com/service-tags': 'primary,foo'
  ```

- `consul.hashicorp.com/service-meta-KEY`: A map for specifying service metadata for Consul services. The "KEY" can be set to any key, which allows you to set multiple meta values.

  ```yaml
  annotations:
    'consul.hashicorp.com/service-meta-KEY': 'value'
  ```

- `consul.hashicorp.com/service-weight`: Enables weighted load balancing by service annotation for catalog sync. The integer provided is applied as the weight for the `passing` state of the service's health. Refer to [weights](/consul/docs/services/configuration/services-configuration-reference#weights) in the service configuration reference for more information on how weights are used for services in the Consul catalog.

  ```yaml
  annotations:
    'consul.hashicorp.com/service-weight': '10'
  ```

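Taken together, a Service that opts into sync with several of the annotations above might look like the following sketch (the service name `web` and the `service-meta-team` key are illustrative, not from the documentation above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    'consul.hashicorp.com/service-sync': 'true'
    'consul.hashicorp.com/service-port': 'http'
    'consul.hashicorp.com/service-tags': 'primary,foo'
    'consul.hashicorp.com/service-meta-team': 'platform'
    'consul.hashicorp.com/service-weight': '10'
spec:
  selector:
    app: web
  ports:
    - name: http
      port: 80
```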
@ -19,10 +19,33 @@ Consul service mesh is enabled by default when you install Consul on Kubernetes

If `connectInject.default` is set to `false` or you want to explicitly enable service mesh sidecar proxy injection for a specific deployment, add the `consul.hashicorp.com/connect-inject` annotation to the pod specification template and set it to `true` when connecting services to the mesh.

- ### Example
+ ### Service names

When the service is onboarded, the name registered in Consul is set to the name of the Kubernetes Service associated with the Pod. You can use the [`consul.hashicorp.com/connect-service` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service) to specify a custom name for the service, but if ACLs are enabled, then the name of the service registered in Consul must match the Pod's `ServiceAccount` name.

### Transparent proxy mode

By default, the Consul service mesh runs in transparent proxy mode. This mode forces inbound and outbound traffic through the sidecar proxy even though the service binds to all interfaces. Transparent proxy infers the location of upstream services using Consul service intentions, and also allows you to use Kubernetes DNS as you normally would for your workloads.

When transparent proxy mode is enabled, all service-to-service traffic is required to use mTLS. When onboarding new services to the service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information.

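The permissive mTLS workflow described above is driven by two config entries. A hedged sketch follows; the `static-server` service name is illustrative, and the fields assume Consul v1.16+:

```hcl
# Mesh config entry: opt the mesh into allowing permissive mode at all.
Kind = "mesh"
AllowEnablingPermissiveMutualTLS = true

# A separate service-defaults config entry then lets one service accept
# both mTLS and non-mTLS traffic while it is being onboarded:
#   Kind          = "service-defaults"
#   Name          = "static-server"
#   MutualTLSMode = "permissive"
```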

### Kubernetes service mesh workload scenarios

-> **Note:** A Kubernetes Service is required in order to register services on the Consul service mesh. Consul monitors the lifecycle of the Kubernetes Service and its service instances using the service object. In addition, the Kubernetes Service is used to register and de-register the service from Consul's catalog.

The following configurations are examples for registering workloads on Kubernetes into Consul's service mesh in different scenarios. Each scenario provides an example Kubernetes manifest to demonstrate how to use Consul's service mesh with a specific Kubernetes workload type.

- [Kubernetes Pods running as a deployment](#kubernetes-pods-running-as-a-deployment)
- [Connecting to mesh-enabled Services](#connecting-to-mesh-enabled-services)
- [Kubernetes Jobs](#kubernetes-jobs)
- [Kubernetes Pods with multiple ports](#kubernetes-pods-with-multiple-ports)

#### Kubernetes Pods running as a deployment

The following example shows a Kubernetes configuration that specifically enables service mesh connections for the `static-server` service. Consul starts and registers a sidecar proxy that listens on port 20000 by default and proxies valid inbound connections to port 8080.

<CodeBlockConfig filename="static-server.yaml">

```yaml
apiVersion: v1
kind: Service

@ -72,27 +95,18 @@ spec:
  serviceAccountName: static-server
```

- To establish a connection to the Pod using service mesh, a client must use another mesh proxy. The client mesh proxy will use Consul service discovery to find all available upstream proxies and their public ports.
</CodeBlockConfig>

- ### Service names
+ To establish a connection to the upstream Pod using service mesh, a client must dial the upstream workload using a mesh proxy. The client mesh proxy will use Consul service discovery to find all available upstream proxies and their public ports.

- When the service is onboarded, the name registered in Consul is set to the name of the Kubernetes Service associated with the Pod. You can specify a custom name for the service in the [`consul.hashicorp.com/connect-service` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service), but if ACLs are enabled, then the name of the service registered in Consul must match the Pod's `ServiceAccount` name.

- ### Transparent proxy mode

- By default, the Consul service mesh runs in transparent proxy mode. This mode forces inbound and outbound traffic through the sidecar proxy even though the service binds to all interfaces. Transparent proxy infers the location of upstream services using Consul service intentions, and also allows you to use Kubernetes DNS as you normally would for your workloads.

- When transparent proxy mode is enabled, all service-to-service traffic is required to use mTLS. While onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information.

- ### Connecting to Mesh-Enabled Services
+ #### Connecting to mesh-enabled Services

The example Deployment specification below configures a Deployment that is capable
of establishing connections to our previous example "static-server" service. The
connection to this static text service happens over an authorized and encrypted
connection via service mesh.

- -> **Note:** As of consul-k8s `v0.26.0` and Consul Helm `v0.32.0`, having a Kubernetes
- Service is **required** to run services on the Consul Service Mesh.
<CodeBlockConfig filename="static-client.yaml">

```yaml
apiVersion: v1

@ -138,6 +152,8 @@ spec:
  serviceAccountName: static-client
```

</CodeBlockConfig>

By default, when ACLs are enabled or when the ACLs default policy is `allow`,
Consul will automatically configure proxies with all upstreams from the same datacenter.
When ACLs are enabled with the default `deny` policy,

@ -172,7 +188,95 @@ $ kubectl exec deploy/static-client -- curl --silent http://static-server/
|
|||
command terminated with exit code 52
|
||||
```
|
||||
|
||||
### Kubernetes Pods with Multiple ports
|
||||
#### Kubernetes Jobs
|
||||
|
||||
Kubernetes Jobs run pods that only make outbound requests to services on the mesh and successfully terminate when they are complete. In order to register a Kubernetes Job with the mesh, you must provide an integer value for the `consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds` annotation. Then, issue a request to the `http://127.0.0.1:20600/graceful_shutdown` API endpoint so that Kubernetes gracefully shuts down the `consul-dataplane` sidecar after the job is complete.
|
||||
|
||||
Below is an example Kubernetes manifest that deploys a job correctly.
|
||||
|
||||
<CodeBlockConfig filename="test-job.yaml">

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-job
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  name: test-job
  namespace: default
spec:
  selector:
    app: test-job
  ports:
    - port: 80
---
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
  namespace: default
  labels:
    app: test-job
spec:
  template:
    metadata:
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds': '5'
      labels:
        app: test-job
    spec:
      containers:
        - name: test-job
          image: alpine/curl:3.14
          ports:
            - containerPort: 80
          command:
            - /bin/sh
            - -c
            - |
              echo "Started test job"
              sleep 10
              echo "Killing proxy"
              curl --max-time 2 -s -f -XPOST http://127.0.0.1:20600/graceful_shutdown
              sleep 10
              echo "Ended test job"
      serviceAccountName: test-job
      restartPolicy: Never
```

</CodeBlockConfig>

After the job completes, you can verify that all containers within the pod are shut down.

```shell-session
$ kubectl get pods
NAME             READY   STATUS      RESTARTS   AGE
test-job-49st7   0/2     Completed   0          3m55s
```

```shell-session
$ kubectl get job
NAME       COMPLETIONS   DURATION   AGE
test-job   1/1           30s        4m31s
```

In addition, the logs emitted by the pod verify that the proxy was shut down before the Job completed.

```shell-session
$ kubectl logs test-job-49st7 -c test-job
Started test job
Killing proxy
Ended test job
```

#### Kubernetes Pods with multiple ports

To configure a pod with multiple ports to be a part of the service mesh and receive and send service mesh traffic, you must add configuration so that a Consul service is registered for each port. This is required because services in Consul currently support a single port per service instance.

@ -184,6 +288,9 @@ First, decide on the names for the two Consul services that will correspond to t

chooses the names `web` for `8080` and `web-admin` for `9090`.

Create two service accounts for `web` and `web-admin`:

<CodeBlockConfig filename="multiport-web-sa.yaml">

```yaml
apiVersion: v1
kind: ServiceAccount
@ -195,7 +302,14 @@ kind: ServiceAccount
metadata:
  name: web-admin
```

</CodeBlockConfig>

Create two Service objects for `web` and `web-admin`:

<CodeBlockConfig filename="multiport-web-svc.yaml">

```yaml
apiVersion: v1
kind: Service
@ -221,12 +335,17 @@ spec:
      port: 80
      targetPort: 9090
```

</CodeBlockConfig>

`web` will target `containerPort` `8080` and select pods labeled `app: web`. `web-admin` will target `containerPort` `9090` and will also select the same pods.

~> Kubernetes 1.24+ only
In Kubernetes 1.24+ you need to [create a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets) for each multi-port service that references the ServiceAccount, and the Kubernetes secret must have the same name as the ServiceAccount:


<CodeBlockConfig filename="multiport-web-secret.yaml">

```yaml
apiVersion: v1
kind: Secret
@ -245,12 +364,15 @@ metadata:
type: kubernetes.io/service-account-token
```

</CodeBlockConfig>

Create a Deployment with any chosen name, and use the following annotations:

```yaml
annotations:
  'consul.hashicorp.com/connect-inject': 'true'
  'consul.hashicorp.com/transparent-proxy': 'false'
  'consul.hashicorp.com/connect-service': 'web,web-admin'
  'consul.hashicorp.com/connect-service-port': '8080,9090'
```

Note that the ports are listed in the same order as the service names: the first service name, `web`, corresponds to the first port, `8080`, and the second service name, `web-admin`, corresponds to the second port, `9090`.
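
This positional pairing can be sketched with a small illustrative helper (hypothetical, not part of consul-k8s):

```python
def pair_services_with_ports(services: str, ports: str) -> list[tuple[str, int]]:
    """Pair comma-separated service names with ports by position."""
    names = [s.strip() for s in services.split(",")]
    port_list = [int(p) for p in ports.split(",")]
    if len(names) != len(port_list):
        raise ValueError("each service name needs a matching port")
    return list(zip(names, port_list))

# The annotation values from the example above:
print(pair_services_with_ports("web,web-admin", "8080,9090"))
# [('web', 8080), ('web-admin', 9090)]
```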

@ -260,7 +382,10 @@ The service account on the pod spec for the deployment should be set to the firs
serviceAccountName: web
```

The following deployment example demonstrates the required annotations for the manifest. In addition, the previous YAML manifests can also be combined into a single manifest for easier deployment.

<CodeBlockConfig filename="multiport-web.yaml">

```yaml
apiVersion: apps/v1
kind: Deployment
@ -302,13 +427,61 @@ spec:
      serviceAccountName: web
```

</CodeBlockConfig>


After deploying the `web` application, you can test service mesh connections by deploying the `static-client` application with the configuration in the [previous section](#connecting-to-mesh-enabled-services) and adding the `consul.hashicorp.com/connect-service-upstreams: 'web:1234,web-admin:2234'` annotation to the pod template on `static-client`:

<CodeBlockConfig filename="multiport-static-client.yaml" lineNumbers highlight="33">

```yaml
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'web:1234,web-admin:2234'
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          # Just spin & wait forever, we'll use `kubectl exec` to demo
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: static-client
```

</CodeBlockConfig>
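
The `connect-service-upstreams` annotation above maps each upstream service to a local port. For the two-field `service:port` form shown here, the mapping can be sketched with an illustrative parser (hypothetical, not part of consul-k8s):

```python
def parse_upstreams(annotation: str) -> dict[str, int]:
    """Parse 'svc1:port1,svc2:port2' into {service: local_port}."""
    upstreams = {}
    for entry in annotation.split(","):
        service, port = entry.strip().rsplit(":", 1)
        upstreams[service] = int(port)
    return upstreams

# The annotation value from the example above:
print(parse_upstreams("web:1234,web-admin:2234"))
# {'web': 1234, 'web-admin': 2234}
```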

If you exec onto a static-client pod, using a command like:

```shell-session
$ kubectl exec -it static-client-5bd667fbd6-kk6xs -- /bin/sh
```

@ -12,7 +12,7 @@ services are available to Consul agents and services in Consul can be available

as first-class Kubernetes services. This functionality is provided by the
[consul-k8s project](https://github.com/hashicorp/consul-k8s) and can be
automatically installed and configured using the
[Consul K8s Helm chart](/consul/docs/k8s/installation/install).


![screenshot of a Kubernetes service in the UI](/img/k8s-service.png)

@ -31,11 +31,7 @@ service discovery, including hosted services like databases.

~> Enabling both Service Mesh and Service Sync on the same Kubernetes services is not supported, as Service Mesh also registers Kubernetes service instances to Consul. Ensure that Service Sync is only enabled for namespaces and services that are not injected with the Consul sidecar for Service Mesh as described in [Sync Enable/Disable](/consul/docs/k8s/service-sync#sync-enable-disable).

The service sync feature deploys a long-running process which can run either inside or outside of a Kubernetes cluster. However, running this process within the Kubernetes cluster is generally easier since it is automated using the [Helm chart](/consul/docs/k8s/helm).

The Consul server cluster can run either in or out of a Kubernetes cluster.
The Consul server cluster does not need to be running on the same machine

@ -84,7 +84,7 @@ spec:

### Deploy the mesh gateway

The mesh gateway must be running and registered to the Lambda function’s Consul datacenter. Refer to the following documentation and tutorials for instructions:

- [Mesh Gateways between WAN-Federated Datacenters](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters)
- [Mesh Gateways between Admin Partitions](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions)

@ -404,7 +404,7 @@ String value that specifies the namespace in which to register the service. Refe

## Multiple service definitions

You can define multiple services in a single definition file in the `services` block. This enables you to register multiple services in a single command. Note that the HTTP API does not support the `services` block.

<CodeTabs tabs={[ "HCL", "JSON" ]}>
@ -97,16 +97,18 @@ The DNS protocol limits the size of requests, even when performing DNS TCP queri

Consul randomizes DNS SRV records and ignores weights specified in service configurations when printing responses. If records are truncated, each client using weighted SRV responses may have partial and inconsistent views of instance weights. As a result, the request distribution may be skewed from the intended weights. We recommend calling the [`/catalog/nodes` API endpoint](/consul/api-docs/catalog#list-nodes) to retrieve the complete list of nodes. You can apply query parameters to API calls to sort and filter the results.

### Standard lookups

To perform standard service lookups, specify tags, the name of the service, datacenter, cluster peer, and domain using the following syntax to query for service providers:

```text
[<tag>.]<service>.service[.<datacenter>.dc][.<cluster-peer>.peer].<domain>
```

The `tag` subdomain is optional. It filters responses so that only service providers containing the tag appear.

The `datacenter` subdomain is optional. By default, Consul interrogates the querying agent's datacenter.

The `cluster-peer` name is optional, and specifies the [cluster peer](/consul/docs/connect/cluster-peering) whose [exported services](/consul/docs/connect/config-entries/exported-services) should be the target of the query.

By default, the lookups query in the `consul` domain. Refer to [Configure DNS Behaviors](/consul/docs/services/discovery/dns-configuration) for information about using alternate domains.
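
As an illustration of the label ordering above, a small helper (hypothetical, not a Consul API) that assembles a standard lookup name:

```python
def standard_lookup(service, domain="consul", tag=None, datacenter=None, peer=None):
    """Build a Consul standard service lookup name:
    [<tag>.]<service>.service[.<datacenter>.dc][.<cluster-peer>.peer].<domain>
    """
    labels = []
    if tag:
        labels.append(tag)
    labels += [service, "service"]
    if datacenter:
        labels += [datacenter, "dc"]
    if peer:
        labels += [peer, "peer"]
    labels.append(domain)
    return ".".join(labels)

print(standard_lookup("web"))                              # web.service.consul
print(standard_lookup("web", tag="v2", datacenter="dc2"))  # v2.web.service.dc2.dc.consul
```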

#### Standard lookup results

@ -195,6 +197,38 @@ _rabbitmq._amqp.service.consul. 0 IN SRV 1 1 5672 rabbitmq.node1.dc1.consul.

;; ADDITIONAL SECTION:
rabbitmq.node1.dc1.consul. 0 IN A 10.1.11.20
```

You can also perform RFC 2782 lookups that target a specific [cluster peer](/consul/docs/connect/cluster-peering) or datacenter by including `<datacenter>.dc` or `<cluster-peer>.peer` in the query labels:

```text
_<service>._<tag>[.service][.<datacenter>.dc][.<cluster-peer>.peer].<domain>
```

The following example queries the `redis` service tagged with `tcp` for the cluster peer `phx1`, which returns two instances, one at `10.1.11.83:29081` and one at `10.1.11.86:29142`:

```shell-session
$ dig @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV

; <<>> DiG 9.18.15 <<>> @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40572
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;_redis._tcp.service.phx1.peer.consul. IN SRV

;; ANSWER SECTION:
_redis._tcp.service.phx1.peer.consul. 0 IN SRV 1 1 29081 0a000d53.addr.consul.
_redis._tcp.service.phx1.peer.consul. 0 IN SRV 1 1 29142 0a010d56.addr.consul.

;; ADDITIONAL SECTION:
0a000d53.addr.consul. 0 IN A 10.1.11.83
0a010d56.addr.consul. 0 IN A 10.1.11.86
```
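
The `.addr` targets in the answer section encode the instance IP as hex octets in the hostname label. A minimal sketch of decoding such a label for IPv4 (the label below is hypothetical, not taken from the example):

```python
def decode_addr_label(label: str) -> str:
    """Decode a Consul .addr hex label (e.g. 'c0a80001') into a dotted IPv4 address."""
    raw = bytes.fromhex(label)
    return ".".join(str(octet) for octet in raw)

print(decode_addr_label("c0a80001"))
# 192.168.0.1
```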

#### SRV responses for hosts in the .addr subdomain

If a service registered with Consul is configured with an explicit IP address or addresses in the [`address`](/consul/docs/services/configuration/services-configuration-reference#address) or [`tagged_address`](/consul/docs/services/configuration/services-configuration-reference#tagged_address) parameter, then Consul returns the hostname in the target field of the answer section for the DNS SRV query according to the following format:
@ -780,10 +780,6 @@

{
  "title": "Configuration",
  "routes": [
    {
      "title": "Service resolver",
      "href": "/consul/docs/connect/config-entries/service-resolver"