Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.

module github.com/hashicorp/consul

go 1.13

replace github.com/hashicorp/consul/api => ./api

replace github.com/hashicorp/consul/sdk => ./sdk

replace launchpad.net/gocheck => github.com/go-check/check v0.0.0-20140225173054-eb6ee6f84d0a

require (
github.com/Microsoft/go-winio v0.4.3 // indirect
github.com/NYTimes/gziphandler v1.0.1
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e
github.com/armon/go-metrics v0.3.10
github.com/armon/go-radix v1.0.0
github.com/aws/aws-sdk-go v1.42.34
github.com/coredns/coredns v1.1.2
github.com/coreos/go-oidc v2.1.0+incompatible
github.com/digitalocean/godo v1.10.0 // indirect
github.com/docker/go-connections v0.3.0
github.com/elazarl/go-bindata-assetfs v0.0.0-20160803192304-e1a2a7ec64b0
github.com/envoyproxy/go-control-plane v0.9.5
github.com/frankban/quicktest v1.11.0 // indirect
file watcher to be used for configuration auto-reload feature (#12301) (3 years ago)

* add config watcher to the config package
* add logging to watcher
* add test and refactor to add WatcherEvent.
* add all API calls and fix a bug with recreated files
* add tests for watcher
* remove the unnecessary use of context
* Add debug log and a test for file rename
* use inode to detect if the file is recreated/replaced and only listen to create events.
* tidy ups (#1535)
* tidy ups
* Add tests for inode reconcile
* fix linux vs windows syscall
* fix linux vs windows syscall
* fix windows compile error
* increase timeout
* use ctime ID
* remove remove/creation test as it's a use case that fail in linux
* fix linux/windows to use Ino/CreationTime
* fix the watcher to only overwrite current file id
* fix linter error
* fix remove/create test
* set reconcile loop to 200 Milliseconds
* fix watcher to not trigger event on remove, add more tests
* on a remove event try to add the file back to the watcher and trigger the handler if success
* fix race condition
* fix flaky test
* fix race conditions
* set level to info
* fix when file is removed and get an event for it after
* fix to trigger handler when we get a remove but re-add fail
* fix error message
* add tests for directory watch and fixes
* detect if a file is a symlink and return an error on Add
* rename Watcher to FileWatcher and remove symlink deref
* add fsnotify@v1.5.1
* fix go mod
* fix flaky test
* Apply suggestions from code review (Co-authored-by: Ashwin Venkatesh <ashwin@hashicorp.com>)
* fix a possible stack overflow
* do not reset timer on errors, rename OS specific files
* start the watcher when creating it
* fix data race in tests
* rename New func
* do not call handler when a remove event happen
* events trigger on write and rename
* fix watcher tests
* make handler async
* remove recursive call
* do not produce events for sub directories
* trim "/" at the end of a directory when adding
* add missing test
* fix logging
* add todo
* fix failing test
* fix flaking tests
* fix flaky test
* add logs
* fix log text
* increase timeout
* reconcile when remove
* check reconcile when removed
* fix reconcile move test
* fix logging
* delete invalid file
* Apply suggestions from code review (Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>)
* fix review comments
* fix is watched to properly catch a remove
* change test timeout
* fix test and rename id
* fix test to create files with different mod time.
* fix deadlock when stopping watcher
* Apply suggestions from code review (Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>)
* fix a deadlock when calling stop while emitting event is blocked
* make sure to close the event channel after the event loop is done
* add go doc
* back date file instead of sleeping
* Apply suggestions from code review (Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>)
* check error

Co-authored-by: Ashwin Venkatesh <ashwin@hashicorp.com>
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
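The fsnotify dependency below is what backs this watcher. As a rough illustration of the approach the commit log describes, here is a minimal sketch, not Consul's actual `config.FileWatcher`: it watches one path, treats writes, renames, and removes as change events, re-adds the path when the file is replaced, and reconciles on a short interval. The file name, callback, and 200ms interval are illustrative assumptions.

```go
package main

import (
	"log"
	"time"

	"github.com/fsnotify/fsnotify"
)

// watchConfig watches a single config file and invokes onChange whenever it is
// written, renamed, or replaced. Editors and config management tools often
// replace files instead of writing in place, which silently drops the watch,
// so the path is re-added after such events and on a periodic reconcile.
func watchConfig(path string, onChange func(string)) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	go func() {
		defer w.Close()
		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				// Writes, renames, and removes all count as "the config changed".
				if ev.Op&(fsnotify.Write|fsnotify.Rename|fsnotify.Remove) != 0 {
					// Re-add the path in case the file was recreated, then notify.
					_ = w.Add(path)
					onChange(ev.Name)
				}
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", err)
			case <-time.After(200 * time.Millisecond):
				// Reconcile loop: re-add the path in case a remove/create pair
				// happened between events and the watch was lost entirely.
				_ = w.Add(path)
			}
		}
	}()
	return w.Add(path)
}

func main() {
	if err := watchConfig("agent.hcl", func(name string) {
		log.Println("config changed, reloading:", name)
	}); err != nil {
		log.Fatal(err)
	}
	select {} // keep the demo process alive while watching
}
```

Per the commit messages above, the real implementation goes further: it compares inode/creation-time IDs to detect replaced files, handles directories, and rejects symlinks on Add.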
github.com/fsnotify/fsnotify v1.5.1
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.3.5
github.com/google/go-cmp v0.5.6
github.com/google/go-querystring v1.0.0 // indirect
github.com/google/gofuzz v1.2.0
generate a single debug file for a long duration capture (#10279) (4 years ago)

* debug: remove the CLI check for debug_enabled
  The API allows collecting profiles even debug_enabled=false as long as ACLs are enabled. Remove this check from the CLI so that users do not need to set debug_enabled=true for no reason. Also:
  - fix the API client to return errors on non-200 status codes for debug endpoints
  - improve the failure messages when pprof data can not be collected
  Co-Authored-By: Dhia Ayachi <dhia@hashicorp.com>
* remove parallel test runs
  parallel runs create a race condition that fail the debug tests
* snapshot the timestamp at the beginning of the capture
  - timestamp used to create the capture sub folder is snapshot only at the beginning of the capture and reused for subsequent captures
  - capture append to the file if it already exist
* Revert "snapshot the timestamp at the beginning of the capture"
  This reverts commit c2d03346
* Refactor captureDynamic to extract capture logic for each item in a different func
* snapshot the timestamp at the beginning of the capture
  - timestamp used to create the capture sub folder is snapshot only at the beginning of the capture and reused for subsequent captures
  - capture append to the file if it already exist
* Revert "snapshot the timestamp at the beginning of the capture"
  This reverts commit c2d03346
* Refactor captureDynamic to extract capture logic for each item in a different func
* extract wait group outside the go routine to avoid a race condition
* capture pprof in a separate go routine
* perform a single capture for pprof data for the whole duration
* add missing vendor dependency
* add a change log and fix documentation to reflect the change
* create function for timestamp dir creation and simplify error handling
* use error groups and ticker to simplify interval capture loop
* Logs, profile and traces are captured for the full duration. Metrics, Heap and Go routines are captured every interval
* refactor Logs capture routine and add log capture specific test
* improve error reporting when log test fail
* change test duration to 1s
* make time parsing in log line more robust
* refactor log time format in a const
* test on log line empty the earliest possible and return
  Co-authored-by: Freddy <freddygv@users.noreply.github.com>
* rename function to captureShortLived
* more specific changelog
  Co-authored-by: Paul Banks <banks@banksco.de>
* update documentation to reflect current implementation
* add test for behavior when invalid param is passed to the command
* fix argument line in test
* a more detailed description of the new behaviour
  Co-authored-by: Paul Banks <banks@banksco.de>
* print success right after the capture is done
* remove an unnecessary error check
  Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* upgraded github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57 => v0.0.0-20210601050228-01bbb1931b22

Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Co-authored-by: Paul Banks <banks@banksco.de>
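The capture shape this commit describes (logs, profiles, and traces captured once for the whole duration; metrics, heap, and goroutine dumps captured every interval; coordinated with an error group and a ticker) can be sketched roughly as below. `captureLongLived` and `captureSnapshot` are hypothetical placeholders, not the actual `consul debug` internals.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// captureLongLived stands in for a single CPU profile / trace / log capture
// that spans the entire debug duration.
func captureLongLived(ctx context.Context) error {
	<-ctx.Done()
	fmt.Println("long-lived capture finished")
	return nil
}

// captureSnapshot stands in for the per-interval metrics/heap/goroutine capture.
func captureSnapshot(t time.Time) error {
	fmt.Println("snapshot at", t.Format(time.RFC3339))
	return nil
}

func runCapture(duration, interval time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), duration)
	defer cancel()

	g, ctx := errgroup.WithContext(ctx)

	// Logs, profiles, and traces run once for the full duration.
	g.Go(func() error { return captureLongLived(ctx) })

	// Metrics, heap, and goroutines are captured every interval.
	g.Go(func() error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return nil
			case t := <-ticker.C:
				if err := captureSnapshot(t); err != nil {
					return err
				}
			}
		}
	})

	return g.Wait()
}

func main() {
	if err := runCapture(2*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println("capture failed:", err)
	}
}
```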
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22
wan federation via mesh gateways (#6884) (5 years ago)

This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which make it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch: There are several distinct chunks of code that are affected:

* new flags and config options for the server
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that operation for primary mesh gateways are needed in the *server* there are some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur at multiple layers of abstraction just to pass the data down to the right layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the node name of the destination)
* several new RPCs for manipulating a FederationState object: `FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs for debugging use to invoke those RPCs from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot their local mesh gateway `CheckServiceNodes` and sync them into their upstream FederationState in the primary (this works in conjunction with the replication to distribute addresses for all mesh gateways in all DCs to all other DCs)
* a "gateway locator" convenience object to make use of this data to choose the addresses of gateways to use for any given RPC or gossip operation to a remote DC. This gets data from the "retry join" logic in the agent and also directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to determine if it's actually doing native TLS. If so it checks the ALPN header for protocol determination (just like how the existing system uses the type-byte marker).
* 2 new kinds of protocols are exclusively decoded via this native TLS mechanism: one for ferrying "packet" operations (udp-like) from the gossip layer and one for "stream" operations (tcp-like). The packet operations re-use sockets (using length-prefixing) to cut down on TLS re-negotiation overhead.
* the server instances specially wrap the `memberlist.NetTransport` when running with gateway federation enabled (in a `wanfed.Transport`). The general gist is that if it tries to dial a node in the SAME datacenter (deduced by looking at the suffix of the node name) there is no change. If dialing a DIFFERENT datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh gateways to eventually end up in a server's :8300 port.
* a new flag when launching a mesh gateway via `consul connect envoy` to indicate that the servers are to be exposed. This sets a special service meta when registering the gateway into the catalog.
* `proxycfg/xds` notice this metadata blob to activate additional watches for the FederationState objects as well as the location of all of the consul servers in that datacenter.
* `xds:` if the extra metadata is in place additional clusters are defined in a DC to bulk sink all traffic to another DC's gateways. For the current datacenter we listen on a wildcard name (`server.<dc>.consul`) that load balances all servers as well as one mini-cluster per node (`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create an additional SAN in certs that can be used with this flavor of federation.
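One of the more unusual pieces above is the server RPC port (`:8300`) sniffing the first byte of each new connection to decide whether it is native TLS, and then routing by the negotiated ALPN protocol. The sketch below shows that pattern only in outline; the ALPN protocol names and handler functions are assumptions for illustration, not Consul's actual `wanfed` identifiers.

```go
package main

import (
	"bufio"
	"crypto/tls"
	"log"
	"net"
)

// peekedConn hands the connection to the TLS layer without losing the bytes
// already buffered while sniffing.
type peekedConn struct {
	net.Conn
	r *bufio.Reader
}

func (c peekedConn) Read(p []byte) (int, error) { return c.r.Read(p) }

const tlsHandshakeRecord = 0x16 // first byte of a TLS ClientHello record

func handleConn(conn net.Conn, tlsCfg *tls.Config) {
	br := bufio.NewReader(conn)
	first, err := br.Peek(1)
	if err != nil {
		conn.Close()
		return
	}
	if first[0] != tlsHandshakeRecord {
		// Legacy path: the first byte is the existing RPC type-byte marker.
		handleLegacyRPC(peekedConn{conn, br})
		return
	}
	tc := tls.Server(peekedConn{conn, br}, tlsCfg)
	if err := tc.Handshake(); err != nil {
		conn.Close()
		return
	}
	// Route by the negotiated ALPN protocol; the names below are illustrative.
	switch tc.ConnectionState().NegotiatedProtocol {
	case "wanfed-packet": // udp-like gossip packets, length-prefixed on a reused socket
		handlePacketPipe(tc)
	case "wanfed-stream": // tcp-like gossip streams
		handleStream(tc)
	default:
		handleNativeTLSRPC(tc)
	}
}

// Placeholders standing in for the real protocol handlers.
func handleLegacyRPC(c net.Conn)    { log.Println("legacy RPC"); c.Close() }
func handlePacketPipe(c net.Conn)   { log.Println("gossip packet pipe"); c.Close() }
func handleStream(c net.Conn)       { log.Println("gossip stream"); c.Close() }
func handleNativeTLSRPC(c net.Conn) { log.Println("native TLS RPC"); c.Close() }

func main() {
	// Listener and tls.Config wiring omitted; each accepted net.Conn would be
	// passed to handleConn.
}
```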
github.com/google/tcpproxy v0.0.0-20180808230851-dfa16c61dad2
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0
github.com/hashicorp/consul-net-rpc v0.0.0-20220207223504-4cffceffcd29
github.com/hashicorp/consul/api v1.11.0
github.com/hashicorp/consul/sdk v0.8.0
github.com/hashicorp/go-bexpr v0.1.2
github.com/hashicorp/go-checkpoint v0.5.0
github.com/hashicorp/go-cleanhttp v0.5.1
github.com/hashicorp/go-connlimit v0.3.0
github.com/hashicorp/go-discover v0.0.0-20210818145131-c573d69da192
github.com/hashicorp/go-hclog v0.14.1
github.com/hashicorp/go-memdb v1.3.2
github.com/hashicorp/go-multierror v1.1.1
github.com/hashicorp/go-raftchunking v0.6.2
github.com/hashicorp/go-retryablehttp v0.6.7 // indirect
github.com/hashicorp/go-sockaddr v1.0.2
github.com/hashicorp/go-syslog v1.0.0
github.com/hashicorp/go-uuid v1.0.2
github.com/hashicorp/go-version v1.2.1
github.com/hashicorp/golang-lru v0.5.4
github.com/hashicorp/hcl v1.0.0
github.com/hashicorp/hil v0.0.0-20200423225030-a18a1cd20038
github.com/hashicorp/memberlist v0.3.1
github.com/hashicorp/raft v1.3.6
github.com/hashicorp/raft-autopilot v0.1.5
github.com/hashicorp/raft-boltdb v0.0.0-20211202195631-7d34b9fb3f42 // indirect
github.com/hashicorp/raft-boltdb/v2 v2.2.0
github.com/hashicorp/serf v0.9.7
github.com/hashicorp/vault/api v1.0.5-0.20200717191844-f687267c8086
github.com/hashicorp/vault/sdk v0.1.14-0.20200519221838-e0cfd64bc267
github.com/hashicorp/yamux v0.0.0-20210826001029-26ff87cf9493
github.com/imdario/mergo v0.3.6
github.com/joyent/triton-go v1.7.1-0.20200416154420-6801d15b779f // indirect
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
github.com/kr/text v0.2.0
github.com/miekg/dns v1.1.41
github.com/mitchellh/cli v1.1.0
github.com/mitchellh/copystructure v1.0.0
github.com/mitchellh/go-testing-interface v1.14.0
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452
server: suppress spurious blocking query returns where multiple config entries are involved (#12362) (3 years ago)

Starting from and extending the mechanism introduced in #12110 we can specially handle the 3 main special Consul RPC endpoints that react to many config entries in a single blocking query in Connect:

- `DiscoveryChain.Get`
- `ConfigEntry.ResolveServiceConfig`
- `Intentions.Match`

All of these will internally watch for many config entries, and at least one of those will likely be not found in any given query. Because these are blends of multiple reads the exact solution from #12110 isn't perfectly aligned, but we can tweak the approach slightly and regain the utility of that mechanism.

### No Config Entries Found

In this case, despite looking for many config entries none may be found at all. Unlike #12110 in this scenario we do not return an empty reply to the caller, but instead synthesize a struct from default values to return. This can be handled nearly identically to #12110 with the first 1-2 replies being non-empty payloads followed by the standard spurious wakeup suppression mechanism from #12110.

### No Change Since Last Wakeup

Once a blocking query loop on the server has completed and slept at least once, there is a further optimization we can make here to detect if any of the config entries that were present at specific versions for the prior execution of the loop are identical for the loop we just woke up for. In that scenario we can return a slightly different internal sentinel error and basically externally handle it similar to #12110. This would mean that even if 20 discovery chain read RPC handling goroutines wakeup due to the creation of an unrelated config entry, the only ones that will terminate and reply with a blob of data are those that genuinely have new data to report.

### Extra Endpoints

Since this pattern is pretty reusable, other key config-entry-adjacent endpoints used by `agent/proxycfg` also were updated:

- `ConfigEntry.List`
- `Internal.IntentionUpstreams` (tproxy)
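A toy illustration of the "no change since last wakeup" idea described above: remember the raft index of each config entry read on the previous pass and keep blocking when a wakeup delivers exactly the same versions. The types and the `errNotChanged` sentinel are illustrative stand-ins, not the real server code.

```go
package main

import (
	"errors"
	"fmt"
)

type ConfigEntry struct {
	Name        string
	ModifyIndex uint64
}

var errNotChanged = errors.New("config entries unchanged since last wakeup")

// sameVersions reports whether the entries seen now are exactly the entries
// seen on the previous pass, at the same raft indexes.
func sameVersions(prev map[string]uint64, cur []ConfigEntry) bool {
	if len(prev) != len(cur) {
		return false
	}
	for _, e := range cur {
		if idx, ok := prev[e.Name]; !ok || idx != e.ModifyIndex {
			return false
		}
	}
	return true
}

// blockingRead simulates one pass of a blocking-query loop over many entries.
func blockingRead(prev map[string]uint64, fetchWatched func() []ConfigEntry) (map[string]uint64, error) {
	cur := fetchWatched()
	if prev != nil && sameVersions(prev, cur) {
		// Spurious wakeup: something unrelated changed, so tell the caller to
		// go back to sleep instead of replying with identical data.
		return prev, errNotChanged
	}
	next := make(map[string]uint64, len(cur))
	for _, e := range cur {
		next[e.Name] = e.ModifyIndex
	}
	return next, nil
}

func main() {
	entries := []ConfigEntry{{"web-defaults", 7}, {"web-resolver", 9}}
	fetch := func() []ConfigEntry { return entries }

	seen, err := blockingRead(nil, fetch)
	fmt.Println(err) // <nil>: the first pass always replies

	_, err = blockingRead(seen, fetch)
	fmt.Println(errors.Is(err, errNotChanged)) // true: nothing changed, keep blocking
}
```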
github.com/mitchellh/hashstructure/v2 v2.0.2
github.com/mitchellh/mapstructure v1.4.1
github.com/mitchellh/pointerstructure v1.2.1
github.com/mitchellh/reflectwalk v1.0.1
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/pierrec/lz4 v2.5.2+incompatible // indirect
github.com/pkg/errors v0.9.1
github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35 // indirect
github.com/prometheus/client_golang v1.4.0
github.com/rboyer/safeio v0.2.1
github.com/ryanuber/columnize v2.1.2+incompatible
github.com/shirou/gopsutil/v3 v3.21.10
github.com/stretchr/testify v1.7.0
go.etcd.io/bbolt v1.3.5
go.opencensus.io v0.22.0 // indirect
go.uber.org/goleak v1.1.10
golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20211013075003-97ac67df715c
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e
google.golang.org/api v0.9.0 // indirect
google.golang.org/appengine v1.6.0 // indirect
Support Incremental xDS mode (#9855) (4 years ago)

This adds support for the Incremental xDS protocol when using xDS v3. This is best reviewed commit-by-commit and will not be squashed when merged. Union of all commit messages follows to give an overarching summary:

xds: exclusively support incremental xDS when using xDS v3
Attempts to use SoTW via v3 will fail, much like attempts to use incremental via v2 will fail. Work around a strange older envoy behavior involving empty CDS responses over incremental xDS.

xds: various cleanups and refactors that don't strictly concern the addition of incremental xDS support
Dissolve the connectionInfo struct in favor of per-connection ResourceGenerators instead. Do a better job of ensuring the xds code uses a well configured logger that accurately describes the connected client.

xds: pull out checkStreamACLs method in advance of a later commit

xds: rewrite SoTW xDS protocol tests to use protobufs rather than hand-rolled json strings
In the test we very lightly reuse some of the more boring protobuf construction helper code that is also technically under test. The important thing of the protocol tests is testing the protocol. The actual inputs and outputs are largely already handled by the xds golden output tests now so these protocol tests don't have to do double-duty. This also updates the SoTW protocol test to exclusively use xDS v2 which is the only variant of SoTW that will be supported in Consul 1.10.

xds: default xds.Server.AuthCheckFrequency at use-time instead of construction-time
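At its core, incremental (delta) xDS replaces full state-of-the-world pushes with a diff against the resource versions the client has already acknowledged: only added or changed resources are sent, along with a list of removals. A schematic sketch of that bookkeeping, using simplified stand-in types rather than the go-control-plane or Consul xds package API:

```go
package main

import "fmt"

type deltaUpdate struct {
	Upsert  map[string]string // resource name -> new version to send
	Removed []string          // resource names the client should drop
}

// computeDelta diffs what the client has acknowledged against what the server
// currently wants it to have, producing the minimal incremental update.
func computeDelta(clientHas, serverWants map[string]string) deltaUpdate {
	d := deltaUpdate{Upsert: map[string]string{}}
	for name, version := range serverWants {
		if clientHas[name] != version {
			d.Upsert[name] = version
		}
	}
	for name := range clientHas {
		if _, ok := serverWants[name]; !ok {
			d.Removed = append(d.Removed, name)
		}
	}
	return d
}

func main() {
	client := map[string]string{"cluster:web": "v1", "cluster:api": "v1"}
	server := map[string]string{"cluster:web": "v2", "cluster:db": "v1"}

	d := computeDelta(client, server)
	fmt.Println(d.Upsert)  // web needs v2, db is new
	fmt.Println(d.Removed) // api was removed
}
```

The real protocol also layers per-stream nonce tracking and ACK/NACK handling on top of this; the sketch only shows the version diffing.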
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55
google.golang.org/grpc v1.25.1
gopkg.in/square/go-jose.v2 v2.5.1
gotest.tools/v3 v3.0.3
k8s.io/api v0.18.2
k8s.io/apimachinery v0.18.2
k8s.io/client-go v0.18.2
)
replace istio.io/gogo-genproto v0.0.0-20190124151557-6d926a6e6feb => github.com/istio/gogo-genproto v0.0.0-20190124151557-6d926a6e6feb