Merge branch 'master' of github.com:hashicorp/consul into aws_autodiscovery

pull/2459/head
Kyle Havlovitz 2016-11-01 11:19:52 -04:00
commit c908121c72
841 changed files with 39564 additions and 113850 deletions

View File

@ -1,14 +1,13 @@
language: go
go:
- 1.6.2
- 1.6.3
branches:
only:
- master
install: make
script:
- make test
- make ci
sudo: false

View File

@ -1,48 +1,91 @@
## 0.7.0 (UNRELEASED)
## 0.7.1 (UNRELEASED)
BACKWARDS INCOMPATIBILITIES:
* `skip_leave_on_interrupt`'s default behavior is now dependent on whether or
not the agent is acting as a server or client. When Consul is started as a
server the default is `true` and `false` when a client. [GH-1909]
* HTTP check output is truncated to 4k, similar to script check output. [GH-1952]
* Child process reaping support has been removed, along with the `reap` configuration option. Reaping is also done via [dumb-init](https://github.com/Yelp/dumb-init) in the [Consul Docker image](https://github.com/hashicorp/docker-consul), so removing it from Consul itself simplifies the code and eases future maintenance for Consul. If you are running Consul as PID 1 in a container you will need to arrange for a wrapper process to reap child processes. [GH-1988]
FEATURES:
* **Key/Value Store Command Line Interface:** New `consul kv` commands were added for easy access to all basic key/value store operations. [GH-2360]
* **Snapshot/Restore:** A new /v1/snapshot HTTP endpoint and corresponding set of `consul snapshot` commands were added for easy point-in-time snapshots for disaster recovery. Snapshots include all state managed by Consul's Raft [consensus protocol](/docs/internals/consensus.html), including Key/Value Entries, Service Catalog, Prepared Queries, Sessions, and ACLs. Snapshots can be restored on the fly into a completely fresh cluster. [GH-2396]
IMPROVEMENTS:
* Consul agents will now periodically reconnect to available Consul servers
in order to redistribute their RPC query load. Consul clients will, by
default, attempt to establish a new connection every 120s to 180s unless
the size of the cluster is sufficiently large. The rate at which agents
begin to query new servers is proportional to the size of the Consul
cluster (servers should never receive more than 64 new connections per
second per Consul server as a result of rebalancing). Clusters in stable
environments that use `allow_stale` should see a more even distribution of
query load across all of their Consul servers. [GH-1743]
* Consul agents can now limit the number of UDP answers returned via the DNS
interface. The default number of UDP answers is `3`, however by adjusting
the `dns_config.udp_answer_limit` configuration parameter, it is now
possible to limit the results down to `1`. This tunable provides an
important workaround for environments where RFC 3484 section 6, rule 9 is
enforced, in order to preserve the desired behavior of randomized DNS
results. Most modern environments will not need to adjust this setting as
this RFC was made obsolete by RFC 6724. See the
[agent options](https://www.consul.io/docs/agent/options.html#udp_answer_limit)
documentation for additional details on when this should be
used. [GH-1712]
* Consul will now refuse to start with a helpful message if the same UNIX
socket is used for more than one listening endpoint. [GH-1910]
* Removed an obsolete warning message when Consul starts on Windows. [GH-1920]
* Defaults bind address to 127.0.0.1 when running in `-dev` mode. [GH-1878]
* Builds Consul releases with Go 1.6.1. [GH-1948]
* HTTP health checks limit saved output to 4K to avoid performance issues. [GH-1952]
* Reap time for failed nodes is now configurable via new `reconnect_timeout` and
`reconnect_timeout_wan` config options ([use with caution](https://www.consul.io/docs/agent/options.html#reconnect_timeout)). [GH-1935]
* Script checks now support an optional `timeout` parameter. [GH-1762]
* api: All session options can now be set when using `api.Lock()`. [GH-2372]
BUG FIXES:
* Fixed an issue where a health check's output never updates if the check
status doesn't change after the Consul agent starts. [GH-1934]
* agent: Fixed a Go race issue with log buffering at startup. [GH-2262]
* agent: Fixed a panic during anti-entropy sync for services and checks. [GH-2125]
* agent: Fixed an issue on Windows where "wsarecv" errors were logged when CLI commands accessed the RPC interface. [GH-2356]
* agent: Syslog initialization will now retry on errors for up to 60 seconds to avoid a race condition at system startup. [GH-1610]
* dns: Fixed external services that pointed to consul addresses (CNAME records) not resolving to A-records. [GH-1228]
* dns: Fixed an issue with SRV lookups where the service address was different from the node's. [GH-832]
* server: Fixed the port numbers in the sample JSON inside peers.info. [GH-2391]
* ui: Fixed an XSS issue with the display of sessions and ACLs in the web UI. [GH-2456]
## 0.7.0 (September 14, 2016)
BREAKING CHANGES:
* The default behavior of `leave_on_terminate` and `skip_leave_on_interrupt` is now dependent on whether or not the agent is acting as a server or client. When Consul is started as a server the defaults for these are `false` and `true`, respectively, which means that you have to explicitly configure a server to leave the cluster. When Consul is started as a client the defaults are the opposite, which means by default, clients will leave the cluster if shut down or interrupted. [GH-1909] [GH-2320]
* The `allow_stale` configuration for DNS queries to the Consul agent now defaults to `true`, allowing for better utilization of available Consul servers and higher throughput at the expense of weaker consistency. This is almost always an acceptable tradeoff for DNS queries, but this can be reconfigured to use the old default behavior if desired. [GH-2315]
* Output from HTTP checks is truncated to 4k when stored on the servers, similar to script check output. [GH-1952]
* Consul's Go API client will now send ACL tokens using HTTP headers instead of query parameters, requiring Consul 0.6.0 or later. [GH-2233]
* Removed support for protocol version 1, so Consul 0.7 is no longer compatible with Consul versions prior to 0.3. [GH-2259]
* The Raft peers information in `consul info` has changed format and includes information about the suffrage of a server, which will be used in future versions of Consul. [GH-2222]
* New [`translate_wan_addrs`](https://www.consul.io/docs/agent/options.html#translate_wan_addrs) behavior from [GH-2118] translates addresses in HTTP responses and could break clients that are expecting local addresses. A new `X-Consul-Translate-Addresses` header was added to allow clients to detect if translation is enabled for HTTP responses, and a "lan" tag was added to `TaggedAddresses` for clients that need the local address regardless of translation. [GH-2280]
* The behavior of the `peers.json` file is different in this version of Consul. This file won't normally be present and is used only during outage recovery. Be sure to read the updated [Outage Recovery Guide](https://www.consul.io/docs/guides/outage.html) for details. [GH-2222]
* Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://www.consul.io/docs/guides/performance.html) guide for more details. [GH-2303]
FEATURES:
* **Transactional Key/Value API:** A new `/v1/txn` API was added that allows for atomic updates to and fetches from multiple entries in the key/value store inside a single transaction. This includes conditional updates based on obtaining locks, and all other key/value store operations. See the [Key/Value Store Endpoint](https://www.consul.io/docs/agent/http/kv.html#txn) for more details, and the Go client sketch after this feature list. [GH-2028]
* **Native ACL Replication:** Added a built-in full replication capability for ACLs. Non-ACL datacenters can now replicate the complete ACL set locally to their state store and fall back to that if there's an outage. Additionally, this provides a good way to make a backup ACL datacenter, or to migrate the ACL datacenter to a different one. See the [ACL Internals Guide](https://www.consul.io/docs/internals/acl.html#replication) for more details. [GH-2237]
* **Server Connection Rebalancing:** Consul agents will now periodically reconnect to available Consul servers in order to redistribute their RPC query load. Consul clients will, by default, attempt to establish a new connection every 120s to 180s unless the size of the cluster is sufficiently large. The rate at which agents begin to query new servers is proportional to the size of the Consul cluster (servers should never receive more than 64 new connections per second per Consul server as a result of rebalancing). Clusters in stable environments that use `allow_stale` should see a more even distribution of query load across all of their Consul servers. [GH-1743]
* **Raft Updates and Consul Operator Interface:** This version of Consul upgrades to "stage one" of the v2 HashiCorp Raft library. This version offers improved handling of cluster membership changes and recovery after a loss of quorum. This version also provides a foundation for new features that will appear in future Consul versions once the remainder of the v2 library is complete. [GH-2222] <br> Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://www.consul.io/docs/guides/performance.html) guide for more details. [GH-2303] <br> Servers will now abort bootstrapping if they detect an existing cluster with configured Raft peers. This will help prevent safe but spurious leader elections when introducing new nodes with `bootstrap_expect` enabled into an existing cluster. [GH-2319] <br> Added new `consul operator` command, HTTP endpoint, and associated ACL to allow Consul operators to view and update the Raft configuration. This allows a stale server to be removed from the Raft peers without requiring downtime and peers.json recovery file use. See the new [Consul Operator Command](https://www.consul.io/docs/commands/operator.html) and the [Consul Operator Endpoint](https://www.consul.io/docs/agent/http/operator.html) for details, as well as the updated [Outage Recovery Guide](https://www.consul.io/docs/guides/outage.html). [GH-2312]
* **Serf Lifeguard Updates:** Implemented a new set of feedback controls for the gossip layer that help prevent degraded nodes that can't meet the soft real-time requirements from erroneously causing `serfHealth` flapping in other, healthy nodes. This feature tunes itself automatically and requires no configuration. [GH-2101]
* **Prepared Query Near Parameter:** Prepared queries support baking in a new `Near` sorting parameter. This allows results to be sorted by network round trip time based on a static node, or based on the round trip time from the Consul agent where the request originated. This can be used to find a co-located service instance if one is available, with a transparent fallback to the next best alternate instance otherwise. [GH-2137]
* **Automatic Service Deregistration:** Added a new `deregister_critical_service_after` timeout field for health checks which will cause the service associated with that check to get deregistered if the check is critical for longer than the timeout. This is useful for cleanup of health checks registered natively by applications, or in other situations where services may not always be cleanly shut down. [GH-679]
* **WAN Address Translation Everywhere:** Extended the [`translate_wan_addrs`](https://www.consul.io/docs/agent/options.html#translate_wan_addrs) config option to also translate node addresses in HTTP responses, making it easy to use this feature from non-DNS clients. [GH-2118]
* **RPC Retries:** Consul will now retry RPC calls that result in "no leader" errors for up to 5 seconds. This allows agents to ride out leader elections with a delayed response vs. an error. [GH-2175]
* **Circonus Telemetry Support:** Added support for Circonus as a telemetry destination. [GH-2193]
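
As a rough illustration of the transactional Key/Value API through the Go client added in this change set, the sketch below applies a set and a get atomically. The key name, value, and default client configuration are illustrative; the verbs are written as their raw string values, which correspond to the `KVSet`/`KVGet` constants defined in `api/kv.go` later in this diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Two operations applied atomically: if either fails, neither is applied.
	ops := api.KVTxnOps{
		&api.KVTxnOp{Verb: "set", Key: "app/config/port", Value: []byte("8080")}, // KVSet
		&api.KVTxnOp{Verb: "get", Key: "app/config/port"},                        // KVGet
	}
	ok, resp, _, err := client.KV().Txn(ops, nil)
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		for _, e := range resp.Errors {
			log.Printf("op %d failed: %s", e.OpIndex, e.What)
		}
		log.Fatal("transaction rolled back")
	}
	fmt.Printf("committed, %d results\n", len(resp.Results))
}
```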
IMPROVEMENTS:
* agent: Reap time for failed nodes is now configurable via new `reconnect_timeout` and `reconnect_timeout_wan` config options ([use with caution](https://www.consul.io/docs/agent/options.html#reconnect_timeout)). [GH-1935]
* agent: Joins based on a DNS lookup will use TCP and attempt to join with the full list of returned addresses. [GH-2101]
* agent: Consul will now refuse to start with a helpful message if the same UNIX socket is used for more than one listening endpoint. [GH-1910]
* agent: Removed an obsolete warning message when Consul starts on Windows. [GH-1920]
* agent: Defaults bind address to 127.0.0.1 when running in `-dev` mode. [GH-1878]
* agent: Added version information to the log when Consul starts up. [GH-1404]
* agent: Added timing metrics for HTTP requests in the form of `consul.http.<verb>.<path>`. [GH-2256]
* build: Updated all vendored dependencies. [GH-2258]
* build: Consul releases are now built with Go 1.6.3. [GH-2260]
* checks: Script checks now support an optional `timeout` parameter. [GH-1762]
* checks: HTTP health checks limit saved output to 4K to avoid performance issues. [GH-1952]
* cli: Added a `-stale` mode for watchers to allow them to pull data from any Consul server, not just the leader. [GH-2045] [GH-917]
* dns: Consul agents can now limit the number of UDP answers returned via the DNS interface. The default number of UDP answers is `3`, however by adjusting the `dns_config.udp_answer_limit` configuration parameter, it is now possible to limit the results down to `1`. This tunable provides an important workaround for environments where RFC 3484 section 6, rule 9 is enforced, in order to preserve the desired behavior of randomized DNS results. Most modern environments will not need to adjust this setting as this RFC was made obsolete by RFC 6724. See the [agent options](https://www.consul.io/docs/agent/options.html#udp_answer_limit) documentation for additional details on when this should be used. [GH-1712]
* dns: Consul now compresses all DNS responses by default. This prevents issues when recursing records that were originally compressed, where Consul would sometimes generate an invalid, uncompressed response that was too large. [GH-2266]
* dns: Added a new `recursor_timeout` configuration option to set the timeout for Consul's internal DNS client that's used for recursing queries to upstream DNS servers. [GH-2321]
* dns: Added a new `-dns-port` command line option so this can be set without a config file. [GH-2263]
* ui: Added a new network tomography visualization to the UI. [GH-2046]
BUG FIXES:
* agent: Fixed an issue where a health check's output never updates if the check status doesn't change after the Consul agent starts. [GH-1934]
* agent: External services can now be registered with ACL tokens. [GH-1738]
* agent: Fixed an issue where large events affecting many nodes could cause infinite intent rebroadcasts, leading to many log messages about intent queue overflows. [GH-1062]
* agent: Gossip encryption keys are now validated before being made persistent in the keyring, avoiding delayed feedback at runtime. [GH-1299]
* dns: Fixed an issue where DNS requests for SRV records could be incorrectly trimmed, resulting in an ADDITIONAL section that was out of sync with the ANSWER. [GH-1931]
* dns: Fixed two issues where DNS requests for SRV records on a prepared query that failed over would report the wrong domain and fail to translate addresses. [GH-2218] [GH-2220]
* server: Fixed a deadlock related to sorting the list of available datacenters by round trip time. [GH-2130]
* server: Fixed an issue with the state store's immutable radix tree that would prevent it from using cached modified objects during transactions, leading to extra copies and increased memory / GC pressure. [GH-2106]
* server: Upgraded Bolt DB to v1.2.1 to fix an issue on Windows where Consul would sometimes fail to start due to open user-mapped sections. [GH-2203]
OTHER CHANGES:
* build: Switched from Godep to govendor. [GH-2252]
## 0.6.4 (March 16, 2016)
@ -107,6 +150,8 @@ BUG FIXES:
fallback pings. This affected users with frequent UDP connectivity problems. [GH-1802]
* Added a fix to trim UDP DNS responses so they don't exceed 512 bytes. [GH-1813]
* Updated go-dockerclient to fix Docker health checks with Docker 1.10. [GH-1706]
* Removed the fixed-height display of nodes and services in the UI, which led to broken displays
  when a node has a lot of services. [GH-2055]
## 0.6.3 (January 15, 2016)
@ -657,4 +702,3 @@ MISC:
## 0.1.0 (April 17, 2014)
* Initial release

View File

@ -10,7 +10,15 @@ VETARGS?=-asmdecl -atomic -bool -buildtags -copylocks -methods \
VERSION?=$(shell awk -F\" '/^const Version/ { print $$2; exit }' version.go)
# all builds binaries for all targets
all: tools
all: bin
ci:
if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then \
$(MAKE) bin ;\
fi
@$(MAKE) test
bin: tools
@mkdir -p bin/
@sh -c "'$(CURDIR)/scripts/build.sh'"
@ -61,4 +69,4 @@ static-assets:
tools:
go get -u -v $(GOTOOLS)
.PHONY: all bin dev dist cov test cover format vet static-assets tools
.PHONY: all ci bin dev dist cov test cover format vet static-assets tools

294
Godeps/Godeps.json generated
View File

@ -1,294 +0,0 @@
{
"ImportPath": "github.com/hashicorp/consul",
"GoVersion": "go1.6",
"Deps": [
{
"ImportPath": "github.com/DataDog/datadog-go/statsd",
"Rev": "b050cd8f4d7c394545fd7d966c8e2909ce89d552"
},
{
"ImportPath": "github.com/armon/circbuf",
"Rev": "bbbad097214e2918d8543d5201d12bfd7bca254d"
},
{
"ImportPath": "github.com/armon/go-metrics",
"Rev": "345426c77237ece5dab0e1605c3e4b35c3f54757"
},
{
"ImportPath": "github.com/armon/go-metrics/datadog",
"Rev": "345426c77237ece5dab0e1605c3e4b35c3f54757"
},
{
"ImportPath": "github.com/armon/go-radix",
"Rev": "4239b77079c7b5d1243b7b4736304ce8ddb6f0f2"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
"Rev": "36e9cfdd690967f4f690c6edcc9ffacd006014a0"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.2.0",
"Rev": "c6ba97b89e0454fec9aa92e1d33a4e2c5fc1f631"
},
{
"ImportPath": "github.com/elazarl/go-bindata-assetfs",
"Rev": "57eb5e1fc594ad4b0b1dbea7b286d299e0cb43c2"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/docker/go-units",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/golang.org/x/net/context",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix",
"Rev": "9b6c9720043b74304a6dd07a2a901d16e7bf3d3d"
},
{
"ImportPath": "github.com/hashicorp/errwrap",
"Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55"
},
{
"ImportPath": "github.com/hashicorp/go-checkpoint",
"Rev": "e4b2dc34c0f698ee04750bf2035d8b9384233e1b"
},
{
"ImportPath": "github.com/hashicorp/go-cleanhttp",
"Rev": "875fb671b3ddc66f8e2f0acc33829c8cb989a38d"
},
{
"ImportPath": "github.com/hashicorp/go-immutable-radix",
"Rev": "8e8ed81f8f0bf1bdd829593fdd5c29922c1ea990"
},
{
"ImportPath": "github.com/hashicorp/go-memdb",
"Rev": "98f52f52d7a476958fa9da671354d270c50661a7"
},
{
"ImportPath": "github.com/hashicorp/go-msgpack/codec",
"Rev": "fa3f63826f7c23912c15263591e65d54d080b458"
},
{
"ImportPath": "github.com/hashicorp/go-multierror",
"Rev": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5"
},
{
"ImportPath": "github.com/hashicorp/go-reap",
"Rev": "2d85522212dcf5a84c6b357094f5c44710441912"
},
{
"ImportPath": "github.com/hashicorp/go-syslog",
"Rev": "42a2b573b664dbf281bd48c3cc12c086b17a39ba"
},
{
"ImportPath": "github.com/hashicorp/go-uuid",
"Rev": "36289988d83ca270bc07c234c36f364b0dd9c9a7"
},
{
"ImportPath": "github.com/hashicorp/golang-lru",
"Rev": "5c7531c003d8bf158b0fe5063649a2f41a822146"
},
{
"ImportPath": "github.com/hashicorp/golang-lru/simplelru",
"Rev": "5c7531c003d8bf158b0fe5063649a2f41a822146"
},
{
"ImportPath": "github.com/hashicorp/hcl",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/ast",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/parser",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/scanner",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/strconv",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/token",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/parser",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/scanner",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/token",
"Rev": "578dd9746824a54637686b51a41bad457a56bcef"
},
{
"ImportPath": "github.com/hashicorp/hil",
"Rev": "0457360d54ca4d081a769eaa1617e0462153fd70"
},
{
"ImportPath": "github.com/hashicorp/hil/ast",
"Rev": "0457360d54ca4d081a769eaa1617e0462153fd70"
},
{
"ImportPath": "github.com/hashicorp/logutils",
"Rev": "0dc08b1671f34c4250ce212759ebd880f743d883"
},
{
"ImportPath": "github.com/hashicorp/memberlist",
"Rev": "cef12ad58224d55cf26caa9e3d239c2fcb3432a2"
},
{
"ImportPath": "github.com/hashicorp/net-rpc-msgpackrpc",
"Rev": "a14192a58a694c123d8fe5481d4a4727d6ae82f3"
},
{
"ImportPath": "github.com/hashicorp/raft",
"Rev": "057b893fd996696719e98b6c44649ea14968c811"
},
{
"ImportPath": "github.com/hashicorp/raft-boltdb",
"Rev": "d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee"
},
{
"ImportPath": "github.com/hashicorp/scada-client",
"Rev": "84989fd23ad4cc0e7ad44d6a871fd793eb9beb0a"
},
{
"ImportPath": "github.com/hashicorp/serf/coordinate",
"Comment": "v0.7.0-12-ge4ec8cc",
"Rev": "e4ec8cc423bbe20d26584b96efbeb9102e16d05f"
},
{
"ImportPath": "github.com/hashicorp/serf/serf",
"Comment": "v0.7.0-12-ge4ec8cc",
"Rev": "e4ec8cc423bbe20d26584b96efbeb9102e16d05f"
},
{
"ImportPath": "github.com/hashicorp/yamux",
"Rev": "df949784da9ed028ee76df44652e42d37a09d7e4"
},
{
"ImportPath": "github.com/inconshreveable/muxado",
"Rev": "f693c7e88ba316d1a0ae3e205e22a01aa3ec2848"
},
{
"ImportPath": "github.com/inconshreveable/muxado/proto",
"Rev": "f693c7e88ba316d1a0ae3e205e22a01aa3ec2848"
},
{
"ImportPath": "github.com/inconshreveable/muxado/proto/buffer",
"Rev": "f693c7e88ba316d1a0ae3e205e22a01aa3ec2848"
},
{
"ImportPath": "github.com/inconshreveable/muxado/proto/ext",
"Rev": "f693c7e88ba316d1a0ae3e205e22a01aa3ec2848"
},
{
"ImportPath": "github.com/inconshreveable/muxado/proto/frame",
"Rev": "f693c7e88ba316d1a0ae3e205e22a01aa3ec2848"
},
{
"ImportPath": "github.com/mattn/go-isatty",
"Rev": "56b76bdf51f7708750eac80fa38b952bb9f32639"
},
{
"ImportPath": "github.com/miekg/dns",
"Rev": "75e6e86cc601825c5dbcd4e0c209eab180997cd7"
},
{
"ImportPath": "github.com/mitchellh/cli",
"Rev": "cb6853d606ea4a12a15ac83cc43503df99fd28fb"
},
{
"ImportPath": "github.com/mitchellh/copystructure",
"Rev": "6fc66267e9da7d155a9d3bd489e00dad02666dc6"
},
{
"ImportPath": "github.com/mitchellh/mapstructure",
"Rev": "281073eb9eb092240d33ef253c404f1cca550309"
},
{
"ImportPath": "github.com/mitchellh/reflectwalk",
"Rev": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6"
},
{
"ImportPath": "github.com/ryanuber/columnize",
"Comment": "v2.0.1-8-g983d3a5",
"Rev": "983d3a5fab1bf04d1b412465d2d9f8430e2e917e"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "20457ee8ea8546920d3f4e19e405da45250dc5a5"
}
]
}

5
Godeps/Readme generated
View File

@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

View File

@ -1,7 +1,7 @@
# Consul [![Build Status](https://travis-ci.org/hashicorp/consul.png)](https://travis-ci.org/hashicorp/consul)
# Consul [![Build Status](https://travis-ci.org/hashicorp/consul.svg?branch=master)](https://travis-ci.org/hashicorp/consul) [![Join the chat at https://gitter.im/hashicorp-consul/Lobby](https://badges.gitter.im/hashicorp-consul/Lobby.svg)](https://gitter.im/hashicorp-consul/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
* Website: https://www.consul.io
* IRC: `#consul` on Freenode
* Chat: [Gitter](https://gitter.im/hashicorp-consul/Lobby)
* Mailing list: [Google Groups](https://groups.google.com/group/consul-tool/)
Consul is a tool for service discovery and configuration. Consul is
@ -25,12 +25,11 @@ Consul provides several key features:
* **Multi-Datacenter** - Consul is built to be datacenter aware, and can
support any number of regions without complex configuration.
Consul runs on Linux, Mac OS X, and Windows. It is recommended to run the
Consul servers only on Linux, however.
Consul runs on Linux, Mac OS X, FreeBSD, Solaris, and Windows.
## Quick Start
An extensive quick quick start is viewable on the Consul website:
An extensive quick start is viewable on the Consul website:
https://www.consul.io/intro/getting-started/install.html
@ -56,7 +55,7 @@ $ bin/consul
...
```
*note: `make` will also place a copy of the binary in the first part of your $GOPATH*
*Note: `make` will also place a copy of the binary in the first part of your `$GOPATH`.*
You can run tests by typing `make test`.
@ -82,3 +81,7 @@ See also [golang/winstrap](https://github.com/golang/winstrap) and
for more information of how to set up a general Go build environment on Windows
with MinGW.
## Vendoring
Consul currently uses [govendor](https://github.com/kardianos/govendor) for
vendoring.

View File

@ -73,11 +73,22 @@ type ACL interface {
// KeyringWrite determines if the keyring can be manipulated
KeyringWrite() bool
// OperatorRead determines if the read-only Consul operator functions
// can be used.
OperatorRead() bool
// OperatorWrite determines if the state-changing Consul operator
// functions can be used.
OperatorWrite() bool
// ACLList checks for permission to list all the ACLs
ACLList() bool
// ACLModify checks for permission to manipulate ACLs
ACLModify() bool
// Snapshot checks for permission to take and restore snapshots.
Snapshot() bool
}
// StaticACL is used to implement a base ACL policy. It either
@ -132,6 +143,14 @@ func (s *StaticACL) KeyringWrite() bool {
return s.defaultAllow
}
func (s *StaticACL) OperatorRead() bool {
return s.defaultAllow
}
func (s *StaticACL) OperatorWrite() bool {
return s.defaultAllow
}
func (s *StaticACL) ACLList() bool {
return s.allowManage
}
@ -140,6 +159,10 @@ func (s *StaticACL) ACLModify() bool {
return s.allowManage
}
func (s *StaticACL) Snapshot() bool {
return s.allowManage
}
// AllowAll returns an ACL rule that allows all operations
func AllowAll() ACL {
return allowAll
@ -188,10 +211,13 @@ type PolicyACL struct {
// preparedQueryRules contains the prepared query policies
preparedQueryRules *radix.Tree
// keyringRules contains the keyring policies. The keyring has
// keyringRule contains the keyring policies. The keyring has
// a very simple yes/no without prefix matching, so here we
// don't need to use a radix tree.
keyringRule string
// operatorRule contains the operator policies.
operatorRule string
}
// New is used to construct a policy based ACL from a set of policies
@ -228,6 +254,9 @@ func New(parent ACL, policy *Policy) (*PolicyACL, error) {
// Load the keyring policy
p.keyringRule = policy.Keyring
// Load the operator policy
p.operatorRule = policy.Operator
return p, nil
}
@ -422,6 +451,27 @@ func (p *PolicyACL) KeyringWrite() bool {
return p.parent.KeyringWrite()
}
// OperatorRead determines if the read-only operator functions are allowed.
func (p *PolicyACL) OperatorRead() bool {
switch p.operatorRule {
case PolicyRead, PolicyWrite:
return true
case PolicyDeny:
return false
default:
return p.parent.OperatorRead()
}
}
// OperatorWrite determines if the state-changing operator functions are
// allowed.
func (p *PolicyACL) OperatorWrite() bool {
if p.operatorRule == PolicyWrite {
return true
}
return p.parent.OperatorWrite()
}
// ACLList checks if listing of ACLs is allowed
func (p *PolicyACL) ACLList() bool {
return p.parent.ACLList()
@ -431,3 +481,8 @@ func (p *PolicyACL) ACLList() bool {
func (p *PolicyACL) ACLModify() bool {
return p.parent.ACLModify()
}
// Snapshot checks if taking and restoring snapshots is allowed.
func (p *PolicyACL) Snapshot() bool {
return p.parent.Snapshot()
}
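
As a hedged sketch of how the new `operator` rules behave end to end (mirroring the parser and policy tests later in this change), an `operator = "read"` policy compiled against a deny-all parent allows `OperatorRead` but not `OperatorWrite`. The rule string itself is illustrative.

```go
package acl_test

import (
	"testing"

	"github.com/hashicorp/consul/acl"
)

// Sketch: an `operator = "read"` rule compiled against a deny-all parent.
func TestOperatorReadOnlySketch(t *testing.T) {
	policy, err := acl.Parse(`operator = "read"`)
	if err != nil {
		t.Fatalf("err: %v", err)
	}
	a, err := acl.New(acl.DenyAll(), policy)
	if err != nil {
		t.Fatalf("err: %v", err)
	}
	if !a.OperatorRead() {
		t.Fatalf("operator read should be allowed")
	}
	if a.OperatorWrite() {
		t.Fatalf("operator write should not be allowed")
	}
}
```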

View File

@ -65,12 +65,21 @@ func TestStaticACL(t *testing.T) {
if !all.KeyringWrite() {
t.Fatalf("should allow")
}
if !all.OperatorRead() {
t.Fatalf("should allow")
}
if !all.OperatorWrite() {
t.Fatalf("should allow")
}
if all.ACLList() {
t.Fatalf("should not allow")
}
if all.ACLModify() {
t.Fatalf("should not allow")
}
if all.Snapshot() {
t.Fatalf("should not allow")
}
if none.KeyRead("foobar") {
t.Fatalf("should not allow")
@ -108,12 +117,21 @@ func TestStaticACL(t *testing.T) {
if none.KeyringWrite() {
t.Fatalf("should not allow")
}
if none.OperatorRead() {
t.Fatalf("should now allow")
}
if none.OperatorWrite() {
t.Fatalf("should not allow")
}
if none.ACLList() {
t.Fatalf("should not allow")
}
if none.ACLModify() {
t.Fatalf("should not allow")
}
if none.Snapshot() {
t.Fatalf("should not allow")
}
if !manage.KeyRead("foobar") {
t.Fatalf("should allow")
@ -145,12 +163,21 @@ func TestStaticACL(t *testing.T) {
if !manage.KeyringWrite() {
t.Fatalf("should allow")
}
if !manage.OperatorRead() {
t.Fatalf("should allow")
}
if !manage.OperatorWrite() {
t.Fatalf("should allow")
}
if !manage.ACLList() {
t.Fatalf("should allow")
}
if !manage.ACLModify() {
t.Fatalf("should allow")
}
if !manage.Snapshot() {
t.Fatalf("should allow")
}
}
func TestPolicyACL(t *testing.T) {
@ -477,22 +504,24 @@ func TestPolicyACL_Parent(t *testing.T) {
if acl.ACLModify() {
t.Fatalf("should not allow")
}
if acl.Snapshot() {
t.Fatalf("should not allow")
}
}
func TestPolicyACL_Keyring(t *testing.T) {
// Test keyring ACLs
type keyringcase struct {
inp string
read bool
write bool
}
keyringcases := []keyringcase{
cases := []keyringcase{
{"", false, false},
{PolicyRead, true, false},
{PolicyWrite, true, true},
{PolicyDeny, false, false},
}
for _, c := range keyringcases {
for _, c := range cases {
acl, err := New(DenyAll(), &Policy{Keyring: c.inp})
if err != nil {
t.Fatalf("bad: %s", err)
@ -505,3 +534,29 @@ func TestPolicyACL_Keyring(t *testing.T) {
}
}
}
func TestPolicyACL_Operator(t *testing.T) {
type operatorcase struct {
inp string
read bool
write bool
}
cases := []operatorcase{
{"", false, false},
{PolicyRead, true, false},
{PolicyWrite, true, true},
{PolicyDeny, false, false},
}
for _, c := range cases {
acl, err := New(DenyAll(), &Policy{Operator: c.inp})
if err != nil {
t.Fatalf("bad: %s", err)
}
if acl.OperatorRead() != c.read {
t.Fatalf("bad: %#v", c)
}
if acl.OperatorWrite() != c.write {
t.Fatalf("bad: %#v", c)
}
}
}

View File

@ -8,7 +8,7 @@ import (
)
// FaultFunc is a function used to fault in the parent,
// rules for an ACL given it's ID
// rules for an ACL given its ID
type FaultFunc func(id string) (string, string, error)
// aclEntry allows us to store the ACL with its policy ID
@ -21,9 +21,9 @@ type aclEntry struct {
// Cache is used to implement policy and ACL caching
type Cache struct {
faultfn FaultFunc
aclCache *lru.Cache // Cache id -> acl
policyCache *lru.Cache // Cache policy -> acl
ruleCache *lru.Cache // Cache rules -> policy
aclCache *lru.TwoQueueCache // Cache id -> acl
policyCache *lru.TwoQueueCache // Cache policy -> acl
ruleCache *lru.TwoQueueCache // Cache rules -> policy
}
// NewCache constructs a new policy and ACL cache of a given size
@ -31,9 +31,22 @@ func NewCache(size int, faultfn FaultFunc) (*Cache, error) {
if size <= 0 {
return nil, fmt.Errorf("Must provide positive cache size")
}
rc, _ := lru.New(size)
pc, _ := lru.New(size)
ac, _ := lru.New(size)
rc, err := lru.New2Q(size)
if err != nil {
return nil, err
}
pc, err := lru.New2Q(size)
if err != nil {
return nil, err
}
ac, err := lru.New2Q(size)
if err != nil {
return nil, err
}
c := &Cache{
faultfn: faultfn,
aclCache: ac,
@ -46,7 +59,7 @@ func NewCache(size int, faultfn FaultFunc) (*Cache, error) {
// GetPolicy is used to get a potentially cached policy set.
// If not cached, it will be parsed, and then cached.
func (c *Cache) GetPolicy(rules string) (*Policy, error) {
return c.getPolicy(c.ruleID(rules), rules)
return c.getPolicy(RuleID(rules), rules)
}
// getPolicy is an internal method to get a cached policy,
@ -66,8 +79,8 @@ func (c *Cache) getPolicy(id, rules string) (*Policy, error) {
}
// ruleID is used to generate an ID for a rule
func (c *Cache) ruleID(rules string) string {
// RuleID is used to generate an ID for a rule
func RuleID(rules string) string {
return fmt.Sprintf("%x", md5.Sum([]byte(rules)))
}
@ -112,7 +125,7 @@ func (c *Cache) GetACL(id string) (ACL, error) {
if err != nil {
return nil, err
}
ruleID := c.ruleID(rules)
ruleID := RuleID(rules)
// Check for a compiled ACL
policyID := c.policyID(parentID, ruleID)

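For context on the `FaultFunc` signature and the switch to `TwoQueueCache`, here is a hedged sketch of constructing a cache with a fault function in the style of the tests below; the ACL ID, rules, and cache size are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/acl"
)

func main() {
	// faultfn resolves an ACL ID to its parent policy name and raw rules
	// whenever the 2Q caches miss (hypothetical rules, for illustration).
	faultfn := func(id string) (string, string, error) {
		return "deny", `key "" { policy = "read" }`, nil
	}

	cache, err := acl.NewCache(16, faultfn)
	if err != nil {
		log.Fatal(err)
	}

	compiled, err := cache.GetACL("some-token-id")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(compiled.KeyRead("foo")) // true under the sketched rules
}
```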
View File

@ -5,7 +5,7 @@ import (
)
func TestCache_GetPolicy(t *testing.T) {
c, err := NewCache(1, nil)
c, err := NewCache(2, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -24,11 +24,23 @@ func TestCache_GetPolicy(t *testing.T) {
t.Fatalf("should be cached")
}
// Cache a new policy
// Work with some new policies to evict the original one
_, err = c.GetPolicy(testSimplePolicy)
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetPolicy(testSimplePolicy)
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetPolicy(testSimplePolicy2)
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetPolicy(testSimplePolicy2)
if err != nil {
t.Fatalf("err: %v", err)
}
// Test invalidation of p
p3, err := c.GetPolicy("")
@ -44,12 +56,13 @@ func TestCache_GetACL(t *testing.T) {
policies := map[string]string{
"foo": testSimplePolicy,
"bar": testSimplePolicy2,
"baz": testSimplePolicy3,
}
faultfn := func(id string) (string, string, error) {
return "deny", policies[id], nil
}
c, err := NewCache(1, faultfn)
c, err := NewCache(2, faultfn)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -80,6 +93,18 @@ func TestCache_GetACL(t *testing.T) {
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetACL("bar")
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetACL("baz")
if err != nil {
t.Fatalf("err: %v", err)
}
_, err = c.GetACL("baz")
if err != nil {
t.Fatalf("err: %v", err)
}
acl3, err := c.GetACL("foo")
if err != nil {
@ -100,7 +125,7 @@ func TestCache_ClearACL(t *testing.T) {
return "deny", policies[id], nil
}
c, err := NewCache(1, faultfn)
c, err := NewCache(16, faultfn)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -135,7 +160,7 @@ func TestCache_Purge(t *testing.T) {
return "deny", policies[id], nil
}
c, err := NewCache(1, faultfn)
c, err := NewCache(16, faultfn)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -167,7 +192,7 @@ func TestCache_GetACLPolicy(t *testing.T) {
faultfn := func(id string) (string, string, error) {
return "deny", policies[id], nil
}
c, err := NewCache(1, faultfn)
c, err := NewCache(16, faultfn)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -220,7 +245,7 @@ func TestCache_GetACL_Parent(t *testing.T) {
return "", "", nil
}
c, err := NewCache(1, faultfn)
c, err := NewCache(16, faultfn)
if err != nil {
t.Fatalf("err: %v", err)
}
@ -296,3 +321,8 @@ key "bar/" {
policy = "read"
}
`
var testSimplePolicy3 = `
key "baz/" {
policy = "read"
}
`

View File

@ -21,6 +21,7 @@ type Policy struct {
Events []*EventPolicy `hcl:"event,expand"`
PreparedQueries []*PreparedQueryPolicy `hcl:"query,expand"`
Keyring string `hcl:"keyring"`
Operator string `hcl:"operator"`
}
// KeyPolicy represents a policy for a key
@ -125,5 +126,10 @@ func Parse(rules string) (*Policy, error) {
return nil, fmt.Errorf("Invalid keyring policy: %#v", p.Keyring)
}
// Validate the operator policy - this one is allowed to be empty
if p.Operator != "" && !isPolicyValid(p.Operator) {
return nil, fmt.Errorf("Invalid operator policy: %#v", p.Operator)
}
return p, nil
}

View File

@ -45,6 +45,7 @@ query "bar" {
policy = "deny"
}
keyring = "deny"
operator = "deny"
`
exp := &Policy{
Keys: []*KeyPolicy{
@ -103,7 +104,8 @@ keyring = "deny"
Policy: PolicyDeny,
},
},
Keyring: PolicyDeny,
Keyring: PolicyDeny,
Operator: PolicyDeny,
}
out, err := Parse(inp)
@ -162,7 +164,8 @@ func TestACLPolicy_Parse_JSON(t *testing.T) {
"policy": "deny"
}
},
"keyring": "deny"
"keyring": "deny",
"operator": "deny"
}`
exp := &Policy{
Keys: []*KeyPolicy{
@ -221,7 +224,8 @@ func TestACLPolicy_Parse_JSON(t *testing.T) {
Policy: PolicyDeny,
},
},
Keyring: PolicyDeny,
Keyring: PolicyDeny,
Operator: PolicyDeny,
}
out, err := Parse(inp)
@ -252,6 +256,24 @@ keyring = ""
}
}
func TestACLPolicy_Operator_Empty(t *testing.T) {
inp := `
operator = ""
`
exp := &Policy{
Operator: "",
}
out, err := Parse(inp)
if err != nil {
t.Fatalf("err: %v", err)
}
if !reflect.DeepEqual(out, exp) {
t.Fatalf("bad: %#v %#v", out, exp)
}
}
func TestACLPolicy_Bad_Policy(t *testing.T) {
cases := []string{
`key "" { policy = "nope" }`,
@ -259,6 +281,7 @@ func TestACLPolicy_Bad_Policy(t *testing.T) {
`event "" { policy = "nope" }`,
`query "" { policy = "nope" }`,
`keyring = "nope"`,
`operator = "nope"`,
}
for _, c := range cases {
_, err := Parse(c)

View File

@ -62,8 +62,7 @@ type AgentCheckRegistration struct {
AgentServiceCheck
}
// AgentServiceCheck is used to create an associated
// check for a service
// AgentServiceCheck is used to define a node or service level check
type AgentServiceCheck struct {
Script string `json:",omitempty"`
DockerContainerID string `json:",omitempty"`
@ -74,6 +73,14 @@ type AgentServiceCheck struct {
HTTP string `json:",omitempty"`
TCP string `json:",omitempty"`
Status string `json:",omitempty"`
// In Consul 0.7 and later, checks that are associated with a service
// may also contain this optional DeregisterCriticalServiceAfter field,
// which is a timeout in the same Go time format as Interval and TTL. If
// a check is in the critical state for more than this configured value,
// then its associated service (and all of its associated checks) will
// automatically be deregistered.
DeregisterCriticalServiceAfter string `json:",omitempty"`
}
type AgentServiceChecks []*AgentServiceCheck
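
To show the new field in use through the Go client, here is a hedged sketch that registers a service whose TTL check, if left critical for the configured window, causes the service and its checks to be deregistered automatically; the service name, port, and durations are illustrative.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// A TTL check that deregisters the service after 90 minutes in the
	// critical state.
	reg := &api.AgentServiceRegistration{
		Name: "redis",
		Port: 6379,
		Check: &api.AgentServiceCheck{
			TTL:                            "15s",
			DeregisterCriticalServiceAfter: "90m",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```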

View File

@ -455,6 +455,13 @@ func TestAgent_Checks_serviceBound(t *testing.T) {
ServiceID: "redis",
}
reg.TTL = "15s"
reg.DeregisterCriticalServiceAfter = "nope"
err := agent.CheckRegister(reg)
if err == nil || !strings.Contains(err.Error(), "invalid duration") {
t.Fatalf("err: %v", err)
}
reg.DeregisterCriticalServiceAfter = "90m"
if err := agent.CheckRegister(reg); err != nil {
t.Fatalf("err: %v", err)
}

View File

@ -80,6 +80,9 @@ type QueryMeta struct {
// How long did the request take
RequestTime time.Duration
// Is address translation enabled for HTTP responses on this agent
AddressTranslationEnabled bool
}
// WriteMeta is used to return meta data about a write
@ -330,6 +333,7 @@ type request struct {
url *url.URL
params url.Values
body io.Reader
header http.Header
obj interface{}
}
@ -355,7 +359,7 @@ func (r *request) setQueryOptions(q *QueryOptions) {
r.params.Set("wait", durToMsec(q.WaitTime))
}
if q.Token != "" {
r.params.Set("token", q.Token)
r.header.Set("X-Consul-Token", q.Token)
}
if q.Near != "" {
r.params.Set("near", q.Near)
@ -399,7 +403,7 @@ func (r *request) setWriteOptions(q *WriteOptions) {
r.params.Set("dc", q.Datacenter)
}
if q.Token != "" {
r.params.Set("token", q.Token)
r.header.Set("X-Consul-Token", q.Token)
}
}
@ -426,6 +430,7 @@ func (r *request) toHTTP() (*http.Request, error) {
req.URL.Host = r.url.Host
req.URL.Scheme = r.url.Scheme
req.Host = r.url.Host
req.Header = r.header
// Setup auth
if r.config.HttpAuth != nil {
@ -446,6 +451,7 @@ func (c *Client) newRequest(method, path string) *request {
Path: path,
},
params: make(map[string][]string),
header: make(http.Header),
}
if c.config.Datacenter != "" {
r.params.Set("dc", c.config.Datacenter)
@ -454,7 +460,7 @@ func (c *Client) newRequest(method, path string) *request {
r.params.Set("wait", durToMsec(r.config.WaitTime))
}
if c.config.Token != "" {
r.params.Set("token", r.config.Token)
r.header.Set("X-Consul-Token", r.config.Token)
}
return r
}
@ -539,6 +545,15 @@ func parseQueryMeta(resp *http.Response, q *QueryMeta) error {
default:
q.KnownLeader = false
}
// Parse X-Consul-Translate-Addresses
switch header.Get("X-Consul-Translate-Addresses") {
case "true":
q.AddressTranslationEnabled = true
default:
q.AddressTranslationEnabled = false
}
return nil
}
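
A hedged sketch of what these client changes look like from calling code: the token is now carried in the `X-Consul-Token` header rather than a `?token=` query parameter, and `QueryMeta` reports whether the agent translated addresses in the response. The token value and default client configuration are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The token set here is sent as the X-Consul-Token header (illustrative value).
	opts := &api.QueryOptions{Token: "4085f1bb-6c3e-4be5-9c35-ab6b5b4e3c47"}

	nodes, meta, err := client.Catalog().Nodes(opts)
	if err != nil {
		log.Fatal(err)
	}
	// AddressTranslationEnabled reflects the X-Consul-Translate-Addresses header.
	fmt.Println(len(nodes), meta.AddressTranslationEnabled)
}
```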

View File

@ -247,8 +247,8 @@ func TestSetQueryOptions(t *testing.T) {
if r.params.Get("wait") != "100000ms" {
t.Fatalf("bad: %v", r.params)
}
if r.params.Get("token") != "12345" {
t.Fatalf("bad: %v", r.params)
if r.header.Get("X-Consul-Token") != "12345" {
t.Fatalf("bad: %v", r.header)
}
if r.params.Get("near") != "nodex" {
t.Fatalf("bad: %v", r.params)
@ -270,8 +270,8 @@ func TestSetWriteOptions(t *testing.T) {
if r.params.Get("dc") != "foo" {
t.Fatalf("bad: %v", r.params)
}
if r.params.Get("token") != "23456" {
t.Fatalf("bad: %v", r.params)
if r.header.Get("X-Consul-Token") != "23456" {
t.Fatalf("bad: %v", r.header)
}
}
@ -306,6 +306,7 @@ func TestParseQueryMeta(t *testing.T) {
resp.Header.Set("X-Consul-Index", "12345")
resp.Header.Set("X-Consul-LastContact", "80")
resp.Header.Set("X-Consul-KnownLeader", "true")
resp.Header.Set("X-Consul-Translate-Addresses", "true")
qm := &QueryMeta{}
if err := parseQueryMeta(resp, qm); err != nil {
@ -321,6 +322,9 @@ func TestParseQueryMeta(t *testing.T) {
if !qm.KnownLeader {
t.Fatalf("Bad: %v", qm)
}
if !qm.AddressTranslationEnabled {
t.Fatalf("Bad: %v", qm)
}
}
func TestAPI_UnixSocket(t *testing.T) {

View File

@ -1,13 +1,15 @@
package api
type Node struct {
Node string
Address string
Node string
Address string
TaggedAddresses map[string]string
}
type CatalogService struct {
Node string
Address string
TaggedAddresses map[string]string
ServiceID string
ServiceName string
ServiceAddress string
@ -22,11 +24,12 @@ type CatalogNode struct {
}
type CatalogRegistration struct {
Node string
Address string
Datacenter string
Service *AgentService
Check *AgentCheck
Node string
Address string
TaggedAddresses map[string]string
Datacenter string
Service *AgentService
Check *AgentCheck
}
type CatalogDeregistration struct {

View File

@ -31,7 +31,6 @@ func TestCatalog_Datacenters(t *testing.T) {
}
func TestCatalog_Nodes(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
@ -51,6 +50,10 @@ func TestCatalog_Nodes(t *testing.T) {
return false, fmt.Errorf("Bad: %v", nodes)
}
if _, ok := nodes[0].TaggedAddresses["wan"]; !ok {
return false, fmt.Errorf("Bad: %v", nodes[0])
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
@ -128,10 +131,15 @@ func TestCatalog_Node(t *testing.T) {
if meta.LastIndex == 0 {
return false, fmt.Errorf("Bad: %v", meta)
}
if len(info.Services) == 0 {
return false, fmt.Errorf("Bad: %v", info)
}
if _, ok := info.Node.TaggedAddresses["wan"]; !ok {
return false, fmt.Errorf("Bad: %v", info)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)

View File

@ -8,7 +8,6 @@ const (
// HealthAny is special, and is used as a wild card,
// not as a specific state.
HealthAny = "any"
HealthUnknown = "unknown"
HealthPassing = "passing"
HealthWarning = "warning"
HealthCritical = "critical"
@ -122,7 +121,6 @@ func (h *Health) State(state string, q *QueryOptions) ([]*HealthCheck, *QueryMet
case HealthWarning:
case HealthCritical:
case HealthPassing:
case HealthUnknown:
default:
return nil, nil, fmt.Errorf("Unsupported state: %v", state)
}
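
With the `unknown` state removed, the supported states are `any`, `passing`, `warning`, and `critical`; a hedged sketch of querying one of them through the Go client follows (default client configuration is illustrative).

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List every check currently in the critical state; passing an
	// unsupported state (such as the removed "unknown") returns an error.
	checks, _, err := client.Health().State(api.HealthCritical, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, check := range checks {
		fmt.Println(check.Node, check.CheckID, check.Status)
	}
}
```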

View File

@ -76,7 +76,6 @@ func TestHealth_Checks(t *testing.T) {
}
func TestHealth_Service(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
@ -94,6 +93,9 @@ func TestHealth_Service(t *testing.T) {
if len(checks) == 0 {
return false, fmt.Errorf("Bad: %v", checks)
}
if _, ok := checks[0].Node.TaggedAddresses["wan"]; !ok {
return false, fmt.Errorf("Bad: %v", checks[0].Node)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)

191
api/kv.go
View File

@ -11,18 +11,77 @@ import (
// KVPair is used to represent a single K/V entry
type KVPair struct {
Key string
// Key is the name of the key. It is also part of the URL path when accessed
// via the API.
Key string
// CreateIndex holds the index corresponding to the creation of this KVPair. This
// is a read-only field.
CreateIndex uint64
// ModifyIndex is used for the Check-And-Set operations and can also be fed
// back into the WaitIndex of the QueryOptions in order to perform blocking
// queries.
ModifyIndex uint64
LockIndex uint64
Flags uint64
Value []byte
Session string
// LockIndex holds the index corresponding to a lock on this key, if any. This
// is a read-only field.
LockIndex uint64
// Flags are any user-defined flags on the key. It is up to the implementer
// to check these values, since Consul does not treat them specially.
Flags uint64
// Value is the value for the key. This can be any value, but it will be
// base64 encoded upon transport.
Value []byte
// Session is a string representing the ID of the session. Any other
// interactions with this key over the same session must specify the same
// session ID.
Session string
}
// KVPairs is a list of KVPair objects
type KVPairs []*KVPair
// KVOp constants give possible operations available in a KVTxn.
type KVOp string
const (
KVSet KVOp = "set"
KVDelete = "delete"
KVDeleteCAS = "delete-cas"
KVDeleteTree = "delete-tree"
KVCAS = "cas"
KVLock = "lock"
KVUnlock = "unlock"
KVGet = "get"
KVGetTree = "get-tree"
KVCheckSession = "check-session"
KVCheckIndex = "check-index"
)
// KVTxnOp defines a single operation inside a transaction.
type KVTxnOp struct {
Verb string
Key string
Value []byte
Flags uint64
Index uint64
Session string
}
// KVTxnOps defines a set of operations to be performed inside a single
// transaction.
type KVTxnOps []*KVTxnOp
// KVTxnResponse has the outcome of a transaction.
type KVTxnResponse struct {
Results []*KVPair
Errors TxnErrors
}
// KV is used to manipulate the K/V API
type KV struct {
c *Client
@ -33,7 +92,8 @@ func (c *Client) KV() *KV {
return &KV{c}
}
// Get is used to lookup a single key
// Get is used to lookup a single key. The returned pointer
// to the KVPair will be nil if the key does not exist.
func (k *KV) Get(key string, q *QueryOptions) (*KVPair, *QueryMeta, error) {
resp, qm, err := k.getInternal(key, nil, q)
if err != nil {
@ -238,3 +298,122 @@ func (k *KV) deleteInternal(key string, params map[string]string, q *WriteOption
res := strings.Contains(string(buf.Bytes()), "true")
return res, qm, nil
}
// TxnOp is the internal format we send to Consul. It's not specific to KV,
// though currently only KV operations are supported.
type TxnOp struct {
KV *KVTxnOp
}
// TxnOps is a list of transaction operations.
type TxnOps []*TxnOp
// TxnResult is the internal format we receive from Consul.
type TxnResult struct {
KV *KVPair
}
// TxnResults is a list of TxnResult objects.
type TxnResults []*TxnResult
// TxnError is used to return information about an operation in a transaction.
type TxnError struct {
OpIndex int
What string
}
// TxnErrors is a list of TxnError objects.
type TxnErrors []*TxnError
// TxnResponse is the internal format we receive from Consul.
type TxnResponse struct {
Results TxnResults
Errors TxnErrors
}
// Txn is used to apply multiple KV operations in a single, atomic transaction.
//
// Note that Go will perform the required base64 encoding on the values
// automatically because the type is a byte slice. Transactions are defined as a
// list of operations to perform, using the KVOp constants and KVTxnOp structure
// to define operations. If any operation fails, none of the changes are applied
// to the state store. Note that this hides the internal raw transaction interface
// and munges the input and output types into KV-specific ones for ease of use.
// If there are more non-KV operations in the future we may break out a new
// transaction API client, but it will be easy to keep this KV-specific variant
// supported.
//
// Even though this is generally a write operation, we take a QueryOptions input
// and return a QueryMeta output. If the transaction contains only read ops, then
// Consul will fast-path it to a different endpoint internally which supports
// consistency controls, but not blocking. If there are write operations then
// the request will always be routed through raft and any consistency settings
// will be ignored.
//
// Here's an example:
//
// ops := KVTxnOps{
// &KVTxnOp{
// Verb: KVLock,
// Key: "test/lock",
// Session: "adf4238a-882b-9ddc-4a9d-5b6758e4159e",
// Value: []byte("hello"),
// },
// &KVTxnOp{
// Verb: KVGet,
// Key: "another/key",
// },
// }
// ok, response, _, err := kv.Txn(&ops, nil)
//
// If there is a problem making the transaction request then an error will be
// returned. Otherwise, the ok value will be true if the transaction succeeded
// or false if it was rolled back. The response is a structured return value which
// will have the outcome of the transaction. Its Results member will have entries
// for each operation. Deleted keys will have a nil entry in the results, and to save
// space, the Value of each key in the Results will be nil unless the operation
// is a KVGet. If the transaction was rolled back, the Errors member will have
// entries referencing the index of the operation that failed along with an error
// message.
func (k *KV) Txn(txn KVTxnOps, q *QueryOptions) (bool, *KVTxnResponse, *QueryMeta, error) {
r := k.c.newRequest("PUT", "/v1/txn")
r.setQueryOptions(q)
// Convert into the internal format since this is an all-KV txn.
ops := make(TxnOps, 0, len(txn))
for _, kvOp := range txn {
ops = append(ops, &TxnOp{KV: kvOp})
}
r.obj = ops
rtt, resp, err := k.c.doRequest(r)
if err != nil {
return false, nil, nil, err
}
defer resp.Body.Close()
qm := &QueryMeta{}
parseQueryMeta(resp, qm)
qm.RequestTime = rtt
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusConflict {
var txnResp TxnResponse
if err := decodeBody(resp, &txnResp); err != nil {
return false, nil, nil, err
}
// Convert from the internal format.
kvResp := KVTxnResponse{
Errors: txnResp.Errors,
}
for _, result := range txnResp.Results {
kvResp.Results = append(kvResp.Results, result.KV)
}
return resp.StatusCode == http.StatusOK, &kvResp, qm, nil
}
var buf bytes.Buffer
if _, err := io.Copy(&buf, resp.Body); err != nil {
return false, nil, nil, fmt.Errorf("Failed to read response: %v", err)
}
return false, nil, nil, fmt.Errorf("Failed request: %s", buf.String())
}

View File

@ -3,6 +3,7 @@ package api
import (
"bytes"
"path"
"strings"
"testing"
"time"
)
@ -243,6 +244,7 @@ func TestClient_WatchGet(t *testing.T) {
// Put the key
value := []byte("test")
doneCh := make(chan struct{})
go func() {
kv := c.KV()
@ -251,6 +253,7 @@ func TestClient_WatchGet(t *testing.T) {
if _, err := kv.Put(p, nil); err != nil {
t.Fatalf("err: %v", err)
}
doneCh <- struct{}{}
}()
// Get should work
@ -271,6 +274,9 @@ func TestClient_WatchGet(t *testing.T) {
if meta2.LastIndex <= meta.LastIndex {
t.Fatalf("unexpected value: %#v", meta2)
}
// Block until put finishes to avoid a race between it and deferred s.Stop()
<-doneCh
}
func TestClient_WatchList(t *testing.T) {
@ -296,6 +302,7 @@ func TestClient_WatchList(t *testing.T) {
// Put the key
value := []byte("test")
doneCh := make(chan struct{})
go func() {
kv := c.KV()
@ -304,6 +311,7 @@ func TestClient_WatchList(t *testing.T) {
if _, err := kv.Put(p, nil); err != nil {
t.Fatalf("err: %v", err)
}
doneCh <- struct{}{}
}()
// Get should work
@ -325,6 +333,8 @@ func TestClient_WatchList(t *testing.T) {
t.Fatalf("unexpected value: %#v", meta2)
}
// Block until put finishes to avoid a race between it and deferred s.Stop()
<-doneCh
}
func TestClient_Keys_DeleteRecurse(t *testing.T) {
@ -445,3 +455,120 @@ func TestClient_AcquireRelease(t *testing.T) {
t.Fatalf("unexpected value: %#v", meta)
}
}
func TestClient_Txn(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
session := c.Session()
kv := c.KV()
// Make a session.
id, _, err := session.CreateNoChecks(nil, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
defer session.Destroy(id, nil)
// Acquire and get the key via a transaction, but don't supply a valid
// session.
key := testKey()
value := []byte("test")
txn := KVTxnOps{
&KVTxnOp{
Verb: KVLock,
Key: key,
Value: value,
},
&KVTxnOp{
Verb: KVGet,
Key: key,
},
}
ok, ret, _, err := kv.Txn(txn, nil)
if err != nil {
t.Fatalf("err: %v", err)
} else if ok {
t.Fatalf("transaction should have failed")
}
if ret == nil || len(ret.Errors) != 2 || len(ret.Results) != 0 {
t.Fatalf("bad: %v", ret)
}
if ret.Errors[0].OpIndex != 0 ||
!strings.Contains(ret.Errors[0].What, "missing session") ||
!strings.Contains(ret.Errors[1].What, "doesn't exist") {
t.Fatalf("bad: %v", ret.Errors[0])
}
// Now poke in a real session and try again.
txn[0].Session = id
ok, ret, _, err = kv.Txn(txn, nil)
if err != nil {
t.Fatalf("err: %v", err)
} else if !ok {
t.Fatalf("transaction failure")
}
if ret == nil || len(ret.Errors) != 0 || len(ret.Results) != 2 {
t.Fatalf("bad: %v", ret)
}
for i, result := range ret.Results {
var expected []byte
if i == 1 {
expected = value
}
if result.Key != key ||
!bytes.Equal(result.Value, expected) ||
result.Session != id ||
result.LockIndex != 1 {
t.Fatalf("bad: %v", result)
}
}
// Run a read-only transaction.
txn = KVTxnOps{
&KVTxnOp{
Verb: KVGet,
Key: key,
},
}
ok, ret, _, err = kv.Txn(txn, nil)
if err != nil {
t.Fatalf("err: %v", err)
} else if !ok {
t.Fatalf("transaction failure")
}
if ret == nil || len(ret.Errors) != 0 || len(ret.Results) != 1 {
t.Fatalf("bad: %v", ret)
}
for _, result := range ret.Results {
if result.Key != key ||
!bytes.Equal(result.Value, value) ||
result.Session != id ||
result.LockIndex != 1 {
t.Fatalf("bad: %v", result)
}
}
// Sanity check using the regular GET API.
pair, meta, err := kv.Get(key, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if pair == nil {
t.Fatalf("expected value: %#v", pair)
}
if pair.LockIndex != 1 {
t.Fatalf("Expected lock: %v", pair)
}
if pair.Session != id {
t.Fatalf("Expected lock: %v", pair)
}
if meta.LastIndex == 0 {
t.Fatalf("unexpected value: %#v", meta)
}
}

View File

@ -72,8 +72,9 @@ type LockOptions struct {
Key string // Must be set and have write permissions
Value []byte // Optional, value to associate with the lock
Session string // Optional, created if not specified
SessionName string // Optional, defaults to DefaultLockSessionName
SessionTTL string // Optional, defaults to DefaultLockSessionTTL
SessionOpts *SessionEntry // Optional, options to use when creating a session
SessionName string // Optional, defaults to DefaultLockSessionName (ignored if SessionOpts is given)
SessionTTL string // Optional, defaults to DefaultLockSessionTTL (ignored if SessionOpts is given)
MonitorRetries int // Optional, defaults to 0 which means no retries
MonitorRetryTime time.Duration // Optional, defaults to DefaultMonitorRetryTime
LockWaitTime time.Duration // Optional, defaults to DefaultLockWaitTime
@ -329,9 +330,12 @@ func (l *Lock) Destroy() error {
// createSession is used to create a new managed session
func (l *Lock) createSession() (string, error) {
session := l.c.Session()
se := &SessionEntry{
Name: l.opts.SessionName,
TTL: l.opts.SessionTTL,
se := l.opts.SessionOpts
if se == nil {
se = &SessionEntry{
Name: l.opts.SessionName,
TTL: l.opts.SessionTTL,
}
}
id, _, err := session.Create(se, nil)
if err != nil {

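The change above lets callers supply a full SessionEntry for the lock's managed session instead of just a name and TTL. A minimal sketch of the new SessionOpts field in use; the key, session name, and TTL below are placeholders:

```go
// Sketch only: acquire a lock whose managed session is built from explicit
// SessionEntry options.
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	lock, err := client.LockOpts(&api.LockOptions{
		Key: "service/leader",
		SessionOpts: &api.SessionEntry{
			Name:     "leader-election",
			TTL:      "15s",
			Behavior: api.SessionBehaviorDelete,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Block until the lock is acquired, then hold it until the session is lost.
	lost, err := lock.Lock(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer lock.Unlock()
	<-lost
	log.Println("lock lost")
}
```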
View File

@ -139,7 +139,7 @@ func TestLock_DeleteKey(t *testing.T) {
// Should lose leadership
select {
case <-leaderCh:
case <-time.After(time.Second):
case <-time.After(10 * time.Second):
t.Fatalf("should not be leader")
}
}()

81
api/operator.go Normal file
View File

@ -0,0 +1,81 @@
package api
// Operator can be used to perform low-level operator tasks for Consul.
type Operator struct {
c *Client
}
// Operator returns a handle to the operator endpoints.
func (c *Client) Operator() *Operator {
return &Operator{c}
}
// RaftServer has information about a server in the Raft configuration.
type RaftServer struct {
// ID is the unique ID for the server. These are currently the same
// as the address, but they will be changed to a real GUID in a future
// release of Consul.
ID string
// Node is the node name of the server, as known by Consul; if the name is
// not known, this will be set to "(unknown)".
Node string
// Address is the IP:port of the server, used for Raft communications.
Address string
// Leader is true if this server is the current cluster leader.
Leader bool
// Voter is true if this server has a vote in the cluster. This might
// be false if the server is staging and still coming online, or if
// it's a non-voting server, which will be added in a future release of
// Consul.
Voter bool
}
// RaftConfiguration is returned when querying for the current Raft configuration.
type RaftConfiguration struct {
// Servers has the list of servers in the Raft configuration.
Servers []*RaftServer
// Index has the Raft index of this configuration.
Index uint64
}
// RaftGetConfiguration is used to query the current Raft peer set.
func (op *Operator) RaftGetConfiguration(q *QueryOptions) (*RaftConfiguration, error) {
r := op.c.newRequest("GET", "/v1/operator/raft/configuration")
r.setQueryOptions(q)
_, resp, err := requireOK(op.c.doRequest(r))
if err != nil {
return nil, err
}
defer resp.Body.Close()
var out RaftConfiguration
if err := decodeBody(resp, &out); err != nil {
return nil, err
}
return &out, nil
}
// RaftRemovePeerByAddress is used to kick a stale peer (one that is in the Raft
// quorum but no longer known to Serf or the catalog) by address in the form of
// "IP:port".
func (op *Operator) RaftRemovePeerByAddress(address string, q *WriteOptions) error {
r := op.c.newRequest("DELETE", "/v1/operator/raft/peer")
r.setWriteOptions(q)
// TODO (slackpad) Currently the address is passed as a query parameter. Once
// IDs are in place this will be DELETE /v1/operator/raft/peer/<id>.
r.params.Set("address", string(address))
_, resp, err := requireOK(op.c.doRequest(r))
if err != nil {
return err
}
resp.Body.Close()
return nil
}
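A minimal usage sketch of the new Operator client, assuming a local agent; the peer address passed to RaftRemovePeerByAddress is a placeholder:

```go
// Sketch only: read the Raft configuration and remove a stale peer by address.
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	op := client.Operator()

	conf, err := op.RaftGetConfiguration(nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range conf.Servers {
		fmt.Printf("%s leader=%v voter=%v\n", s.Address, s.Leader, s.Voter)
	}

	// Kick a peer that is still in the Raft quorum but gone from the catalog.
	if err := op.RaftRemovePeerByAddress("10.0.0.9:8300", nil); err != nil {
		log.Printf("remove peer: %v", err)
	}
}
```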

38
api/operator_test.go Normal file
View File

@ -0,0 +1,38 @@
package api
import (
"strings"
"testing"
)
func TestOperator_RaftGetConfiguration(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
operator := c.Operator()
out, err := operator.RaftGetConfiguration(nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if len(out.Servers) != 1 ||
!out.Servers[0].Leader ||
!out.Servers[0].Voter {
t.Fatalf("bad: %v", out)
}
}
func TestOperator_RaftRemovePeerByAddress(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
// If we get this error, it proves we sent the address all the way
// through.
operator := c.Operator()
err := operator.RaftRemovePeerByAddress("nope", nil)
if err == nil || !strings.Contains(err.Error(),
"address \"nope\" was not found in the Raft configuration") {
t.Fatalf("err: %v", err)
}
}

View File

@ -25,6 +25,11 @@ type ServiceQuery struct {
// Service is the service to query.
Service string
// Near allows baking in the name of a node to automatically distance-
// sort from. The magic "_agent" value is supported, which sorts near
// the agent which initiated the request by default.
Near string
// Failover controls what we do if there are no healthy nodes in the
// local datacenter.
Failover QueryDatacenterOptions
@ -40,6 +45,17 @@ type ServiceQuery struct {
Tags []string
}
// QueryTemplate carries the arguments for creating a templated query.
type QueryTemplate struct {
// Type specifies the type of the query template. Currently only
// "name_prefix_match" is supported. This field is required.
Type string
// Regexp allows specifying a regex pattern to match against the name
// of the query being executed.
Regexp string
}
// PreparedQueryDefinition defines a complete prepared query.
type PreparedQueryDefinition struct {
// ID is this UUID-based ID for the query, always generated by Consul.
@ -67,6 +83,11 @@ type PreparedQueryDefinition struct {
// DNS has options that control how the results of this query are
// served over DNS.
DNS QueryDNSOptions
// Template is used to pass through the arguments for creating a
// prepared query with an attached template. If a template is given,
// interpolations are possible in other struct fields.
Template QueryTemplate
}
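With the Template field in place, a templated query can be registered through the api client. A minimal sketch, assuming the PreparedQuery endpoint's Create method; the query name, regexp, and ${match(n)} interpolations are illustrative placeholders following the name_prefix_match template form:

```go
// Sketch only: register a templated prepared query.
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	def := &api.PreparedQueryDefinition{
		Name: "geo-db",
		Template: api.QueryTemplate{
			Type:   "name_prefix_match",
			Regexp: "^geo-db-(.*?)-([^\\-]+?)$",
		},
		Service: api.ServiceQuery{
			Service: "mysql-${match(1)}",
			Tags:    []string{"${match(2)}"},
		},
	}
	id, _, err := client.PreparedQuery().Create(def, nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created templated query %s", id)
}
```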
// PreparedQueryExecuteResponse has the results of executing a query.

View File

@ -17,6 +17,9 @@ func TestPreparedQuery(t *testing.T) {
Datacenter: "dc1",
Node: "foobar",
Address: "192.168.10.10",
TaggedAddresses: map[string]string{
"wan": "127.0.0.1",
},
Service: &AgentService{
ID: "redis1",
Service: "redis",
@ -96,6 +99,9 @@ func TestPreparedQuery(t *testing.T) {
if len(results.Nodes) != 1 || results.Nodes[0].Node.Node != "foobar" {
t.Fatalf("bad: %v", results)
}
if wan, ok := results.Nodes[0].Node.TaggedAddresses["wan"]; !ok || wan != "127.0.0.1" {
t.Fatalf("bad: %v", results)
}
// Execute by name.
results, _, err = query.Execute("my-query", nil)
@ -105,6 +111,9 @@ func TestPreparedQuery(t *testing.T) {
if len(results.Nodes) != 1 || results.Nodes[0].Node.Node != "foobar" {
t.Fatalf("bad: %v", results)
}
if wan, ok := results.Nodes[0].Node.TaggedAddresses["wan"]; !ok || wan != "127.0.0.1" {
t.Fatalf("bad: %v", results)
}
// Delete it.
_, err = query.Delete(def.ID, nil)

47
api/snapshot.go Normal file
View File

@ -0,0 +1,47 @@
package api
import (
"io"
)
// Snapshot can be used to query the /v1/snapshot endpoint to take snapshots of
// Consul's internal state and restore snapshots for disaster recovery.
type Snapshot struct {
c *Client
}
// Snapshot returns a handle that exposes the snapshot endpoints.
func (c *Client) Snapshot() *Snapshot {
return &Snapshot{c}
}
// Save requests a new snapshot and provides an io.ReadCloser with the snapshot
// data to save. If this doesn't return an error, then it's the responsibility
// of the caller to close it. Only a subset of the QueryOptions are supported:
// Datacenter, AllowStale, and Token.
func (s *Snapshot) Save(q *QueryOptions) (io.ReadCloser, *QueryMeta, error) {
r := s.c.newRequest("GET", "/v1/snapshot")
r.setQueryOptions(q)
rtt, resp, err := requireOK(s.c.doRequest(r))
if err != nil {
return nil, nil, err
}
qm := &QueryMeta{}
parseQueryMeta(resp, qm)
qm.RequestTime = rtt
return resp.Body, qm, nil
}
// Restore streams in an existing snapshot and attempts to restore it.
func (s *Snapshot) Restore(q *WriteOptions, in io.Reader) error {
r := s.c.newRequest("PUT", "/v1/snapshot")
r.body = in
r.setWriteOptions(q)
_, _, err := requireOK(s.c.doRequest(r))
if err != nil {
return err
}
return nil
}
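A minimal sketch of the new Snapshot client used for a save-to-disk and restore round trip, assuming a local agent; the file path is a placeholder:

```go
// Sketch only: save a snapshot to a file, then restore it.
package main

import (
	"io"
	"log"
	"os"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	snap := client.Snapshot()

	// Save: the caller owns the ReadCloser and must close it.
	rc, _, err := snap.Save(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	f, err := os.Create("/tmp/consul.snap")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := io.Copy(f, rc); err != nil {
		log.Fatal(err)
	}

	// Restore: stream the saved file back into the cluster.
	in, err := os.Open("/tmp/consul.snap")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	if err := snap.Restore(nil, in); err != nil {
		log.Fatal(err)
	}
}
```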

134
api/snapshot_test.go Normal file
View File

@ -0,0 +1,134 @@
package api
import (
"bytes"
"strings"
"testing"
)
func TestSnapshot(t *testing.T) {
t.Parallel()
c, s := makeClient(t)
defer s.Stop()
// Place an initial key into the store.
kv := c.KV()
key := &KVPair{Key: testKey(), Value: []byte("hello")}
if _, err := kv.Put(key, nil); err != nil {
t.Fatalf("err: %v", err)
}
// Make sure it reads back.
pair, _, err := kv.Get(key.Key, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if pair == nil {
t.Fatalf("expected value: %#v", pair)
}
if !bytes.Equal(pair.Value, []byte("hello")) {
t.Fatalf("unexpected value: %#v", pair)
}
// Take a snapshot.
snapshot := c.Snapshot()
snap, qm, err := snapshot.Save(nil)
if err != nil {
t.Fatalf("err: %v", err)
}
defer snap.Close()
// Sanity check the query metadata.
if qm.LastIndex == 0 || !qm.KnownLeader ||
qm.RequestTime == 0 {
t.Fatalf("bad: %v", qm)
}
// Overwrite the key's value.
key.Value = []byte("goodbye")
if _, err := kv.Put(key, nil); err != nil {
t.Fatalf("err: %v", err)
}
// Read the key back and look for the new value.
pair, _, err = kv.Get(key.Key, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if pair == nil {
t.Fatalf("expected value: %#v", pair)
}
if !bytes.Equal(pair.Value, []byte("goodbye")) {
t.Fatalf("unexpected value: %#v", pair)
}
// Restore the snapshot.
if err := snapshot.Restore(nil, snap); err != nil {
t.Fatalf("err: %v", err)
}
// Read the key back and look for the original value.
pair, _, err = kv.Get(key.Key, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if pair == nil {
t.Fatalf("expected value: %#v", pair)
}
if !bytes.Equal(pair.Value, []byte("hello")) {
t.Fatalf("unexpected value: %#v", pair)
}
}
func TestSnapshot_Options(t *testing.T) {
t.Parallel()
c, s := makeACLClient(t)
defer s.Stop()
// Try to take a snapshot with a bad token.
snapshot := c.Snapshot()
_, _, err := snapshot.Save(&QueryOptions{Token: "anonymous"})
if err == nil || !strings.Contains(err.Error(), "Permission denied") {
t.Fatalf("err: %v", err)
}
// Now try an unknown DC.
_, _, err = snapshot.Save(&QueryOptions{Datacenter: "nope"})
if err == nil || !strings.Contains(err.Error(), "No path to datacenter") {
t.Fatalf("err: %v", err)
}
// This should work with a valid token.
snap, _, err := snapshot.Save(&QueryOptions{Token: "root"})
if err != nil {
t.Fatalf("err: %v", err)
}
defer snap.Close()
// This should work with a stale snapshot. This doesn't have good feedback
// that the stale option was sent, but it makes sure nothing bad happens.
snap, _, err = snapshot.Save(&QueryOptions{Token: "root", AllowStale: true})
if err != nil {
t.Fatalf("err: %v", err)
}
defer snap.Close()
// Try to restore a snapshot with a bad token.
null := bytes.NewReader([]byte(""))
err = snapshot.Restore(&WriteOptions{Token: "anonymous"}, null)
if err == nil || !strings.Contains(err.Error(), "Permission denied") {
t.Fatalf("err: %v", err)
}
// Now try an unknown DC.
null = bytes.NewReader([]byte(""))
err = snapshot.Restore(&WriteOptions{Datacenter: "nope"}, null)
if err == nil || !strings.Contains(err.Error(), "No path to datacenter") {
t.Fatalf("err: %v", err)
}
// This should work.
if err := snapshot.Restore(&WriteOptions{Token: "root"}, snap); err != nil {
t.Fatalf("err: %v", err)
}
}

View File

@ -205,3 +205,20 @@ func (s *HTTPServer) ACLList(resp http.ResponseWriter, req *http.Request) (inter
}
return out.ACLs, nil
}
func (s *HTTPServer) ACLReplicationStatus(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
// Note that we do not forward to the ACL DC here. This is a query for
// any DC that's doing replication.
args := structs.DCSpecificRequest{}
s.parseSource(req, &args.Source)
if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done {
return nil, nil
}
// Make the request.
var out structs.ACLReplicationStatus
if err := s.agent.RPC("ACL.ReplicationStatus", &args, &out); err != nil {
return nil, err
}
return out, nil
}
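A minimal sketch of querying the new replication status endpoint directly over HTTP, assuming a local agent; the response is decoded into a generic map since the full ACLReplicationStatus field set is not shown in this hunk:

```go
// Sketch only: fetch ACL replication status from a local agent.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8500/v1/acl/replication")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var status map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ACL replication status: %v\n", status)
}
```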

View File

@ -218,3 +218,18 @@ func TestACLList(t *testing.T) {
}
})
}
func TestACLReplicationStatus(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
req, err := http.NewRequest("GET", "/v1/acl/replication", nil)
resp := httptest.NewRecorder()
obj, err := srv.ACLReplicationStatus(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
_, ok := obj.(structs.ACLReplicationStatus)
if !ok {
t.Fatalf("should work")
}
})
}

View File

@ -19,6 +19,7 @@ import (
"github.com/hashicorp/consul/consul/state"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/serf/coordinate"
"github.com/hashicorp/serf/serf"
)
@ -73,20 +74,24 @@ type Agent struct {
// services and checks. Used for anti-entropy.
state localState
// checkReapAfter maps the check ID to a timeout after which we should
// reap its associated service
checkReapAfter map[types.CheckID]time.Duration
// checkMonitors maps the check ID to an associated monitor
checkMonitors map[string]*CheckMonitor
checkMonitors map[types.CheckID]*CheckMonitor
// checkHTTPs maps the check ID to an associated HTTP check
checkHTTPs map[string]*CheckHTTP
checkHTTPs map[types.CheckID]*CheckHTTP
// checkTCPs maps the check ID to an associated TCP check
checkTCPs map[string]*CheckTCP
checkTCPs map[types.CheckID]*CheckTCP
// checkTTLs maps the check ID to an associated check TTL
checkTTLs map[string]*CheckTTL
checkTTLs map[types.CheckID]*CheckTTL
// checkDockers maps the check ID to an associated Docker Exec based check
checkDockers map[string]*CheckDocker
checkDockers map[types.CheckID]*CheckDocker
// checkLock protects updates to the check* maps
checkLock sync.Mutex
@ -111,14 +116,6 @@ type Agent struct {
// agent methods use this, so use with care and never override
// outside of a unit test.
endpoints map[string]string
// reapLock is used to prevent child process reaping from interfering
// with normal waiting for subprocesses to complete. Any time you exec
// and wait, you should take a read lock on this mutex. Only the reaper
// takes the write lock. This setup prevents us from serializing all the
// child process management with each other, it just serializes them
// with the child process reaper.
reapLock sync.RWMutex
}
// Create is used to create a new Agent. Returns
@ -169,28 +166,30 @@ func Create(config *Config, logOutput io.Writer) (*Agent, error) {
// Create the default set of tagged addresses.
config.TaggedAddresses = map[string]string{
"lan": config.AdvertiseAddr,
"wan": config.AdvertiseAddrWan,
}
agent := &Agent{
config: config,
logger: log.New(logOutput, "", log.LstdFlags),
logOutput: logOutput,
checkMonitors: make(map[string]*CheckMonitor),
checkTTLs: make(map[string]*CheckTTL),
checkHTTPs: make(map[string]*CheckHTTP),
checkTCPs: make(map[string]*CheckTCP),
checkDockers: make(map[string]*CheckDocker),
eventCh: make(chan serf.UserEvent, 1024),
eventBuf: make([]*UserEvent, 256),
shutdownCh: make(chan struct{}),
endpoints: make(map[string]string),
config: config,
logger: log.New(logOutput, "", log.LstdFlags),
logOutput: logOutput,
checkReapAfter: make(map[types.CheckID]time.Duration),
checkMonitors: make(map[types.CheckID]*CheckMonitor),
checkTTLs: make(map[types.CheckID]*CheckTTL),
checkHTTPs: make(map[types.CheckID]*CheckHTTP),
checkTCPs: make(map[types.CheckID]*CheckTCP),
checkDockers: make(map[types.CheckID]*CheckDocker),
eventCh: make(chan serf.UserEvent, 1024),
eventBuf: make([]*UserEvent, 256),
shutdownCh: make(chan struct{}),
endpoints: make(map[string]string),
}
// Initialize the local state
// Initialize the local state.
agent.state.Init(config, agent.logger)
// Setup either the client or the server
// Setup either the client or the server.
var err error
if config.Server {
err = agent.setupServer()
@ -212,7 +211,7 @@ func Create(config *Config, logOutput io.Writer) (*Agent, error) {
return nil, err
}
// Load checks/services
// Load checks/services.
if err := agent.loadServices(config); err != nil {
return nil, err
}
@ -220,7 +219,11 @@ func Create(config *Config, logOutput io.Writer) (*Agent, error) {
return nil, err
}
// Start handling events
// Start watching for critical services to deregister, based on their
// checks.
go agent.reapServices()
// Start handling events.
go agent.handleEvents()
// Start sending network coordinate to the server.
@ -228,7 +231,7 @@ func Create(config *Config, logOutput io.Writer) (*Agent, error) {
go agent.sendCoordinate()
}
// Write out the PID file if necessary
// Write out the PID file if necessary.
err = agent.storePid()
if err != nil {
return nil, err
@ -250,6 +253,11 @@ func (a *Agent) consulConfig() *consul.Config {
// Apply dev mode
base.DevMode = a.config.DevMode
// Apply performance factors
if a.config.Performance.RaftMultiplier > 0 {
base.ScaleRaft(a.config.Performance.RaftMultiplier)
}
// Override with our config
if a.config.Datacenter != "" {
base.Datacenter = a.config.Datacenter
@ -338,6 +346,9 @@ func (a *Agent) consulConfig() *consul.Config {
if a.config.ACLDownPolicy != "" {
base.ACLDownPolicy = a.config.ACLDownPolicy
}
if a.config.ACLReplicationToken != "" {
base.ACLReplicationToken = a.config.ACLReplicationToken
}
if a.config.SessionTTLMinRaw != "" {
base.SessionTTLMin = a.config.SessionTTLMin
}
@ -458,6 +469,19 @@ func (a *Agent) RPC(method string, args interface{}, reply interface{}) error {
return a.client.RPC(method, args, reply)
}
// SnapshotRPC performs the requested snapshot RPC against the Consul server in
// a streaming manner. The contents of in will be read and passed along as the
// payload, and the response message will determine the error status, and any
// return payload will be written to out.
func (a *Agent) SnapshotRPC(args *structs.SnapshotRequest, in io.Reader, out io.Writer,
replyFn consul.SnapshotReplyFn) error {
if a.server != nil {
return a.server.SnapshotRPC(args, in, out, replyFn)
}
return a.client.SnapshotRPC(args, in, out, replyFn)
}
// Leave is used to prepare the agent for a graceful shutdown
func (a *Agent) Leave() error {
if a.server != nil {
@ -660,6 +684,53 @@ func (a *Agent) sendCoordinate() {
}
}
// reapServicesInternal does a single pass, looking for services to reap.
func (a *Agent) reapServicesInternal() {
reaped := make(map[string]struct{})
for checkID, check := range a.state.CriticalChecks() {
// There's nothing to do if there's no service.
if check.Check.ServiceID == "" {
continue
}
// There might be multiple checks for one service, so
// we don't need to reap multiple times.
serviceID := check.Check.ServiceID
if _, ok := reaped[serviceID]; ok {
continue
}
// See if there's a timeout.
a.checkLock.Lock()
timeout, ok := a.checkReapAfter[checkID]
a.checkLock.Unlock()
// Reap, if necessary. We keep track of which service
// this is so that we won't try to remove it again.
if ok && check.CriticalFor > timeout {
reaped[serviceID] = struct{}{}
a.RemoveService(serviceID, true)
a.logger.Printf("[INFO] agent: Check %q for service %q has been critical for too long; deregistered service",
checkID, serviceID)
}
}
}
// reapServices is a long running goroutine that looks for checks that have been
// critical too long and deregisters their associated services.
func (a *Agent) reapServices() {
for {
select {
case <-time.After(a.config.CheckReapInterval):
a.reapServicesInternal()
case <-a.shutdownCh:
return
}
}
}
// persistService saves a service definition to a JSON file in the data dir
func (a *Agent) persistService(service *structs.NodeService) error {
svcPath := filepath.Join(a.config.DataDir, servicesDir, stringHash(service.ID))
@ -696,7 +767,7 @@ func (a *Agent) purgeService(serviceID string) error {
// persistCheck saves a check definition to the local agent's state directory
func (a *Agent) persistCheck(check *structs.HealthCheck, chkType *CheckType) error {
checkPath := filepath.Join(a.config.DataDir, checksDir, stringHash(check.CheckID))
checkPath := filepath.Join(a.config.DataDir, checksDir, checkIDHash(check.CheckID))
// Create the persisted check
wrapped := persistedCheck{
@ -724,8 +795,8 @@ func (a *Agent) persistCheck(check *structs.HealthCheck, chkType *CheckType) err
}
// purgeCheck removes a persisted check definition file from the data dir
func (a *Agent) purgeCheck(checkID string) error {
checkPath := filepath.Join(a.config.DataDir, checksDir, stringHash(checkID))
func (a *Agent) purgeCheck(checkID types.CheckID) error {
checkPath := filepath.Join(a.config.DataDir, checksDir, checkIDHash(checkID))
if _, err := os.Stat(checkPath); err == nil {
return os.Remove(checkPath)
}
@ -758,7 +829,7 @@ func (a *Agent) AddService(service *structs.NodeService, chkTypes CheckTypes, pe
// Warn if any tags are incompatible with DNS
for _, tag := range service.Tags {
if !dnsNameRe.MatchString(tag) {
a.logger.Printf("[WARN] Service tag %q will not be discoverable "+
a.logger.Printf("[DEBUG] Service tag %q will not be discoverable "+
"via DNS due to invalid characters. Valid characters include "+
"all alpha-numerics and dashes.", tag)
}
@ -791,7 +862,7 @@ func (a *Agent) AddService(service *structs.NodeService, chkTypes CheckTypes, pe
}
check := &structs.HealthCheck{
Node: a.config.NodeName,
CheckID: checkID,
CheckID: types.CheckID(checkID),
Name: fmt.Sprintf("Service '%s' check", service.Service),
Status: structs.HealthCritical,
Notes: chkType.Notes,
@ -976,13 +1047,24 @@ func (a *Agent) AddCheck(check *structs.HealthCheck, chkType *CheckType, persist
Interval: chkType.Interval,
Timeout: chkType.Timeout,
Logger: a.logger,
ReapLock: &a.reapLock,
}
monitor.Start()
a.checkMonitors[check.CheckID] = monitor
} else {
return fmt.Errorf("Check type is not valid")
}
if chkType.DeregisterCriticalServiceAfter > 0 {
timeout := chkType.DeregisterCriticalServiceAfter
if timeout < a.config.CheckDeregisterIntervalMin {
timeout = a.config.CheckDeregisterIntervalMin
a.logger.Println(fmt.Sprintf("[WARN] agent: check '%s' has deregister interval below minimum of %v",
check.CheckID, a.config.CheckDeregisterIntervalMin))
}
a.checkReapAfter[check.CheckID] = timeout
} else {
delete(a.checkReapAfter, check.CheckID)
}
}
// Add to the local state for anti-entropy
@ -998,7 +1080,7 @@ func (a *Agent) AddCheck(check *structs.HealthCheck, chkType *CheckType, persist
// RemoveCheck is used to remove a health check.
// The agent will make a best effort to ensure it is deregistered
func (a *Agent) RemoveCheck(checkID string, persist bool) error {
func (a *Agent) RemoveCheck(checkID types.CheckID, persist bool) error {
// Validate CheckID
if checkID == "" {
return fmt.Errorf("CheckID missing")
@ -1011,6 +1093,7 @@ func (a *Agent) RemoveCheck(checkID string, persist bool) error {
defer a.checkLock.Unlock()
// Stop any monitors
delete(a.checkReapAfter, checkID)
if check, ok := a.checkMonitors[checkID]; ok {
check.Stop()
delete(a.checkMonitors, checkID)
@ -1039,25 +1122,27 @@ func (a *Agent) RemoveCheck(checkID string, persist bool) error {
return nil
}
// UpdateCheck is used to update the status of a check.
// This can only be used with checks of the TTL type.
func (a *Agent) UpdateCheck(checkID, status, output string) error {
// updateTTLCheck is used to update the status of a TTL check via the Agent API.
func (a *Agent) updateTTLCheck(checkID types.CheckID, status, output string) error {
a.checkLock.Lock()
defer a.checkLock.Unlock()
// Grab the TTL check.
check, ok := a.checkTTLs[checkID]
if !ok {
return fmt.Errorf("CheckID does not have associated TTL")
return fmt.Errorf("CheckID %q does not have associated TTL", checkID)
}
// Set the status through CheckTTL to reset the TTL
// Set the status through CheckTTL to reset the TTL.
check.SetStatus(status, output)
// We don't write any files in dev mode so bail here.
if a.config.DevMode {
return nil
}
// Always persist the state for TTL checks
// Persist the state so the TTL check can come up in a good state after
// an agent restart, especially with long TTL values.
if err := a.persistCheckState(check, status, output); err != nil {
return fmt.Errorf("failed persisting state for check %q: %s", checkID, err)
}
@ -1090,7 +1175,7 @@ func (a *Agent) persistCheckState(check *CheckTTL, status, output string) error
}
// Write the state to the file
file := filepath.Join(dir, stringHash(check.CheckID))
file := filepath.Join(dir, checkIDHash(check.CheckID))
if err := ioutil.WriteFile(file, buf, 0600); err != nil {
return fmt.Errorf("failed writing file %q: %s", file, err)
}
@ -1101,7 +1186,7 @@ func (a *Agent) persistCheckState(check *CheckTTL, status, output string) error
// loadCheckState is used to restore the persisted state of a check.
func (a *Agent) loadCheckState(check *structs.HealthCheck) error {
// Try to read the persisted state for this check
file := filepath.Join(a.config.DataDir, checkStateDir, stringHash(check.CheckID))
file := filepath.Join(a.config.DataDir, checkStateDir, checkIDHash(check.CheckID))
buf, err := ioutil.ReadFile(file)
if err != nil {
if os.IsNotExist(err) {
@ -1129,8 +1214,8 @@ func (a *Agent) loadCheckState(check *structs.HealthCheck) error {
}
// purgeCheckState is used to purge the state of a check from the data dir
func (a *Agent) purgeCheckState(checkID string) error {
file := filepath.Join(a.config.DataDir, checkStateDir, stringHash(checkID))
func (a *Agent) purgeCheckState(checkID types.CheckID) error {
file := filepath.Join(a.config.DataDir, checkStateDir, checkIDHash(checkID))
err := os.Remove(file)
if os.IsNotExist(err) {
return nil
@ -1393,22 +1478,22 @@ func (a *Agent) unloadChecks() error {
// snapshotCheckState is used to snapshot the current state of the health
// checks. This is done before we reload our checks, so that we can properly
// restore into the same state.
func (a *Agent) snapshotCheckState() map[string]*structs.HealthCheck {
func (a *Agent) snapshotCheckState() map[types.CheckID]*structs.HealthCheck {
return a.state.Checks()
}
// restoreCheckState is used to reset the health state based on a snapshot.
// This is done after we finish the reload to avoid any unnecessary flaps
// in health state and potential session invalidations.
func (a *Agent) restoreCheckState(snap map[string]*structs.HealthCheck) {
func (a *Agent) restoreCheckState(snap map[types.CheckID]*structs.HealthCheck) {
for id, check := range snap {
a.state.UpdateCheck(id, check.Status, check.Output)
}
}
// serviceMaintCheckID returns the ID of a given service's maintenance check
func serviceMaintCheckID(serviceID string) string {
return fmt.Sprintf("%s:%s", serviceMaintCheckPrefix, serviceID)
func serviceMaintCheckID(serviceID string) types.CheckID {
return types.CheckID(fmt.Sprintf("%s:%s", serviceMaintCheckPrefix, serviceID))
}
// EnableServiceMaintenance will register a false health check against the given

View File

@ -2,18 +2,21 @@ package agent
import (
"fmt"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/serf/coordinate"
"github.com/hashicorp/serf/serf"
"net/http"
"strconv"
"strings"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/serf/coordinate"
"github.com/hashicorp/serf/serf"
)
type AgentSelf struct {
Config *Config
Coord *coordinate.Coordinate
Member serf.Member
Stats map[string]map[string]string
}
func (s *HTTPServer) AgentSelf(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
@ -29,6 +32,7 @@ func (s *HTTPServer) AgentSelf(resp http.ResponseWriter, req *http.Request) (int
Config: s.agent.config,
Coord: c,
Member: s.agent.LocalMember(),
Stats: s.agent.Stats(),
}, nil
}
@ -129,7 +133,7 @@ func (s *HTTPServer) AgentRegisterCheck(resp http.ResponseWriter, req *http.Requ
}
func (s *HTTPServer) AgentDeregisterCheck(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/deregister/")
checkID := types.CheckID(strings.TrimPrefix(req.URL.Path, "/v1/agent/check/deregister/"))
if err := s.agent.RemoveCheck(checkID, true); err != nil {
return nil, err
}
@ -138,9 +142,9 @@ func (s *HTTPServer) AgentDeregisterCheck(resp http.ResponseWriter, req *http.Re
}
func (s *HTTPServer) AgentCheckPass(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/pass/")
checkID := types.CheckID(strings.TrimPrefix(req.URL.Path, "/v1/agent/check/pass/"))
note := req.URL.Query().Get("note")
if err := s.agent.UpdateCheck(checkID, structs.HealthPassing, note); err != nil {
if err := s.agent.updateTTLCheck(checkID, structs.HealthPassing, note); err != nil {
return nil, err
}
s.syncChanges()
@ -148,9 +152,9 @@ func (s *HTTPServer) AgentCheckPass(resp http.ResponseWriter, req *http.Request)
}
func (s *HTTPServer) AgentCheckWarn(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/warn/")
checkID := types.CheckID(strings.TrimPrefix(req.URL.Path, "/v1/agent/check/warn/"))
note := req.URL.Query().Get("note")
if err := s.agent.UpdateCheck(checkID, structs.HealthWarning, note); err != nil {
if err := s.agent.updateTTLCheck(checkID, structs.HealthWarning, note); err != nil {
return nil, err
}
s.syncChanges()
@ -158,9 +162,9 @@ func (s *HTTPServer) AgentCheckWarn(resp http.ResponseWriter, req *http.Request)
}
func (s *HTTPServer) AgentCheckFail(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/fail/")
checkID := types.CheckID(strings.TrimPrefix(req.URL.Path, "/v1/agent/check/fail/"))
note := req.URL.Query().Get("note")
if err := s.agent.UpdateCheck(checkID, structs.HealthCritical, note); err != nil {
if err := s.agent.updateTTLCheck(checkID, structs.HealthCritical, note); err != nil {
return nil, err
}
s.syncChanges()
@ -211,8 +215,8 @@ func (s *HTTPServer) AgentCheckUpdate(resp http.ResponseWriter, req *http.Reques
update.Output[:CheckBufSize], CheckBufSize, total)
}
checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/update/")
if err := s.agent.UpdateCheck(checkID, update.Status, update.Output); err != nil {
checkID := types.CheckID(strings.TrimPrefix(req.URL.Path, "/v1/agent/check/update/"))
if err := s.agent.updateTTLCheck(checkID, update.Status, update.Output); err != nil {
return nil, err
}
s.syncChanges()
@ -269,7 +273,7 @@ func (s *HTTPServer) AgentRegisterService(resp http.ResponseWriter, req *http.Re
for _, check := range chkTypes {
if check.Status != "" && !structs.ValidStatus(check.Status) {
resp.WriteHeader(400)
resp.Write([]byte("Status for checks must 'passing', 'warning', 'critical', 'unknown'"))
resp.Write([]byte("Status for checks must 'passing', 'warning', 'critical'"))
return nil, nil
}
if !check.Valid() {

View File

@ -13,6 +13,7 @@ import (
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/serf/serf"
)
@ -61,7 +62,7 @@ func TestHTTPAgentChecks(t *testing.T) {
if err != nil {
t.Fatalf("Err: %v", err)
}
val := obj.(map[string]*structs.HealthCheck)
val := obj.(map[types.CheckID]*structs.HealthCheck)
if len(val) != 1 {
t.Fatalf("bad checks: %v", obj)
}
@ -188,9 +189,15 @@ func TestHTTPAgentJoin(t *testing.T) {
t.Fatalf("Err: %v", obj)
}
if len(a2.LANMembers()) != 2 {
if len(srv.agent.LANMembers()) != 2 {
t.Fatalf("should have 2 members")
}
testutil.WaitForResult(func() (bool, error) {
return len(a2.LANMembers()) == 2, nil
}, func(err error) {
t.Fatalf("should have 2 members")
})
}
func TestHTTPAgentJoin_WAN(t *testing.T) {
@ -217,6 +224,10 @@ func TestHTTPAgentJoin_WAN(t *testing.T) {
t.Fatalf("Err: %v", obj)
}
if len(srv.agent.WANMembers()) != 2 {
t.Fatalf("should have 2 members")
}
testutil.WaitForResult(func() (bool, error) {
return len(a2.WANMembers()) == 2, nil
}, func(err error) {
@ -294,21 +305,22 @@ func TestHTTPAgentRegisterCheck(t *testing.T) {
}
// Ensure we have a check mapping
if _, ok := srv.agent.state.Checks()["test"]; !ok {
checkID := types.CheckID("test")
if _, ok := srv.agent.state.Checks()[checkID]; !ok {
t.Fatalf("missing test check")
}
if _, ok := srv.agent.checkTTLs["test"]; !ok {
if _, ok := srv.agent.checkTTLs[checkID]; !ok {
t.Fatalf("missing test check ttl")
}
// Ensure the token was configured
if token := srv.agent.state.CheckToken("test"); token == "" {
if token := srv.agent.state.CheckToken(checkID); token == "" {
t.Fatalf("missing token")
}
// By default, checks start in critical state.
state := srv.agent.state.Checks()["test"]
state := srv.agent.state.Checks()[checkID]
if state.Status != structs.HealthCritical {
t.Fatalf("bad: %v", state)
}
@ -343,15 +355,16 @@ func TestHTTPAgentRegisterCheckPassing(t *testing.T) {
}
// Ensure we have a check mapping
if _, ok := srv.agent.state.Checks()["test"]; !ok {
checkID := types.CheckID("test")
if _, ok := srv.agent.state.Checks()[checkID]; !ok {
t.Fatalf("missing test check")
}
if _, ok := srv.agent.checkTTLs["test"]; !ok {
if _, ok := srv.agent.checkTTLs[checkID]; !ok {
t.Fatalf("missing test check ttl")
}
state := srv.agent.state.Checks()["test"]
state := srv.agent.state.Checks()[checkID]
if state.Status != structs.HealthPassing {
t.Fatalf("bad: %v", state)
}

View File

@ -17,12 +17,27 @@ import (
"github.com/hashicorp/consul/consul"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
"github.com/hashicorp/raft"
)
var offset uint64
const (
basePortNumber = 10000
portOffsetDNS = iota
portOffsetHTTP
portOffsetRPC
portOffsetSerfLan
portOffsetSerfWan
portOffsetServer
// Must be last in list
numPortsPerIndex
)
var offset uint64 = basePortNumber
func nextConfig() *Config {
idx := int(atomic.AddUint64(&offset, 1))
idx := int(atomic.AddUint64(&offset, numPortsPerIndex))
conf := DefaultConfig()
conf.Version = "a.b"
@ -32,12 +47,12 @@ func nextConfig() *Config {
conf.Datacenter = "dc1"
conf.NodeName = fmt.Sprintf("Node %d", idx)
conf.BindAddr = "127.0.0.1"
conf.Ports.DNS = 19000 + idx
conf.Ports.HTTP = 18800 + idx
conf.Ports.RPC = 18600 + idx
conf.Ports.SerfLan = 18200 + idx
conf.Ports.SerfWan = 18400 + idx
conf.Ports.Server = 18000 + idx
conf.Ports.DNS = basePortNumber + idx + portOffsetDNS
conf.Ports.HTTP = basePortNumber + idx + portOffsetHTTP
conf.Ports.RPC = basePortNumber + idx + portOffsetRPC
conf.Ports.SerfLan = basePortNumber + idx + portOffsetSerfLan
conf.Ports.SerfWan = basePortNumber + idx + portOffsetSerfWan
conf.Ports.Server = basePortNumber + idx + portOffsetServer
conf.Server = true
conf.ACLDatacenter = "dc1"
conf.ACLMasterToken = "root"
@ -169,6 +184,7 @@ func TestAgent_CheckAdvertiseAddrsSettings(t *testing.T) {
t.Fatalf("RPC is not properly set to %v: %s", c.AdvertiseAddrs.RPC, rpc)
}
expected := map[string]string{
"lan": agent.config.AdvertiseAddr,
"wan": agent.config.AdvertiseAddrWan,
}
if !reflect.DeepEqual(agent.config.TaggedAddresses, expected) {
@ -176,6 +192,44 @@ func TestAgent_CheckAdvertiseAddrsSettings(t *testing.T) {
}
}
func TestAgent_CheckPerformanceSettings(t *testing.T) {
// Try a default config.
{
c := nextConfig()
c.ConsulConfig = nil
dir, agent := makeAgent(t, c)
defer os.RemoveAll(dir)
defer agent.Shutdown()
raftMult := time.Duration(consul.DefaultRaftMultiplier)
r := agent.consulConfig().RaftConfig
def := raft.DefaultConfig()
if r.HeartbeatTimeout != raftMult*def.HeartbeatTimeout ||
r.ElectionTimeout != raftMult*def.ElectionTimeout ||
r.LeaderLeaseTimeout != raftMult*def.LeaderLeaseTimeout {
t.Fatalf("bad: %#v", *r)
}
}
// Try a multiplier.
{
c := nextConfig()
c.Performance.RaftMultiplier = 99
dir, agent := makeAgent(t, c)
defer os.RemoveAll(dir)
defer agent.Shutdown()
const raftMult time.Duration = 99
r := agent.consulConfig().RaftConfig
def := raft.DefaultConfig()
if r.HeartbeatTimeout != raftMult*def.HeartbeatTimeout ||
r.ElectionTimeout != raftMult*def.ElectionTimeout ||
r.LeaderLeaseTimeout != raftMult*def.LeaderLeaseTimeout {
t.Fatalf("bad: %#v", *r)
}
}
}
func TestAgent_ReconnectConfigSettings(t *testing.T) {
c := nextConfig()
func() {
@ -194,6 +248,7 @@ func TestAgent_ReconnectConfigSettings(t *testing.T) {
}
}()
c = nextConfig()
c.ReconnectTimeoutLan = 24 * time.Hour
c.ReconnectTimeoutWan = 36 * time.Hour
func() {
@ -627,7 +682,7 @@ func TestAgent_RemoveCheck(t *testing.T) {
}
}
func TestAgent_UpdateCheck(t *testing.T) {
func TestAgent_updateTTLCheck(t *testing.T) {
dir, agent := makeAgent(t, nextConfig())
defer os.RemoveAll(dir)
defer agent.Shutdown()
@ -641,17 +696,17 @@ func TestAgent_UpdateCheck(t *testing.T) {
chk := &CheckType{
TTL: 15 * time.Second,
}
// Add check and update it.
err := agent.AddCheck(health, chk, false, "")
if err != nil {
t.Fatalf("err: %v", err)
}
// Remove check
if err := agent.UpdateCheck("mem", structs.HealthPassing, "foo"); err != nil {
if err := agent.updateTTLCheck("mem", structs.HealthPassing, "foo"); err != nil {
t.Fatalf("err: %v", err)
}
// Ensure we have a check mapping
// Ensure we have a check mapping.
status := agent.state.Checks()["mem"]
if status.Status != structs.HealthPassing {
t.Fatalf("bad: %v", status)
@ -919,7 +974,7 @@ func TestAgent_PersistCheck(t *testing.T) {
Interval: 10 * time.Second,
}
file := filepath.Join(agent.config.DataDir, checksDir, stringHash(check.CheckID))
file := filepath.Join(agent.config.DataDir, checksDir, checkIDHash(check.CheckID))
// Not persisted if not requested
if err := agent.AddCheck(check, chkType, false, ""); err != nil {
@ -1014,7 +1069,7 @@ func TestAgent_PurgeCheck(t *testing.T) {
Status: structs.HealthPassing,
}
file := filepath.Join(agent.config.DataDir, checksDir, stringHash(check.CheckID))
file := filepath.Join(agent.config.DataDir, checksDir, checkIDHash(check.CheckID))
if err := agent.AddCheck(check, nil, true, ""); err != nil {
t.Fatalf("err: %v", err)
}
@ -1074,7 +1129,7 @@ func TestAgent_PurgeCheckOnDuplicate(t *testing.T) {
}
defer agent2.Shutdown()
file := filepath.Join(agent.config.DataDir, checksDir, stringHash(check1.CheckID))
file := filepath.Join(agent.config.DataDir, checksDir, checkIDHash(check1.CheckID))
if _, err := os.Stat(file); err == nil {
t.Fatalf("should have removed persisted check")
}
@ -1233,7 +1288,7 @@ func TestAgent_unloadServices(t *testing.T) {
}
}
func TestAgent_ServiceMaintenanceMode(t *testing.T) {
func TestAgent_Service_MaintenanceMode(t *testing.T) {
config := nextConfig()
dir, agent := makeAgent(t, config)
defer os.RemoveAll(dir)
@ -1298,6 +1353,133 @@ func TestAgent_ServiceMaintenanceMode(t *testing.T) {
}
}
func TestAgent_Service_Reap(t *testing.T) {
config := nextConfig()
config.CheckReapInterval = time.Millisecond
config.CheckDeregisterIntervalMin = 0
dir, agent := makeAgent(t, config)
defer os.RemoveAll(dir)
defer agent.Shutdown()
svc := &structs.NodeService{
ID: "redis",
Service: "redis",
Tags: []string{"foo"},
Port: 8000,
}
chkTypes := CheckTypes{
&CheckType{
Status: structs.HealthPassing,
TTL: 10 * time.Millisecond,
DeregisterCriticalServiceAfter: 100 * time.Millisecond,
},
}
// Register the service.
if err := agent.AddService(svc, chkTypes, false, ""); err != nil {
t.Fatalf("err: %v", err)
}
// Make sure it's there and there's no critical check yet.
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have critical checks")
}
// Wait for the check TTL to fail.
time.Sleep(30 * time.Millisecond)
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) != 1 {
t.Fatalf("should have a critical check")
}
// Pass the TTL.
if err := agent.updateTTLCheck("service:redis", structs.HealthPassing, "foo"); err != nil {
t.Fatalf("err: %v", err)
}
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have critical checks")
}
// Wait for the check TTL to fail again.
time.Sleep(30 * time.Millisecond)
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) != 1 {
t.Fatalf("should have a critical check")
}
// Wait for the reap.
time.Sleep(300 * time.Millisecond)
if _, ok := agent.state.Services()["redis"]; ok {
t.Fatalf("redis service should have been reaped")
}
if checks := agent.state.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have critical checks")
}
}
func TestAgent_Service_NoReap(t *testing.T) {
config := nextConfig()
config.CheckReapInterval = time.Millisecond
config.CheckDeregisterIntervalMin = 0
dir, agent := makeAgent(t, config)
defer os.RemoveAll(dir)
defer agent.Shutdown()
svc := &structs.NodeService{
ID: "redis",
Service: "redis",
Tags: []string{"foo"},
Port: 8000,
}
chkTypes := CheckTypes{
&CheckType{
Status: structs.HealthPassing,
TTL: 10 * time.Millisecond,
},
}
// Register the service.
if err := agent.AddService(svc, chkTypes, false, ""); err != nil {
t.Fatalf("err: %v", err)
}
// Make sure it's there and there's no critical check yet.
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have critical checks")
}
// Wait for the check TTL to fail.
time.Sleep(30 * time.Millisecond)
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) != 1 {
t.Fatalf("should have a critical check")
}
// Wait a while and make sure it doesn't reap.
time.Sleep(300 * time.Millisecond)
if _, ok := agent.state.Services()["redis"]; !ok {
t.Fatalf("should have redis service")
}
if checks := agent.state.CriticalChecks(); len(checks) != 1 {
t.Fatalf("should have a critical check")
}
}
func TestAgent_addCheck_restoresSnapshot(t *testing.T) {
config := nextConfig()
dir, agent := makeAgent(t, config)
@ -1465,7 +1647,7 @@ func TestAgent_loadChecks_checkFails(t *testing.T) {
}
// Check to make sure the check was persisted
checkHash := stringHash(check.CheckID)
checkHash := checkIDHash(check.CheckID)
checkPath := filepath.Join(config.DataDir, checksDir, checkHash)
if _, err := os.Stat(checkPath); err != nil {
t.Fatalf("err: %s", err)

File diff suppressed because one or more lines are too long

View File

@ -2,9 +2,10 @@ package agent
import (
"fmt"
"github.com/hashicorp/consul/consul/structs"
"net/http"
"strings"
"github.com/hashicorp/consul/consul/structs"
)
func (s *HTTPServer) CatalogRegister(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
@ -19,6 +20,7 @@ func (s *HTTPServer) CatalogRegister(resp http.ResponseWriter, req *http.Request
if args.Datacenter == "" {
args.Datacenter = s.agent.config.Datacenter
}
s.parseToken(req, &args.Token)
// Forward to the servers
var out struct{}
@ -40,6 +42,7 @@ func (s *HTTPServer) CatalogDeregister(resp http.ResponseWriter, req *http.Reque
if args.Datacenter == "" {
args.Datacenter = s.agent.config.Datacenter
}
s.parseToken(req, &args.Token)
// Forward to the servers
var out struct{}
@ -70,6 +73,7 @@ func (s *HTTPServer) CatalogNodes(resp http.ResponseWriter, req *http.Request) (
if err := s.agent.RPC("Catalog.ListNodes", &args, &out); err != nil {
return nil, err
}
translateAddresses(s.agent.config, args.Datacenter, out.Nodes)
// Use empty list instead of nil
if out.Nodes == nil {
@ -122,6 +126,7 @@ func (s *HTTPServer) CatalogServiceNodes(resp http.ResponseWriter, req *http.Req
if err := s.agent.RPC("Catalog.ServiceNodes", &args, &out); err != nil {
return nil, err
}
translateAddresses(s.agent.config, args.Datacenter, out.ServiceNodes)
// Use empty list instead of nil
if out.ServiceNodes == nil {
@ -151,5 +156,9 @@ func (s *HTTPServer) CatalogNodeServices(resp http.ResponseWriter, req *http.Req
if err := s.agent.RPC("Catalog.NodeServices", &args, &out); err != nil {
return nil, err
}
if out.NodeServices != nil && out.NodeServices.Node != nil {
translateAddresses(s.agent.config, args.Datacenter, out.NodeServices.Node)
}
return out.NodeServices, nil
}

View File

@ -145,6 +145,112 @@ func TestCatalogNodes(t *testing.T) {
}
}
func TestCatalogNodes_WanTranslation(t *testing.T) {
dir1, srv1 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir1)
defer srv1.Shutdown()
defer srv1.agent.Shutdown()
testutil.WaitForLeader(t, srv1.agent.RPC, "dc1")
dir2, srv2 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc2"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir2)
defer srv2.Shutdown()
defer srv2.agent.Shutdown()
testutil.WaitForLeader(t, srv2.agent.RPC, "dc2")
// Wait for the WAN join.
addr := fmt.Sprintf("127.0.0.1:%d",
srv1.agent.config.Ports.SerfWan)
if _, err := srv2.agent.JoinWAN([]string{addr}); err != nil {
t.Fatalf("err: %v", err)
}
testutil.WaitForResult(
func() (bool, error) {
return len(srv1.agent.WANMembers()) > 1, nil
},
func(err error) {
t.Fatalf("Failed waiting for WAN join: %v", err)
})
// Register a node with DC2.
{
args := &structs.RegisterRequest{
Datacenter: "dc2",
Node: "wan_translation_test",
Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
Service: &structs.NodeService{
Service: "http_wan_translation_test",
},
}
var out struct{}
if err := srv2.agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
// Query nodes in DC2 from DC1.
req, err := http.NewRequest("GET", "/v1/catalog/nodes?dc=dc2", nil)
if err != nil {
t.Fatalf("err: %v", err)
}
resp1 := httptest.NewRecorder()
obj1, err1 := srv1.CatalogNodes(resp1, req)
if err1 != nil {
t.Fatalf("err: %v", err1)
}
assertIndex(t, resp1)
// Expect that DC1 gives us a WAN address (since the node is in DC2).
nodes1 := obj1.(structs.Nodes)
if len(nodes1) != 2 {
t.Fatalf("bad: %v", obj1)
}
var address string
for _, node := range nodes1 {
if node.Node == "wan_translation_test" {
address = node.Address
}
}
if address != "127.0.0.2" {
t.Fatalf("bad: %s", address)
}
// Query DC2 from DC2.
resp2 := httptest.NewRecorder()
obj2, err2 := srv2.CatalogNodes(resp2, req)
if err2 != nil {
t.Fatalf("err: %v", err2)
}
assertIndex(t, resp2)
// Expect that DC2 gives us a private address (since the node is in DC2).
nodes2 := obj2.(structs.Nodes)
if len(nodes2) != 2 {
t.Fatalf("bad: %v", obj2)
}
for _, node := range nodes2 {
if node.Node == "wan_translation_test" {
address = node.Address
}
}
if address != "127.0.0.1" {
t.Fatalf("bad: %s", address)
}
}
func TestCatalogNodes_Blocking(t *testing.T) {
dir, srv := makeHTTPServer(t)
defer os.RemoveAll(dir)
@ -407,6 +513,103 @@ func TestCatalogServiceNodes(t *testing.T) {
}
}
func TestCatalogServiceNodes_WanTranslation(t *testing.T) {
dir1, srv1 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir1)
defer srv1.Shutdown()
defer srv1.agent.Shutdown()
testutil.WaitForLeader(t, srv1.agent.RPC, "dc1")
dir2, srv2 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc2"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir2)
defer srv2.Shutdown()
defer srv2.agent.Shutdown()
testutil.WaitForLeader(t, srv2.agent.RPC, "dc2")
// Wait for the WAN join.
addr := fmt.Sprintf("127.0.0.1:%d",
srv1.agent.config.Ports.SerfWan)
if _, err := srv2.agent.JoinWAN([]string{addr}); err != nil {
t.Fatalf("err: %v", err)
}
testutil.WaitForResult(
func() (bool, error) {
return len(srv1.agent.WANMembers()) > 1, nil
},
func(err error) {
t.Fatalf("Failed waiting for WAN join: %v", err)
})
// Register a node with DC2.
{
args := &structs.RegisterRequest{
Datacenter: "dc2",
Node: "foo",
Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
Service: &structs.NodeService{
Service: "http_wan_translation_test",
},
}
var out struct{}
if err := srv2.agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
// Query for the node in DC2 from DC1.
req, err := http.NewRequest("GET", "/v1/catalog/service/http_wan_translation_test?dc=dc2", nil)
if err != nil {
t.Fatalf("err: %v", err)
}
resp1 := httptest.NewRecorder()
obj1, err1 := srv1.CatalogServiceNodes(resp1, req)
if err1 != nil {
t.Fatalf("err: %v", err1)
}
assertIndex(t, resp1)
// Expect that DC1 gives us a WAN address (since the node is in DC2).
nodes1 := obj1.(structs.ServiceNodes)
if len(nodes1) != 1 {
t.Fatalf("bad: %v", obj1)
}
node1 := nodes1[0]
if node1.Address != "127.0.0.2" {
t.Fatalf("bad: %v", node1)
}
// Query DC2 from DC2.
resp2 := httptest.NewRecorder()
obj2, err2 := srv2.CatalogServiceNodes(resp2, req)
if err2 != nil {
t.Fatalf("err: %v", err2)
}
assertIndex(t, resp2)
// Expect that DC2 gives us a local address (since the node is in DC2).
nodes2 := obj2.(structs.ServiceNodes)
if len(nodes2) != 1 {
t.Fatalf("bad: %v", obj2)
}
node2 := nodes2[0]
if node2.Address != "127.0.0.1" {
t.Fatalf("bad: %v", node2)
}
}
func TestCatalogServiceNodes_DistanceSort(t *testing.T) {
dir, srv := makeHTTPServer(t)
defer os.RemoveAll(dir)
@ -550,3 +753,99 @@ func TestCatalogNodeServices(t *testing.T) {
t.Fatalf("bad: %v", obj)
}
}
func TestCatalogNodeServices_WanTranslation(t *testing.T) {
dir1, srv1 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir1)
defer srv1.Shutdown()
defer srv1.agent.Shutdown()
testutil.WaitForLeader(t, srv1.agent.RPC, "dc1")
dir2, srv2 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc2"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir2)
defer srv2.Shutdown()
defer srv2.agent.Shutdown()
testutil.WaitForLeader(t, srv2.agent.RPC, "dc2")
// Wait for the WAN join.
addr := fmt.Sprintf("127.0.0.1:%d",
srv1.agent.config.Ports.SerfWan)
if _, err := srv2.agent.JoinWAN([]string{addr}); err != nil {
t.Fatalf("err: %v", err)
}
testutil.WaitForResult(
func() (bool, error) {
return len(srv1.agent.WANMembers()) > 1, nil
},
func(err error) {
t.Fatalf("Failed waiting for WAN join: %v", err)
})
// Register a node with DC2.
{
args := &structs.RegisterRequest{
Datacenter: "dc2",
Node: "foo",
Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
Service: &structs.NodeService{
Service: "http_wan_translation_test",
},
}
var out struct{}
if err := srv2.agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
// Query for the node in DC2 from DC1.
req, err := http.NewRequest("GET", "/v1/catalog/node/foo?dc=dc2", nil)
if err != nil {
t.Fatalf("err: %v", err)
}
resp1 := httptest.NewRecorder()
obj1, err1 := srv1.CatalogNodeServices(resp1, req)
if err1 != nil {
t.Fatalf("err: %v", err1)
}
assertIndex(t, resp1)
// Expect that DC1 gives us a WAN address (since the node is in DC2).
services1 := obj1.(*structs.NodeServices)
if len(services1.Services) != 1 {
t.Fatalf("bad: %v", obj1)
}
service1 := services1.Node
if service1.Address != "127.0.0.2" {
t.Fatalf("bad: %v", service1)
}
// Query DC2 from DC2.
resp2 := httptest.NewRecorder()
obj2, err2 := srv2.CatalogNodeServices(resp2, req)
if err2 != nil {
t.Fatalf("err: %v", err2)
}
assertIndex(t, resp2)
// Expect that DC2 gives us a private address (since the node is in DC2).
services2 := obj2.(*structs.NodeServices)
if len(services2.Services) != 1 {
t.Fatalf("bad: %v", obj2)
}
service2 := services2.Node
if service2.Address != "127.0.0.1" {
t.Fatalf("bad: %v", service2)
}
}

View File

@ -16,6 +16,7 @@ import (
docker "github.com/fsouza/go-dockerclient"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/go-cleanhttp"
)
@ -34,12 +35,11 @@ const (
HttpUserAgent = "Consul Health Check"
)
// CheckType is used to create either the CheckMonitor
// or the CheckTTL.
// Five types are supported: Script, HTTP, TCP, Docker and TTL
// Script, HTTP, Docker and TCP all require Interval
// Only one of the types needs to be provided
// TTL or Script/Interval or HTTP/Interval or TCP/Interval or Docker/Interval
// CheckType is used to create either the CheckMonitor or the CheckTTL.
// Five types are supported: Script, HTTP, TCP, Docker and TTL. Script, HTTP,
// Docker and TCP all require Interval. Only one of the types may be
// provided: TTL or Script/Interval or HTTP/Interval or TCP/Interval or
// Docker/Interval.
type CheckType struct {
Script string
HTTP string
@ -51,6 +51,11 @@ type CheckType struct {
Timeout time.Duration
TTL time.Duration
// DeregisterCriticalServiceAfter, if >0, will cause the associated
// service, if any, to be deregistered if this check is critical for
// longer than this duration.
DeregisterCriticalServiceAfter time.Duration
Status string
Notes string
@ -90,7 +95,7 @@ func (c *CheckType) IsDocker() bool {
// to notify when a check has a status update. The update
// should take care to be idempotent.
type CheckNotifier interface {
UpdateCheck(checkID, status, output string)
UpdateCheck(checkID types.CheckID, status, output string)
}
// CheckMonitor is used to periodically invoke a script to
@ -98,7 +103,7 @@ type CheckNotifier interface {
// nagios plugins and expects the output in the same format.
type CheckMonitor struct {
Notify CheckNotifier
CheckID string
CheckID types.CheckID
Script string
Interval time.Duration
Timeout time.Duration
@ -231,7 +236,7 @@ func (c *CheckMonitor) check() {
// automatically set to critical.
type CheckTTL struct {
Notify CheckNotifier
CheckID string
CheckID types.CheckID
TTL time.Duration
Logger *log.Logger
@ -322,7 +327,7 @@ type persistedCheck struct {
// expiration timestamp which is used to determine staleness on later
// agent restarts.
type persistedCheckState struct {
CheckID string
CheckID types.CheckID
Output string
Status string
Expires int64
@ -336,7 +341,7 @@ type persistedCheckState struct {
// or if the request returns an error
type CheckHTTP struct {
Notify CheckNotifier
CheckID string
CheckID types.CheckID
HTTP string
Interval time.Duration
Timeout time.Duration
@ -462,7 +467,7 @@ func (c *CheckHTTP) check() {
// The check is critical if the connection returns an error
type CheckTCP struct {
Notify CheckNotifier
CheckID string
CheckID types.CheckID
TCP string
Interval time.Duration
Timeout time.Duration
@ -553,7 +558,7 @@ type DockerClient interface {
// with nagios plugins and expects the output in the same format.
type CheckDocker struct {
Notify CheckNotifier
CheckID string
CheckID types.CheckID
Script string
DockerContainerID string
Shell string

View File

@ -18,15 +18,16 @@ import (
docker "github.com/fsouza/go-dockerclient"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
"github.com/hashicorp/consul/types"
)
type MockNotify struct {
state map[string]string
updates map[string]int
output map[string]string
state map[types.CheckID]string
updates map[types.CheckID]int
output map[types.CheckID]string
}
func (m *MockNotify) UpdateCheck(id, status, output string) {
func (m *MockNotify) UpdateCheck(id types.CheckID, status, output string) {
m.state[id] = status
old := m.updates[id]
m.updates[id] = old + 1
@ -35,13 +36,13 @@ func (m *MockNotify) UpdateCheck(id, status, output string) {
func expectStatus(t *testing.T, script, status string) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckMonitor{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: script,
Interval: 10 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -84,13 +85,13 @@ func TestCheckMonitor_BadCmd(t *testing.T) {
func TestCheckMonitor_Timeout(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckMonitor{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "sleep 1 && exit 0",
Interval: 10 * time.Millisecond,
Timeout: 5 * time.Millisecond,
@ -114,13 +115,13 @@ func TestCheckMonitor_Timeout(t *testing.T) {
func TestCheckMonitor_RandomStagger(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckMonitor{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "exit 0",
Interval: 25 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -143,13 +144,13 @@ func TestCheckMonitor_RandomStagger(t *testing.T) {
func TestCheckMonitor_LimitOutput(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckMonitor{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "od -N 81920 /dev/urandom",
Interval: 25 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -168,13 +169,13 @@ func TestCheckMonitor_LimitOutput(t *testing.T) {
func TestCheckTTL(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckTTL{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
TTL: 100 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
}
@ -229,13 +230,13 @@ func mockHTTPServer(responseCode int) *httptest.Server {
func expectHTTPStatus(t *testing.T, url string, status string) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckHTTP{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
HTTP: url,
Interval: 10 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -243,21 +244,24 @@ func expectHTTPStatus(t *testing.T, url string, status string) {
check.Start()
defer check.Stop()
time.Sleep(50 * time.Millisecond)
testutil.WaitForResult(func() (bool, error) {
// Should have at least 2 updates
if mock.updates["foo"] < 2 {
return false, fmt.Errorf("should have 2 updates %v", mock.updates)
}
// Should have at least 2 updates
if mock.updates["foo"] < 2 {
t.Fatalf("should have 2 updates %v", mock.updates)
}
if mock.state["foo"] != status {
return false, fmt.Errorf("should be %v %v", status, mock.state)
}
if mock.state["foo"] != status {
t.Fatalf("should be %v %v", status, mock.state)
}
// Allow slightly more data than CheckBufSize, for the header
if n := len(mock.output["foo"]); n > (CheckBufSize + 256) {
t.Fatalf("output too long: %d (%d-byte limit)", n, CheckBufSize)
}
// Allow slightly more data than CheckBufSize, for the header
if n := len(mock.output["foo"]); n > (CheckBufSize + 256) {
return false, fmt.Errorf("output too long: %d (%d-byte limit)", n, CheckBufSize)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
}
func TestCheckHTTPCritical(t *testing.T) {
@ -329,14 +333,14 @@ func TestCheckHTTPTimeout(t *testing.T) {
defer server.Close()
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckHTTP{
Notify: mock,
CheckID: "bar",
CheckID: types.CheckID("bar"),
HTTP: server.URL,
Timeout: 5 * time.Millisecond,
Interval: 10 * time.Millisecond,
@ -346,21 +350,24 @@ func TestCheckHTTPTimeout(t *testing.T) {
check.Start()
defer check.Stop()
time.Sleep(50 * time.Millisecond)
testutil.WaitForResult(func() (bool, error) {
// Should have at least 2 updates
if mock.updates["bar"] < 2 {
return false, fmt.Errorf("should have at least 2 updates %v", mock.updates)
}
// Should have at least 2 updates
if mock.updates["bar"] < 2 {
t.Fatalf("should have at least 2 updates %v", mock.updates)
}
if mock.state["bar"] != structs.HealthCritical {
t.Fatalf("should be critical %v", mock.state)
}
if mock.state["bar"] != structs.HealthCritical {
return false, fmt.Errorf("should be critical %v", mock.state)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
}
func TestCheckHTTP_disablesKeepAlives(t *testing.T) {
check := &CheckHTTP{
CheckID: "foo",
CheckID: types.CheckID("foo"),
HTTP: "http://foo.bar/baz",
Interval: 10 * time.Second,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -395,13 +402,13 @@ func mockTCPServer(network string) net.Listener {
func expectTCPStatus(t *testing.T, tcp string, status string) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckTCP{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
TCP: tcp,
Interval: 10 * time.Millisecond,
Logger: log.New(os.Stderr, "", log.LstdFlags),
@ -409,16 +416,19 @@ func expectTCPStatus(t *testing.T, tcp string, status string) {
check.Start()
defer check.Stop()
time.Sleep(50 * time.Millisecond)
testutil.WaitForResult(func() (bool, error) {
// Should have at least 2 updates
if mock.updates["foo"] < 2 {
return false, fmt.Errorf("should have 2 updates %v", mock.updates)
}
// Should have at least 2 updates
if mock.updates["foo"] < 2 {
t.Fatalf("should have 2 updates %v", mock.updates)
}
if mock.state["foo"] != status {
t.Fatalf("should be %v %v", status, mock.state)
}
if mock.state["foo"] != status {
return false, fmt.Errorf("should be %v %v", status, mock.state)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
}
func TestCheckTCPCritical(t *testing.T) {
@ -575,13 +585,13 @@ func (d *fakeDockerClientWithExecInfoErrors) InspectExec(id string) (*docker.Exe
func expectDockerCheckStatus(t *testing.T, dockerClient DockerClient, status string, output string) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckDocker{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "/health.sh",
DockerContainerID: "54432bad1fc7",
Shell: "/bin/sh",
@ -635,13 +645,13 @@ func TestDockerCheckWhenExecInfoFails(t *testing.T) {
func TestDockerCheckDefaultToSh(t *testing.T) {
os.Setenv("SHELL", "")
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckDocker{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "/health.sh",
DockerContainerID: "54432bad1fc7",
Interval: 10 * time.Millisecond,
@ -659,14 +669,14 @@ func TestDockerCheckDefaultToSh(t *testing.T) {
func TestDockerCheckUseShellFromEnv(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
os.Setenv("SHELL", "/bin/bash")
check := &CheckDocker{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "/health.sh",
DockerContainerID: "54432bad1fc7",
Interval: 10 * time.Millisecond,
@ -685,13 +695,13 @@ func TestDockerCheckUseShellFromEnv(t *testing.T) {
func TestDockerCheckTruncateOutput(t *testing.T) {
mock := &MockNotify{
state: make(map[string]string),
updates: make(map[string]int),
output: make(map[string]string),
state: make(map[types.CheckID]string),
updates: make(map[types.CheckID]int),
output: make(map[types.CheckID]string),
}
check := &CheckDocker{
Notify: mock,
CheckID: "foo",
CheckID: types.CheckID("foo"),
Script: "/health.sh",
DockerContainerID: "54432bad1fc7",
Shell: "/bin/sh",


@ -10,11 +10,13 @@ import (
"os/signal"
"path/filepath"
"regexp"
"strconv"
"strings"
"syscall"
"time"
"github.com/armon/go-metrics"
"github.com/armon/go-metrics/circonus"
"github.com/armon/go-metrics/datadog"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
@ -23,10 +25,9 @@ import (
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/watch"
"github.com/hashicorp/go-checkpoint"
"github.com/hashicorp/go-reap"
"github.com/hashicorp/go-syslog"
"github.com/hashicorp/logutils"
scada "github.com/hashicorp/scada-client"
scada "github.com/hashicorp/scada-client/scada"
"github.com/mitchellh/cli"
)
@ -44,6 +45,7 @@ type Command struct {
Revision string
Version string
VersionPrerelease string
HumanVersion string
Ui cli.Ui
ShutdownCh <-chan struct{}
args []string
@ -66,6 +68,7 @@ func (c *Command) readConfig() *Config {
var retryIntervalWan string
var dnsRecursors []string
var dev bool
var dcDeprecated string
cmdFlags := flag.NewFlagSet("agent", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
@ -76,7 +79,8 @@ func (c *Command) readConfig() *Config {
cmdFlags.StringVar(&cmdConfig.LogLevel, "log-level", "", "log level")
cmdFlags.StringVar(&cmdConfig.NodeName, "node", "", "node name")
cmdFlags.StringVar(&cmdConfig.Datacenter, "dc", "", "node datacenter")
cmdFlags.StringVar(&dcDeprecated, "dc", "", "node datacenter (deprecated: use 'datacenter' instead)")
cmdFlags.StringVar(&cmdConfig.Datacenter, "datacenter", "", "node datacenter")
cmdFlags.StringVar(&cmdConfig.DataDir, "data-dir", "", "path to the data directory")
cmdFlags.BoolVar(&cmdConfig.EnableUi, "ui", false, "enable the built-in web UI")
cmdFlags.StringVar(&cmdConfig.UiDir, "ui-dir", "", "path to the web UI directory")
@ -91,6 +95,7 @@ func (c *Command) readConfig() *Config {
cmdFlags.StringVar(&cmdConfig.ClientAddr, "client", "", "address to bind client listeners to (DNS, HTTP, HTTPS, RPC)")
cmdFlags.StringVar(&cmdConfig.BindAddr, "bind", "", "address to bind server listeners to")
cmdFlags.IntVar(&cmdConfig.Ports.HTTP, "http-port", 0, "http port to use")
cmdFlags.IntVar(&cmdConfig.Ports.DNS, "dns-port", 0, "DNS port to use")
cmdFlags.StringVar(&cmdConfig.AdvertiseAddr, "advertise", "", "address to advertise instead of bind addr")
cmdFlags.StringVar(&cmdConfig.AdvertiseAddrWan, "advertise-wan", "", "address to advertise on wan instead of bind or advertise addr")
@ -185,15 +190,13 @@ func (c *Command) readConfig() *Config {
return nil
}
// Make sure SkipLeaveOnInt is set to the right default based on the
// agent's mode (client or server)
// Make sure LeaveOnTerm and SkipLeaveOnInt are set to the right
// defaults based on the agent's mode (client or server).
if config.LeaveOnTerm == nil {
config.LeaveOnTerm = Bool(!config.Server)
}
if config.SkipLeaveOnInt == nil {
config.SkipLeaveOnInt = new(bool)
if config.Server {
*config.SkipLeaveOnInt = true
} else {
*config.SkipLeaveOnInt = false
}
config.SkipLeaveOnInt = Bool(config.Server)
}
// Ensure we have a data directory
@ -249,6 +252,14 @@ func (c *Command) readConfig() *Config {
}
}
// Output a warning if the 'dc' flag has been used.
if dcDeprecated != "" {
c.Ui.Error("WARNING: the 'dc' flag has been deprecated. Use 'datacenter' instead")
// Making sure that we don't break previous versions.
config.Datacenter = dcDeprecated
}
// Ensure the datacenter is always lowercased. The DNS endpoints automatically
// lowercase all queries, and internally we expect DC1 and dc1 to be the same.
config.Datacenter = strings.ToLower(config.Datacenter)
@ -449,13 +460,26 @@ func (c *Command) setupLoggers(config *Config) (*GatedWriter, *logWriter, io.Wri
// Check if syslog is enabled
var syslog io.Writer
retries := 12
delay := 5 * time.Second
if config.EnableSyslog {
l, err := gsyslog.NewLogger(gsyslog.LOG_NOTICE, config.SyslogFacility, "consul")
if err != nil {
c.Ui.Error(fmt.Sprintf("Syslog setup failed: %v", err))
return nil, nil, nil
for i := 0; i <= retries; i++ {
l, err := gsyslog.NewLogger(gsyslog.LOG_NOTICE, config.SyslogFacility, "consul")
if err != nil {
c.Ui.Error(fmt.Sprintf("Syslog setup error: %v", err))
if i == retries {
timeout := time.Duration(retries) * delay
c.Ui.Error(fmt.Sprintf("Syslog setup did not succeed within timeout (%s).", timeout.String()))
return nil, nil, nil
} else {
c.Ui.Error(fmt.Sprintf("Retrying syslog setup in %s...", delay.String()))
time.Sleep(delay)
}
} else {
syslog = &SyslogWrapper{l, c.logFilter}
break
}
}
syslog = &SyslogWrapper{l, c.logFilter}
}
// Create a log writer, and wrap a logOutput around it
@ -587,12 +611,7 @@ func (c *Command) checkpointResults(results *checkpoint.CheckResponse, err error
return
}
if results.Outdated {
versionStr := c.Version
if c.VersionPrerelease != "" {
versionStr += fmt.Sprintf("-%s", c.VersionPrerelease)
}
c.Ui.Error(fmt.Sprintf("Newer Consul version available: %s (currently running: %s)", results.CurrentVersion, versionStr))
c.Ui.Error(fmt.Sprintf("Newer Consul version available: %s (currently running: %s)", results.CurrentVersion, c.Version))
}
for _, alert := range results.Alerts {
switch alert.Level {
@ -782,6 +801,41 @@ func (c *Command) Run(args []string) int {
fanout = append(fanout, sink)
}
if config.Telemetry.CirconusAPIToken != "" || config.Telemetry.CirconusCheckSubmissionURL != "" {
cfg := &circonus.Config{}
cfg.Interval = config.Telemetry.CirconusSubmissionInterval
cfg.CheckManager.API.TokenKey = config.Telemetry.CirconusAPIToken
cfg.CheckManager.API.TokenApp = config.Telemetry.CirconusAPIApp
cfg.CheckManager.API.URL = config.Telemetry.CirconusAPIURL
cfg.CheckManager.Check.SubmissionURL = config.Telemetry.CirconusCheckSubmissionURL
cfg.CheckManager.Check.ID = config.Telemetry.CirconusCheckID
cfg.CheckManager.Check.ForceMetricActivation = config.Telemetry.CirconusCheckForceMetricActivation
cfg.CheckManager.Check.InstanceID = config.Telemetry.CirconusCheckInstanceID
cfg.CheckManager.Check.SearchTag = config.Telemetry.CirconusCheckSearchTag
cfg.CheckManager.Broker.ID = config.Telemetry.CirconusBrokerID
cfg.CheckManager.Broker.SelectTag = config.Telemetry.CirconusBrokerSelectTag
if cfg.CheckManager.API.TokenApp == "" {
cfg.CheckManager.API.TokenApp = "consul"
}
if cfg.CheckManager.Check.InstanceID == "" {
cfg.CheckManager.Check.InstanceID = fmt.Sprintf("%s:%s", config.NodeName, config.Datacenter)
}
if cfg.CheckManager.Check.SearchTag == "" {
cfg.CheckManager.Check.SearchTag = "service:consul"
}
sink, err := circonus.NewCirconusSink(cfg)
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to start Circonus sink. Got: %s", err))
return 1
}
sink.Start()
fanout = append(fanout, sink)
}
// Initialize the global sink
if len(fanout) > 0 {
fanout = append(fanout, inm)
@ -806,33 +860,6 @@ func (c *Command) Run(args []string) int {
defer server.Shutdown()
}
// Enable child process reaping
if (config.Reap != nil && *config.Reap) || (config.Reap == nil && os.Getpid() == 1) {
if !reap.IsSupported() {
c.Ui.Error("Child process reaping is not supported on this platform (set reap=false)")
return 1
} else {
logger := c.agent.logger
logger.Printf("[DEBUG] Automatically reaping child processes")
pids := make(reap.PidCh, 1)
errors := make(reap.ErrorCh, 1)
go func() {
for {
select {
case pid := <-pids:
logger.Printf("[DEBUG] Reaped child process %d", pid)
case err := <-errors:
logger.Printf("[ERR] Error reaping child process: %v", err)
case <-c.agent.shutdownCh:
return
}
}
}()
go reap.ReapChildren(pids, errors, c.agent.shutdownCh, &c.agent.reapLock)
}
}
// Check and shut down the SCADA listeners at the end
defer func() {
if c.scadaHttp != nil {
@ -864,7 +891,7 @@ func (c *Command) Run(args []string) int {
// Register the watches
for _, wp := range config.WatchPlans {
go func(wp *watch.WatchPlan) {
wp.Handler = makeWatchHandler(logOutput, wp.Exempt["handler"], &c.agent.reapLock)
wp.Handler = makeWatchHandler(logOutput, wp.Exempt["handler"])
wp.LogOutput = c.logOutput
if err := wp.Run(httpAddr.String()); err != nil {
c.Ui.Error(fmt.Sprintf("Error running watch: %v", err))
@ -890,6 +917,7 @@ func (c *Command) Run(args []string) int {
c.agent.StartSync()
c.Ui.Output("Consul agent running!")
c.Ui.Info(fmt.Sprintf(" Version: '%s'", c.HumanVersion))
c.Ui.Info(fmt.Sprintf(" Node name: '%s'", config.NodeName))
c.Ui.Info(fmt.Sprintf(" Datacenter: '%s'", config.Datacenter))
c.Ui.Info(fmt.Sprintf(" Server: %v (bootstrap: %v)", config.Server, config.Bootstrap))
@ -955,7 +983,7 @@ WAIT:
graceful := false
if sig == os.Interrupt && !(*config.SkipLeaveOnInt) {
graceful = true
} else if sig == syscall.SIGTERM && config.LeaveOnTerm {
} else if sig == syscall.SIGTERM && (*config.LeaveOnTerm) {
graceful = true
}
@ -1051,7 +1079,7 @@ func (c *Command) handleReload(config *Config) *Config {
// Register the new watches
for _, wp := range newConf.WatchPlans {
go func(wp *watch.WatchPlan) {
wp.Handler = makeWatchHandler(c.logOutput, wp.Exempt["handler"], &c.agent.reapLock)
wp.Handler = makeWatchHandler(c.logOutput, wp.Exempt["handler"])
wp.LogOutput = c.logOutput
if err := wp.Run(httpAddr.String()); err != nil {
c.Ui.Error(fmt.Sprintf("Error running watch: %v", err))
@ -1088,9 +1116,25 @@ func (c *Command) setupScadaConn(config *Config) error {
return nil
}
scadaConfig := &scada.Config{
Service: "consul",
Version: fmt.Sprintf("%s%s", config.Version, config.VersionPrerelease),
ResourceType: "infrastructures",
Meta: map[string]string{
"auto-join": strconv.FormatBool(config.AtlasJoin),
"datacenter": config.Datacenter,
"server": strconv.FormatBool(config.Server),
},
Atlas: scada.AtlasConfig{
Endpoint: config.AtlasEndpoint,
Infrastructure: config.AtlasInfrastructure,
Token: config.AtlasToken,
},
}
// Create the new provider and listener
c.Ui.Output("Connecting to Atlas: " + config.AtlasInfrastructure)
provider, list, err := NewProvider(config, c.logOutput)
provider, list, err := scada.NewHTTPProvider(scadaConfig, c.logOutput)
if err != nil {
return err
}
@ -1139,7 +1183,8 @@ Options:
-dev Starts the agent in development mode.
-recursor=1.2.3.4 Address of an upstream DNS server.
Can be specified multiple times.
-dc=east-aws Datacenter of the agent
-dc=east-aws Datacenter of the agent (deprecated: use 'datacenter' instead).
-datacenter=east-aws Datacenter of the agent.
-encrypt=key Provides the gossip encryption key
-join=1.2.3.4 Address of an agent to join at start time.
Can be specified multiple times.


@ -137,7 +137,7 @@ func TestReadCliConfig(t *testing.T) {
}
}
// Test SkipLeaveOnInt default for server mode
// Test LeaveOnTerm and SkipLeaveOnInt defaults for server mode
{
ui := new(cli.MockUi)
cmd := &Command{
@ -157,12 +157,15 @@ func TestReadCliConfig(t *testing.T) {
if config.Server != true {
t.Errorf(`Expected -server to be true`)
}
if (*config.LeaveOnTerm) != false {
t.Errorf(`Expected LeaveOnTerm to be false in server mode`)
}
if (*config.SkipLeaveOnInt) != true {
t.Errorf(`Expected SkipLeaveOnInt to be true in server mode`)
}
}
// Test SkipLeaveOnInt default for client mode
// Test LeaveOnTerm and SkipLeaveOnInt defaults for client mode
{
ui := new(cli.MockUi)
cmd := &Command{
@ -181,6 +184,9 @@ func TestReadCliConfig(t *testing.T) {
if config.Server != false {
t.Errorf(`Expected server to be false`)
}
if (*config.LeaveOnTerm) != true {
t.Errorf(`Expected LeaveOnTerm to be true in client mode`)
}
if *config.SkipLeaveOnInt != false {
t.Errorf(`Expected SkipLeaveOnInt to be false in client mode`)
}
@ -336,10 +342,6 @@ func TestSetupScadaConn(t *testing.T) {
if err := cmd.setupScadaConn(conf1); err != nil {
t.Fatalf("err: %s", err)
}
list := cmd.scadaHttp.listener.(*scadaListener)
if list == nil || list.addr.infra != "hashicorp/test1" {
t.Fatalf("bad: %#v", list)
}
http1 := cmd.scadaHttp
provider1 := cmd.scadaProvider
@ -354,10 +356,6 @@ func TestSetupScadaConn(t *testing.T) {
if cmd.scadaHttp == http1 || cmd.scadaProvider == provider1 {
t.Fatalf("should change: %#v %#v", cmd.scadaHttp, cmd.scadaProvider)
}
list = cmd.scadaHttp.listener.(*scadaListener)
if list == nil || list.addr.infra != "hashicorp/test2" {
t.Fatalf("bad: %#v", list)
}
// Original provider and listener must be closed
if !provider1.IsShutdown() {


@ -68,7 +68,7 @@ type DNSConfig struct {
// data. This gives horizontal read scalability since
// any Consul server can service the query instead of
// only the leader.
AllowStale bool `mapstructure:"allow_stale"`
AllowStale *bool `mapstructure:"allow_stale"`
// EnableTruncate is used to enable setting the truncate
// flag for UDP DNS queries. This allows unmodified
@ -104,6 +104,25 @@ type DNSConfig struct {
// whose health checks are in any non-passing state. By
// default, only nodes in a critical state are excluded.
OnlyPassing bool `mapstructure:"only_passing"`
// DisableCompression is used to control whether DNS responses are
// compressed. In Consul 0.7 this was turned on by default and this
// config was added as an opt-out.
DisableCompression bool `mapstructure:"disable_compression"`
// RecursorTimeout specifies the timeout in seconds
// for Consul's internal dns client used for recursion.
// This value is used for the connection, read and write timeout.
// Default: 2s
RecursorTimeout time.Duration `mapstructure:"-"`
RecursorTimeoutRaw string `mapstructure:"recursor_timeout" json:"-"`
}
// Performance is used to tune the performance of Consul's subsystems.
type Performance struct {
// RaftMultiplier is an integer multiplier used to scale Raft timing
// parameters: HeartbeatTimeout, ElectionTimeout, and LeaderLeaseTimeout.
RaftMultiplier uint `mapstructure:"raft_multiplier"`
}
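A minimal sketch, under the assumption that the multiplier simply scales a set of base durations, of what applying Performance.RaftMultiplier could look like; the base values and helper name are illustrative, not Consul's actual defaults:

package main

import (
	"fmt"
	"time"
)

// scaleRaftTimings shows how an integer multiplier could scale hypothetical
// base Raft timings. The 500ms/1s bases are placeholders for illustration.
func scaleRaftTimings(multiplier uint) (heartbeat, election, lease time.Duration) {
	if multiplier == 0 {
		multiplier = 1 // treat an unset multiplier as 1x
	}
	heartbeat = time.Duration(multiplier) * 500 * time.Millisecond
	election = time.Duration(multiplier) * 1000 * time.Millisecond
	lease = time.Duration(multiplier) * 500 * time.Millisecond
	return heartbeat, election, lease
}

func main() {
	h, e, l := scaleRaftTimings(3)
	fmt.Println(h, e, l) // 1.5s 3s 1.5s
}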
// Telemetry is the telemetry configuration for the server
@ -130,16 +149,83 @@ type Telemetry struct {
// DogStatsdTags are the global tags that should be sent with each packet to dogstatsd
// It is a list of strings, where each string looks like "my_tag_name:my_tag_value"
DogStatsdTags []string `mapstructure:"dogstatsd_tags"`
// Circonus: see https://github.com/circonus-labs/circonus-gometrics
// for more details on the various configuration options.
// Valid configuration combinations:
// - CirconusAPIToken
// metric management enabled (search for existing check or create a new one)
// - CirconusSubmissionUrl
// metric management disabled (use check with specified submission_url,
// broker must be using a public SSL certificate)
// - CirconusAPIToken + CirconusCheckSubmissionURL
// metric management enabled (use check with specified submission_url)
// - CirconusAPIToken + CirconusCheckID
// metric management enabled (use check with specified id)
// CirconusAPIToken is a valid API Token used to create/manage check. If provided,
// metric management is enabled.
// Default: none
CirconusAPIToken string `mapstructure:"circonus_api_token" json:"-"`
// CirconusAPIApp is an app name associated with API token.
// Default: "consul"
CirconusAPIApp string `mapstructure:"circonus_api_app"`
// CirconusAPIURL is the base URL to use for contacting the Circonus API.
// Default: "https://api.circonus.com/v2"
CirconusAPIURL string `mapstructure:"circonus_api_url"`
// CirconusSubmissionInterval is the interval at which metrics are submitted to Circonus.
// Default: 10s
CirconusSubmissionInterval string `mapstructure:"circonus_submission_interval"`
// CirconusCheckSubmissionURL is the check.config.submission_url field from a
// previously created HTTPTRAP check.
// Default: none
CirconusCheckSubmissionURL string `mapstructure:"circonus_submission_url"`
// CirconusCheckID is the check id (not check bundle id) from a previously created
// HTTPTRAP check. The numeric portion of the check._cid field.
// Default: none
CirconusCheckID string `mapstructure:"circonus_check_id"`
// CirconusCheckForceMetricActivation will force enabling metrics, as they are encountered,
// if the metric already exists and is NOT active. If check management is enabled, the default
// behavior is to add new metrics as they are encountered. If the metric already exists in the
// check, it will *NOT* be activated. This setting overrides that behavior.
// Default: "false"
CirconusCheckForceMetricActivation string `mapstructure:"circonus_check_force_metric_activation"`
// CirconusCheckInstanceID serves to uniquely identify the metrics coming from this "instance".
// It can be used to maintain metric continuity with transient or ephemeral instances as
// they move around within an infrastructure.
// Default: hostname:app
CirconusCheckInstanceID string `mapstructure:"circonus_check_instance_id"`
// CirconusCheckSearchTag is a special tag which, when coupled with the instance id, helps to
// narrow down the search results when neither a Submission URL nor a Check ID is provided.
// Default: service:app (e.g. service:consul)
CirconusCheckSearchTag string `mapstructure:"circonus_check_search_tag"`
// CirconusBrokerID is an explicit broker to use when creating a new check. The numeric portion
// of broker._cid. If metric management is enabled and neither a Submission URL nor Check ID
// is provided, an attempt will be made to search for an existing check using Instance ID and
// Search Tag. If one is not found, a new HTTPTRAP check will be created.
// Default: use Select Tag if provided, otherwise, a random Enterprise Broker associated
// with the specified API token or the default Circonus Broker.
// Default: none
CirconusBrokerID string `mapstructure:"circonus_broker_id"`
// CirconusBrokerSelectTag is a special tag which will be used to select a broker when
// a Broker ID is not provided. The best use of this is as a hint for which broker
// should be used based on *where* this particular instance is running.
// (e.g. a specific geo location or datacenter, dc:sfo)
// Default: none
CirconusBrokerSelectTag string `mapstructure:"circonus_broker_select_tag"`
}
// Config is the configuration that can be set for an Agent.
// Some of this is configurable as CLI flags, but most must
// be set using a configuration file.
type Config struct {
// DevMode enables a fast-path mode of opertaion to bring up an in-memory
// DevMode enables a fast-path mode of operation to bring up an in-memory
// server with minimal configuration. Useful for developing Consul.
DevMode bool `mapstructure:"-"`
// Performance is used to tune the performance of Consul's subsystems.
Performance Performance `mapstructure:"performance"`
// Bootstrap is used to bring up the first Consul server, and
// permits that node to elect itself leader
Bootstrap bool `mapstructure:"bootstrap"`
@ -224,8 +310,9 @@ type Config struct {
TaggedAddresses map[string]string
// LeaveOnTerm controls if Serf does a graceful leave when receiving
// the TERM signal. Defaults false. This can be changed on reload.
LeaveOnTerm bool `mapstructure:"leave_on_terminate"`
// the TERM signal. Defaults true on clients, false on servers. This can
// be changed on reload.
LeaveOnTerm *bool `mapstructure:"leave_on_terminate"`
// SkipLeaveOnInt controls if Serf skips a graceful leave when
// receiving the INT signal. Defaults false on clients, true on
@ -353,6 +440,14 @@ type Config struct {
CheckUpdateInterval time.Duration `mapstructure:"-"`
CheckUpdateIntervalRaw string `mapstructure:"check_update_interval" json:"-"`
// CheckReapInterval controls the interval on which we will look for
// failed checks and reap their associated services, if so configured.
CheckReapInterval time.Duration `mapstructure:"-"`
// CheckDeregisterIntervalMin is the smallest allowed interval to set
// a check's DeregisterCriticalServiceAfter value to.
CheckDeregisterIntervalMin time.Duration `mapstructure:"-"`
// ACLToken is the default token used to make requests if a per-request
// token is not provided. If not configured the 'anonymous' token is used.
ACLToken string `mapstructure:"acl_token" json:"-"`
@ -388,6 +483,12 @@ type Config struct {
// this acts like deny.
ACLDownPolicy string `mapstructure:"acl_down_policy"`
// ACLReplicationToken is used to fetch ACLs from the ACLDatacenter in
// order to replicate them locally. Setting this to a non-empty value
// also enables replication. Replication is only available in datacenters
// other than the ACLDatacenter.
ACLReplicationToken string `mapstructure:"acl_replication_token" json:"-"`
// Watches are used to monitor various endpoints and to invoke a
// handler to act appropriately. These are managed entirely in the
// agent layer using the standard APIs.
@ -491,12 +592,6 @@ type Config struct {
// Minimum Session TTL
SessionTTLMin time.Duration `mapstructure:"-"`
SessionTTLMinRaw string `mapstructure:"session_ttl_min"`
// Reap controls automatic reaping of child processes, useful if running
// as PID 1 in a Docker container. This defaults to nil which will make
// Consul reap only if it detects it's running as PID 1. If non-nil,
// then this will be used to decide if reaping is enabled.
Reap *bool `mapstructure:"reap"`
}
// Bool is used to initialize bool pointers in struct literals.
@ -566,17 +661,21 @@ func DefaultConfig() *Config {
Server: 8300,
},
DNSConfig: DNSConfig{
UDPAnswerLimit: 3,
MaxStale: 5 * time.Second,
AllowStale: Bool(true),
UDPAnswerLimit: 3,
MaxStale: 5 * time.Second,
RecursorTimeout: 2 * time.Second,
},
Telemetry: Telemetry{
StatsitePrefix: "consul",
},
SyslogFacility: "LOCAL0",
Protocol: consul.ProtocolVersion2Compatible,
CheckUpdateInterval: 5 * time.Minute,
AEInterval: time.Minute,
DisableCoordinates: false,
SyslogFacility: "LOCAL0",
Protocol: consul.ProtocolVersion2Compatible,
CheckUpdateInterval: 5 * time.Minute,
CheckDeregisterIntervalMin: time.Minute,
CheckReapInterval: 30 * time.Second,
AEInterval: time.Minute,
DisableCoordinates: false,
// SyncCoordinateRateTarget is set based on the rate that we want
// the server to handle as an aggregate across the entire cluster.
@ -723,7 +822,9 @@ func DecodeConfig(r io.Reader) (*Config, error) {
// Check unused fields and verify that no bad configuration options were
// passed to Consul. There are a few additional fields which don't directly
// use mapstructure decoding, so we need to account for those as well.
// use mapstructure decoding, so we need to account for those as well. These
// telemetry-related fields used to be available as top-level keys, so they
// are here for backward compatibility with the old format.
allowedKeys := []string{
"service", "services", "check", "checks", "statsd_addr", "statsite_addr", "statsite_prefix",
"dogstatsd_addr", "dogstatsd_tags",
@ -756,6 +857,14 @@ func DecodeConfig(r io.Reader) (*Config, error) {
result.DNSConfig.MaxStale = dur
}
if raw := result.DNSConfig.RecursorTimeoutRaw; raw != "" {
dur, err := time.ParseDuration(raw)
if err != nil {
return nil, fmt.Errorf("RecursorTimeout invalid: %v", err)
}
result.DNSConfig.RecursorTimeout = dur
}
if len(result.DNSConfig.ServiceTTLRaw) != 0 {
if result.DNSConfig.ServiceTTL == nil {
result.DNSConfig.ServiceTTL = make(map[string]time.Duration)
@ -860,6 +969,11 @@ func DecodeConfig(r io.Reader) (*Config, error) {
result.AdvertiseAddrs.RPC = addr
}
// Enforce the max Raft multiplier.
if result.Performance.RaftMultiplier > consul.MaxRaftMultiplier {
return nil, fmt.Errorf("Performance.RaftMultiplier must be <= %d", consul.MaxRaftMultiplier)
}
return &result, nil
}
@ -913,6 +1027,7 @@ AFTER_FIX:
func FixupCheckType(raw interface{}) error {
var ttlKey, intervalKey, timeoutKey string
const deregisterKey = "DeregisterCriticalServiceAfter"
// Handle decoding of time durations
rawMap, ok := raw.(map[string]interface{})
@ -928,12 +1043,15 @@ func FixupCheckType(raw interface{}) error {
intervalKey = k
case "timeout":
timeoutKey = k
case "deregister_critical_service_after":
rawMap[deregisterKey] = v
delete(rawMap, k)
case "service_id":
rawMap["serviceid"] = v
delete(rawMap, "service_id")
delete(rawMap, k)
case "docker_container_id":
rawMap["DockerContainerID"] = v
delete(rawMap, "docker_container_id")
delete(rawMap, k)
}
}
@ -970,6 +1088,17 @@ func FixupCheckType(raw interface{}) error {
}
}
if deregister, ok := rawMap[deregisterKey]; ok {
timeoutS, ok := deregister.(string)
if ok {
if dur, err := time.ParseDuration(timeoutS); err != nil {
return err
} else {
rawMap[deregisterKey] = dur
}
}
}
return nil
}
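For illustration, a standalone rendition of the deregister_critical_service_after handling above; the helper name is made up, but it mirrors the key renaming and duration parsing that FixupCheckType performs:

package main

import (
	"fmt"
	"time"
)

// fixupDeregister renames the snake_case key to DeregisterCriticalServiceAfter
// and parses a duration string into a time.Duration, mirroring the logic above.
func fixupDeregister(raw map[string]interface{}) error {
	const key = "DeregisterCriticalServiceAfter"
	if v, ok := raw["deregister_critical_service_after"]; ok {
		raw[key] = v
		delete(raw, "deregister_critical_service_after")
	}
	if s, ok := raw[key].(string); ok {
		dur, err := time.ParseDuration(s)
		if err != nil {
			return err
		}
		raw[key] = dur
	}
	return nil
}

func main() {
	check := map[string]interface{}{
		"script": "/bin/check_redis",
		"deregister_critical_service_after": "90m",
	}
	if err := fixupDeregister(check); err != nil {
		panic(err)
	}
	fmt.Println(check["DeregisterCriticalServiceAfter"]) // 1h30m0s
}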
@ -998,6 +1127,11 @@ func DecodeCheckDefinition(raw interface{}) (*CheckDefinition, error) {
func MergeConfig(a, b *Config) *Config {
var result Config = *a
// Propagate non-default performance settings
if b.Performance.RaftMultiplier > 0 {
result.Performance.RaftMultiplier = b.Performance.RaftMultiplier
}
// Copy the strings if they're set
if b.Bootstrap {
result.Bootstrap = true
@ -1062,8 +1196,8 @@ func MergeConfig(a, b *Config) *Config {
if b.Server == true {
result.Server = b.Server
}
if b.LeaveOnTerm == true {
result.LeaveOnTerm = true
if b.LeaveOnTerm != nil {
result.LeaveOnTerm = b.LeaveOnTerm
}
if b.SkipLeaveOnInt != nil {
result.SkipLeaveOnInt = b.SkipLeaveOnInt
@ -1086,6 +1220,39 @@ func MergeConfig(a, b *Config) *Config {
if b.Telemetry.DogStatsdTags != nil {
result.Telemetry.DogStatsdTags = b.Telemetry.DogStatsdTags
}
if b.Telemetry.CirconusAPIToken != "" {
result.Telemetry.CirconusAPIToken = b.Telemetry.CirconusAPIToken
}
if b.Telemetry.CirconusAPIApp != "" {
result.Telemetry.CirconusAPIApp = b.Telemetry.CirconusAPIApp
}
if b.Telemetry.CirconusAPIURL != "" {
result.Telemetry.CirconusAPIURL = b.Telemetry.CirconusAPIURL
}
if b.Telemetry.CirconusCheckSubmissionURL != "" {
result.Telemetry.CirconusCheckSubmissionURL = b.Telemetry.CirconusCheckSubmissionURL
}
if b.Telemetry.CirconusSubmissionInterval != "" {
result.Telemetry.CirconusSubmissionInterval = b.Telemetry.CirconusSubmissionInterval
}
if b.Telemetry.CirconusCheckID != "" {
result.Telemetry.CirconusCheckID = b.Telemetry.CirconusCheckID
}
if b.Telemetry.CirconusCheckForceMetricActivation != "" {
result.Telemetry.CirconusCheckForceMetricActivation = b.Telemetry.CirconusCheckForceMetricActivation
}
if b.Telemetry.CirconusCheckInstanceID != "" {
result.Telemetry.CirconusCheckInstanceID = b.Telemetry.CirconusCheckInstanceID
}
if b.Telemetry.CirconusCheckSearchTag != "" {
result.Telemetry.CirconusCheckSearchTag = b.Telemetry.CirconusCheckSearchTag
}
if b.Telemetry.CirconusBrokerID != "" {
result.Telemetry.CirconusBrokerID = b.Telemetry.CirconusBrokerID
}
if b.Telemetry.CirconusBrokerSelectTag != "" {
result.Telemetry.CirconusBrokerSelectTag = b.Telemetry.CirconusBrokerSelectTag
}
if b.EnableDebug {
result.EnableDebug = true
}
@ -1195,8 +1362,8 @@ func MergeConfig(a, b *Config) *Config {
result.DNSConfig.ServiceTTL[service] = dur
}
}
if b.DNSConfig.AllowStale {
result.DNSConfig.AllowStale = true
if b.DNSConfig.AllowStale != nil {
result.DNSConfig.AllowStale = b.DNSConfig.AllowStale
}
if b.DNSConfig.UDPAnswerLimit != 0 {
result.DNSConfig.UDPAnswerLimit = b.DNSConfig.UDPAnswerLimit
@ -1210,6 +1377,12 @@ func MergeConfig(a, b *Config) *Config {
if b.DNSConfig.OnlyPassing {
result.DNSConfig.OnlyPassing = true
}
if b.DNSConfig.DisableCompression {
result.DNSConfig.DisableCompression = true
}
if b.DNSConfig.RecursorTimeout != 0 {
result.DNSConfig.RecursorTimeout = b.DNSConfig.RecursorTimeout
}
if b.CheckUpdateIntervalRaw != "" || b.CheckUpdateInterval != 0 {
result.CheckUpdateInterval = b.CheckUpdateInterval
}
@ -1235,6 +1408,9 @@ func MergeConfig(a, b *Config) *Config {
if b.ACLDefaultPolicy != "" {
result.ACLDefaultPolicy = b.ACLDefaultPolicy
}
if b.ACLReplicationToken != "" {
result.ACLReplicationToken = b.ACLReplicationToken
}
if len(b.Watches) != 0 {
result.Watches = append(result.Watches, b.Watches...)
}
@ -1325,10 +1501,6 @@ func MergeConfig(a, b *Config) *Config {
result.RetryJoinWan = append(result.RetryJoinWan, a.RetryJoinWan...)
result.RetryJoinWan = append(result.RetryJoinWan, b.RetryJoinWan...)
if b.Reap != nil {
result.Reap = b.Reap
}
return &result
}


@ -78,8 +78,8 @@ func TestDecodeConfig(t *testing.T) {
t.Fatalf("bad: expected nil SkipLeaveOnInt")
}
if config.LeaveOnTerm != DefaultConfig().LeaveOnTerm {
t.Fatalf("bad: %#v", config)
if config.LeaveOnTerm != nil {
t.Fatalf("bad: expected nil LeaveOnTerm")
}
// Server bootstrap
@ -279,7 +279,7 @@ func TestDecodeConfig(t *testing.T) {
t.Fatalf("err: %s", err)
}
if config.LeaveOnTerm != true {
if *config.LeaveOnTerm != true {
t.Fatalf("bad: %#v", config)
}
@ -544,13 +544,13 @@ func TestDecodeConfig(t *testing.T) {
}
// DNS node ttl, max stale
input = `{"dns_config": {"allow_stale": true, "enable_truncate": false, "max_stale": "15s", "node_ttl": "5s", "only_passing": true, "udp_answer_limit": 6}}`
input = `{"dns_config": {"allow_stale": false, "enable_truncate": false, "max_stale": "15s", "node_ttl": "5s", "only_passing": true, "udp_answer_limit": 6, "recursor_timeout": "7s"}}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if !config.DNSConfig.AllowStale {
if *config.DNSConfig.AllowStale {
t.Fatalf("bad: %#v", config)
}
if config.DNSConfig.EnableTruncate {
@ -568,6 +568,9 @@ func TestDecodeConfig(t *testing.T) {
if config.DNSConfig.UDPAnswerLimit != 6 {
t.Fatalf("bad: %#v", config)
}
if config.DNSConfig.RecursorTimeout != 7*time.Second {
t.Fatalf("bad: %#v", config)
}
// DNS service ttl
input = `{"dns_config": {"service_ttl": {"*": "1s", "api": "10s", "web": "30s"}}}`
@ -608,6 +611,17 @@ func TestDecodeConfig(t *testing.T) {
t.Fatalf("bad: %#v", config)
}
// DNS disable compression
input = `{"dns_config": {"disable_compression": true}}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if !config.DNSConfig.DisableCompression {
t.Fatalf("bad: %#v", config)
}
// CheckUpdateInterval
input = `{"check_update_interval": "10m"}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
@ -622,7 +636,8 @@ func TestDecodeConfig(t *testing.T) {
// ACLs
input = `{"acl_token": "1234", "acl_datacenter": "dc2",
"acl_ttl": "60s", "acl_down_policy": "deny",
"acl_default_policy": "deny", "acl_master_token": "2345"}`
"acl_default_policy": "deny", "acl_master_token": "2345",
"acl_replication_token": "8675309"}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
@ -646,6 +661,9 @@ func TestDecodeConfig(t *testing.T) {
if config.ACLDefaultPolicy != "deny" {
t.Fatalf("bad: %#v", config)
}
if config.ACLReplicationToken != "8675309" {
t.Fatalf("bad: %#v", config)
}
// Watches
input = `{"watches": [{"type":"keyprefix", "prefix":"foo/", "handler":"foobar"}]}`
@ -725,6 +743,51 @@ func TestDecodeConfig(t *testing.T) {
t.Fatalf("bad: %#v", config)
}
// Circonus settings
input = `{"telemetry": {"circonus_api_token": "12345678-1234-1234-12345678", "circonus_api_app": "testApp",
"circonus_api_url": "https://api.host.foo/v2", "circonus_submission_interval": "15s",
"circonus_submission_url": "https://submit.host.bar:123/one/two/three",
"circonus_check_id": "12345", "circonus_check_force_metric_activation": "true",
"circonus_check_instance_id": "a:b", "circonus_check_search_tag": "c:d",
"circonus_broker_id": "6789", "circonus_broker_select_tag": "e:f"} }`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if config.Telemetry.CirconusAPIToken != "12345678-1234-1234-12345678" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusAPIApp != "testApp" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusAPIURL != "https://api.host.foo/v2" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusSubmissionInterval != "15s" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusCheckSubmissionURL != "https://submit.host.bar:123/one/two/three" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusCheckID != "12345" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusCheckForceMetricActivation != "true" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusCheckInstanceID != "a:b" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusCheckSearchTag != "c:d" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusBrokerID != "6789" {
t.Fatalf("bad: %#v", config)
}
if config.Telemetry.CirconusBrokerSelectTag != "e:f" {
t.Fatalf("bad: %#v", config)
}
// New telemetry
input = `{"telemetry": { "statsite_prefix": "my_prefix", "statsite_address": "127.0.0.1:7250", "statsd_address":"127.0.0.1:7251", "disable_hostname": true, "dogstatsd_addr": "1.1.1.1:111", "dogstatsd_tags": [ "tag_1:val_1" ] } }`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
@ -866,27 +929,6 @@ func TestDecodeConfig(t *testing.T) {
if config.SessionTTLMin != 5*time.Second {
t.Fatalf("bad: %s %#v", config.SessionTTLMin.String(), config)
}
// Reap
input = `{"reap": true}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if config.Reap == nil || *config.Reap != true {
t.Fatalf("bad: reap not enabled: %#v", config)
}
input = `{}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if config.Reap != nil {
t.Fatalf("bad: reap not tri-stated: %#v", config)
}
}
func TestDecodeConfig_invalidKeys(t *testing.T) {
@ -897,6 +939,23 @@ func TestDecodeConfig_invalidKeys(t *testing.T) {
}
}
func TestDecodeConfig_Performance(t *testing.T) {
input := `{"performance": { "raft_multiplier": 3 }}`
config, err := DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
}
if config.Performance.RaftMultiplier != 3 {
t.Fatalf("bad: multiplier isn't set: %#v", config)
}
input = `{"performance": { "raft_multiplier": 11 }}`
config, err = DecodeConfig(bytes.NewReader([]byte(input)))
if err == nil || !strings.Contains(err.Error(), "Performance.RaftMultiplier must be <=") {
t.Fatalf("bad: %v", err)
}
}
func TestDecodeConfig_Services(t *testing.T) {
input := `{
"services": [
@ -1198,7 +1257,7 @@ func TestDecodeConfig_Multiples(t *testing.T) {
func TestDecodeConfig_Service(t *testing.T) {
// Basics
input := `{"service": {"id": "red1", "name": "redis", "tags": ["master"], "port":8000, "check": {"script": "/bin/check_redis", "interval": "10s", "ttl": "15s" }}}`
input := `{"service": {"id": "red1", "name": "redis", "tags": ["master"], "port":8000, "check": {"script": "/bin/check_redis", "interval": "10s", "ttl": "15s", "DeregisterCriticalServiceAfter": "90m" }}}`
config, err := DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
@ -1236,11 +1295,15 @@ func TestDecodeConfig_Service(t *testing.T) {
if serv.Check.TTL != 15*time.Second {
t.Fatalf("bad: %v", serv)
}
if serv.Check.DeregisterCriticalServiceAfter != 90*time.Minute {
t.Fatalf("bad: %v", serv)
}
}
func TestDecodeConfig_Check(t *testing.T) {
// Basics
input := `{"check": {"id": "chk1", "name": "mem", "notes": "foobar", "script": "/bin/check_redis", "interval": "10s", "ttl": "15s", "shell": "/bin/bash", "docker_container_id": "redis" }}`
input := `{"check": {"id": "chk1", "name": "mem", "notes": "foobar", "script": "/bin/check_redis", "interval": "10s", "ttl": "15s", "shell": "/bin/bash", "docker_container_id": "redis", "deregister_critical_service_after": "90s" }}`
config, err := DecodeConfig(bytes.NewReader([]byte(input)))
if err != nil {
t.Fatalf("err: %s", err)
@ -1282,6 +1345,10 @@ func TestDecodeConfig_Check(t *testing.T) {
if chk.DockerContainerID != "redis" {
t.Fatalf("bad: %v", chk)
}
if chk.DeregisterCriticalServiceAfter != 90*time.Second {
t.Fatalf("bad: %v", chk)
}
}
func TestMergeConfig(t *testing.T) {
@ -1297,7 +1364,7 @@ func TestMergeConfig(t *testing.T) {
BindAddr: "127.0.0.1",
AdvertiseAddr: "127.0.0.1",
Server: false,
LeaveOnTerm: false,
LeaveOnTerm: new(bool),
SkipLeaveOnInt: new(bool),
EnableDebug: false,
CheckUpdateIntervalRaw: "8m",
@ -1314,20 +1381,25 @@ func TestMergeConfig(t *testing.T) {
}
b := &Config{
Performance: Performance{
RaftMultiplier: 99,
},
Bootstrap: true,
BootstrapExpect: 3,
Datacenter: "dc2",
DataDir: "/tmp/bar",
DNSRecursors: []string{"127.0.0.2:1001"},
DNSConfig: DNSConfig{
AllowStale: false,
EnableTruncate: true,
MaxStale: 30 * time.Second,
NodeTTL: 10 * time.Second,
AllowStale: Bool(false),
EnableTruncate: true,
DisableCompression: true,
MaxStale: 30 * time.Second,
NodeTTL: 10 * time.Second,
ServiceTTL: map[string]time.Duration{
"api": 10 * time.Second,
},
UDPAnswerLimit: 4,
UDPAnswerLimit: 4,
RecursorTimeout: 30 * time.Second,
},
Domain: "other",
LogLevel: "info",
@ -1352,8 +1424,8 @@ func TestMergeConfig(t *testing.T) {
HTTPS: "127.0.0.4",
},
Server: true,
LeaveOnTerm: true,
SkipLeaveOnInt: new(bool),
LeaveOnTerm: Bool(true),
SkipLeaveOnInt: Bool(true),
EnableDebug: true,
VerifyIncoming: true,
VerifyOutgoing: true,
@ -1387,6 +1459,7 @@ func TestMergeConfig(t *testing.T) {
ACLTTLRaw: "15s",
ACLDownPolicy: "deny",
ACLDefaultPolicy: "deny",
ACLReplicationToken: "8765309",
Watches: []map[string]interface{}{
map[string]interface{}{
"type": "keyprefix",
@ -1429,9 +1502,7 @@ func TestMergeConfig(t *testing.T) {
RPC: &net.TCPAddr{},
RPCRaw: "127.0.0.5:1233",
},
Reap: Bool(true),
}
*b.SkipLeaveOnInt = true
c := MergeConfig(a, b)


@ -1,6 +1,7 @@
package agent
import (
"encoding/hex"
"fmt"
"io"
"log"
@ -50,8 +51,8 @@ func (d *DNSServer) Shutdown() {
// NewDNSServer starts a new DNS server to provide an agent interface
func NewDNSServer(agent *Agent, config *DNSConfig, logOutput io.Writer, domain string, bind string, recursors []string) (*DNSServer, error) {
// Make sure domain is FQDN
domain = dns.Fqdn(domain)
// Make sure domain is FQDN, make it case insensitive for ServeMux
domain = dns.Fqdn(strings.ToLower(domain))
// Construct the DNS components
mux := dns.NewServeMux()
@ -180,6 +181,7 @@ func (d *DNSServer) handlePtr(resp dns.ResponseWriter, req *dns.Msg) {
// Setup the message response
m := new(dns.Msg)
m.SetReply(req)
m.Compress = !d.config.DisableCompression
m.Authoritative = true
m.RecursionAvailable = (len(d.recursors) > 0)
@ -197,7 +199,7 @@ func (d *DNSServer) handlePtr(resp dns.ResponseWriter, req *dns.Msg) {
Datacenter: datacenter,
QueryOptions: structs.QueryOptions{
Token: d.agent.config.ACLToken,
AllowStale: d.config.AllowStale,
AllowStale: *d.config.AllowStale,
},
}
var out structs.IndexedNodes
@ -249,6 +251,7 @@ func (d *DNSServer) handleQuery(resp dns.ResponseWriter, req *dns.Msg) {
// Setup the message response
m := new(dns.Msg)
m.SetReply(req)
m.Compress = !d.config.DisableCompression
m.Authoritative = true
m.RecursionAvailable = (len(d.recursors) > 0)
@ -355,6 +358,46 @@ PARSE:
query := strings.Join(labels[:n-1], ".")
d.preparedQueryLookup(network, datacenter, query, req, resp)
case "addr":
if n != 2 {
goto INVALID
}
switch len(labels[0]) / 2 {
// IPv4
case 4:
ip, err := hex.DecodeString(labels[0])
if err != nil {
goto INVALID
}
resp.Answer = append(resp.Answer, &dns.A{
Hdr: dns.RR_Header{
Name: qName + d.domain,
Rrtype: dns.TypeA,
Class: dns.ClassINET,
Ttl: uint32(d.config.NodeTTL / time.Second),
},
A: ip,
})
// IPv6
case 16:
ip, err := hex.DecodeString(labels[0])
if err != nil {
goto INVALID
}
resp.Answer = append(resp.Answer, &dns.AAAA{
Hdr: dns.RR_Header{
Name: qName + d.domain,
Rrtype: dns.TypeAAAA,
Class: dns.ClassINET,
Ttl: uint32(d.config.NodeTTL / time.Second),
},
AAAA: ip,
})
}
default:
// Store the DC, and re-parse
datacenter = labels[n-1]
@ -368,19 +411,6 @@ INVALID:
resp.SetRcode(req, dns.RcodeNameError)
}
// translateAddr is used to provide the final, translated address for a node,
// depending on how this agent and the other node are configured.
func (d *DNSServer) translateAddr(dc string, node *structs.Node) string {
addr := node.Address
if d.agent.config.TranslateWanAddrs && (d.agent.config.Datacenter != dc) {
wanAddr := node.TaggedAddresses["wan"]
if wanAddr != "" {
addr = wanAddr
}
}
return addr
}
// nodeLookup is used to handle a node query
func (d *DNSServer) nodeLookup(network, datacenter, node string, req, resp *dns.Msg) {
// Only handle ANY, A and AAAA type requests
@ -395,7 +425,7 @@ func (d *DNSServer) nodeLookup(network, datacenter, node string, req, resp *dns.
Node: node,
QueryOptions: structs.QueryOptions{
Token: d.agent.config.ACLToken,
AllowStale: d.config.AllowStale,
AllowStale: *d.config.AllowStale,
},
}
var out structs.IndexedNodeServices
@ -421,7 +451,8 @@ RPC:
}
// Add the node record
addr := d.translateAddr(datacenter, out.NodeServices.Node)
n := out.NodeServices.Node
addr := translateAddress(d.agent.config, datacenter, n.Address, n.TaggedAddresses)
records := d.formatNodeRecord(out.NodeServices.Node, addr,
req.Question[0].Name, qType, d.config.NodeTTL)
if records != nil {
@ -492,22 +523,94 @@ func (d *DNSServer) formatNodeRecord(node *structs.Node, addr, qName string, qTy
return records
}
// trimUDPAnswers makes sure a UDP response is not longer than allowed by RFC
// 1035. Enforce an arbitrary limit that can be further ratcheted down by
// config, and then make sure the response doesn't exceed 512 bytes.
func trimUDPAnswers(config *DNSConfig, resp *dns.Msg) (trimmed bool) {
// indexRRs populates a map which indexes a given list of RRs by name. NOTE that
// the names are all squashed to lower case so we can perform case-insensitive
// lookups; the RRs are not modified.
func indexRRs(rrs []dns.RR, index map[string]dns.RR) {
for _, rr := range rrs {
name := strings.ToLower(rr.Header().Name)
if _, ok := index[name]; !ok {
index[name] = rr
}
}
}
// syncExtra takes a DNS response message and sets the extra data to the most
// minimal set needed to cover the answer data. A pre-made index of RRs is given
// so that it can be re-used between calls. This assumes that the extra data is
// only used to provide info for SRV records. If that's not the case, then this
// will wipe out any additional data.
func syncExtra(index map[string]dns.RR, resp *dns.Msg) {
extra := make([]dns.RR, 0, len(resp.Answer))
resolved := make(map[string]struct{}, len(resp.Answer))
for _, ansRR := range resp.Answer {
srv, ok := ansRR.(*dns.SRV)
if !ok {
continue
}
// Note that we always use lower case when using the index so
// that compares are not case-sensitive. We don't alter the actual
// RRs we add into the extra section, however.
target := strings.ToLower(srv.Target)
RESOLVE:
if _, ok := resolved[target]; ok {
continue
}
resolved[target] = struct{}{}
extraRR, ok := index[target]
if ok {
extra = append(extra, extraRR)
if cname, ok := extraRR.(*dns.CNAME); ok {
target = strings.ToLower(cname.Target)
goto RESOLVE
}
}
}
resp.Extra = extra
}
// trimUDPResponse makes sure a UDP response is not longer than allowed by RFC
// 1035. Enforce an arbitrary limit that can be further ratcheted down by
// config, and then make sure the response doesn't exceed 512 bytes. Any extra
// records will be trimmed along with answers.
func trimUDPResponse(config *DNSConfig, resp *dns.Msg) (trimmed bool) {
numAnswers := len(resp.Answer)
hasExtra := len(resp.Extra) > 0
// We avoid some function calls and allocations by only handling the
// extra data when necessary.
var index map[string]dns.RR
if hasExtra {
index = make(map[string]dns.RR, len(resp.Extra))
indexRRs(resp.Extra, index)
}
// This cuts UDP responses to a useful but limited number of responses.
maxAnswers := lib.MinInt(maxUDPAnswerLimit, config.UDPAnswerLimit)
if numAnswers > maxAnswers {
resp.Answer = resp.Answer[:maxAnswers]
if hasExtra {
syncExtra(index, resp)
}
}
// This enforces the hard limit of 512 bytes per the RFC.
// This enforces the hard limit of 512 bytes per the RFC. Note that we
// temporarily switch to uncompressed so that we limit to a response
// that will not exceed 512 bytes uncompressed, which is more
// conservative and will allow our responses to be compliant even if
// some downstream server uncompresses them.
compress := resp.Compress
resp.Compress = false
for len(resp.Answer) > 0 && resp.Len() > 512 {
resp.Answer = resp.Answer[:len(resp.Answer)-1]
if hasExtra {
syncExtra(index, resp)
}
}
resp.Compress = compress
return len(resp.Answer) < numAnswers
}
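A usage-style sketch of the trimming idea, assuming the github.com/miekg/dns package used throughout this file; it builds an oversized response and drops answers until the uncompressed message fits within 512 bytes, the same hard limit enforced above:

package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// trimTo512 is a simplified stand-in for trimUDPResponse: it ignores the
// configurable answer-count limit and extra-record syncing, and only enforces
// the 512-byte uncompressed size limit from RFC 1035.
func trimTo512(resp *dns.Msg) bool {
	before := len(resp.Answer)
	compress := resp.Compress
	resp.Compress = false
	for len(resp.Answer) > 0 && resp.Len() > 512 {
		resp.Answer = resp.Answer[:len(resp.Answer)-1]
	}
	resp.Compress = compress
	return len(resp.Answer) < before
}

func main() {
	m := new(dns.Msg)
	m.SetQuestion("web.service.consul.", dns.TypeA)
	for i := 0; i < 64; i++ {
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: "web.service.consul.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 30},
			A:   net.IPv4(10, 0, 0, byte(i+1)),
		})
	}
	fmt.Println(trimTo512(m), len(m.Answer)) // true, and far fewer than 64 answers remain
}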
@ -522,7 +625,7 @@ func (d *DNSServer) serviceLookup(network, datacenter, service, tag string, req,
TagFilter: tag != "",
QueryOptions: structs.QueryOptions{
Token: d.agent.config.ACLToken,
AllowStale: d.config.AllowStale,
AllowStale: *d.config.AllowStale,
},
}
var out structs.IndexedCheckServiceNodes
@ -565,15 +668,15 @@ RPC:
// Add various responses depending on the request
qType := req.Question[0].Qtype
d.serviceNodeRecords(datacenter, out.Nodes, req, resp, ttl)
if qType == dns.TypeSRV {
d.serviceSRVRecords(datacenter, out.Nodes, req, resp, ttl)
} else {
d.serviceNodeRecords(datacenter, out.Nodes, req, resp, ttl)
}
// If the network is not TCP, restrict the number of responses
if network != "tcp" {
wasTrimmed := trimUDPAnswers(d.config, resp)
wasTrimmed := trimUDPResponse(d.config, resp)
// Flag that there are more records to return in the UDP response
if wasTrimmed && d.config.EnableTruncate {
@ -596,7 +699,16 @@ func (d *DNSServer) preparedQueryLookup(network, datacenter, query string, req,
QueryIDOrName: query,
QueryOptions: structs.QueryOptions{
Token: d.agent.config.ACLToken,
AllowStale: d.config.AllowStale,
AllowStale: *d.config.AllowStale,
},
// Always pass the local agent through. In the DNS interface, there
// is no provision for passing additional query parameters, so we
// send the local agent's data through to allow distance sorting
// relative to ourself on the server side.
Agent: structs.QuerySource{
Datacenter: d.agent.config.Datacenter,
Node: d.agent.config.NodeName,
},
}
@ -661,14 +773,15 @@ RPC:
// Add various responses depending on the request.
qType := req.Question[0].Qtype
d.serviceNodeRecords(datacenter, out.Nodes, req, resp, ttl)
if qType == dns.TypeSRV {
d.serviceSRVRecords(datacenter, out.Nodes, req, resp, ttl)
d.serviceSRVRecords(out.Datacenter, out.Nodes, req, resp, ttl)
} else {
d.serviceNodeRecords(out.Datacenter, out.Nodes, req, resp, ttl)
}
// If the network is not TCP, restrict the number of responses.
if network != "tcp" {
wasTrimmed := trimUDPAnswers(d.config, resp)
wasTrimmed := trimUDPResponse(d.config, resp)
// Flag that there are more records to return in the UDP response
if wasTrimmed && d.config.EnableTruncate {
@ -692,7 +805,7 @@ func (d *DNSServer) serviceNodeRecords(dc string, nodes structs.CheckServiceNode
for _, node := range nodes {
// Start with the translated address but use the service address,
// if specified.
addr := d.translateAddr(dc, node.Node)
addr := translateAddress(d.agent.config, dc, node.Node.Address, node.Node.TaggedAddresses)
if node.Service.Address != "" {
addr = node.Service.Address
}
@ -741,15 +854,39 @@ func (d *DNSServer) serviceSRVRecords(dc string, nodes structs.CheckServiceNodes
// Start with the translated address but use the service address,
// if specified.
addr := d.translateAddr(dc, node.Node)
addr := translateAddress(d.agent.config, dc, node.Node.Address, node.Node.TaggedAddresses)
if node.Service.Address != "" {
addr = node.Service.Address
}
// Add the extra record
records := d.formatNodeRecord(node.Node, addr, srvRec.Target, dns.TypeANY, ttl)
if records != nil {
resp.Extra = append(resp.Extra, records...)
// Use the node address if it doesn't differ from the service address
if addr == node.Node.Address {
resp.Extra = append(resp.Extra, records...)
} else {
// If it differs from the service address, give a special response in the
// 'addr.consul' domain with the service IP encoded in it. We have to do
// this because we can't put an IP in the target field of an SRV record.
switch record := records[0].(type) {
// IPv4
case *dns.A:
addr := hex.EncodeToString(record.A)
// Take the last 8 chars (4 bytes) of the encoded address to avoid junk bytes
srvRec.Target = fmt.Sprintf("%s.addr.%s.%s", addr[len(addr)-(net.IPv4len*2):], dc, d.domain)
record.Hdr.Name = srvRec.Target
resp.Extra = append(resp.Extra, record)
// IPv6
case *dns.AAAA:
srvRec.Target = fmt.Sprintf("%s.addr.%s.%s", hex.EncodeToString(record.AAAA), dc, d.domain)
record.Hdr.Name = srvRec.Target
resp.Extra = append(resp.Extra, record)
}
}
}
}
}
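A small sketch of the address encoding described above, assuming the same hex scheme: the service IP becomes a label under addr.<datacenter>.<domain>, since an SRV target cannot carry a raw IP. The datacenter and domain values here are examples only:

package main

import (
	"encoding/hex"
	"fmt"
	"net"
)

// encodeAddrTarget hex-encodes an IP and places it under the addr subdomain,
// mirroring the SRV target rewrite shown above.
func encodeAddrTarget(ip net.IP, dc, domain string) string {
	if v4 := ip.To4(); v4 != nil {
		return fmt.Sprintf("%s.addr.%s.%s", hex.EncodeToString(v4), dc, domain)
	}
	return fmt.Sprintf("%s.addr.%s.%s", hex.EncodeToString(ip.To16()), dc, domain)
}

func main() {
	fmt.Println(encodeAddrTarget(net.ParseIP("10.0.1.5"), "dc1", "consul."))
	// 0a000105.addr.dc1.consul.
}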
@ -770,13 +907,18 @@ func (d *DNSServer) handleRecurse(resp dns.ResponseWriter, req *dns.Msg) {
}
// Recursively resolve
c := &dns.Client{Net: network}
c := &dns.Client{Net: network, Timeout: d.config.RecursorTimeout}
var r *dns.Msg
var rtt time.Duration
var err error
for _, recursor := range d.recursors {
r, rtt, err = c.Exchange(req, recursor)
if err == nil {
// Compress the response; we don't know if the incoming
// response was compressed or not, so by not compressing
// we might generate an invalid packet on the way out.
r.Compress = !d.config.DisableCompression
// Forward the response
d.logger.Printf("[DEBUG] dns: recurse RTT for %v (%v)", q, rtt)
if err := resp.WriteMsg(r); err != nil {
@ -792,6 +934,7 @@ func (d *DNSServer) handleRecurse(resp dns.ResponseWriter, req *dns.Msg) {
q, resp.RemoteAddr().String(), resp.RemoteAddr().Network())
m := &dns.Msg{}
m.SetReply(req)
m.Compress = !d.config.DisableCompression
m.RecursionAvailable = true
m.SetRcode(req, dns.RcodeServerFailure)
resp.WriteMsg(m)
@ -799,6 +942,19 @@ func (d *DNSServer) handleRecurse(resp dns.ResponseWriter, req *dns.Msg) {
// resolveCNAME is used to recursively resolve CNAME records
func (d *DNSServer) resolveCNAME(name string) []dns.RR {
// If the CNAME record points to a Consul address, resolve it internally
// Convert query to lowercase because DNS is case insensitive; d.domain is
// already converted
if strings.HasSuffix(strings.ToLower(name), "."+d.domain) {
req := &dns.Msg{}
resp := &dns.Msg{}
req.SetQuestion(name, dns.TypeANY)
d.dispatch("udp", req, resp)
return resp.Answer
}
// Do nothing if we don't have a recursor
if len(d.recursors) == 0 {
return nil
@ -809,7 +965,7 @@ func (d *DNSServer) resolveCNAME(name string) []dns.RR {
m.SetQuestion(name, dns.TypeA)
// Make a DNS lookup request
c := &dns.Client{Net: "udp"}
c := &dns.Client{Net: "udp", Timeout: d.config.RecursorTimeout}
var r *dns.Msg
var rtt time.Duration
var err error

File diff suppressed because it is too large

View File

@ -29,13 +29,26 @@ func (w *GatedWriter) Flush() {
}
func (w *GatedWriter) Write(p []byte) (n int, err error) {
// Once we flush we no longer synchronize writers since there's
// no use of the internal buffer. This is the happy path.
w.lock.RLock()
defer w.lock.RUnlock()
if w.flush {
w.lock.RUnlock()
return w.Writer.Write(p)
}
w.lock.RUnlock()
// Now take the write lock.
w.lock.Lock()
defer w.lock.Unlock()
// Things could have changed between the locking operations, so we
// have to check one more time.
if w.flush {
return w.Writer.Write(p)
}
// Buffer up the written data.
p2 := make([]byte, len(p))
copy(p2, p)
w.buf = append(w.buf, p2)
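// The Write path above is the classic double-checked locking pattern: check
// the flag under the cheap read lock, then re-check after taking the write
// lock because another goroutine may have flushed in between. A generic
// sketch of the same pattern, independent of GatedWriter:
package main

import "sync"

type gate struct {
	lock sync.RWMutex
	open bool
	buf  [][]byte
}

func (g *gate) write(p []byte) {
	g.lock.RLock()
	open := g.open
	g.lock.RUnlock()
	if open {
		return // fast path: no buffering once the gate is open
	}

	g.lock.Lock()
	defer g.lock.Unlock()
	if g.open { // state may have changed between the two locks
		return
	}
	g.buf = append(g.buf, append([]byte(nil), p...))
}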

View File

@ -131,6 +131,9 @@ func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Requ
out.Nodes = filterNonPassing(out.Nodes)
}
// Translate addresses after filtering so we don't waste effort.
translateAddresses(s.agent.config, args.Datacenter, out.Nodes)
// Use empty list instead of nil
for i, _ := range out.Nodes {
// TODO (slackpad) It's lame that this isn't a slice of pointers
@ -143,6 +146,7 @@ func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Requ
if out.Nodes == nil {
out.Nodes = make(structs.CheckServiceNodes, 0)
}
return out.Nodes, nil
}

View File

@ -7,7 +7,6 @@ import (
"os"
"reflect"
"testing"
"time"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
@ -126,25 +125,29 @@ func TestHealthChecksInState_DistanceSort(t *testing.T) {
if err := srv.agent.RPC("Coordinate.Update", &arg, &out); err != nil {
t.Fatalf("err: %v", err)
}
time.Sleep(300 * time.Millisecond)
// Query again and now foo should have moved to the front of the line.
resp = httptest.NewRecorder()
obj, err = srv.HealthChecksInState(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.HealthChecks)
if len(nodes) != 2 {
t.Fatalf("bad: %v", nodes)
}
if nodes[0].Node != "foo" {
t.Fatalf("bad: %v", nodes)
}
if nodes[1].Node != "bar" {
t.Fatalf("bad: %v", nodes)
}
// Retry until foo moves to the front of the line.
testutil.WaitForResult(func() (bool, error) {
resp = httptest.NewRecorder()
obj, err = srv.HealthChecksInState(resp, req)
if err != nil {
return false, fmt.Errorf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.HealthChecks)
if len(nodes) != 2 {
return false, fmt.Errorf("bad: %v", nodes)
}
if nodes[0].Node != "foo" {
return false, fmt.Errorf("bad: %v", nodes)
}
if nodes[1].Node != "bar" {
return false, fmt.Errorf("bad: %v", nodes)
}
return true, nil
}, func(err error) {
t.Fatalf("failed to get sorted service nodes: %v", err)
})
}
func TestHealthNodeChecks(t *testing.T) {
@ -320,25 +323,29 @@ func TestHealthServiceChecks_DistanceSort(t *testing.T) {
if err := srv.agent.RPC("Coordinate.Update", &arg, &out); err != nil {
t.Fatalf("err: %v", err)
}
time.Sleep(300 * time.Millisecond)
// Query again and now foo should have moved to the front of the line.
resp = httptest.NewRecorder()
obj, err = srv.HealthServiceChecks(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.HealthChecks)
if len(nodes) != 2 {
t.Fatalf("bad: %v", obj)
}
if nodes[0].Node != "foo" {
t.Fatalf("bad: %v", nodes)
}
if nodes[1].Node != "bar" {
t.Fatalf("bad: %v", nodes)
}
// Retry until foo has moved to the front of the line.
testutil.WaitForResult(func() (bool, error) {
resp = httptest.NewRecorder()
obj, err = srv.HealthServiceChecks(resp, req)
if err != nil {
return false, fmt.Errorf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.HealthChecks)
if len(nodes) != 2 {
return false, fmt.Errorf("bad: %v", obj)
}
if nodes[0].Node != "foo" {
return false, fmt.Errorf("bad: %v", nodes)
}
if nodes[1].Node != "bar" {
return false, fmt.Errorf("bad: %v", nodes)
}
return true, nil
}, func(err error) {
t.Fatalf("failed to get sorted service checks: %v", err)
})
}
func TestHealthServiceNodes(t *testing.T) {
@ -487,25 +494,29 @@ func TestHealthServiceNodes_DistanceSort(t *testing.T) {
if err := srv.agent.RPC("Coordinate.Update", &arg, &out); err != nil {
t.Fatalf("err: %v", err)
}
time.Sleep(300 * time.Millisecond)
// Query again and now foo should have moved to the front of the line.
resp = httptest.NewRecorder()
obj, err = srv.HealthServiceNodes(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.CheckServiceNodes)
if len(nodes) != 2 {
t.Fatalf("bad: %v", obj)
}
if nodes[0].Node.Node != "foo" {
t.Fatalf("bad: %v", nodes)
}
if nodes[1].Node.Node != "bar" {
t.Fatalf("bad: %v", nodes)
}
// Retry until foo has moved to the front of the line.
testutil.WaitForResult(func() (bool, error) {
resp = httptest.NewRecorder()
obj, err = srv.HealthServiceNodes(resp, req)
if err != nil {
return false, fmt.Errorf("err: %v", err)
}
assertIndex(t, resp)
nodes = obj.(structs.CheckServiceNodes)
if len(nodes) != 2 {
return false, fmt.Errorf("bad: %v", obj)
}
if nodes[0].Node.Node != "foo" {
return false, fmt.Errorf("bad: %v", nodes)
}
if nodes[1].Node.Node != "bar" {
return false, fmt.Errorf("bad: %v", nodes)
}
return true, nil
}, func(err error) {
t.Fatalf("failed to get sorted service nodes: %v", err)
})
}
func TestHealthServiceNodes_PassingFilter(t *testing.T) {
@ -554,6 +565,102 @@ func TestHealthServiceNodes_PassingFilter(t *testing.T) {
}
}
func TestHealthServiceNodes_WanTranslation(t *testing.T) {
dir1, srv1 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir1)
defer srv1.Shutdown()
defer srv1.agent.Shutdown()
testutil.WaitForLeader(t, srv1.agent.RPC, "dc1")
dir2, srv2 := makeHTTPServerWithConfig(t,
func(c *Config) {
c.Datacenter = "dc2"
c.TranslateWanAddrs = true
})
defer os.RemoveAll(dir2)
defer srv2.Shutdown()
defer srv2.agent.Shutdown()
testutil.WaitForLeader(t, srv2.agent.RPC, "dc2")
// Wait for the WAN join.
addr := fmt.Sprintf("127.0.0.1:%d",
srv1.agent.config.Ports.SerfWan)
if _, err := srv2.agent.JoinWAN([]string{addr}); err != nil {
t.Fatalf("err: %v", err)
}
testutil.WaitForResult(
func() (bool, error) {
return len(srv1.agent.WANMembers()) > 1, nil
},
func(err error) {
t.Fatalf("Failed waiting for WAN join: %v", err)
})
// Register a node with DC2.
{
args := &structs.RegisterRequest{
Datacenter: "dc2",
Node: "foo",
Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
Service: &structs.NodeService{
Service: "http_wan_translation_test",
},
}
var out struct{}
if err := srv2.agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
// Query for a service in DC2 from DC1.
req, err := http.NewRequest("GET", "/v1/health/service/http_wan_translation_test?dc=dc2", nil)
if err != nil {
t.Fatalf("err: %v", err)
}
resp1 := httptest.NewRecorder()
obj1, err1 := srv1.HealthServiceNodes(resp1, req)
if err1 != nil {
t.Fatalf("err: %v", err1)
}
assertIndex(t, resp1)
// Expect that DC1 gives us a WAN address (since the node is in DC2).
nodes1 := obj1.(structs.CheckServiceNodes)
if len(nodes1) != 1 {
t.Fatalf("bad: %v", obj1)
}
node1 := nodes1[0].Node
if node1.Address != "127.0.0.2" {
t.Fatalf("bad: %v", node1)
}
// Query DC2 from DC2.
resp2 := httptest.NewRecorder()
obj2, err2 := srv2.HealthServiceNodes(resp2, req)
if err2 != nil {
t.Fatalf("err: %v", err2)
}
assertIndex(t, resp2)
// Expect that DC2 gives us a private address (since the node is in DC2).
nodes2 := obj2.(structs.CheckServiceNodes)
if len(nodes2) != 1 {
t.Fatalf("bad: %v", obj2)
}
node2 := nodes2[0].Node
if node2.Address != "127.0.0.1" {
t.Fatalf("bad: %v", node2)
}
}
func TestFilterNonPassing(t *testing.T) {
nodes := structs.CheckServiceNodes{
structs.CheckServiceNode{

View File

@ -15,6 +15,7 @@ import (
"strings"
"time"
"github.com/armon/go-metrics"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/tlsutil"
"github.com/mitchellh/mapstructure"
@ -42,6 +43,10 @@ type HTTPServer struct {
// NewHTTPServers starts new HTTP servers to provide an interface to
// the agent.
func NewHTTPServers(agent *Agent, config *Config, logOutput io.Writer) ([]*HTTPServer, error) {
if logOutput == nil {
return nil, fmt.Errorf("Please provide a valid logOutput(io.Writer)")
}
var servers []*HTTPServer
if config.Ports.HTTPS > 0 {
@ -191,89 +196,120 @@ func (s *HTTPServer) Shutdown() {
}
}
// handleFuncMetrics takes the given pattern and handler and wraps to produce
// metrics based on the pattern and request.
func (s *HTTPServer) handleFuncMetrics(pattern string, handler func(http.ResponseWriter, *http.Request)) {
// Get the parts of the pattern. We omit any initial empty for the
// leading slash, and put an underscore as a "thing" placeholder if we
// see a trailing slash, which means the part after is parsed. This lets
// us distinguish from things like /v1/query and /v1/query/<query id>.
var parts []string
for i, part := range strings.Split(pattern, "/") {
if part == "" {
if i == 0 {
continue
} else {
part = "_"
}
}
parts = append(parts, part)
}
// Register the wrapper, which will close over the expensive-to-compute
// parts from above.
wrapper := func(resp http.ResponseWriter, req *http.Request) {
start := time.Now()
handler(resp, req)
key := append([]string{"consul", "http", req.Method}, parts...)
metrics.MeasureSince(key, start)
}
s.mux.HandleFunc(pattern, wrapper)
}
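// Worked example of the key this wrapper emits, using the same splitting
// rules (standalone sketch, not part of the change): a trailing slash in the
// pattern becomes a "_" placeholder so /v1/query and /v1/query/<id> produce
// distinct metric names.
package main

import (
	"fmt"
	"strings"
)

func metricKey(method, pattern string) []string {
	var parts []string
	for i, part := range strings.Split(pattern, "/") {
		if part == "" {
			if i == 0 {
				continue
			}
			part = "_"
		}
		parts = append(parts, part)
	}
	return append([]string{"consul", "http", method}, parts...)
}

func main() {
	fmt.Println(metricKey("GET", "/v1/query"))  // [consul http GET v1 query]
	fmt.Println(metricKey("GET", "/v1/query/")) // [consul http GET v1 query _]
}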
// registerHandlers is used to attach our handlers to the mux
func (s *HTTPServer) registerHandlers(enableDebug bool) {
s.mux.HandleFunc("/", s.Index)
s.mux.HandleFunc("/v1/status/leader", s.wrap(s.StatusLeader))
s.mux.HandleFunc("/v1/status/peers", s.wrap(s.StatusPeers))
s.mux.HandleFunc("/v1/catalog/register", s.wrap(s.CatalogRegister))
s.mux.HandleFunc("/v1/catalog/deregister", s.wrap(s.CatalogDeregister))
s.mux.HandleFunc("/v1/catalog/datacenters", s.wrap(s.CatalogDatacenters))
s.mux.HandleFunc("/v1/catalog/nodes", s.wrap(s.CatalogNodes))
s.mux.HandleFunc("/v1/catalog/services", s.wrap(s.CatalogServices))
s.mux.HandleFunc("/v1/catalog/service/", s.wrap(s.CatalogServiceNodes))
s.mux.HandleFunc("/v1/catalog/node/", s.wrap(s.CatalogNodeServices))
if !s.agent.config.DisableCoordinates {
s.mux.HandleFunc("/v1/coordinate/datacenters", s.wrap(s.CoordinateDatacenters))
s.mux.HandleFunc("/v1/coordinate/nodes", s.wrap(s.CoordinateNodes))
} else {
s.mux.HandleFunc("/v1/coordinate/datacenters", s.wrap(coordinateDisabled))
s.mux.HandleFunc("/v1/coordinate/nodes", s.wrap(coordinateDisabled))
}
s.mux.HandleFunc("/v1/health/node/", s.wrap(s.HealthNodeChecks))
s.mux.HandleFunc("/v1/health/checks/", s.wrap(s.HealthServiceChecks))
s.mux.HandleFunc("/v1/health/state/", s.wrap(s.HealthChecksInState))
s.mux.HandleFunc("/v1/health/service/", s.wrap(s.HealthServiceNodes))
s.mux.HandleFunc("/v1/agent/self", s.wrap(s.AgentSelf))
s.mux.HandleFunc("/v1/agent/maintenance", s.wrap(s.AgentNodeMaintenance))
s.mux.HandleFunc("/v1/agent/services", s.wrap(s.AgentServices))
s.mux.HandleFunc("/v1/agent/checks", s.wrap(s.AgentChecks))
s.mux.HandleFunc("/v1/agent/members", s.wrap(s.AgentMembers))
s.mux.HandleFunc("/v1/agent/join/", s.wrap(s.AgentJoin))
s.mux.HandleFunc("/v1/agent/force-leave/", s.wrap(s.AgentForceLeave))
s.mux.HandleFunc("/v1/agent/check/register", s.wrap(s.AgentRegisterCheck))
s.mux.HandleFunc("/v1/agent/check/deregister/", s.wrap(s.AgentDeregisterCheck))
s.mux.HandleFunc("/v1/agent/check/pass/", s.wrap(s.AgentCheckPass))
s.mux.HandleFunc("/v1/agent/check/warn/", s.wrap(s.AgentCheckWarn))
s.mux.HandleFunc("/v1/agent/check/fail/", s.wrap(s.AgentCheckFail))
s.mux.HandleFunc("/v1/agent/check/update/", s.wrap(s.AgentCheckUpdate))
s.mux.HandleFunc("/v1/agent/service/register", s.wrap(s.AgentRegisterService))
s.mux.HandleFunc("/v1/agent/service/deregister/", s.wrap(s.AgentDeregisterService))
s.mux.HandleFunc("/v1/agent/service/maintenance/", s.wrap(s.AgentServiceMaintenance))
s.mux.HandleFunc("/v1/event/fire/", s.wrap(s.EventFire))
s.mux.HandleFunc("/v1/event/list", s.wrap(s.EventList))
s.mux.HandleFunc("/v1/kv/", s.wrap(s.KVSEndpoint))
s.mux.HandleFunc("/v1/session/create", s.wrap(s.SessionCreate))
s.mux.HandleFunc("/v1/session/destroy/", s.wrap(s.SessionDestroy))
s.mux.HandleFunc("/v1/session/renew/", s.wrap(s.SessionRenew))
s.mux.HandleFunc("/v1/session/info/", s.wrap(s.SessionGet))
s.mux.HandleFunc("/v1/session/node/", s.wrap(s.SessionsForNode))
s.mux.HandleFunc("/v1/session/list", s.wrap(s.SessionList))
// API V1.
if s.agent.config.ACLDatacenter != "" {
s.mux.HandleFunc("/v1/acl/create", s.wrap(s.ACLCreate))
s.mux.HandleFunc("/v1/acl/update", s.wrap(s.ACLUpdate))
s.mux.HandleFunc("/v1/acl/destroy/", s.wrap(s.ACLDestroy))
s.mux.HandleFunc("/v1/acl/info/", s.wrap(s.ACLGet))
s.mux.HandleFunc("/v1/acl/clone/", s.wrap(s.ACLClone))
s.mux.HandleFunc("/v1/acl/list", s.wrap(s.ACLList))
s.handleFuncMetrics("/v1/acl/create", s.wrap(s.ACLCreate))
s.handleFuncMetrics("/v1/acl/update", s.wrap(s.ACLUpdate))
s.handleFuncMetrics("/v1/acl/destroy/", s.wrap(s.ACLDestroy))
s.handleFuncMetrics("/v1/acl/info/", s.wrap(s.ACLGet))
s.handleFuncMetrics("/v1/acl/clone/", s.wrap(s.ACLClone))
s.handleFuncMetrics("/v1/acl/list", s.wrap(s.ACLList))
s.handleFuncMetrics("/v1/acl/replication", s.wrap(s.ACLReplicationStatus))
} else {
s.mux.HandleFunc("/v1/acl/create", s.wrap(aclDisabled))
s.mux.HandleFunc("/v1/acl/update", s.wrap(aclDisabled))
s.mux.HandleFunc("/v1/acl/destroy/", s.wrap(aclDisabled))
s.mux.HandleFunc("/v1/acl/info/", s.wrap(aclDisabled))
s.mux.HandleFunc("/v1/acl/clone/", s.wrap(aclDisabled))
s.mux.HandleFunc("/v1/acl/list", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/create", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/update", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/destroy/", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/info/", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/clone/", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/list", s.wrap(aclDisabled))
s.handleFuncMetrics("/v1/acl/replication", s.wrap(aclDisabled))
}
s.handleFuncMetrics("/v1/agent/self", s.wrap(s.AgentSelf))
s.handleFuncMetrics("/v1/agent/maintenance", s.wrap(s.AgentNodeMaintenance))
s.handleFuncMetrics("/v1/agent/services", s.wrap(s.AgentServices))
s.handleFuncMetrics("/v1/agent/checks", s.wrap(s.AgentChecks))
s.handleFuncMetrics("/v1/agent/members", s.wrap(s.AgentMembers))
s.handleFuncMetrics("/v1/agent/join/", s.wrap(s.AgentJoin))
s.handleFuncMetrics("/v1/agent/force-leave/", s.wrap(s.AgentForceLeave))
s.handleFuncMetrics("/v1/agent/check/register", s.wrap(s.AgentRegisterCheck))
s.handleFuncMetrics("/v1/agent/check/deregister/", s.wrap(s.AgentDeregisterCheck))
s.handleFuncMetrics("/v1/agent/check/pass/", s.wrap(s.AgentCheckPass))
s.handleFuncMetrics("/v1/agent/check/warn/", s.wrap(s.AgentCheckWarn))
s.handleFuncMetrics("/v1/agent/check/fail/", s.wrap(s.AgentCheckFail))
s.handleFuncMetrics("/v1/agent/check/update/", s.wrap(s.AgentCheckUpdate))
s.handleFuncMetrics("/v1/agent/service/register", s.wrap(s.AgentRegisterService))
s.handleFuncMetrics("/v1/agent/service/deregister/", s.wrap(s.AgentDeregisterService))
s.handleFuncMetrics("/v1/agent/service/maintenance/", s.wrap(s.AgentServiceMaintenance))
s.handleFuncMetrics("/v1/catalog/register", s.wrap(s.CatalogRegister))
s.handleFuncMetrics("/v1/catalog/deregister", s.wrap(s.CatalogDeregister))
s.handleFuncMetrics("/v1/catalog/datacenters", s.wrap(s.CatalogDatacenters))
s.handleFuncMetrics("/v1/catalog/nodes", s.wrap(s.CatalogNodes))
s.handleFuncMetrics("/v1/catalog/services", s.wrap(s.CatalogServices))
s.handleFuncMetrics("/v1/catalog/service/", s.wrap(s.CatalogServiceNodes))
s.handleFuncMetrics("/v1/catalog/node/", s.wrap(s.CatalogNodeServices))
if !s.agent.config.DisableCoordinates {
s.handleFuncMetrics("/v1/coordinate/datacenters", s.wrap(s.CoordinateDatacenters))
s.handleFuncMetrics("/v1/coordinate/nodes", s.wrap(s.CoordinateNodes))
} else {
s.handleFuncMetrics("/v1/coordinate/datacenters", s.wrap(coordinateDisabled))
s.handleFuncMetrics("/v1/coordinate/nodes", s.wrap(coordinateDisabled))
}
s.handleFuncMetrics("/v1/event/fire/", s.wrap(s.EventFire))
s.handleFuncMetrics("/v1/event/list", s.wrap(s.EventList))
s.handleFuncMetrics("/v1/health/node/", s.wrap(s.HealthNodeChecks))
s.handleFuncMetrics("/v1/health/checks/", s.wrap(s.HealthServiceChecks))
s.handleFuncMetrics("/v1/health/state/", s.wrap(s.HealthChecksInState))
s.handleFuncMetrics("/v1/health/service/", s.wrap(s.HealthServiceNodes))
s.handleFuncMetrics("/v1/internal/ui/nodes", s.wrap(s.UINodes))
s.handleFuncMetrics("/v1/internal/ui/node/", s.wrap(s.UINodeInfo))
s.handleFuncMetrics("/v1/internal/ui/services", s.wrap(s.UIServices))
s.handleFuncMetrics("/v1/kv/", s.wrap(s.KVSEndpoint))
s.handleFuncMetrics("/v1/operator/raft/configuration", s.wrap(s.OperatorRaftConfiguration))
s.handleFuncMetrics("/v1/operator/raft/peer", s.wrap(s.OperatorRaftPeer))
s.handleFuncMetrics("/v1/query", s.wrap(s.PreparedQueryGeneral))
s.handleFuncMetrics("/v1/query/", s.wrap(s.PreparedQuerySpecific))
s.handleFuncMetrics("/v1/session/create", s.wrap(s.SessionCreate))
s.handleFuncMetrics("/v1/session/destroy/", s.wrap(s.SessionDestroy))
s.handleFuncMetrics("/v1/session/renew/", s.wrap(s.SessionRenew))
s.handleFuncMetrics("/v1/session/info/", s.wrap(s.SessionGet))
s.handleFuncMetrics("/v1/session/node/", s.wrap(s.SessionsForNode))
s.handleFuncMetrics("/v1/session/list", s.wrap(s.SessionList))
s.handleFuncMetrics("/v1/status/leader", s.wrap(s.StatusLeader))
s.handleFuncMetrics("/v1/status/peers", s.wrap(s.StatusPeers))
s.handleFuncMetrics("/v1/snapshot", s.wrap(s.Snapshot))
s.handleFuncMetrics("/v1/txn", s.wrap(s.Txn))
s.mux.HandleFunc("/v1/query", s.wrap(s.PreparedQueryGeneral))
s.mux.HandleFunc("/v1/query/", s.wrap(s.PreparedQuerySpecific))
// Debug endpoints.
if enableDebug {
s.mux.HandleFunc("/debug/pprof/", pprof.Index)
s.mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
s.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
s.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
s.handleFuncMetrics("/debug/pprof/", pprof.Index)
s.handleFuncMetrics("/debug/pprof/cmdline", pprof.Cmdline)
s.handleFuncMetrics("/debug/pprof/profile", pprof.Profile)
s.handleFuncMetrics("/debug/pprof/symbol", pprof.Symbol)
}
// Use the custom UI dir if provided.
@ -283,16 +319,13 @@ func (s *HTTPServer) registerHandlers(enableDebug bool) {
s.mux.Handle("/ui/", http.StripPrefix("/ui/", http.FileServer(assetFS())))
}
// APIs are under /v1/internal/ui/ to avoid conflict
s.mux.HandleFunc("/v1/internal/ui/nodes", s.wrap(s.UINodes))
s.mux.HandleFunc("/v1/internal/ui/node/", s.wrap(s.UINodeInfo))
s.mux.HandleFunc("/v1/internal/ui/services", s.wrap(s.UIServices))
}
// wrap is used to wrap functions to make them more convenient
func (s *HTTPServer) wrap(handler func(resp http.ResponseWriter, req *http.Request) (interface{}, error)) func(resp http.ResponseWriter, req *http.Request) {
f := func(resp http.ResponseWriter, req *http.Request) {
setHeaders(resp, s.agent.config.HTTPAPIResponseHeaders)
setTranslateAddr(resp, s.agent.config.TranslateWanAddrs)
// Obfuscate any tokens from appearing in the logs
formVals, err := url.ParseQuery(req.URL.RawQuery)
@ -337,26 +370,19 @@ func (s *HTTPServer) wrap(handler func(resp http.ResponseWriter, req *http.Reque
if strings.Contains(errMsg, "Permission denied") || strings.Contains(errMsg, "ACL not found") {
code = http.StatusForbidden // 403
}
resp.WriteHeader(code)
resp.Write([]byte(err.Error()))
return
}
prettyPrint := false
if _, ok := req.URL.Query()["pretty"]; ok {
prettyPrint = true
}
// Write out the JSON object
if obj != nil {
var buf []byte
if prettyPrint {
buf, err = json.MarshalIndent(obj, "", " ")
} else {
buf, err = json.Marshal(obj)
}
buf, err = s.marshalJSON(req, obj)
if err != nil {
goto HAS_ERR
}
resp.Header().Set("Content-Type", "application/json")
resp.Write(buf)
}
@ -364,6 +390,25 @@ func (s *HTTPServer) wrap(handler func(resp http.ResponseWriter, req *http.Reque
return f
}
// marshalJSON marshals the object into JSON, respecting the user's pretty-ness
// configuration.
func (s *HTTPServer) marshalJSON(req *http.Request, obj interface{}) ([]byte, error) {
if _, ok := req.URL.Query()["pretty"]; ok {
buf, err := json.MarshalIndent(obj, "", " ")
if err != nil {
return nil, err
}
buf = append(buf, "\n"...)
return buf, nil
}
buf, err := json.Marshal(obj)
if err != nil {
return nil, err
}
return buf, err
}
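// A small standalone sketch of the two output shapes marshalJSON produces,
// including the trailing newline added in the pretty case (which the
// pretty-print test later in this diff expects):
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	obj := map[string]string{"Node": "foo", "Address": "127.0.0.1"}

	compact, _ := json.Marshal(obj)

	pretty, _ := json.MarshalIndent(obj, "", "    ")
	pretty = append(pretty, "\n"...)

	fmt.Printf("%s\n%s", compact, pretty)
}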
// Returns true if the UI is enabled.
func (s *HTTPServer) IsUIEnabled() bool {
return s.uiDir != "" || s.agent.config.EnableUi
@ -405,6 +450,14 @@ func decodeBody(req *http.Request, out interface{}, cb func(interface{}) error)
return mapstructure.Decode(raw, out)
}
// setTranslateAddr is used to set the address translation header. This is only
// present if the feature is active.
func setTranslateAddr(resp http.ResponseWriter, active bool) {
if active {
resp.Header().Set("X-Consul-Translate-Addresses", "true")
}
}
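// Client-side sketch: callers can check this header to know whether addresses
// in the body may have been WAN-translated. The agent address and service
// name here are assumptions for illustration only.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8500/v1/health/service/web?dc=dc2")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	if resp.Header.Get("X-Consul-Translate-Addresses") == "true" {
		fmt.Println("response addresses may be WAN-translated")
	}
}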
// setIndex is used to set the index response header
func setIndex(resp http.ResponseWriter, index uint64) {
resp.Header().Set("X-Consul-Index", strconv.FormatUint(index, 10))

View File

@ -223,6 +223,51 @@ func TestSetMeta(t *testing.T) {
}
}
func TestHTTPAPI_TranslateAddrHeader(t *testing.T) {
// Header should not be present if address translation is off.
{
dir, srv := makeHTTPServer(t)
defer os.RemoveAll(dir)
defer srv.Shutdown()
defer srv.agent.Shutdown()
resp := httptest.NewRecorder()
handler := func(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
return nil, nil
}
req, _ := http.NewRequest("GET", "/v1/agent/self", nil)
srv.wrap(handler)(resp, req)
translate := resp.Header().Get("X-Consul-Translate-Addresses")
if translate != "" {
t.Fatalf("bad: expected %q, got %q", "", translate)
}
}
// Header should be set to true if it's turned on.
{
dir, srv := makeHTTPServer(t)
srv.agent.config.TranslateWanAddrs = true
defer os.RemoveAll(dir)
defer srv.Shutdown()
defer srv.agent.Shutdown()
resp := httptest.NewRecorder()
handler := func(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
return nil, nil
}
req, _ := http.NewRequest("GET", "/v1/agent/self", nil)
srv.wrap(handler)(resp, req)
translate := resp.Header().Get("X-Consul-Translate-Addresses")
if translate != "true" {
t.Fatalf("bad: expected %q, got %q", "true", translate)
}
}
}
func TestHTTPAPIResponseHeaders(t *testing.T) {
dir, srv := makeHTTPServer(t)
srv.agent.config.HTTPAPIResponseHeaders = map[string]string{
@ -328,6 +373,7 @@ func testPrettyPrint(pretty string, t *testing.T) {
srv.wrap(handler)(resp, req)
expected, _ := json.MarshalIndent(r, "", " ")
expected = append(expected, "\n"...)
actual, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatalf("err: %s", err)
@ -595,7 +641,7 @@ func TestACLResolution(t *testing.T) {
t.Fatalf("bad: %s", token)
}
// Querystring token has precendence over header and agent tokens
// Querystring token has precedence over header and agent tokens
srv.parseToken(reqBothTokens, &token)
if token != "baz" {
t.Fatalf("bad: %s", token)

View File

@ -22,7 +22,9 @@ const (
func initKeyring(path, key string) error {
var keys []string
if _, err := base64.StdEncoding.DecodeString(key); err != nil {
if keyBytes, err := base64.StdEncoding.DecodeString(key); err != nil {
return fmt.Errorf("Invalid key: %s", err)
} else if err := memberlist.ValidateKey(keyBytes); err != nil {
return fmt.Errorf("Invalid key: %s", err)
}
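// A sketch of producing a key that passes this validation: 16 random bytes,
// base64-encoded (the same shape `consul keygen` emits). Other key sizes are
// not assumed here.
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"github.com/hashicorp/memberlist"
)

func main() {
	raw := make([]byte, 16)
	if _, err := rand.Read(raw); err != nil {
		panic(err)
	}
	key := base64.StdEncoding.EncodeToString(raw)

	keyBytes, err := base64.StdEncoding.DecodeString(key)
	if err != nil {
		panic(err)
	}
	if err := memberlist.ValidateKey(keyBytes); err != nil {
		panic(err)
	}
	fmt.Println("generated gossip key:", key)
}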

View File

@ -12,6 +12,7 @@ import (
"github.com/hashicorp/consul/consul"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/types"
)
const (
@ -25,8 +26,7 @@ const (
// syncStatus is used to represent the difference between
// the local and remote state, and if action needs to be taken
type syncStatus struct {
remoteDelete bool // Should this be deleted from the server
inSync bool // Is this in sync with the server
inSync bool // Is this in sync with the server
}
// localState is used to represent the node's services,
@ -56,12 +56,13 @@ type localState struct {
serviceTokens map[string]string
// Checks tracks the local checks
checks map[string]*structs.HealthCheck
checkStatus map[string]syncStatus
checkTokens map[string]string
checks map[types.CheckID]*structs.HealthCheck
checkStatus map[types.CheckID]syncStatus
checkTokens map[types.CheckID]string
checkCriticalTime map[types.CheckID]time.Time
// Used to track checks that are being deferred
deferCheck map[string]*time.Timer
deferCheck map[types.CheckID]*time.Timer
// consulCh is used to inform of a change to the known
// consul nodes. This may be used to retry a sync run
@ -79,10 +80,11 @@ func (l *localState) Init(config *Config, logger *log.Logger) {
l.services = make(map[string]*structs.NodeService)
l.serviceStatus = make(map[string]syncStatus)
l.serviceTokens = make(map[string]string)
l.checks = make(map[string]*structs.HealthCheck)
l.checkStatus = make(map[string]syncStatus)
l.checkTokens = make(map[string]string)
l.deferCheck = make(map[string]*time.Timer)
l.checks = make(map[types.CheckID]*structs.HealthCheck)
l.checkStatus = make(map[types.CheckID]syncStatus)
l.checkTokens = make(map[types.CheckID]string)
l.checkCriticalTime = make(map[types.CheckID]time.Time)
l.deferCheck = make(map[types.CheckID]*time.Timer)
l.consulCh = make(chan struct{}, 1)
l.triggerCh = make(chan struct{}, 1)
}
@ -174,7 +176,7 @@ func (l *localState) RemoveService(serviceID string) {
delete(l.services, serviceID)
delete(l.serviceTokens, serviceID)
l.serviceStatus[serviceID] = syncStatus{remoteDelete: true}
l.serviceStatus[serviceID] = syncStatus{inSync: false}
l.changeMade()
}
@ -191,17 +193,17 @@ func (l *localState) Services() map[string]*structs.NodeService {
return services
}
// CheckToken is used to return the configured health check token, or
// if none is configured, the default agent ACL token.
func (l *localState) CheckToken(id string) string {
// CheckToken is used to return the configured health check token for a
// Check, or if none is configured, the default agent ACL token.
func (l *localState) CheckToken(checkID types.CheckID) string {
l.RLock()
defer l.RUnlock()
return l.checkToken(id)
return l.checkToken(checkID)
}
// checkToken returns an ACL token associated with a check.
func (l *localState) checkToken(id string) string {
token := l.checkTokens[id]
func (l *localState) checkToken(checkID types.CheckID) string {
token := l.checkTokens[checkID]
if token == "" {
token = l.config.ACLToken
}
@ -221,23 +223,25 @@ func (l *localState) AddCheck(check *structs.HealthCheck, token string) {
l.checks[check.CheckID] = check
l.checkStatus[check.CheckID] = syncStatus{}
l.checkTokens[check.CheckID] = token
delete(l.checkCriticalTime, check.CheckID)
l.changeMade()
}
// RemoveCheck is used to remove a health check from the local state.
// The agent will make a best effort to ensure it is deregistered
func (l *localState) RemoveCheck(checkID string) {
func (l *localState) RemoveCheck(checkID types.CheckID) {
l.Lock()
defer l.Unlock()
delete(l.checks, checkID)
delete(l.checkTokens, checkID)
l.checkStatus[checkID] = syncStatus{remoteDelete: true}
delete(l.checkCriticalTime, checkID)
l.checkStatus[checkID] = syncStatus{inSync: false}
l.changeMade()
}
// UpdateCheck is used to update the status of a check
func (l *localState) UpdateCheck(checkID, status, output string) {
func (l *localState) UpdateCheck(checkID types.CheckID, status, output string) {
l.Lock()
defer l.Unlock()
@ -246,6 +250,17 @@ func (l *localState) UpdateCheck(checkID, status, output string) {
return
}
// Update the critical time tracking (this doesn't cause a server update,
// so we can always keep this up to date).
if status == structs.HealthCritical {
_, wasCritical := l.checkCriticalTime[checkID]
if !wasCritical {
l.checkCriticalTime[checkID] = time.Now()
}
} else {
delete(l.checkCriticalTime, checkID)
}
// Do nothing if update is idempotent
if check.Status == status && check.Output == output {
return
@ -282,17 +297,45 @@ func (l *localState) UpdateCheck(checkID, status, output string) {
// Checks returns the locally registered checks that the
// agent is aware of and are being kept in sync with the server
func (l *localState) Checks() map[string]*structs.HealthCheck {
checks := make(map[string]*structs.HealthCheck)
func (l *localState) Checks() map[types.CheckID]*structs.HealthCheck {
checks := make(map[types.CheckID]*structs.HealthCheck)
l.RLock()
defer l.RUnlock()
for name, check := range l.checks {
checks[name] = check
for checkID, check := range l.checks {
checks[checkID] = check
}
return checks
}
// CriticalCheck is used to return the duration a check has been critical along
// with its associated health check.
type CriticalCheck struct {
CriticalFor time.Duration
Check *structs.HealthCheck
}
// CriticalChecks returns locally registered health checks that the agent is
// aware of and are being kept in sync with the server, and that are in a
// critical state. This also returns information about how long each check has
// been critical.
func (l *localState) CriticalChecks() map[types.CheckID]CriticalCheck {
checks := make(map[types.CheckID]CriticalCheck)
l.RLock()
defer l.RUnlock()
now := time.Now()
for checkID, criticalTime := range l.checkCriticalTime {
checks[checkID] = CriticalCheck{
CriticalFor: now.Sub(criticalTime),
Check: l.checks[checkID],
}
}
return checks
}
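// A minimal sketch of one way this data can be consumed: find checks that
// have been critical for longer than a deadline. The helper name and the
// threshold are assumptions for illustration, not part of the change.
package agent

import (
	"time"

	"github.com/hashicorp/consul/types"
)

func checksCriticalFor(critical map[types.CheckID]CriticalCheck, threshold time.Duration) []types.CheckID {
	var ids []types.CheckID
	for id, crit := range critical {
		if crit.CriticalFor > threshold {
			ids = append(ids, id)
		}
	}
	return ids
}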
// antiEntropy is a long running method used to perform anti-entropy
// between local and remote state.
func (l *localState) antiEntropy(shutdownCh chan struct{}) {
@ -390,7 +433,7 @@ func (l *localState) setSyncState() error {
// If we don't have the service locally, deregister it
existing, ok := l.services[id]
if !ok {
l.serviceStatus[id] = syncStatus{remoteDelete: true}
l.serviceStatus[id] = syncStatus{inSync: false}
continue
}
@ -406,7 +449,7 @@ func (l *localState) setSyncState() error {
}
// Index the remote health checks to improve efficiency
checkIndex := make(map[string]*structs.HealthCheck, len(checks))
checkIndex := make(map[types.CheckID]*structs.HealthCheck, len(checks))
for _, check := range checks {
checkIndex[check.CheckID] = check
}
@ -428,7 +471,7 @@ func (l *localState) setSyncState() error {
if id == consul.SerfCheckID {
continue
}
l.checkStatus[id] = syncStatus{remoteDelete: true}
l.checkStatus[id] = syncStatus{inSync: false}
continue
}
@ -477,7 +520,7 @@ func (l *localState) syncChanges() error {
// Sync the services
for id, status := range l.serviceStatus {
if status.remoteDelete {
if _, ok := l.services[id]; !ok {
if err := l.deleteService(id); err != nil {
return err
}
@ -492,7 +535,7 @@ func (l *localState) syncChanges() error {
// Sync the checks
for id, status := range l.checkStatus {
if status.remoteDelete {
if _, ok := l.checks[id]; !ok {
if err := l.deleteCheck(id); err != nil {
return err
}
@ -545,8 +588,8 @@ func (l *localState) deleteService(id string) error {
return err
}
// deleteCheck is used to delete a service from the server
func (l *localState) deleteCheck(id string) error {
// deleteCheck is used to delete a check from the server
func (l *localState) deleteCheck(id types.CheckID) error {
if id == "" {
return fmt.Errorf("CheckID missing")
}
@ -619,7 +662,7 @@ func (l *localState) syncService(id string) error {
}
// syncCheck is used to sync a check to the server
func (l *localState) syncCheck(id string) error {
func (l *localState) syncCheck(id types.CheckID) error {
// Pull in the associated service if any
check := l.checks[id]
var service *structs.NodeService

View File

@ -9,6 +9,7 @@ import (
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
"github.com/hashicorp/consul/types"
)
func TestAgentAntiEntropy_Services(t *testing.T) {
@ -88,6 +89,14 @@ func TestAgentAntiEntropy_Services(t *testing.T) {
}
agent.state.AddService(srv5, "")
srv5_mod := new(structs.NodeService)
*srv5_mod = *srv5
srv5_mod.Address = "127.0.0.1"
args.Service = srv5_mod
if err := agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
// Exists local, in sync, remote missing (create)
srv6 := &structs.NodeService{
ID: "cache",
@ -98,82 +107,144 @@ func TestAgentAntiEntropy_Services(t *testing.T) {
agent.state.AddService(srv6, "")
agent.state.serviceStatus["cache"] = syncStatus{inSync: true}
srv5_mod := new(structs.NodeService)
*srv5_mod = *srv5
srv5_mod.Address = "127.0.0.1"
args.Service = srv5_mod
if err := agent.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync
var services structs.IndexedNodeServices
req := structs.NodeSpecificRequest{
Datacenter: "dc1",
Node: agent.config.NodeName,
}
var services structs.IndexedNodeServices
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
t.Fatalf("err: %v", err)
}
// Make sure we sent along our tagged addresses when we synced.
addrs := services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
t.Fatalf("bad: %v", addrs)
}
// We should have 6 services (consul included)
if len(services.NodeServices.Services) != 6 {
t.Fatalf("bad: %v", services.NodeServices.Services)
}
// All the services should match
for id, serv := range services.NodeServices.Services {
serv.CreateIndex, serv.ModifyIndex = 0, 0
switch id {
case "mysql":
if !reflect.DeepEqual(serv, srv1) {
t.Fatalf("bad: %v %v", serv, srv1)
}
case "redis":
if !reflect.DeepEqual(serv, srv2) {
t.Fatalf("bad: %#v %#v", serv, srv2)
}
case "web":
if !reflect.DeepEqual(serv, srv3) {
t.Fatalf("bad: %v %v", serv, srv3)
}
case "api":
if !reflect.DeepEqual(serv, srv5) {
t.Fatalf("bad: %v %v", serv, srv5)
}
case "cache":
if !reflect.DeepEqual(serv, srv6) {
t.Fatalf("bad: %v %v", serv, srv6)
}
case "consul":
// ignore
default:
t.Fatalf("unexpected service: %v", id)
verifyServices := func() (bool, error) {
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// Make sure we sent along our tagged addresses when we synced.
addrs := services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
return false, fmt.Errorf("bad: %v", addrs)
}
// We should have 6 services (consul included)
if len(services.NodeServices.Services) != 6 {
return false, fmt.Errorf("bad: %v", services.NodeServices.Services)
}
// All the services should match
for id, serv := range services.NodeServices.Services {
serv.CreateIndex, serv.ModifyIndex = 0, 0
switch id {
case "mysql":
if !reflect.DeepEqual(serv, srv1) {
return false, fmt.Errorf("bad: %v %v", serv, srv1)
}
case "redis":
if !reflect.DeepEqual(serv, srv2) {
return false, fmt.Errorf("bad: %#v %#v", serv, srv2)
}
case "web":
if !reflect.DeepEqual(serv, srv3) {
return false, fmt.Errorf("bad: %v %v", serv, srv3)
}
case "api":
if !reflect.DeepEqual(serv, srv5) {
return false, fmt.Errorf("bad: %v %v", serv, srv5)
}
case "cache":
if !reflect.DeepEqual(serv, srv6) {
return false, fmt.Errorf("bad: %v %v", serv, srv6)
}
case "consul":
// ignore
default:
return false, fmt.Errorf("unexpected service: %v", id)
}
}
// Check the local state
if len(agent.state.services) != 6 {
return false, fmt.Errorf("bad: %v", agent.state.services)
}
if len(agent.state.serviceStatus) != 6 {
return false, fmt.Errorf("bad: %v", agent.state.serviceStatus)
}
for name, status := range agent.state.serviceStatus {
if !status.inSync {
return false, fmt.Errorf("should be in sync: %v %v", name, status)
}
}
return true, nil
}
// Check the local state
if len(agent.state.services) != 6 {
t.Fatalf("bad: %v", agent.state.services)
}
if len(agent.state.serviceStatus) != 6 {
t.Fatalf("bad: %v", agent.state.serviceStatus)
}
for name, status := range agent.state.serviceStatus {
if !status.inSync {
t.Fatalf("should be in sync: %v %v", name, status)
testutil.WaitForResult(verifyServices, func(err error) {
t.Fatal(err)
})
// Remove one of the services
agent.state.RemoveService("api")
// Trigger anti-entropy run and wait
agent.StartSync()
verifyServicesAfterRemove := func() (bool, error) {
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// We should have 5 services (consul included)
if len(services.NodeServices.Services) != 5 {
return false, fmt.Errorf("bad: %v", services.NodeServices.Services)
}
// All the services should match
for id, serv := range services.NodeServices.Services {
serv.CreateIndex, serv.ModifyIndex = 0, 0
switch id {
case "mysql":
if !reflect.DeepEqual(serv, srv1) {
return false, fmt.Errorf("bad: %v %v", serv, srv1)
}
case "redis":
if !reflect.DeepEqual(serv, srv2) {
return false, fmt.Errorf("bad: %#v %#v", serv, srv2)
}
case "web":
if !reflect.DeepEqual(serv, srv3) {
return false, fmt.Errorf("bad: %v %v", serv, srv3)
}
case "cache":
if !reflect.DeepEqual(serv, srv6) {
return false, fmt.Errorf("bad: %v %v", serv, srv6)
}
case "consul":
// ignore
default:
return false, fmt.Errorf("unexpected service: %v", id)
}
}
// Check the local state
if len(agent.state.services) != 5 {
return false, fmt.Errorf("bad: %v", agent.state.services)
}
if len(agent.state.serviceStatus) != 5 {
return false, fmt.Errorf("bad: %v", agent.state.serviceStatus)
}
for name, status := range agent.state.serviceStatus {
if !status.inSync {
return false, fmt.Errorf("should be in sync: %v %v", name, status)
}
}
return true, nil
}
testutil.WaitForResult(verifyServicesAfterRemove, func(err error) {
t.Fatal(err)
})
}
func TestAgentAntiEntropy_EnableTagOverride(t *testing.T) {
@ -229,48 +300,55 @@ func TestAgentAntiEntropy_EnableTagOverride(t *testing.T) {
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync
req := structs.NodeSpecificRequest{
Datacenter: "dc1",
Node: agent.config.NodeName,
}
var services structs.IndexedNodeServices
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
t.Fatalf("err: %v", err)
verifyServices := func() (bool, error) {
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// All the services should match
for id, serv := range services.NodeServices.Services {
serv.CreateIndex, serv.ModifyIndex = 0, 0
switch id {
case "svc_id1":
if serv.ID != "svc_id1" ||
serv.Service != "svc1" ||
serv.Port != 6100 ||
!reflect.DeepEqual(serv.Tags, []string{"tag1_mod"}) {
return false, fmt.Errorf("bad: %v %v", serv, srv1)
}
case "svc_id2":
if serv.ID != "svc_id2" ||
serv.Service != "svc2" ||
serv.Port != 6200 ||
!reflect.DeepEqual(serv.Tags, []string{"tag2"}) {
return false, fmt.Errorf("bad: %v %v", serv, srv2)
}
case "consul":
// ignore
default:
return false, fmt.Errorf("unexpected service: %v", id)
}
}
for name, status := range agent.state.serviceStatus {
if !status.inSync {
return false, fmt.Errorf("should be in sync: %v %v", name, status)
}
}
return true, nil
}
// All the services should match
for id, serv := range services.NodeServices.Services {
serv.CreateIndex, serv.ModifyIndex = 0, 0
switch id {
case "svc_id1":
if serv.ID != "svc_id1" ||
serv.Service != "svc1" ||
serv.Port != 6100 ||
!reflect.DeepEqual(serv.Tags, []string{"tag1_mod"}) {
t.Fatalf("bad: %v %v", serv, srv1)
}
case "svc_id2":
if serv.ID != "svc_id2" ||
serv.Service != "svc2" ||
serv.Port != 6200 ||
!reflect.DeepEqual(serv.Tags, []string{"tag2"}) {
t.Fatalf("bad: %v %v", serv, srv2)
}
case "consul":
// ignore
default:
t.Fatalf("unexpected service: %v", id)
}
}
for name, status := range agent.state.serviceStatus {
if !status.inSync {
t.Fatalf("should be in sync: %v %v", name, status)
}
}
testutil.WaitForResult(verifyServices, func(err error) {
t.Fatal(err)
})
}
func TestAgentAntiEntropy_Services_WithChecks(t *testing.T) {
@ -577,49 +655,54 @@ func TestAgentAntiEntropy_Checks(t *testing.T) {
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync
req := structs.NodeSpecificRequest{
Datacenter: "dc1",
Node: agent.config.NodeName,
}
var checks structs.IndexedHealthChecks
if err := agent.RPC("Health.NodeChecks", &req, &checks); err != nil {
t.Fatalf("err: %v", err)
}
// We should have 5 checks (serf included)
if len(checks.HealthChecks) != 5 {
t.Fatalf("bad: %v", checks)
}
// All the checks should match
for _, chk := range checks.HealthChecks {
chk.CreateIndex, chk.ModifyIndex = 0, 0
switch chk.CheckID {
case "mysql":
if !reflect.DeepEqual(chk, chk1) {
t.Fatalf("bad: %v %v", chk, chk1)
}
case "redis":
if !reflect.DeepEqual(chk, chk2) {
t.Fatalf("bad: %v %v", chk, chk2)
}
case "web":
if !reflect.DeepEqual(chk, chk3) {
t.Fatalf("bad: %v %v", chk, chk3)
}
case "cache":
if !reflect.DeepEqual(chk, chk5) {
t.Fatalf("bad: %v %v", chk, chk5)
}
case "serfHealth":
// ignore
default:
t.Fatalf("unexpected check: %v", chk)
// Verify that we are in sync
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Health.NodeChecks", &req, &checks); err != nil {
return false, fmt.Errorf("err: %v", err)
}
}
// We should have 5 checks (serf included)
if len(checks.HealthChecks) != 5 {
return false, fmt.Errorf("bad: %v", checks)
}
// All the checks should match
for _, chk := range checks.HealthChecks {
chk.CreateIndex, chk.ModifyIndex = 0, 0
switch chk.CheckID {
case "mysql":
if !reflect.DeepEqual(chk, chk1) {
return false, fmt.Errorf("bad: %v %v", chk, chk1)
}
case "redis":
if !reflect.DeepEqual(chk, chk2) {
return false, fmt.Errorf("bad: %v %v", chk, chk2)
}
case "web":
if !reflect.DeepEqual(chk, chk3) {
return false, fmt.Errorf("bad: %v %v", chk, chk3)
}
case "cache":
if !reflect.DeepEqual(chk, chk5) {
return false, fmt.Errorf("bad: %v %v", chk, chk5)
}
case "serfHealth":
// ignore
default:
return false, fmt.Errorf("unexpected check: %v", chk)
}
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Check the local state
if len(agent.state.checks) != 4 {
@ -650,6 +733,63 @@ func TestAgentAntiEntropy_Checks(t *testing.T) {
t.Fatalf("bad: %v", addrs)
}
}
// Remove one of the checks
agent.state.RemoveCheck("redis")
// Trigger anti-entropy run and wait
agent.StartSync()
// Verify that we are in sync
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Health.NodeChecks", &req, &checks); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// We should have 4 checks (serf included)
if len(checks.HealthChecks) != 4 {
return false, fmt.Errorf("bad: %v", checks)
}
// All the checks should match
for _, chk := range checks.HealthChecks {
chk.CreateIndex, chk.ModifyIndex = 0, 0
switch chk.CheckID {
case "mysql":
if !reflect.DeepEqual(chk, chk1) {
return false, fmt.Errorf("bad: %v %v", chk, chk1)
}
case "web":
if !reflect.DeepEqual(chk, chk3) {
return false, fmt.Errorf("bad: %v %v", chk, chk3)
}
case "cache":
if !reflect.DeepEqual(chk, chk5) {
return false, fmt.Errorf("bad: %v %v", chk, chk5)
}
case "serfHealth":
// ignore
default:
return false, fmt.Errorf("unexpected check: %v", chk)
}
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Check the local state
if len(agent.state.checks) != 3 {
t.Fatalf("bad: %v", agent.state.checks)
}
if len(agent.state.checkStatus) != 3 {
t.Fatalf("bad: %v", agent.state.checkStatus)
}
for name, status := range agent.state.checkStatus {
if !status.inSync {
t.Fatalf("should be in sync: %v %v", name, status)
}
}
}
func TestAgentAntiEntropy_Check_DeferSync(t *testing.T) {
@ -673,7 +813,6 @@ func TestAgentAntiEntropy_Check_DeferSync(t *testing.T) {
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync
req := structs.NodeSpecificRequest{
@ -681,14 +820,21 @@ func TestAgentAntiEntropy_Check_DeferSync(t *testing.T) {
Node: agent.config.NodeName,
}
var checks structs.IndexedHealthChecks
if err := agent.RPC("Health.NodeChecks", &req, &checks); err != nil {
t.Fatalf("err: %v", err)
}
// Verify checks in place
if len(checks.HealthChecks) != 2 {
t.Fatalf("checks: %v", check)
}
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Health.NodeChecks", &req, &checks); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// Verify checks in place
if len(checks.HealthChecks) != 2 {
return false, fmt.Errorf("checks: %v", check)
}
return true, nil
}, func(err error) {
t.Fatal(err)
})
// Update the check output! Should be deferred
agent.state.UpdateCheck("web", structs.HealthPassing, "output")
@ -858,24 +1004,30 @@ func TestAgentAntiEntropy_NodeInfo(t *testing.T) {
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync
req := structs.NodeSpecificRequest{
Datacenter: "dc1",
Node: agent.config.NodeName,
}
var services structs.IndexedNodeServices
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
t.Fatalf("err: %v", err)
}
// Make sure we synced our node info - this should have ridden on the
// "consul" service sync
addrs := services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
t.Fatalf("bad: %v", addrs)
}
// Wait for the sync
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
return false, fmt.Errorf("err: %v", err)
}
// Make sure we synced our node info - this should have ridden on the
// "consul" service sync
addrs := services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
return false, fmt.Errorf("bad: %v", addrs)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Blow away the catalog version of the node info
if err := agent.RPC("Catalog.Register", args, &out); err != nil {
@ -884,17 +1036,22 @@ func TestAgentAntiEntropy_NodeInfo(t *testing.T) {
// Trigger anti-entropy run and wait
agent.StartSync()
time.Sleep(200 * time.Millisecond)
// Verify that we are in sync - this should have been a sync of just the
// Wait for the sync - this should have been a sync of just the
// node info
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
t.Fatalf("err: %v", err)
}
addrs = services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
t.Fatalf("bad: %v", addrs)
}
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Catalog.NodeServices", &req, &services); err != nil {
return false, fmt.Errorf("err: %v", err)
}
addrs := services.NodeServices.Node.TaggedAddresses
if len(addrs) == 0 || !reflect.DeepEqual(addrs, conf.TaggedAddresses) {
return false, fmt.Errorf("bad: %v", addrs)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
}
func TestAgentAntiEntropy_deleteService_fails(t *testing.T) {
@ -959,6 +1116,66 @@ func TestAgent_checkTokens(t *testing.T) {
}
}
func TestAgent_checkCriticalTime(t *testing.T) {
config := nextConfig()
l := new(localState)
l.Init(config, nil)
// Add a passing check and make sure it's not critical.
checkID := types.CheckID("redis:1")
chk := &structs.HealthCheck{
Node: "node",
CheckID: checkID,
Name: "redis:1",
ServiceID: "redis",
Status: structs.HealthPassing,
}
l.AddCheck(chk, "")
if checks := l.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have any critical checks")
}
// Set it to warning and make sure that doesn't show up as critical.
l.UpdateCheck(checkID, structs.HealthWarning, "")
if checks := l.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have any critical checks")
}
// Fail the check and make sure the time looks reasonable.
l.UpdateCheck(checkID, structs.HealthCritical, "")
if crit, ok := l.CriticalChecks()[checkID]; !ok {
t.Fatalf("should have a critical check")
} else if crit.CriticalFor > time.Millisecond {
t.Fatalf("bad: %#v", crit)
}
// Wait a while, then fail it again and make sure the time keeps track
// of the initial failure, and doesn't reset here.
time.Sleep(10 * time.Millisecond)
l.UpdateCheck(chk.CheckID, structs.HealthCritical, "")
if crit, ok := l.CriticalChecks()[checkID]; !ok {
t.Fatalf("should have a critical check")
} else if crit.CriticalFor < 5*time.Millisecond ||
crit.CriticalFor > 15*time.Millisecond {
t.Fatalf("bad: %#v", crit)
}
// Set it passing again.
l.UpdateCheck(checkID, structs.HealthPassing, "")
if checks := l.CriticalChecks(); len(checks) > 0 {
t.Fatalf("should not have any critical checks")
}
// Fail the check and make sure the time looks like it started again
// from the latest failure, not the original one.
l.UpdateCheck(checkID, structs.HealthCritical, "")
if crit, ok := l.CriticalChecks()[checkID]; !ok {
t.Fatalf("should have a critical check")
} else if crit.CriticalFor > time.Millisecond {
t.Fatalf("bad: %#v", crit)
}
}
func TestAgent_nestedPauseResume(t *testing.T) {
l := new(localState)
if l.isPaused() != false {
@ -1004,22 +1221,24 @@ func TestAgent_sendCoordinate(t *testing.T) {
testutil.WaitForLeader(t, agent.RPC, "dc1")
// Wait a little while for an update.
time.Sleep(3 * conf.ConsulConfig.CoordinateUpdatePeriod)
// Make sure the coordinate is present.
req := structs.DCSpecificRequest{
Datacenter: agent.config.Datacenter,
}
var reply structs.IndexedCoordinates
if err := agent.RPC("Coordinate.ListNodes", &req, &reply); err != nil {
testutil.WaitForResult(func() (bool, error) {
if err := agent.RPC("Coordinate.ListNodes", &req, &reply); err != nil {
return false, fmt.Errorf("err: %s", err)
}
if len(reply.Coordinates) != 1 {
return false, fmt.Errorf("expected a coordinate: %v", reply)
}
coord := reply.Coordinates[0]
if coord.Node != agent.config.NodeName || coord.Coord == nil {
return false, fmt.Errorf("bad: %v", coord)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
}
if len(reply.Coordinates) != 1 {
t.Fatalf("expected a coordinate: %v", reply)
}
coord := reply.Coordinates[0]
if coord.Node != agent.config.NodeName || coord.Coord == nil {
t.Fatalf("bad: %v", coord)
}
})
}

View File

@ -0,0 +1,57 @@
package agent
import (
"net/http"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/raft"
)
// OperatorRaftConfiguration is used to inspect the current Raft configuration.
// This supports the stale query mode in case the cluster doesn't have a leader.
func (s *HTTPServer) OperatorRaftConfiguration(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if req.Method != "GET" {
resp.WriteHeader(http.StatusMethodNotAllowed)
return nil, nil
}
var args structs.DCSpecificRequest
if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done {
return nil, nil
}
var reply structs.RaftConfigurationResponse
if err := s.agent.RPC("Operator.RaftGetConfiguration", &args, &reply); err != nil {
return nil, err
}
return reply, nil
}
// OperatorRaftPeer supports actions on Raft peers. Currently we only support
// removing peers by address.
func (s *HTTPServer) OperatorRaftPeer(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if req.Method != "DELETE" {
resp.WriteHeader(http.StatusMethodNotAllowed)
return nil, nil
}
var args structs.RaftPeerByAddressRequest
s.parseDC(req, &args.Datacenter)
s.parseToken(req, &args.Token)
params := req.URL.Query()
if _, ok := params["address"]; ok {
args.Address = raft.ServerAddress(params.Get("address"))
} else {
resp.WriteHeader(http.StatusBadRequest)
resp.Write([]byte("Must specify ?address with IP:port of peer to remove"))
return nil, nil
}
var reply struct{}
if err := s.agent.RPC("Operator.RaftRemovePeerByAddress", &args, &reply); err != nil {
return nil, err
}
return nil, nil
}
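// Client-side sketch of exercising both endpoints; the agent address and the
// peer address are placeholders, not values from this change.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8500/v1/operator/raft/configuration")
	if err != nil {
		panic(err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("raft configuration: %s\n", body)

	req, _ := http.NewRequest("DELETE",
		"http://127.0.0.1:8500/v1/operator/raft/peer?address=10.0.0.5:8300", nil)
	if _, err := http.DefaultClient.Do(req); err != nil {
		panic(err)
	}
}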

View File

@ -0,0 +1,58 @@
package agent
import (
"bytes"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/hashicorp/consul/consul/structs"
)
func TestOperator_OperatorRaftConfiguration(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("GET", "/v1/operator/raft/configuration", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.OperatorRaftConfiguration(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
out, ok := obj.(structs.RaftConfigurationResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if len(out.Servers) != 1 ||
!out.Servers[0].Leader ||
!out.Servers[0].Voter {
t.Fatalf("bad: %v", out)
}
})
}
func TestOperator_OperatorRaftPeer(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("DELETE", "/v1/operator/raft/peer?address=nope", body)
if err != nil {
t.Fatalf("err: %v", err)
}
// If we get this error, it proves we sent the address all the
// way through.
resp := httptest.NewRecorder()
_, err = srv.OperatorRaftPeer(resp, req)
if err == nil || !strings.Contains(err.Error(),
"address \"nope\" was not found in the Raft configuration") {
t.Fatalf("err: %v", err)
}
})
}

View File

@ -96,6 +96,10 @@ func parseLimit(req *http.Request, limit *int) error {
func (s *HTTPServer) preparedQueryExecute(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) {
args := structs.PreparedQueryExecuteRequest{
QueryIDOrName: id,
Agent: structs.QuerySource{
Node: s.agent.config.NodeName,
Datacenter: s.agent.config.Datacenter,
},
}
s.parseSource(req, &args.Source)
if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done {
@ -118,6 +122,12 @@ func (s *HTTPServer) preparedQueryExecute(id string, resp http.ResponseWriter, r
return nil, err
}
// Note that we translate using the DC that the results came from, since
// a query can fail over to a different DC than where the execute request
// was sent to. That's why we use the reply's DC and not the one from
// the args.
translateAddresses(s.agent.config, reply.Datacenter, reply.Nodes)
// Use empty list instead of nil.
if reply.Nodes == nil {
reply.Nodes = make(structs.CheckServiceNodes, 0)
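// A sketch of the translation decision implied by the comment above and the
// WAN-translation tests in this diff: rewrite to the "wan" tagged address
// only when translation is enabled and the reply came from a different
// datacenter than the local agent. Illustrative only, not the real helper.
package agent

func translateSketch(translateWanAddrs bool, localDC, replyDC, addr string, tagged map[string]string) string {
	if translateWanAddrs && localDC != replyDC {
		if wan := tagged["wan"]; wan != "" {
			return wan
		}
	}
	return addr
}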
@ -131,6 +141,10 @@ func (s *HTTPServer) preparedQueryExecute(id string, resp http.ResponseWriter, r
func (s *HTTPServer) preparedQueryExplain(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) {
args := structs.PreparedQueryExecuteRequest{
QueryIDOrName: id,
Agent: structs.QuerySource{
Node: s.agent.config.NodeName,
Datacenter: s.agent.config.Datacenter,
},
}
s.parseSource(req, &args.Source)
if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done {

View File

@ -286,6 +286,10 @@ func TestPreparedQuery_Execute(t *testing.T) {
Datacenter: "dc1",
Node: "my-node",
},
Agent: structs.QuerySource{
Datacenter: srv.agent.config.Datacenter,
Node: srv.agent.config.NodeName,
},
QueryOptions: structs.QueryOptions{
Token: "my-token",
RequireConsistent: true,
@ -323,6 +327,140 @@ func TestPreparedQuery_Execute(t *testing.T) {
}
})
// Ensure the proper params are set when no special args are passed
httpTest(t, func(srv *HTTPServer) {
m := MockPreparedQuery{}
if err := srv.agent.InjectEndpoint("PreparedQuery", &m); err != nil {
t.Fatalf("err: %v", err)
}
m.executeFn = func(args *structs.PreparedQueryExecuteRequest, reply *structs.PreparedQueryExecuteResponse) error {
if args.Source.Node != "" {
t.Fatalf("expect node to be empty, got %q", args.Source.Node)
}
expect := structs.QuerySource{
Datacenter: srv.agent.config.Datacenter,
Node: srv.agent.config.NodeName,
}
if !reflect.DeepEqual(args.Agent, expect) {
t.Fatalf("expect: %#v\nactual: %#v", expect, args.Agent)
}
return nil
}
req, err := http.NewRequest("GET", "/v1/query/my-id/execute", nil)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.PreparedQuerySpecific(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
})
// Ensure WAN translation occurs for a response outside of the local DC.
httpTestWithConfig(t, func(srv *HTTPServer) {
m := MockPreparedQuery{}
if err := srv.agent.InjectEndpoint("PreparedQuery", &m); err != nil {
t.Fatalf("err: %v", err)
}
m.executeFn = func(args *structs.PreparedQueryExecuteRequest, reply *structs.PreparedQueryExecuteResponse) error {
nodesResponse := make(structs.CheckServiceNodes, 1)
nodesResponse[0].Node = &structs.Node{
Node: "foo", Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
}
reply.Nodes = nodesResponse
reply.Datacenter = "dc2"
return nil
}
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("GET", "/v1/query/my-id/execute?dc=dc2", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.PreparedQuerySpecific(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
r, ok := obj.(structs.PreparedQueryExecuteResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if r.Nodes == nil || len(r.Nodes) != 1 {
t.Fatalf("bad: %v", r)
}
node := r.Nodes[0]
if node.Node.Address != "127.0.0.2" {
t.Fatalf("bad: %v", node.Node)
}
}, func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
// Ensure WAN translation doesn't occur for the local DC.
httpTestWithConfig(t, func(srv *HTTPServer) {
m := MockPreparedQuery{}
if err := srv.agent.InjectEndpoint("PreparedQuery", &m); err != nil {
t.Fatalf("err: %v", err)
}
m.executeFn = func(args *structs.PreparedQueryExecuteRequest, reply *structs.PreparedQueryExecuteResponse) error {
nodesResponse := make(structs.CheckServiceNodes, 1)
nodesResponse[0].Node = &structs.Node{
Node: "foo", Address: "127.0.0.1",
TaggedAddresses: map[string]string{
"wan": "127.0.0.2",
},
}
reply.Nodes = nodesResponse
reply.Datacenter = "dc1"
return nil
}
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("GET", "/v1/query/my-id/execute?dc=dc2", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.PreparedQuerySpecific(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
r, ok := obj.(structs.PreparedQueryExecuteResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if r.Nodes == nil || len(r.Nodes) != 1 {
t.Fatalf("bad: %v", r)
}
node := r.Nodes[0]
if node.Node.Address != "127.0.0.1" {
t.Fatalf("bad: %v", node.Node)
}
}, func(c *Config) {
c.Datacenter = "dc1"
c.TranslateWanAddrs = true
})
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("GET", "/v1/query/not-there/execute", body)
@ -357,6 +495,10 @@ func TestPreparedQuery_Explain(t *testing.T) {
Datacenter: "dc1",
Node: "my-node",
},
Agent: structs.QuerySource{
Datacenter: srv.agent.config.Datacenter,
Node: srv.agent.config.NodeName,
},
QueryOptions: structs.QueryOptions{
Token: "my-token",
RequireConsistent: true,

View File

@ -135,12 +135,6 @@ func (a *Agent) handleRemoteExec(msg *UserEvent) {
return
}
// Disable child process reaping so that we can get this command's
// return value. Note that we take the read lock here since we are
// waiting on a specific PID and don't need to serialize all waits.
a.reapLock.RLock()
defer a.reapLock.RUnlock()
// Ensure we write out an exit code
exitCode := 0
defer a.remoteExecWriteExitCode(&event, &exitCode)

View File

@ -91,12 +91,6 @@ type handshakeRequest struct {
Version int32
}
type eventRequest struct {
Name string
Payload []byte
Coalesce bool
}
type forceLeaveRequest struct {
Node string
}
@ -149,10 +143,6 @@ type monitorRequest struct {
LogLevel string
}
type streamRequest struct {
Type string
}
type stopRequest struct {
Stop uint64
}
@ -161,20 +151,12 @@ type logRecord struct {
Log string
}
type userEventRecord struct {
Event string
LTime serf.LamportTime
Name string
Payload []byte
Coalesce bool
}
type Member struct {
Name string
Addr net.IP
Port uint16
Tags map[string]string
Status string
Port uint16
ProtocolMin uint8
ProtocolMax uint8
ProtocolCur uint8
@ -183,11 +165,6 @@ type Member struct {
DelegateCur uint8
}
type memberEventRecord struct {
Event string
Members []Member
}
type AgentRPC struct {
sync.Mutex
agent *Agent
@ -346,7 +323,7 @@ func (i *AgentRPC) handleClient(client *rpcClient) {
// The second part of this if is to block socket
// errors from Windows which appear to happen every
// time there is an EOF.
if err != io.EOF && !strings.Contains(err.Error(), "WSARecv") {
if err != io.EOF && !strings.Contains(strings.ToLower(err.Error()), "wsarecv") {
i.logger.Printf("[ERR] agent.rpc: failed to decode request header: %v", err)
}
}

View File

@ -1,106 +0,0 @@
package agent
import (
"net"
"reflect"
"testing"
"github.com/hashicorp/scada-client"
)
func TestProviderService(t *testing.T) {
conf := DefaultConfig()
conf.Version = "0.5.0"
conf.VersionPrerelease = "rc1"
conf.AtlasJoin = true
conf.Server = true
ps := ProviderService(conf)
expect := &client.ProviderService{
Service: "consul",
ServiceVersion: "0.5.0rc1",
Capabilities: map[string]int{
"http": 1,
},
Meta: map[string]string{
"auto-join": "true",
"datacenter": "dc1",
"server": "true",
},
ResourceType: "infrastructures",
}
if !reflect.DeepEqual(ps, expect) {
t.Fatalf("bad: %v", ps)
}
}
func TestProviderConfig(t *testing.T) {
conf := DefaultConfig()
conf.Version = "0.5.0"
conf.VersionPrerelease = "rc1"
conf.AtlasJoin = true
conf.Server = true
conf.AtlasInfrastructure = "armon/test"
conf.AtlasToken = "foobarbaz"
conf.AtlasEndpoint = "foo.bar:1111"
pc := ProviderConfig(conf)
expect := &client.ProviderConfig{
Service: &client.ProviderService{
Service: "consul",
ServiceVersion: "0.5.0rc1",
Capabilities: map[string]int{
"http": 1,
},
Meta: map[string]string{
"auto-join": "true",
"datacenter": "dc1",
"server": "true",
},
ResourceType: "infrastructures",
},
Handlers: map[string]client.CapabilityProvider{
"http": nil,
},
Endpoint: "foo.bar:1111",
ResourceGroup: "armon/test",
Token: "foobarbaz",
}
if !reflect.DeepEqual(pc, expect) {
t.Fatalf("bad: %v", pc)
}
}
func TestSCADAListener(t *testing.T) {
list := newScadaListener("armon/test")
defer list.Close()
var raw interface{} = list
_, ok := raw.(net.Listener)
if !ok {
t.Fatalf("bad")
}
a, b := net.Pipe()
defer a.Close()
defer b.Close()
go list.Push(a)
out, err := list.Accept()
if err != nil {
t.Fatalf("err: %v", err)
}
if out != a {
t.Fatalf("bad")
}
}
func TestSCADAAddr(t *testing.T) {
var addr interface{} = &scadaAddr{"armon/test"}
_, ok := addr.(net.Addr)
if !ok {
t.Fatalf("bad")
}
}

View File

@ -8,6 +8,7 @@ import (
"github.com/hashicorp/consul/consul"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/types"
)
const (
@ -38,7 +39,7 @@ func (s *HTTPServer) SessionCreate(resp http.ResponseWriter, req *http.Request)
Op: structs.SessionCreate,
Session: structs.Session{
Node: s.agent.config.NodeName,
Checks: []string{consul.SerfCheckID},
Checks: []types.CheckID{consul.SerfCheckID},
LockDelay: 15 * time.Second,
Behavior: structs.SessionKeysRelease,
TTL: "",

View File

@ -10,6 +10,7 @@ import (
"github.com/hashicorp/consul/consul"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/types"
)
func TestSessionCreate(t *testing.T) {
@ -38,7 +39,7 @@ func TestSessionCreate(t *testing.T) {
raw := map[string]interface{}{
"Name": "my-cool-session",
"Node": srv.agent.config.NodeName,
"Checks": []string{consul.SerfCheckID, "consul"},
"Checks": []types.CheckID{consul.SerfCheckID, "consul"},
"LockDelay": "20s",
}
enc.Encode(raw)
@ -86,7 +87,7 @@ func TestSessionCreateDelete(t *testing.T) {
raw := map[string]interface{}{
"Name": "my-cool-session",
"Node": srv.agent.config.NodeName,
"Checks": []string{consul.SerfCheckID, "consul"},
"Checks": []types.CheckID{consul.SerfCheckID, "consul"},
"LockDelay": "20s",
"Behavior": structs.SessionKeysDelete,
}

View File

@ -0,0 +1,50 @@
package agent
import (
"bytes"
"net/http"
"github.com/hashicorp/consul/consul/structs"
)
// Snapshot handles requests to take and restore snapshots. This uses a special
// mechanism to make the RPC since we potentially stream large amounts of data
// as part of these requests.
func (s *HTTPServer) Snapshot(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
var args structs.SnapshotRequest
s.parseDC(req, &args.Datacenter)
s.parseToken(req, &args.Token)
if _, ok := req.URL.Query()["stale"]; ok {
args.AllowStale = true
}
switch req.Method {
case "GET":
args.Op = structs.SnapshotSave
// Headers need to go out before we stream the body.
replyFn := func(reply *structs.SnapshotResponse) error {
setMeta(resp, &reply.QueryMeta)
return nil
}
// Don't bother sending any request body through since it will
// be ignored.
var null bytes.Buffer
if err := s.agent.SnapshotRPC(&args, &null, resp, replyFn); err != nil {
return nil, err
}
return nil, nil
case "PUT":
args.Op = structs.SnapshotRestore
if err := s.agent.SnapshotRPC(&args, req.Body, resp, nil); err != nil {
return nil, err
}
return nil, nil
default:
resp.WriteHeader(http.StatusMethodNotAllowed)
return nil, nil
}
}
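
For a sense of how this new endpoint is driven end to end, here is a minimal client sketch (not part of this change; the agent address 127.0.0.1:8500 and the backup file name are assumptions, and the ACL token query parameter is omitted) that saves a snapshot with GET and restores it with PUT:

package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	// Save: GET /v1/snapshot streams the snapshot in the response body.
	resp, err := http.Get("http://127.0.0.1:8500/v1/snapshot")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	f, err := os.Create("backup.snap")
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(f, resp.Body); err != nil {
		panic(err)
	}
	f.Close()

	// Restore: PUT /v1/snapshot with the snapshot file as the request body.
	in, err := os.Open("backup.snap")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	req, err := http.NewRequest("PUT", "http://127.0.0.1:8500/v1/snapshot", in)
	if err != nil {
		panic(err)
	}
	restoreResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	restoreResp.Body.Close()
}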

View File

@ -0,0 +1,142 @@
package agent
import (
"bytes"
"io"
"net/http"
"net/http/httptest"
"strings"
"testing"
)
func TestSnapshot(t *testing.T) {
var snap io.Reader
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("GET", "/v1/snapshot?token=root", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
snap = resp.Body
header := resp.Header().Get("X-Consul-Index")
if header == "" {
t.Fatalf("bad: %v", header)
}
header = resp.Header().Get("X-Consul-KnownLeader")
if header != "true" {
t.Fatalf("bad: %v", header)
}
header = resp.Header().Get("X-Consul-LastContact")
if header != "0" {
t.Fatalf("bad: %v", header)
}
})
httpTest(t, func(srv *HTTPServer) {
req, err := http.NewRequest("PUT", "/v1/snapshot?token=root", snap)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
})
}
func TestSnapshot_Options(t *testing.T) {
for _, method := range []string{"GET", "PUT"} {
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest(method, "/v1/snapshot?token=anonymous", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err == nil || !strings.Contains(err.Error(), "Permission denied") {
t.Fatalf("err: %v", err)
}
})
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest(method, "/v1/snapshot?dc=nope", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err == nil || !strings.Contains(err.Error(), "No path to datacenter") {
t.Fatalf("err: %v", err)
}
})
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest(method, "/v1/snapshot?token=root&stale", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if method == "GET" {
if err != nil {
t.Fatalf("err: %v", err)
}
} else {
if err == nil || !strings.Contains(err.Error(), "stale not allowed") {
t.Fatalf("err: %v", err)
}
}
})
}
}
func TestSnapshot_BadMethods(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("POST", "/v1/snapshot", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 405 {
t.Fatalf("bad code: %d", resp.Code)
}
})
httpTest(t, func(srv *HTTPServer) {
body := bytes.NewBuffer(nil)
req, err := http.NewRequest("DELETE", "/v1/snapshot", body)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
_, err = srv.Snapshot(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 405 {
t.Fatalf("bad code: %d", resp.Code)
}
})
}

View File

@ -2,6 +2,7 @@ package agent
import (
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/types"
)
// ServiceDefinition is used to JSON decode the Service definitions
@ -42,9 +43,9 @@ func (s *ServiceDefinition) CheckTypes() (checks CheckTypes) {
return
}
// ChecKDefinition is used to JSON decode the Check definitions
// CheckDefinition is used to JSON decode the Check definitions
type CheckDefinition struct {
ID string
ID types.CheckID
Name string
Notes string
ServiceID string
@ -66,7 +67,7 @@ func (c *CheckDefinition) HealthCheck(node string) *structs.HealthCheck {
health.Status = c.Status
}
if health.CheckID == "" && health.Name != "" {
health.CheckID = health.Name
health.CheckID = types.CheckID(health.Name)
}
return health
}

View File

@ -0,0 +1,67 @@
package agent
import (
"fmt"
"github.com/hashicorp/consul/consul/structs"
)
// translateAddress is used to provide the final, translated address for a node,
// depending on how the agent and the other node are configured. The dc
// parameter is the datacenter this node is from.
func translateAddress(config *Config, dc string, addr string, taggedAddresses map[string]string) string {
if config.TranslateWanAddrs && (config.Datacenter != dc) {
wanAddr := taggedAddresses["wan"]
if wanAddr != "" {
addr = wanAddr
}
}
return addr
}
// translateAddresses translates addresses in the given structure into the
// final, translated address, depending on how the agent and the other node are
// configured. The dc parameter is the datacenter this structure is from.
func translateAddresses(config *Config, dc string, subj interface{}) {
// CAUTION - SUBTLE! An agent running on a server can, in some cases,
// return pointers directly into the immutable state store for
// performance (it's via the in-memory RPC mechanism). It's never safe
// to modify those values, so we short circuit here so that we never
// update any structures that are from our own datacenter. This works
// for address translation because we *never* need to translate local
// addresses, but this is super subtle, so we've piped all the in-place
// address translation into this function which makes sure this check is
// done. This also happens to skip looking at any of the incoming
// structure for the common case of not needing to translate, so it will
// skip a lot of work if no translation needs to be done.
if !config.TranslateWanAddrs || (config.Datacenter == dc) {
return
}
// Translate addresses in-place, subject to the condition checked above
// which ensures this is safe to do since we are operating on a local
// copy of the data.
switch v := subj.(type) {
case structs.CheckServiceNodes:
for _, entry := range v {
entry.Node.Address = translateAddress(config, dc,
entry.Node.Address, entry.Node.TaggedAddresses)
}
case *structs.Node:
v.Address = translateAddress(config, dc,
v.Address, v.TaggedAddresses)
case structs.Nodes:
for _, node := range v {
node.Address = translateAddress(config, dc,
node.Address, node.TaggedAddresses)
}
case structs.ServiceNodes:
for _, entry := range v {
entry.Address = translateAddress(config, dc,
entry.Address, entry.TaggedAddresses)
}
default:
panic(fmt.Errorf("Unhandled type passed to address translator: %#v", subj))
}
}
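
As a rough standalone illustration of the rule above (a hypothetical helper, not the agent's own function, and assuming translate_wan_addrs is enabled), a node reported from another datacenter has its address swapped for the "wan" tagged address when one exists, while nodes from the local datacenter are left untouched:

package main

import "fmt"

// translate mirrors the WAN-translation rule: outside the local DC, prefer
// the "wan" tagged address when one is present.
func translate(localDC, nodeDC, addr string, tagged map[string]string) string {
	if nodeDC != localDC {
		if wan := tagged["wan"]; wan != "" {
			return wan
		}
	}
	return addr
}

func main() {
	tagged := map[string]string{"wan": "127.0.0.2"}
	fmt.Println(translate("dc1", "dc2", "127.0.0.1", tagged)) // 127.0.0.2 (remote DC, WAN address wins)
	fmt.Println(translate("dc1", "dc1", "127.0.0.1", tagged)) // 127.0.0.1 (local DC, left untouched)
}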

View File

@ -0,0 +1,227 @@
package agent
import (
"encoding/base64"
"fmt"
"net/http"
"strings"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/consul/structs"
)
const (
// maxTxnOps is used to set an upper limit on the number of operations
// inside a transaction. If there are more operations than this, then the
// client is likely abusing transactions.
maxTxnOps = 64
)
// decodeValue decodes the value member of the given operation.
func decodeValue(rawKV interface{}) error {
rawMap, ok := rawKV.(map[string]interface{})
if !ok {
return fmt.Errorf("unexpected raw KV type: %T", rawKV)
}
for k, v := range rawMap {
switch strings.ToLower(k) {
case "value":
// Leave the byte slice nil if we have a nil
// value.
if v == nil {
return nil
}
// Otherwise, base64 decode it.
s, ok := v.(string)
if !ok {
return fmt.Errorf("unexpected value type: %T", v)
}
decoded, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return fmt.Errorf("failed to decode value: %v", err)
}
rawMap[k] = decoded
return nil
}
}
return nil
}
// fixupKVOp looks for non-nil KV operations and passes them on for
// value conversion.
func fixupKVOp(rawOp interface{}) error {
rawMap, ok := rawOp.(map[string]interface{})
if !ok {
return fmt.Errorf("unexpected raw op type: %T", rawOp)
}
for k, v := range rawMap {
switch strings.ToLower(k) {
case "kv":
if v == nil {
return nil
}
return decodeValue(v)
}
}
return nil
}
// fixupKVOps takes the raw decoded JSON and base64 decodes values in KV ops,
// replacing them with byte arrays.
func fixupKVOps(raw interface{}) error {
rawSlice, ok := raw.([]interface{})
if !ok {
return fmt.Errorf("unexpected raw type: %T", raw)
}
for _, rawOp := range rawSlice {
if err := fixupKVOp(rawOp); err != nil {
return err
}
}
return nil
}
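
To make the fixup concrete, here is a small standalone sketch (not agent code; it mirrors the logic above inline rather than calling it) showing the shape of a decoded API body before and after a base64 value is replaced with raw bytes:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// The API body decodes into []interface{} of map[string]interface{},
	// with KV values still carried as base64 strings.
	var raw interface{}
	body := []byte(`[{"KV": {"Verb": "set", "Key": "k", "Value": "aGVsbG8="}}]`)
	if err := json.Unmarshal(body, &raw); err != nil {
		panic(err)
	}

	// Mirror the fixup: swap the base64 string for the decoded byte slice.
	op := raw.([]interface{})[0].(map[string]interface{})
	kv := op["KV"].(map[string]interface{})
	decoded, err := base64.StdEncoding.DecodeString(kv["Value"].(string))
	if err != nil {
		panic(err)
	}
	kv["Value"] = decoded

	fmt.Printf("%s\n", kv["Value"]) // hello
}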
// convertOps takes the incoming body in API format and converts it to the
// internal RPC format. This returns a count of the number of write ops, and
// a boolean, that if false means an error response has been generated and
// processing should stop.
func (s *HTTPServer) convertOps(resp http.ResponseWriter, req *http.Request) (structs.TxnOps, int, bool) {
// Note the body is in API format, and not the RPC format. If we can't
// decode it, we will return a 400 since we don't have enough context to
// associate the error with a given operation.
var ops api.TxnOps
if err := decodeBody(req, &ops, fixupKVOps); err != nil {
resp.WriteHeader(http.StatusBadRequest)
resp.Write([]byte(fmt.Sprintf("Failed to parse body: %v", err)))
return nil, 0, false
}
// Enforce a reasonable upper limit on the number of operations in a
// transaction in order to curb abuse.
if size := len(ops); size > maxTxnOps {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
resp.Write([]byte(fmt.Sprintf("Transaction contains too many operations (%d > %d)",
size, maxTxnOps)))
return nil, 0, false
}
// Convert the KV API format into the RPC format. Note that fixupKVOps
// above will have already converted the base64 encoded strings into
// byte arrays so we can assign right over.
var opsRPC structs.TxnOps
var writes int
var netKVSize int
for _, in := range ops {
if in.KV != nil {
if size := len(in.KV.Value); size > maxKVSize {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
resp.Write([]byte(fmt.Sprintf("Value for key %q is too large (%d > %d bytes)",
in.KV.Key, size, maxKVSize)))
return nil, 0, false
} else {
netKVSize += size
}
verb := structs.KVSOp(in.KV.Verb)
if verb.IsWrite() {
writes += 1
}
out := &structs.TxnOp{
KV: &structs.TxnKVOp{
Verb: verb,
DirEnt: structs.DirEntry{
Key: in.KV.Key,
Value: in.KV.Value,
Flags: in.KV.Flags,
Session: in.KV.Session,
RaftIndex: structs.RaftIndex{
ModifyIndex: in.KV.Index,
},
},
},
}
opsRPC = append(opsRPC, out)
}
}
// Enforce an overall size limit to help prevent abuse.
if netKVSize > maxKVSize {
resp.WriteHeader(http.StatusRequestEntityTooLarge)
resp.Write([]byte(fmt.Sprintf("Cumulative size of key data is too large (%d > %d bytes)",
netKVSize, maxKVSize)))
return nil, 0, false
}
return opsRPC, writes, true
}
// Txn handles requests to apply multiple operations in a single, atomic
// transaction. A transaction consisting of only read operations will be fast-
// pathed to an endpoint that supports consistency modes (but not blocking),
// and everything else will be routed through Raft like a normal write.
func (s *HTTPServer) Txn(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
if req.Method != "PUT" {
resp.WriteHeader(http.StatusMethodNotAllowed)
return nil, nil
}
// Convert the ops from the API format to the internal format.
ops, writes, ok := s.convertOps(resp, req)
if !ok {
return nil, nil
}
// Fast-path a transaction with only reads to the read-only endpoint,
// which bypasses Raft, and allows for staleness.
conflict := false
var ret interface{}
if writes == 0 {
args := structs.TxnReadRequest{Ops: ops}
if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done {
return nil, nil
}
var reply structs.TxnReadResponse
if err := s.agent.RPC("Txn.Read", &args, &reply); err != nil {
return nil, err
}
// Since we don't do blocking, we only add the relevant headers
// for metadata.
setLastContact(resp, reply.LastContact)
setKnownLeader(resp, reply.KnownLeader)
ret, conflict = reply, len(reply.Errors) > 0
} else {
args := structs.TxnRequest{Ops: ops}
s.parseDC(req, &args.Datacenter)
s.parseToken(req, &args.Token)
var reply structs.TxnResponse
if err := s.agent.RPC("Txn.Apply", &args, &reply); err != nil {
return nil, err
}
ret, conflict = reply, len(reply.Errors) > 0
}
// If there was a conflict return the response object but set a special
// status code.
if conflict {
var buf []byte
var err error
buf, err = s.marshalJSON(req, ret)
if err != nil {
return nil, err
}
resp.Header().Set("Content-Type", "application/json")
resp.WriteHeader(http.StatusConflict)
resp.Write(buf)
return nil, nil
}
// Otherwise, return the results of the successful transaction.
return ret, nil
}
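
For reference, a minimal client sketch driving this handler (not part of the change; the agent address is an assumption). The body is in the API format the code above expects: at most 64 operations, KV values base64 encoded, sent via PUT:

package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	value := base64.StdEncoding.EncodeToString([]byte("hello world"))
	body := fmt.Sprintf(`[
  {"KV": {"Verb": "set", "Key": "key", "Value": %q}},
  {"KV": {"Verb": "get", "Key": "key"}}
]`, value)

	// The handler only accepts PUT, so build the request explicitly.
	req, err := http.NewRequest("PUT", "http://127.0.0.1:8500/v1/txn",
		bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(out))
}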

View File

@ -0,0 +1,434 @@
package agent
import (
"bytes"
"fmt"
"net/http"
"net/http/httptest"
"reflect"
"strings"
"testing"
"github.com/hashicorp/consul/consul/structs"
)
func TestTxnEndpoint_Bad_JSON(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
buf := bytes.NewBuffer([]byte("{"))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 400 {
t.Fatalf("expected 400, got %d", resp.Code)
}
if !bytes.Contains(resp.Body.Bytes(), []byte("Failed to parse")) {
t.Fatalf("expected conflicting args error")
}
})
}
func TestTxnEndpoint_Bad_Method(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
buf := bytes.NewBuffer([]byte("{}"))
req, err := http.NewRequest("GET", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 405 {
t.Fatalf("expected 405, got %d", resp.Code)
}
})
}
func TestTxnEndpoint_Bad_Size_Item(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
buf := bytes.NewBuffer([]byte(fmt.Sprintf(`
[
{
"KV": {
"Verb": "set",
"Key": "key",
"Value": %q
}
}
]
`, strings.Repeat("bad", 2*maxKVSize))))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 {
t.Fatalf("expected 413, got %d", resp.Code)
}
})
}
func TestTxnEndpoint_Bad_Size_Net(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
value := strings.Repeat("X", maxKVSize/2)
buf := bytes.NewBuffer([]byte(fmt.Sprintf(`
[
{
"KV": {
"Verb": "set",
"Key": "key1",
"Value": %q
}
},
{
"KV": {
"Verb": "set",
"Key": "key1",
"Value": %q
}
},
{
"KV": {
"Verb": "set",
"Key": "key1",
"Value": %q
}
}
]
`, value, value, value)))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 {
t.Fatalf("expected 413, got %d", resp.Code)
}
})
}
func TestTxnEndpoint_Bad_Size_Ops(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
buf := bytes.NewBuffer([]byte(fmt.Sprintf(`
[
%s
{
"KV": {
"Verb": "set",
"Key": "key",
"Value": ""
}
}
]
`, strings.Repeat(`{ "KV": { "Verb": "get", "Key": "key" } },`, 2*maxTxnOps))))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err := srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 413 {
t.Fatalf("expected 413, got %d", resp.Code)
}
})
}
func TestTxnEndpoint_KV_Actions(t *testing.T) {
httpTest(t, func(srv *HTTPServer) {
// Make sure all incoming fields get converted properly to the internal
// RPC format.
var index uint64
id := makeTestSession(t, srv)
{
buf := bytes.NewBuffer([]byte(fmt.Sprintf(`
[
{
"KV": {
"Verb": "lock",
"Key": "key",
"Value": "aGVsbG8gd29ybGQ=",
"Flags": 23,
"Session": %q
}
},
{
"KV": {
"Verb": "get",
"Key": "key"
}
}
]
`, id)))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.Txn(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("expected 200, got %d", resp.Code)
}
txnResp, ok := obj.(structs.TxnResponse)
if !ok {
t.Fatalf("bad type: %T", obj)
}
if len(txnResp.Results) != 2 {
t.Fatalf("bad: %v", txnResp)
}
index = txnResp.Results[0].KV.ModifyIndex
expected := structs.TxnResponse{
Results: structs.TxnResults{
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: nil,
Flags: 23,
Session: id,
LockIndex: 1,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: index,
},
},
},
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: []byte("hello world"),
Flags: 23,
Session: id,
LockIndex: 1,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: index,
},
},
},
},
}
if !reflect.DeepEqual(txnResp, expected) {
t.Fatalf("bad: %v", txnResp)
}
}
// Do a read-only transaction that should get routed to the
// fast-path endpoint.
{
buf := bytes.NewBuffer([]byte(`
[
{
"KV": {
"Verb": "get",
"Key": "key"
}
},
{
"KV": {
"Verb": "get-tree",
"Key": "key"
}
}
]
`))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.Txn(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("expected 200, got %d", resp.Code)
}
header := resp.Header().Get("X-Consul-KnownLeader")
if header != "true" {
t.Fatalf("bad: %v", header)
}
header = resp.Header().Get("X-Consul-LastContact")
if header != "0" {
t.Fatalf("bad: %v", header)
}
txnResp, ok := obj.(structs.TxnReadResponse)
if !ok {
t.Fatalf("bad type: %T", obj)
}
expected := structs.TxnReadResponse{
TxnResponse: structs.TxnResponse{
Results: structs.TxnResults{
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: []byte("hello world"),
Flags: 23,
Session: id,
LockIndex: 1,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: index,
},
},
},
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: []byte("hello world"),
Flags: 23,
Session: id,
LockIndex: 1,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: index,
},
},
},
},
},
QueryMeta: structs.QueryMeta{
KnownLeader: true,
},
}
if !reflect.DeepEqual(txnResp, expected) {
t.Fatalf("bad: %v", txnResp)
}
}
// Now that we have an index we can do a CAS to make sure the
// index field gets translated to the RPC format.
{
buf := bytes.NewBuffer([]byte(fmt.Sprintf(`
[
{
"KV": {
"Verb": "cas",
"Key": "key",
"Value": "Z29vZGJ5ZSB3b3JsZA==",
"Index": %d
}
},
{
"KV": {
"Verb": "get",
"Key": "key"
}
}
]
`, index)))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
obj, err := srv.Txn(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("expected 200, got %d", resp.Code)
}
txnResp, ok := obj.(structs.TxnResponse)
if !ok {
t.Fatalf("bad type: %T", obj)
}
if len(txnResp.Results) != 2 {
t.Fatalf("bad: %v", txnResp)
}
modIndex := txnResp.Results[0].KV.ModifyIndex
expected := structs.TxnResponse{
Results: structs.TxnResults{
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: nil,
Session: id,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: modIndex,
},
},
},
&structs.TxnResult{
KV: &structs.DirEntry{
Key: "key",
Value: []byte("goodbye world"),
Session: id,
RaftIndex: structs.RaftIndex{
CreateIndex: index,
ModifyIndex: modIndex,
},
},
},
},
}
if !reflect.DeepEqual(txnResp, expected) {
t.Fatalf("bad: %v", txnResp)
}
}
})
// Verify an error inside a transaction.
httpTest(t, func(srv *HTTPServer) {
buf := bytes.NewBuffer([]byte(`
[
{
"KV": {
"Verb": "lock",
"Key": "key",
"Value": "aGVsbG8gd29ybGQ=",
"Session": "nope"
}
},
{
"KV": {
"Verb": "get",
"Key": "key"
}
}
]
`))
req, err := http.NewRequest("PUT", "/v1/txn", buf)
if err != nil {
t.Fatalf("err: %v", err)
}
resp := httptest.NewRecorder()
if _, err = srv.Txn(resp, req); err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 409 {
t.Fatalf("expected 409, got %d", resp.Code)
}
if !bytes.Contains(resp.Body.Bytes(), []byte("failed session lookup")) {
t.Fatalf("bad: %s", resp.Body.String())
}
})
}

View File

@ -12,6 +12,7 @@ import (
"strconv"
"time"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/go-msgpack/codec"
)
@ -71,6 +72,11 @@ func stringHash(s string) string {
return fmt.Sprintf("%x", md5.Sum([]byte(s)))
}
// checkIDHash returns a simple md5sum for a types.CheckID.
func checkIDHash(checkID types.CheckID) string {
return stringHash(string(checkID))
}
// FilePermissions is an interface which allows a struct to set
// ownership and permissions easily on a file it describes.
type FilePermissions interface {

View File

@ -8,7 +8,6 @@ import (
"log"
"os"
"strconv"
"sync"
"github.com/armon/circbuf"
"github.com/hashicorp/consul/watch"
@ -34,16 +33,10 @@ func verifyWatchHandler(params interface{}) error {
}
// makeWatchHandler returns a handler for the given watch
func makeWatchHandler(logOutput io.Writer, params interface{}, reapLock *sync.RWMutex) watch.HandlerFunc {
func makeWatchHandler(logOutput io.Writer, params interface{}) watch.HandlerFunc {
script := params.(string)
logger := log.New(logOutput, "", log.LstdFlags)
fn := func(idx uint64, data interface{}) {
// Disable child process reaping so that we can get this command's
// return value. Note that we take the read lock here since we are
// waiting on a specific PID and don't need to serialize all waits.
reapLock.RLock()
defer reapLock.RUnlock()
// Create the command
cmd, err := ExecScript(script)
if err != nil {

View File

@ -3,7 +3,6 @@ package agent
import (
"io/ioutil"
"os"
"sync"
"testing"
)
@ -26,7 +25,7 @@ func TestMakeWatchHandler(t *testing.T) {
defer os.Remove("handler_out")
defer os.Remove("handler_index_out")
script := "echo $CONSUL_INDEX >> handler_index_out && cat >> handler_out"
handler := makeWatchHandler(os.Stderr, script, &sync.RWMutex{})
handler := makeWatchHandler(os.Stderr, script)
handler(100, []string{"foo", "bar", "baz"})
raw, err := ioutil.ReadFile("handler_out")
if err != nil {

View File

@ -268,7 +268,7 @@ func (c *ExecCommand) waitForJob() int {
errCh := make(chan struct{}, 1)
defer close(doneCh)
go c.streamResults(doneCh, ackCh, heartCh, outputCh, exitCh, errCh)
target := &TargettedUi{Ui: c.Ui}
target := &TargetedUi{Ui: c.Ui}
var ackCount, exitCount, badExit int
OUTER:
@ -637,33 +637,33 @@ Options:
return strings.TrimSpace(helpText)
}
// TargettedUi is a UI that wraps another UI implementation and modifies
// TargetedUi is a UI that wraps another UI implementation and modifies
// the output to indicate a specific target. Specifically, all Say output
// is prefixed with the target name. Message output is not prefixed but
// is offset by the length of the target so that output is lined up properly
// with Say output. Machine-readable output has the proper target set.
type TargettedUi struct {
type TargetedUi struct {
Target string
Ui cli.Ui
}
func (u *TargettedUi) Ask(query string) (string, error) {
func (u *TargetedUi) Ask(query string) (string, error) {
return u.Ui.Ask(u.prefixLines(true, query))
}
func (u *TargettedUi) Info(message string) {
func (u *TargetedUi) Info(message string) {
u.Ui.Info(u.prefixLines(true, message))
}
func (u *TargettedUi) Output(message string) {
func (u *TargetedUi) Output(message string) {
u.Ui.Output(u.prefixLines(false, message))
}
func (u *TargettedUi) Error(message string) {
func (u *TargetedUi) Error(message string) {
u.Ui.Error(u.prefixLines(true, message))
}
func (u *TargettedUi) prefixLines(arrow bool, message string) string {
func (u *TargetedUi) prefixLines(arrow bool, message string) string {
arrowText := "==>"
if !arrow {
arrowText = strings.Repeat(" ", len(arrowText))

View File

@ -23,7 +23,7 @@ func TestExecCommandRun(t *testing.T) {
ui := new(cli.MockUi)
c := &ExecCommand{Ui: ui}
args := []string{"-http-addr=" + a1.httpAddr, "-wait=400ms", "uptime"}
args := []string{"-http-addr=" + a1.httpAddr, "-wait=10s", "uptime"}
code := c.Run(args)
if code != 0 {

command/kv_command.go Normal file (76 lines)
View File

@ -0,0 +1,76 @@
package command
import (
"strings"
"github.com/mitchellh/cli"
)
// KVCommand is a Command implementation that just shows help for
// the subcommands nested below it.
type KVCommand struct {
Ui cli.Ui
}
func (c *KVCommand) Run(args []string) int {
return cli.RunResultHelp
}
func (c *KVCommand) Help() string {
helpText := `
Usage: consul kv <subcommand> [options] [args]
This command has subcommands for interacting with Consul's key-value
store. Here are some simple examples, and more detailed examples are
available in the subcommands or the documentation.
Create or update the key named "redis/config/connections" with the value "5":
$ consul kv put redis/config/connections 5
Read this value back:
$ consul kv get redis/config/connections
Or get detailed key information:
$ consul kv get -detailed redis/config/connections
Finally, delete the key:
$ consul kv delete redis/config/connections
For more examples, ask for subcommand help or view the documentation.
`
return strings.TrimSpace(helpText)
}
func (c *KVCommand) Synopsis() string {
return "Interact with the key-value store"
}
var apiOptsText = strings.TrimSpace(`
API Options:
-http-addr=<addr> Address of the Consul agent with the port. This can
be an IP address or DNS address, but it must include
the port. This can also be specified via the
CONSUL_HTTP_ADDR environment variable. The default
value is 127.0.0.1:8500.
-datacenter=<name> Name of the datacenter to query. If unspecified, the
query will default to the datacenter of the Consul
agent at the HTTP address.
-token=<value> ACL token to use in the request. This can also be
specified via the CONSUL_HTTP_TOKEN environment
variable. If unspecified, the query will default to
the token of the Consul agent at the HTTP address.
-stale Permit any Consul server (non-leader) to respond to
this request. This allows for lower latency and higher
throughput, but can result in stale data. This option
has no effect on non-read operations. The default
value is false.
`)

View File

@ -0,0 +1,15 @@
package command
import (
"testing"
"github.com/mitchellh/cli"
)
func TestKVCommand_implements(t *testing.T) {
var _ cli.Command = &KVCommand{}
}
func TestKVCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(KVCommand))
}

command/kv_delete.go Normal file (165 lines)
View File

@ -0,0 +1,165 @@
package command
import (
"flag"
"fmt"
"strings"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
// KVDeleteCommand is a Command implementation that is used to delete a key or
// prefix of keys from the key-value store.
type KVDeleteCommand struct {
Ui cli.Ui
}
func (c *KVDeleteCommand) Help() string {
helpText := `
Usage: consul kv delete [options] KEY_OR_PREFIX
Removes the value from Consul's key-value store at the given path. If no
key exists at the path, no action is taken.
To delete the value for the key named "foo" in the key-value store:
$ consul kv delete foo
To delete all keys which start with "foo", specify the -recurse option:
$ consul kv delete -recurse foo
This will delete the keys named "foo", "food", and "foo/bar/zip" if they
existed.
` + apiOptsText + `
KV Delete Options:
-cas Perform a Check-And-Set operation. Specifying this
value also requires the -modify-index flag to be set.
The default value is false.
-modify-index=<int> Unsigned integer representing the ModifyIndex of the
key. This is used in combination with the -cas flag.
-recurse Recursively delete all keys with the given path prefix. The default
value is false.
`
return strings.TrimSpace(helpText)
}
func (c *KVDeleteCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("delete", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
datacenter := cmdFlags.String("datacenter", "", "")
token := cmdFlags.String("token", "", "")
cas := cmdFlags.Bool("cas", false, "")
modifyIndex := cmdFlags.Uint64("modify-index", 0, "")
recurse := cmdFlags.Bool("recurse", false, "")
httpAddr := HTTPAddrFlag(cmdFlags)
if err := cmdFlags.Parse(args); err != nil {
return 1
}
key := ""
// Check for arg validation
args = cmdFlags.Args()
switch len(args) {
case 0:
key = ""
case 1:
key = args[0]
default:
c.Ui.Error(fmt.Sprintf("Too many arguments (expected 1, got %d)", len(args)))
return 1
}
// This is just a "nice" thing to do. Since pairs cannot start with a /, but
// users will likely put "/" or "/foo", let's go ahead and strip that for them
// here.
if len(key) > 0 && key[0] == '/' {
key = key[1:]
}
// If the key is empty and we are not doing a recursive delete, this is an
// error.
if key == "" && !*recurse {
c.Ui.Error("Error! Missing KEY argument")
return 1
}
// ModifyIndex is required for CAS
if *cas && *modifyIndex == 0 {
c.Ui.Error("Must specify -modify-index with -cas!")
return 1
}
// Specifying a ModifyIndex for a non-CAS operation is not possible.
if *modifyIndex != 0 && !*cas {
c.Ui.Error("Cannot specify -modify-index without -cas!")
return 1
}
// It is not valid to use a CAS and recurse in the same call
if *recurse && *cas {
c.Ui.Error("Cannot specify both -cas and -recurse!")
return 1
}
// Create and test the HTTP client
conf := api.DefaultConfig()
conf.Address = *httpAddr
conf.Token = *token
client, err := api.NewClient(conf)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err))
return 1
}
wo := &api.WriteOptions{
Datacenter: *datacenter,
}
switch {
case *recurse:
if _, err := client.KV().DeleteTree(key, wo); err != nil {
c.Ui.Error(fmt.Sprintf("Error! Did not delete prefix %s: %s", key, err))
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Deleted keys with prefix: %s", key))
return 0
case *cas:
pair := &api.KVPair{
Key: key,
ModifyIndex: *modifyIndex,
}
success, _, err := client.KV().DeleteCAS(pair, wo)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error! Did not delete key %s: %s", key, err))
return 1
}
if !success {
c.Ui.Error(fmt.Sprintf("Error! Did not delete key %s: CAS failed", key))
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Deleted key: %s", key))
return 0
default:
if _, err := client.KV().Delete(key, wo); err != nil {
c.Ui.Error(fmt.Sprintf("Error deleting key %s: %s", key, err))
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Deleted key: %s", key))
return 0
}
}
func (c *KVDeleteCommand) Synopsis() string {
return "Removes data from the KV store"
}

command/kv_delete_test.go Normal file (207 lines)
View File

@ -0,0 +1,207 @@
package command
import (
"strconv"
"strings"
"testing"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
func TestKVDeleteCommand_implements(t *testing.T) {
var _ cli.Command = &KVDeleteCommand{}
}
func TestKVDeleteCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(KVDeleteCommand))
}
func TestKVDeleteCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &KVDeleteCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"-cas and -recurse": {
[]string{"-cas", "-modify-index", "2", "-recurse", "foo"},
"Cannot specify both",
},
"-cas no -modify-index": {
[]string{"-cas", "foo"},
"Must specify -modify-index",
},
"-modify-index no -cas": {
[]string{"-modify-index", "2", "foo"},
"Cannot specify -modify-index without",
},
"no key": {
[]string{},
"Missing KEY argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestKVDeleteCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVDeleteCommand{Ui: ui}
pair := &api.KVPair{
Key: "foo",
Value: []byte("bar"),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
pair, _, err = client.KV().Get("foo", nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
if pair != nil {
t.Fatalf("bad: %#v", pair)
}
}
func TestKVDeleteCommand_Recurse(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVDeleteCommand{Ui: ui}
keys := []string{"foo/a", "foo/b", "food"}
for _, k := range keys {
pair := &api.KVPair{
Key: k,
Value: []byte("bar"),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
}
args := []string{
"-http-addr=" + srv.httpAddr,
"-recurse",
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
for _, k := range keys {
pair, _, err := client.KV().Get(k, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
if pair != nil {
t.Fatalf("bad: %#v", pair)
}
}
}
func TestKVDeleteCommand_CAS(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVDeleteCommand{Ui: ui}
pair := &api.KVPair{
Key: "foo",
Value: []byte("bar"),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"-cas",
"-modify-index", "1",
"foo",
}
code := c.Run(args)
if code == 0 {
t.Fatalf("bad: expected error")
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
// Reset buffers
ui.OutputWriter.Reset()
ui.ErrorWriter.Reset()
args = []string{
"-http-addr=" + srv.httpAddr,
"-cas",
"-modify-index", strconv.FormatUint(data.ModifyIndex, 10),
"foo",
}
code = c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err = client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if data != nil {
t.Fatalf("bad: %#v", data)
}
}

command/kv_get.go Normal file (226 lines)
View File

@ -0,0 +1,226 @@
package command
import (
"bytes"
"flag"
"fmt"
"io"
"strings"
"text/tabwriter"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
// KVGetCommand is a Command implementation that is used to fetch the value of
// a key from the key-value store.
type KVGetCommand struct {
Ui cli.Ui
}
func (c *KVGetCommand) Help() string {
helpText := `
Usage: consul kv get [options] [KEY_OR_PREFIX]
Retrieves the value from Consul's key-value store at the given key name. If no
key exists with that name, an error is returned. If a key exists with that
name but has no data, nothing is returned. If the name or prefix is omitted,
it defaults to "" which is the root of the key-value store.
To retrieve the value for the key named "foo" in the key-value store:
$ consul kv get foo
This will return the original, raw value stored in Consul. To view detailed
information about the key, specify the "-detailed" flag. This will output all
known metadata about the key including ModifyIndex and any user-supplied
flags:
$ consul kv get -detailed foo
To treat the path as a prefix and list all keys which start with the given
prefix, specify the "-recurse" flag:
$ consul kv get -recurse foo
This will return all key-value pairs. To just list the keys which start with
the specified prefix, use the "-keys" option instead:
$ consul kv get -keys foo
For a full list of options and examples, please see the Consul documentation.
` + apiOptsText + `
KV Get Options:
-detailed Provide additional metadata about the key in addition
to the value such as the ModifyIndex and any flags
that may have been set on the key. The default value
is false.
-keys List keys which start with the given prefix, but not
their values. This is especially useful if you only
need the key names themselves. This option is commonly
combined with the -separator option. The default value
is false.
-recurse Recursively look at all keys prefixed with the given
path. The default value is false.
-separator=<string> String to use as a separator between keys. The default
value is "/", but this option is only taken into
account when paired with the -keys flag.
`
return strings.TrimSpace(helpText)
}
func (c *KVGetCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("get", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
datacenter := cmdFlags.String("datacenter", "", "")
token := cmdFlags.String("token", "", "")
stale := cmdFlags.Bool("stale", false, "")
detailed := cmdFlags.Bool("detailed", false, "")
keys := cmdFlags.Bool("keys", false, "")
recurse := cmdFlags.Bool("recurse", false, "")
separator := cmdFlags.String("separator", "/", "")
httpAddr := HTTPAddrFlag(cmdFlags)
if err := cmdFlags.Parse(args); err != nil {
return 1
}
key := ""
// Check for arg validation
args = cmdFlags.Args()
switch len(args) {
case 0:
key = ""
case 1:
key = args[0]
default:
c.Ui.Error(fmt.Sprintf("Too many arguments (expected 1, got %d)", len(args)))
return 1
}
// This is just a "nice" thing to do. Since pairs cannot start with a /, but
// users will likely put "/" or "/foo", let's go ahead and strip that for them
// here.
if len(key) > 0 && key[0] == '/' {
key = key[1:]
}
// If the key is empty and we are not doing a recursive or key-based lookup,
// this is an error.
if key == "" && !(*recurse || *keys) {
c.Ui.Error("Error! Missing KEY argument")
return 1
}
// Create and test the HTTP client
conf := api.DefaultConfig()
conf.Address = *httpAddr
conf.Token = *token
client, err := api.NewClient(conf)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err))
return 1
}
switch {
case *keys:
keys, _, err := client.KV().Keys(key, *separator, &api.QueryOptions{
Datacenter: *datacenter,
AllowStale: *stale,
})
if err != nil {
c.Ui.Error(fmt.Sprintf("Error querying Consul agent: %s", err))
return 1
}
for _, k := range keys {
c.Ui.Info(string(k))
}
return 0
case *recurse:
pairs, _, err := client.KV().List(key, &api.QueryOptions{
Datacenter: *datacenter,
AllowStale: *stale,
})
if err != nil {
c.Ui.Error(fmt.Sprintf("Error querying Consul agent: %s", err))
return 1
}
for i, pair := range pairs {
if *detailed {
var b bytes.Buffer
if err := prettyKVPair(&b, pair); err != nil {
c.Ui.Error(fmt.Sprintf("Error rendering KV pair: %s", err))
return 1
}
c.Ui.Info(b.String())
if i < len(pairs)-1 {
c.Ui.Info("")
}
} else {
c.Ui.Info(fmt.Sprintf("%s:%s", pair.Key, pair.Value))
}
}
return 0
default:
pair, _, err := client.KV().Get(key, &api.QueryOptions{
Datacenter: *datacenter,
AllowStale: *stale,
})
if err != nil {
c.Ui.Error(fmt.Sprintf("Error querying Consul agent: %s", err))
return 1
}
if pair == nil {
c.Ui.Error(fmt.Sprintf("Error! No key exists at: %s", key))
return 1
}
if *detailed {
var b bytes.Buffer
if err := prettyKVPair(&b, pair); err != nil {
c.Ui.Error(fmt.Sprintf("Error rendering KV pair: %s", err))
return 1
}
c.Ui.Info(b.String())
return 0
} else {
c.Ui.Info(string(pair.Value))
return 0
}
}
}
func (c *KVGetCommand) Synopsis() string {
return "Retrieves or lists data from the KV store"
}
func prettyKVPair(w io.Writer, pair *api.KVPair) error {
tw := tabwriter.NewWriter(w, 0, 2, 6, ' ', 0)
fmt.Fprintf(tw, "CreateIndex\t%d\n", pair.CreateIndex)
fmt.Fprintf(tw, "Flags\t%d\n", pair.Flags)
fmt.Fprintf(tw, "Key\t%s\n", pair.Key)
fmt.Fprintf(tw, "LockIndex\t%d\n", pair.LockIndex)
fmt.Fprintf(tw, "ModifyIndex\t%d\n", pair.ModifyIndex)
if pair.Session == "" {
fmt.Fprintf(tw, "Session\t-\n")
} else {
fmt.Fprintf(tw, "Session\t%s\n", pair.Session)
}
fmt.Fprintf(tw, "Value\t%s", pair.Value)
return tw.Flush()
}

command/kv_get_test.go Normal file (252 lines)
View File

@ -0,0 +1,252 @@
package command
import (
"strings"
"testing"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
func TestKVGetCommand_implements(t *testing.T) {
var _ cli.Command = &KVGetCommand{}
}
func TestKVGetCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(KVGetCommand))
}
func TestKVGetCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"no key": {
[]string{},
"Missing KEY argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestKVGetCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
pair := &api.KVPair{
Key: "foo",
Value: []byte("bar"),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
if !strings.Contains(output, "bar") {
t.Errorf("bad: %#v", output)
}
}
func TestKVGetCommand_Missing(t *testing.T) {
srv, _ := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
args := []string{
"-http-addr=" + srv.httpAddr,
"not-a-real-key",
}
code := c.Run(args)
if code == 0 {
t.Fatalf("expected bad code")
}
}
func TestKVGetCommand_Empty(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
pair := &api.KVPair{
Key: "empty",
Value: []byte(""),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"empty",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
}
func TestKVGetCommand_Detailed(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
pair := &api.KVPair{
Key: "foo",
Value: []byte("bar"),
}
_, err := client.KV().Put(pair, nil)
if err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"-detailed",
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
for _, key := range []string{
"CreateIndex",
"LockIndex",
"ModifyIndex",
"Flags",
"Session",
"Value",
} {
if !strings.Contains(output, key) {
t.Fatalf("bad %#v, missing %q", output, key)
}
}
}
func TestKVGetCommand_Keys(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
keys := []string{"foo/bar", "foo/baz", "foo/zip"}
for _, key := range keys {
if _, err := client.KV().Put(&api.KVPair{Key: key}, nil); err != nil {
t.Fatalf("err: %#v", err)
}
}
args := []string{
"-http-addr=" + srv.httpAddr,
"-keys",
"foo/",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
for _, key := range keys {
if !strings.Contains(output, key) {
t.Fatalf("bad %#v missing %q", output, key)
}
}
}
func TestKVGetCommand_Recurse(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVGetCommand{Ui: ui}
keys := map[string]string{
"foo/a": "a",
"foo/b": "b",
"foo/c": "c",
}
for k, v := range keys {
pair := &api.KVPair{Key: k, Value: []byte(v)}
if _, err := client.KV().Put(pair, nil); err != nil {
t.Fatalf("err: %#v", err)
}
}
args := []string{
"-http-addr=" + srv.httpAddr,
"-recurse",
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
for key, value := range keys {
if !strings.Contains(output, key+":"+value) {
t.Fatalf("bad %#v missing %q", output, key)
}
}
}

command/kv_put.go Normal file (239 lines)
View File

@ -0,0 +1,239 @@
package command
import (
"bytes"
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
// KVPutCommand is a Command implementation that is used to write data to the
// key-value store.
type KVPutCommand struct {
Ui cli.Ui
// testStdin is the input for testing.
testStdin io.Reader
}
func (c *KVPutCommand) Help() string {
helpText := `
Usage: consul kv put [options] KEY [DATA]
Writes the data to the given path in the key-value store. The data can be of
any type.
$ consul kv put config/redis/maxconns 5
The data can also be consumed from a file on disk by prefixing with the "@"
symbol. For example:
$ consul kv put config/program/license @license.lic
Or it can be read from stdin using the "-" symbol:
$ echo "abcd1234" | consul kv put config/program/license -
The DATA argument itself is optional. If omitted, this will create an empty
key-value pair at the specified path:
$ consul kv put webapp/beta/active
To perform a Check-And-Set operation, specify the -cas flag with the
appropriate -modify-index flag corresponding to the key you want to perform
the CAS operation on:
$ consul kv put -cas -modify-index=844 config/redis/maxconns 5
Additional flags and more advanced use cases are detailed below.
` + apiOptsText + `
KV Put Options:
-acquire Obtain a lock on the key. If the key does not exist,
this operation will create the key and obtain the
lock. The session must already exist and be specified
via the -session flag. The default value is false.
-cas Perform a Check-And-Set operation. Specifying this
value also requires the -modify-index flag to be set.
The default value is false.
-flags=<int> Unsigned integer value to assign to this key-value
pair. This value is not read by Consul, so clients can
use this value however makes sense for their use case.
The default value is 0 (no flags).
-modify-index=<int> Unsigned integer representing the ModifyIndex of the
key. This is used in combination with the -cas flag.
-release Forfeit the lock on the key at the given path. This
requires the -session flag to be set. The key must be
held by the session in order to be unlocked. The
default value is false.
-session=<string> User-defined identifier for this session as a string.
This is commonly used with the -acquire and -release
operations to build robust locking, but it can be set
on any key. The default value is empty (no session).
`
return strings.TrimSpace(helpText)
}
func (c *KVPutCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("put", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
httpAddr := HTTPAddrFlag(cmdFlags)
datacenter := cmdFlags.String("datacenter", "", "")
token := cmdFlags.String("token", "", "")
cas := cmdFlags.Bool("cas", false, "")
flags := cmdFlags.Uint64("flags", 0, "")
modifyIndex := cmdFlags.Uint64("modify-index", 0, "")
session := cmdFlags.String("session", "", "")
acquire := cmdFlags.Bool("acquire", false, "")
release := cmdFlags.Bool("release", false, "")
if err := cmdFlags.Parse(args); err != nil {
return 1
}
// Check for arg validation
args = cmdFlags.Args()
key, data, err := c.dataFromArgs(args)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error! %s", err))
return 1
}
// Session is required for release or acquire
if (*release || *acquire) && *session == "" {
c.Ui.Error("Error! Missing -session (required with -acquire and -release)")
return 1
}
// ModifyIndex is required for CAS
if *cas && *modifyIndex == 0 {
c.Ui.Error("Must specify -modify-index with -cas!")
return 1
}
// Create and test the HTTP client
conf := api.DefaultConfig()
conf.Address = *httpAddr
conf.Token = *token
client, err := api.NewClient(conf)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err))
return 1
}
pair := &api.KVPair{
Key: key,
ModifyIndex: *modifyIndex,
Flags: *flags,
Value: []byte(data),
Session: *session,
}
wo := &api.WriteOptions{
Datacenter: *datacenter,
Token: *token,
}
switch {
case *cas:
ok, _, err := client.KV().CAS(pair, wo)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error! Did not write to %s: %s", key, err))
return 1
}
if !ok {
c.Ui.Error(fmt.Sprintf("Error! Did not write to %s: CAS failed", key))
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Data written to: %s", key))
return 0
case *acquire:
ok, _, err := client.KV().Acquire(pair, wo)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error! Failed writing data: %s", err))
return 1
}
if !ok {
c.Ui.Error("Error! Did not acquire lock")
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Lock acquired on: %s", key))
return 0
case *release:
ok, _, err := client.KV().Release(pair, wo)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error! Failed writing data: %s", err))
return 1
}
if !ok {
c.Ui.Error("Error! Did not release lock")
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Lock released on: %s", key))
return 0
default:
if _, err := client.KV().Put(pair, wo); err != nil {
c.Ui.Error(fmt.Sprintf("Error! Failed writing data: %s", err))
return 1
}
c.Ui.Info(fmt.Sprintf("Success! Data written to: %s", key))
return 0
}
}
func (c *KVPutCommand) Synopsis() string {
return "Sets or updates data in the KV store"
}
func (c *KVPutCommand) dataFromArgs(args []string) (string, string, error) {
var stdin io.Reader = os.Stdin
if c.testStdin != nil {
stdin = c.testStdin
}
switch len(args) {
case 0:
return "", "", fmt.Errorf("Missing KEY argument")
case 1:
return args[0], "", nil
case 2:
default:
return "", "", fmt.Errorf("Too many arguments (expected 1 or 2, got %d)", len(args))
}
key := args[0]
data := args[1]
switch data[0] {
case '@':
data, err := ioutil.ReadFile(data[1:])
if err != nil {
return "", "", fmt.Errorf("Failed to read file: %s", err)
}
return key, string(data), nil
case '-':
var b bytes.Buffer
if _, err := io.Copy(&b, stdin); err != nil {
return "", "", fmt.Errorf("Failed to read stdin: %s", err)
}
return key, b.String(), nil
default:
return key, data, nil
}
}
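The flags above map one-to-one onto the Go api package's KV client that the command wraps. A minimal, illustrative sketch of driving the same put and check-and-set operations directly from Go (the agent address, key, and values used here are assumptions for illustration, not part of this change):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Assumes a Consul agent reachable at the default 127.0.0.1:8500.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Plain write, equivalent to `consul kv put -flags=42 foo bar`.
	if _, err := kv.Put(&api.KVPair{Key: "foo", Value: []byte("bar"), Flags: 42}, nil); err != nil {
		log.Fatal(err)
	}

	// Check-And-Set, equivalent to `consul kv put -cas -modify-index=<idx> foo baz`:
	// read the current ModifyIndex, then write only if it has not changed since.
	current, _, err := kv.Get("foo", nil)
	if err != nil {
		log.Fatal(err)
	}
	ok, _, err := kv.CAS(&api.KVPair{
		Key:         "foo",
		Value:       []byte("baz"),
		ModifyIndex: current.ModifyIndex,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("CAS succeeded:", ok)
}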

command/kv_put_test.go

@ -0,0 +1,284 @@
package command
import (
"bytes"
"io"
"io/ioutil"
"os"
"strconv"
"strings"
"testing"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
func TestKVPutCommand_implements(t *testing.T) {
var _ cli.Command = &KVPutCommand{}
}
func TestKVPutCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(KVPutCommand))
}
func TestKVPutCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"-acquire without -session": {
[]string{"-acquire", "foo"},
"Missing -session",
},
"-release without -session": {
[]string{"-release", "foo"},
"Missing -session",
},
"-cas no -modify-index": {
[]string{"-cas", "foo"},
"Must specify -modify-index",
},
"no key": {
[]string{},
"Missing KEY argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestKVPutCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
args := []string{
"-http-addr=" + srv.httpAddr,
"foo", "bar",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data.Value, []byte("bar")) {
t.Errorf("bad: %#v", data.Value)
}
}
func TestKVPutCommand_File(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
f, err := ioutil.TempFile("", "kv-put-command-file")
if err != nil {
t.Fatalf("err: %#v", err)
}
defer os.Remove(f.Name())
if _, err := f.WriteString("bar"); err != nil {
t.Fatalf("err: %#v", err)
}
args := []string{
"-http-addr=" + srv.httpAddr,
"foo", "@" + f.Name(),
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data.Value, []byte("bar")) {
t.Errorf("bad: %#v", data.Value)
}
}
func TestKVPutCommand_FileNoExist(t *testing.T) {
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
args := []string{
"foo", "@/nope/definitely/not-a-real-file.txt",
}
code := c.Run(args)
if code == 0 {
t.Fatal("bad: expected error")
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, "Failed to read file") {
t.Errorf("bad: %#v", output)
}
}
func TestKVPutCommand_Stdin(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
stdinR, stdinW := io.Pipe()
ui := new(cli.MockUi)
c := &KVPutCommand{
Ui: ui,
testStdin: stdinR,
}
go func() {
stdinW.Write([]byte("bar"))
stdinW.Close()
}()
args := []string{
"-http-addr=" + srv.httpAddr,
"foo", "-",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data.Value, []byte("bar")) {
t.Errorf("bad: %#v", data.Value)
}
}
func TestKVPutCommand_Flags(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
args := []string{
"-http-addr=" + srv.httpAddr,
"-flags", "12345",
"foo",
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if data.Flags != 12345 {
t.Errorf("bad: %#v", data.Flags)
}
}
func TestKVPutCommand_CAS(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
// Create the initial pair so it has a ModifyIndex.
pair := &api.KVPair{
Key: "foo",
Value: []byte("bar"),
}
if _, err := client.KV().Put(pair, nil); err != nil {
t.Fatalf("err: %#v", err)
}
ui := new(cli.MockUi)
c := &KVPutCommand{Ui: ui}
args := []string{
"-http-addr=" + srv.httpAddr,
"-cas",
"-modify-index", "123",
"foo", "a",
}
code := c.Run(args)
if code == 0 {
t.Fatalf("bad: expected error")
}
data, _, err := client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
// Reset buffers
ui.OutputWriter.Reset()
ui.ErrorWriter.Reset()
args = []string{
"-http-addr=" + srv.httpAddr,
"-cas",
"-modify-index", strconv.FormatUint(data.ModifyIndex, 10),
"foo", "a",
}
code = c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
data, _, err = client.KV().Get("foo", nil)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(data.Value, []byte("a")) {
t.Errorf("bad: %#v", data.Value)
}
}
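The CAS test above shows the usual read-modify-write pattern: fetch the current ModifyIndex, write conditionally, and retry if another writer got there first. Sketched as a standalone helper (the helper name, package name, and retry budget are illustrative, not part of this change):

package kvexample

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

// casUpdate is an illustrative helper, not part of this change. It retries a
// Check-And-Set write until it wins or the retry budget is exhausted.
func casUpdate(kv *api.KV, key string, update func([]byte) []byte, retries int) error {
	for i := 0; i < retries; i++ {
		current, _, err := kv.Get(key, nil)
		if err != nil {
			return err
		}
		pair := &api.KVPair{Key: key}
		if current != nil {
			// ModifyIndex guards against concurrent writers.
			pair.ModifyIndex = current.ModifyIndex
			pair.Value = update(current.Value)
		} else {
			// ModifyIndex 0 means "create only if the key does not exist yet".
			pair.Value = update(nil)
		}
		ok, _, err := kv.CAS(pair, nil)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
	}
	return fmt.Errorf("CAS on %q failed after %d attempts", key, retries)
}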


@ -33,6 +33,12 @@ func (c *LeaveCommand) Run(args []string) int {
if err := cmdFlags.Parse(args); err != nil {
return 1
}
nonFlagArgs := cmdFlags.Args()
if len(nonFlagArgs) > 0 {
c.Ui.Error(fmt.Sprintf("Error found unexpected args: %v", nonFlagArgs))
c.Ui.Output(c.Help())
return 1
}
client, err := RPCClient(*rpcAddr)
if err != nil {


@ -27,3 +27,17 @@ func TestLeaveCommandRun(t *testing.T) {
t.Fatalf("bad: %#v", ui.OutputWriter.String())
}
}
func TestLeaveCommandFailOnNonFlagArgs(t *testing.T) {
a1 := testAgent(t)
defer a1.Shutdown()
ui := new(cli.MockUi)
c := &LeaveCommand{Ui: ui}
args := []string{"-rpc-addr=" + a1.addr, "appserver1"}
code := c.Run(args)
if code == 0 {
t.Fatalf("bad: failed to check for unexpected args")
}
}


@ -117,7 +117,7 @@ func (c *MaintCommand) Run(args []string) int {
c.Ui.Output(" Name: " + nodeName)
c.Ui.Output(" Reason: " + check.Notes)
c.Ui.Output("")
} else if strings.HasPrefix(check.CheckID, "_service_maintenance:") {
} else if strings.HasPrefix(string(check.CheckID), "_service_maintenance:") {
c.Ui.Output("Service:")
c.Ui.Output(" ID: " + check.ServiceID)
c.Ui.Output(" Reason: " + check.Notes)

command/operator.go

@ -0,0 +1,173 @@
package command
import (
"flag"
"fmt"
"strings"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
"github.com/ryanuber/columnize"
)
// OperatorCommand is used to provide various low-level tools for Consul
// operators.
type OperatorCommand struct {
Ui cli.Ui
}
func (c *OperatorCommand) Help() string {
helpText := `
Usage: consul operator <subcommand> [common options] [action] [options]
Provides cluster-level tools for Consul operators, such as interacting with
the Raft subsystem. NOTE: Use this command with extreme caution, as improper
use could lead to a Consul outage and even loss of data.
If ACLs are enabled then a token with operator privileges may be required in
order to use this command. Requests are forwarded internally to the leader
if required, so this can be run from any Consul node in a cluster.
Run consul operator <subcommand> with no arguments for help on that
subcommand.
Common Options:
-http-addr=127.0.0.1:8500 HTTP address of the Consul agent.
-token="" ACL token to use. Defaults to that of agent.
Subcommands:
raft View and modify Consul's Raft configuration.
`
return strings.TrimSpace(helpText)
}
func (c *OperatorCommand) Run(args []string) int {
if len(args) < 1 {
c.Ui.Error("A subcommand must be specified")
c.Ui.Error("")
c.Ui.Error(c.Help())
return 1
}
var err error
subcommand := args[0]
switch subcommand {
case "raft":
err = c.raft(args[1:])
default:
err = fmt.Errorf("unknown subcommand %q", subcommand)
}
if err != nil {
c.Ui.Error(fmt.Sprintf("Operator %q subcommand failed: %v", subcommand, err))
return 1
}
return 0
}
// Synopsis returns a one-line description of this command.
func (c *OperatorCommand) Synopsis() string {
return "Provides cluster-level tools for Consul operators"
}
const raftHelp = `
Raft Subcommand Actions:
raft -list-peers -stale=[true|false]
Displays the current Raft peer configuration.
The -stale argument defaults to "false" which means the leader provides the
result. If the cluster is in an outage state without a leader, you may need
to set -stale to "true" to get the configuration from a non-leader server.
raft -remove-peer -address="IP:port"
Removes the Consul server with the given -address from the Raft configuration.
There are rare cases where a peer may be left behind in the Raft quorum even
though the server is no longer present and known to the cluster. This
command can be used to remove the failed server so that it no longer
affects the Raft quorum. If the server still shows in the output of the
"consul members" command, it is preferable to clean up by simply running
"consul force-leave" instead of this command.
`
// raft handles the raft subcommands.
func (c *OperatorCommand) raft(args []string) error {
cmdFlags := flag.NewFlagSet("raft", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
// Parse verb arguments.
var listPeers, removePeer bool
cmdFlags.BoolVar(&listPeers, "list-peers", false, "")
cmdFlags.BoolVar(&removePeer, "remove-peer", false, "")
// Parse other arguments.
var stale bool
var address, token string
cmdFlags.StringVar(&address, "address", "", "")
cmdFlags.BoolVar(&stale, "stale", false, "")
cmdFlags.StringVar(&token, "token", "", "")
httpAddr := HTTPAddrFlag(cmdFlags)
if err := cmdFlags.Parse(args); err != nil {
return err
}
// Set up a client.
conf := api.DefaultConfig()
conf.Address = *httpAddr
client, err := api.NewClient(conf)
if err != nil {
return fmt.Errorf("error connecting to Consul agent: %s", err)
}
operator := client.Operator()
// Dispatch based on the verb argument.
if listPeers {
// Fetch the current configuration.
q := &api.QueryOptions{
AllowStale: stale,
Token: token,
}
reply, err := operator.RaftGetConfiguration(q)
if err != nil {
return err
}
// Format it as a nice table.
result := []string{"Node|ID|Address|State|Voter"}
for _, s := range reply.Servers {
state := "follower"
if s.Leader {
state = "leader"
}
result = append(result, fmt.Sprintf("%s|%s|%s|%s|%v",
s.Node, s.ID, s.Address, state, s.Voter))
}
c.Ui.Output(columnize.SimpleFormat(result))
} else if removePeer {
// TODO (slackpad) Once we expose IDs, add support for removing by ID.
if len(address) == 0 {
return fmt.Errorf("an address is required for the peer to remove")
}
// Try to kick the peer.
w := &api.WriteOptions{
Token: token,
}
if err := operator.RaftRemovePeerByAddress(address, w); err != nil {
return err
}
c.Ui.Output(fmt.Sprintf("Removed peer with address %q", address))
} else {
c.Ui.Output(c.Help())
c.Ui.Output("")
c.Ui.Output(strings.TrimSpace(raftHelp))
}
return nil
}
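The raft help above describes the two verbs; both are thin wrappers over the api package's Operator client, so the same operations can be scripted from Go. A rough sketch (the peer address below is a placeholder, not a real server):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	operator := client.Operator()

	// Equivalent of `consul operator raft -list-peers -stale=true`: read the
	// Raft configuration, allowing a non-leader server to answer.
	reply, err := operator.RaftGetConfiguration(&api.QueryOptions{AllowStale: true})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range reply.Servers {
		fmt.Printf("%s %s %s leader=%v voter=%v\n", s.Node, s.ID, s.Address, s.Leader, s.Voter)
	}

	// Equivalent of `consul operator raft -remove-peer -address=...`. The
	// address here is a placeholder; use the address of the stale peer.
	if err := operator.RaftRemovePeerByAddress("10.0.0.99:8300", nil); err != nil {
		log.Println("remove peer:", err)
	}
}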

command/operator_test.go

@ -0,0 +1,52 @@
package command
import (
"strings"
"testing"
"github.com/mitchellh/cli"
)
func TestOperator_Implements(t *testing.T) {
var _ cli.Command = &OperatorCommand{}
}
func TestOperator_Raft_ListPeers(t *testing.T) {
a1 := testAgent(t)
defer a1.Shutdown()
waitForLeader(t, a1.httpAddr)
ui := new(cli.MockUi)
c := &OperatorCommand{Ui: ui}
args := []string{"raft", "-http-addr=" + a1.httpAddr, "-list-peers"}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := strings.TrimSpace(ui.OutputWriter.String())
if !strings.Contains(output, "leader") {
t.Fatalf("bad: %s", output)
}
}
func TestOperator_Raft_RemovePeer(t *testing.T) {
a1 := testAgent(t)
defer a1.Shutdown()
waitForLeader(t, a1.httpAddr)
ui := new(cli.MockUi)
c := &OperatorCommand{Ui: ui}
args := []string{"raft", "-http-addr=" + a1.httpAddr, "-remove-peer", "-address=nope"}
code := c.Run(args)
if code != 1 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
// If we get this error, it proves we sent the address all the way through.
output := strings.TrimSpace(ui.ErrorWriter.String())
if !strings.Contains(output, "address \"nope\" was not found in the Raft configuration") {
t.Fatalf("bad: %s", output)
}
}


@ -8,6 +8,7 @@ import (
"github.com/hashicorp/consul/command/agent"
"github.com/hashicorp/consul/consul/structs"
"github.com/hashicorp/consul/testutil"
"github.com/hashicorp/serf/coordinate"
"github.com/mitchellh/cli"
)
@ -88,29 +89,32 @@ func TestRTTCommand_Run_LAN(t *testing.T) {
}
}
// Wait for the updates to get flushed to the data store.
time.Sleep(2 * updatePeriod)
// Ask for the RTT of two known nodes
ui := new(cli.MockUi)
c := &RTTCommand{Ui: ui}
args := []string{
"-http-addr=" + a.httpAddr,
a.config.NodeName,
"dogs",
}
// Try two known nodes.
{
ui := new(cli.MockUi)
c := &RTTCommand{Ui: ui}
args := []string{
"-http-addr=" + a.httpAddr,
a.config.NodeName,
"dogs",
}
// Wait for the updates to get flushed to the data store.
testutil.WaitForResult(func() (bool, error) {
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d: %#v", code, ui.ErrorWriter.String())
return false, fmt.Errorf("bad: %d: %#v", code, ui.ErrorWriter.String())
}
// Make sure the proper RTT was reported in the output.
expected := fmt.Sprintf("rtt: %s", dist_str)
if !strings.Contains(ui.OutputWriter.String(), expected) {
t.Fatalf("bad: %#v", ui.OutputWriter.String())
return false, fmt.Errorf("bad: %#v", ui.OutputWriter.String())
}
}
return true, nil
}, func(err error) {
t.Fatalf("failed to get proper RTT output: %v", err)
})
// Default to the agent's node.
{


@ -0,0 +1,52 @@
package command
import (
"strings"
"github.com/mitchellh/cli"
)
// SnapshotCommand is a Command implementation that just shows help for
// the subcommands nested below it.
type SnapshotCommand struct {
Ui cli.Ui
}
func (c *SnapshotCommand) Run(args []string) int {
return cli.RunResultHelp
}
func (c *SnapshotCommand) Help() string {
helpText := `
Usage: consul snapshot <subcommand> [options] [args]
This command has subcommands for saving, restoring, and inspecting the state
of the Consul servers for disaster recovery. These are atomic, point-in-time
snapshots which include key/value entries, service catalog, prepared queries,
sessions, and ACLs.
If ACLs are enabled, a management token must be supplied in order to perform
snapshot operations.
Create a snapshot:
$ consul snapshot save backup.snap
Restore a snapshot:
$ consul snapshot restore backup.snap
Inspect a snapshot:
$ consul snapshot inspect backup.snap
For more examples, ask for subcommand help or view the documentation.
`
return strings.TrimSpace(helpText)
}
func (c *SnapshotCommand) Synopsis() string {
return "Saves, restores and inspects snapshots of Consul server state"
}
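The save and restore subcommands below are built on the api package's Snapshot client, so the same disaster-recovery flow can be driven programmatically. A minimal sketch, assuming a reachable agent and a writable backup.snap in the working directory:

package main

import (
	"io"
	"log"
	"os"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Save: stream a point-in-time snapshot from the servers to disk,
	// mirroring `consul snapshot save backup.snap`.
	snap, _, err := client.Snapshot().Save(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer snap.Close()

	out, err := os.Create("backup.snap")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(out, snap); err != nil {
		log.Fatal(err)
	}
	if err := out.Close(); err != nil {
		log.Fatal(err)
	}

	// Restore: feed the file back to the servers, mirroring
	// `consul snapshot restore backup.snap`.
	in, err := os.Open("backup.snap")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	if err := client.Snapshot().Restore(nil, in); err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot saved and restored")
}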


@ -0,0 +1,15 @@
package command
import (
"testing"
"github.com/mitchellh/cli"
)
func TestSnapshotCommand_implements(t *testing.T) {
var _ cli.Command = &SnapshotCommand{}
}
func TestSnapshotCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(SnapshotCommand))
}


@ -0,0 +1,89 @@
package command
import (
"bytes"
"flag"
"fmt"
"os"
"strings"
"text/tabwriter"
"github.com/hashicorp/consul/consul/snapshot"
"github.com/mitchellh/cli"
)
// SnapshotInspectCommand is a Command implementation that is used to display
// metadata about a snapshot file
type SnapshotInspectCommand struct {
Ui cli.Ui
}
func (c *SnapshotInspectCommand) Help() string {
helpText := `
Usage: consul snapshot inspect [options] FILE
Displays information about a snapshot file on disk.
To inspect the file "backup.snap":
$ consul snapshot inspect backup.snap
For a full list of options and examples, please see the Consul documentation.
`
return strings.TrimSpace(helpText)
}
func (c *SnapshotInspectCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("get", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
if err := cmdFlags.Parse(args); err != nil {
return 1
}
var file string
args = cmdFlags.Args()
switch len(args) {
case 0:
c.Ui.Error("Missing FILE argument")
return 1
case 1:
file = args[0]
default:
c.Ui.Error(fmt.Sprintf("Too many arguments (expected 1, got %d)", len(args)))
return 1
}
// Open the file.
f, err := os.Open(file)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error opening snapshot file: %s", err))
return 1
}
defer f.Close()
meta, err := snapshot.Verify(f)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error verifying snapshot: %s", err))
}
var b bytes.Buffer
tw := tabwriter.NewWriter(&b, 0, 2, 6, ' ', 0)
fmt.Fprintf(tw, "ID\t%s\n", meta.ID)
fmt.Fprintf(tw, "Size\t%d\n", meta.Size)
fmt.Fprintf(tw, "Index\t%d\n", meta.Index)
fmt.Fprintf(tw, "Term\t%d\n", meta.Term)
fmt.Fprintf(tw, "Version\t%d\n", meta.Version)
if err = tw.Flush(); err != nil {
c.Ui.Error(fmt.Sprintf("Error rendering snapshot info: %s", err))
}
c.Ui.Info(b.String())
return 0
}
func (c *SnapshotInspectCommand) Synopsis() string {
return "Displays information about a Consul snapshot file"
}


@ -0,0 +1,116 @@
package command
import (
"io"
"io/ioutil"
"os"
"path"
"strings"
"testing"
"github.com/mitchellh/cli"
)
func TestSnapshotInspectCommand_implements(t *testing.T) {
var _ cli.Command = &SnapshotInspectCommand{}
}
func TestSnapshotInspectCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(SnapshotInspectCommand))
}
func TestSnapshotInspectCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &SnapshotInspectCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"no file": {
[]string{},
"Missing FILE argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestSnapshotInspectCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
dir, err := ioutil.TempDir("", "snapshot")
if err != nil {
t.Fatalf("err: %v", err)
}
defer os.RemoveAll(dir)
file := path.Join(dir, "backup.tgz")
// Save a snapshot of the current Consul state
f, err := os.Create(file)
if err != nil {
t.Fatalf("err: %v", err)
}
snap, _, err := client.Snapshot().Save(nil)
if err != nil {
f.Close()
t.Fatalf("err: %v", err)
}
if _, err := io.Copy(f, snap); err != nil {
f.Close()
t.Fatalf("err: %v", err)
}
if err := f.Close(); err != nil {
t.Fatalf("err: %v", err)
}
// Inspect the snapshot
inspect := &SnapshotInspectCommand{Ui: ui}
args := []string{file}
code := inspect.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
for _, key := range []string{
"ID",
"Size",
"Index",
"Term",
"Version",
} {
if !strings.Contains(output, key) {
t.Fatalf("bad %#v, missing %q", output, key)
}
}
}

command/snapshot_restore.go

@ -0,0 +1,103 @@
package command
import (
"flag"
"fmt"
"os"
"strings"
"github.com/hashicorp/consul/api"
"github.com/mitchellh/cli"
)
// SnapshotRestoreCommand is a Command implementation that is used to restore
// the state of the Consul servers for disaster recovery.
type SnapshotRestoreCommand struct {
Ui cli.Ui
}
func (c *SnapshotRestoreCommand) Help() string {
helpText := `
Usage: consul snapshot restore [options] FILE
Restores an atomic, point-in-time snapshot of the state of the Consul servers
which includes key/value entries, service catalog, prepared queries, sessions,
and ACLs.
Restores involve a potentially dangerous low-level Raft operation that is not
designed to handle server failures during a restore. This command is primarily
intended to be used when recovering from a disaster, restoring into a fresh
cluster of Consul servers.
If ACLs are enabled, a management token must be supplied in order to perform
snapshot operations.
To restore a snapshot from the file "backup.snap":
$ consul snapshot restore backup.snap
For a full list of options and examples, please see the Consul documentation.
` + apiOptsText
return strings.TrimSpace(helpText)
}
func (c *SnapshotRestoreCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("get", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
datacenter := cmdFlags.String("datacenter", "", "")
token := cmdFlags.String("token", "", "")
httpAddr := HTTPAddrFlag(cmdFlags)
if err := cmdFlags.Parse(args); err != nil {
return 1
}
var file string
args = cmdFlags.Args()
switch len(args) {
case 0:
c.Ui.Error("Missing FILE argument")
return 1
case 1:
file = args[0]
default:
c.Ui.Error(fmt.Sprintf("Too many arguments (expected 1, got %d)", len(args)))
return 1
}
// Create and test the HTTP client
conf := api.DefaultConfig()
conf.Address = *httpAddr
conf.Token = *token
client, err := api.NewClient(conf)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err))
return 1
}
// Open the file.
f, err := os.Open(file)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error opening snapshot file: %s", err))
return 1
}
defer f.Close()
// Restore the snapshot.
err = client.Snapshot().Restore(&api.WriteOptions{
Datacenter: *datacenter,
}, f)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error restoring snapshot: %s", err))
return 1
}
c.Ui.Info("Restored snapshot")
return 0
}
func (c *SnapshotRestoreCommand) Synopsis() string {
return "Restores snapshot of Consul server state"
}


@ -0,0 +1,103 @@
package command
import (
"io"
"io/ioutil"
"os"
"path"
"strings"
"testing"
"github.com/mitchellh/cli"
)
func TestSnapshotRestoreCommand_implements(t *testing.T) {
var _ cli.Command = &SnapshotRestoreCommand{}
}
func TestSnapshotRestoreCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(SnapshotRestoreCommand))
}
func TestSnapshotRestoreCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &SnapshotRestoreCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"no file": {
[]string{},
"Missing FILE argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestSnapshotRestoreCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &SnapshotSaveCommand{Ui: ui}
dir, err := ioutil.TempDir("", "snapshot")
if err != nil {
t.Fatalf("err: %v", err)
}
defer os.RemoveAll(dir)
file := path.Join(dir, "backup.tgz")
args := []string{
"-http-addr=" + srv.httpAddr,
file,
}
f, err := os.Create(file)
if err != nil {
t.Fatalf("err: %v", err)
}
snap, _, err := client.Snapshot().Save(nil)
if err != nil {
f.Close()
t.Fatalf("err: %v", err)
}
if _, err := io.Copy(f, snap); err != nil {
f.Close()
t.Fatalf("err: %v", err)
}
if err := f.Close(); err != nil {
t.Fatalf("err: %v", err)
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
}

command/snapshot_save.go

@ -0,0 +1,132 @@
package command
import (
"flag"
"fmt"
"io"
"os"
"strings"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/consul/snapshot"
"github.com/mitchellh/cli"
)
// SnapshotSaveCommand is a Command implementation that is used to save the
// state of the Consul servers for disaster recovery.
type SnapshotSaveCommand struct {
Ui cli.Ui
}
func (c *SnapshotSaveCommand) Help() string {
helpText := `
Usage: consul snapshot save [options] FILE
Retrieves an atomic, point-in-time snapshot of the state of the Consul servers
which includes key/value entries, service catalog, prepared queries, sessions,
and ACLs.
If ACLs are enabled, a management token must be supplied in order to perform
snapshot operations.
To create a snapshot from the leader server and save it to "backup.snap":
$ consul snapshot save backup.snap
To create a potentially stale snapshot from any available server (useful if no
leader is available):
$ consul snapshot save -stale backup.snap
For a full list of options and examples, please see the Consul documentation.
` + apiOptsText
return strings.TrimSpace(helpText)
}
func (c *SnapshotSaveCommand) Run(args []string) int {
cmdFlags := flag.NewFlagSet("get", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
datacenter := cmdFlags.String("datacenter", "", "")
token := cmdFlags.String("token", "", "")
stale := cmdFlags.Bool("stale", false, "")
httpAddr := HTTPAddrFlag(cmdFlags)
if err := cmdFlags.Parse(args); err != nil {
return 1
}
var file string
args = cmdFlags.Args()
switch len(args) {
case 0:
c.Ui.Error("Missing FILE argument")
return 1
case 1:
file = args[0]
default:
c.Ui.Error(fmt.Sprintf("Too many arguments (expected 1, got %d)", len(args)))
return 1
}
// Create and test the HTTP client
conf := api.DefaultConfig()
conf.Address = *httpAddr
conf.Token = *token
client, err := api.NewClient(conf)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err))
return 1
}
// Take the snapshot.
snap, qm, err := client.Snapshot().Save(&api.QueryOptions{
Datacenter: *datacenter,
AllowStale: *stale,
})
if err != nil {
c.Ui.Error(fmt.Sprintf("Error saving snapshot: %s", err))
return 1
}
defer snap.Close()
// Save the file.
f, err := os.Create(file)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error creating snapshot file: %s", err))
return 1
}
if _, err := io.Copy(f, snap); err != nil {
f.Close()
c.Ui.Error(fmt.Sprintf("Error writing snapshot file: %s", err))
return 1
}
if err := f.Close(); err != nil {
c.Ui.Error(fmt.Sprintf("Error closing snapshot file after writing: %s", err))
return 1
}
// Read it back to verify.
f, err = os.Open(file)
if err != nil {
c.Ui.Error(fmt.Sprintf("Error opening snapshot file for verify: %s", err))
return 1
}
if _, err := snapshot.Verify(f); err != nil {
f.Close()
c.Ui.Error(fmt.Sprintf("Error verifying snapshot file: %s", err))
return 1
}
if err := f.Close(); err != nil {
c.Ui.Error(fmt.Sprintf("Error closing snapshot file after verify: %s", err))
return 1
}
c.Ui.Info(fmt.Sprintf("Saved and verified snapshot to index %d", qm.LastIndex))
return 0
}
func (c *SnapshotSaveCommand) Synopsis() string {
return "Saves snapshot of Consul server state"
}


@ -0,0 +1,94 @@
package command
import (
"io/ioutil"
"os"
"path"
"strings"
"testing"
"github.com/mitchellh/cli"
)
func TestSnapshotSaveCommand_implements(t *testing.T) {
var _ cli.Command = &SnapshotSaveCommand{}
}
func TestSnapshotSaveCommand_noTabs(t *testing.T) {
assertNoTabs(t, new(SnapshotSaveCommand))
}
func TestSnapshotSaveCommand_Validation(t *testing.T) {
ui := new(cli.MockUi)
c := &SnapshotSaveCommand{Ui: ui}
cases := map[string]struct {
args []string
output string
}{
"no file": {
[]string{},
"Missing FILE argument",
},
"extra args": {
[]string{"foo", "bar", "baz"},
"Too many arguments",
},
}
for name, tc := range cases {
// Ensure our buffer is always clear
if ui.ErrorWriter != nil {
ui.ErrorWriter.Reset()
}
if ui.OutputWriter != nil {
ui.OutputWriter.Reset()
}
code := c.Run(tc.args)
if code == 0 {
t.Errorf("%s: expected non-zero exit", name)
}
output := ui.ErrorWriter.String()
if !strings.Contains(output, tc.output) {
t.Errorf("%s: expected %q to contain %q", name, output, tc.output)
}
}
}
func TestSnapshotSaveCommand_Run(t *testing.T) {
srv, client := testAgentWithAPIClient(t)
defer srv.Shutdown()
waitForLeader(t, srv.httpAddr)
ui := new(cli.MockUi)
c := &SnapshotSaveCommand{Ui: ui}
dir, err := ioutil.TempDir("", "snapshot")
if err != nil {
t.Fatalf("err: %v", err)
}
defer os.RemoveAll(dir)
file := path.Join(dir, "backup.tgz")
args := []string{
"-http-addr=" + srv.httpAddr,
file,
}
code := c.Run(args)
if code != 0 {
t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
}
f, err := os.Open(file)
if err != nil {
t.Fatalf("err: %v", err)
}
defer f.Close()
if err := client.Snapshot().Restore(nil, f); err != nil {
t.Fatalf("err: %v", err)
}
}


@ -2,16 +2,20 @@ package command
import (
"fmt"
"github.com/hashicorp/consul/command/agent"
"github.com/hashicorp/consul/consul"
"io"
"io/ioutil"
"math/rand"
"net"
"os"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/command/agent"
"github.com/hashicorp/consul/consul"
"github.com/mitchellh/cli"
)
var offset uint64
@ -42,6 +46,15 @@ func testAgent(t *testing.T) *agentWrapper {
return testAgentWithConfig(t, func(c *agent.Config) {})
}
func testAgentWithAPIClient(t *testing.T) (*agentWrapper, *api.Client) {
agent := testAgentWithConfig(t, func(c *agent.Config) {})
client, err := api.NewClient(&api.Config{Address: agent.httpAddr})
if err != nil {
t.Fatalf("consul client: %#v", err)
}
return agent, client
}
func testAgentWithConfig(t *testing.T, cb func(c *agent.Config)) *agentWrapper {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
@ -126,3 +139,9 @@ func nextConfig() *agent.Config {
return conf
}
func assertNoTabs(t *testing.T, c cli.Command) {
if strings.ContainsRune(c.Help(), '\t') {
t.Errorf("%#v help output contains tabs", c)
}
}


@ -1,18 +1,16 @@
package command
import (
"bytes"
"fmt"
"github.com/hashicorp/consul/command/agent"
"github.com/hashicorp/consul/consul"
"github.com/mitchellh/cli"
)
// VersionCommand is a Command implementation prints the version.
type VersionCommand struct {
Revision string
Version string
VersionPrerelease string
Ui cli.Ui
HumanVersion string
Ui cli.Ui
}
func (c *VersionCommand) Help() string {
@ -20,19 +18,17 @@ func (c *VersionCommand) Help() string {
}
func (c *VersionCommand) Run(_ []string) int {
var versionString bytes.Buffer
fmt.Fprintf(&versionString, "Consul %s", c.Version)
if c.VersionPrerelease != "" {
fmt.Fprintf(&versionString, ".%s", c.VersionPrerelease)
c.Ui.Output(fmt.Sprintf("Consul %s", c.HumanVersion))
if c.Revision != "" {
fmt.Fprintf(&versionString, " (%s)", c.Revision)
}
config := agent.DefaultConfig()
var supplement string
if config.Protocol < consul.ProtocolVersionMax {
supplement = fmt.Sprintf(" (agent will automatically use protocol >%d when speaking to compatible agents)",
config.Protocol)
}
c.Ui.Output(fmt.Sprintf("Protocol %d spoken by default, understands %d to %d%s",
config.Protocol, consul.ProtocolVersionMin, consul.ProtocolVersionMax, supplement))
c.Ui.Output(versionString.String())
c.Ui.Output(fmt.Sprintf("Consul Protocol: %d (Understands back to: %d)",
consul.ProtocolVersionMax, consul.ProtocolVersionMin))
return 0
}


@ -37,6 +37,8 @@ Options:
-http-addr=127.0.0.1:8500 HTTP address of the Consul agent.
-datacenter="" Datacenter to query. Defaults to that of agent.
-token="" ACL token to use. Defaults to that of agent.
-stale=[true|false] Specifies if watch data is permitted to be stale.
Defaults to false.
Watch Specification:
@ -57,7 +59,7 @@ Watch Specification:
}
func (c *WatchCommand) Run(args []string) int {
var watchType, datacenter, token, key, prefix, service, tag, passingOnly, state, name string
var watchType, datacenter, token, key, prefix, service, tag, passingOnly, stale, state, name string
cmdFlags := flag.NewFlagSet("watch", flag.ContinueOnError)
cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
cmdFlags.StringVar(&watchType, "type", "", "")
@ -68,6 +70,7 @@ func (c *WatchCommand) Run(args []string) int {
cmdFlags.StringVar(&service, "service", "", "")
cmdFlags.StringVar(&tag, "tag", "", "")
cmdFlags.StringVar(&passingOnly, "passingonly", "", "")
cmdFlags.StringVar(&stale, "stale", "", "")
cmdFlags.StringVar(&state, "state", "", "")
cmdFlags.StringVar(&name, "name", "", "")
httpAddr := HTTPAddrFlag(cmdFlags)
@ -109,6 +112,14 @@ func (c *WatchCommand) Run(args []string) int {
if tag != "" {
params["tag"] = tag
}
if stale != "" {
b, err := strconv.ParseBool(stale)
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to parse stale flag: %s", err))
return 1
}
params["stale"] = b
}
if state != "" {
params["state"] = state
}

Some files were not shown because too many files have changed in this diff.