website: More getting started info

pull/36/head
Armon Dadgar 2014-04-10 19:06:10 -07:00
parent d4470839fd
commit 79c07e8113
3 changed files with 172 additions and 9 deletions

@@ -6,3 +6,82 @@ sidebar_current: "gettingstarted-checks"
# Registering Health Checks
We've already seen how simple it is to register a service. In this section, we will
continue by adding both a service-level health check and a host-level
health check.
## Defining Checks
As with a service, a check can be registered either by providing a
[check definition](/docs/agent/checks.html) or by making the appropriate calls to the
[HTTP API](/docs/agent/http.html). We will use a simple check definition to get started.
On the second node, we start by adding some additional configuration:
```
$ echo '{"check": {"name": "ping", "script": "ping -c1 google.com >/dev/null", "interval": "30s"}}' | sudo tee /etc/consul/ping.json
$ echo '{"service": {"name": "web", "tags": ["rails"], "port": 80,
"check": {"script": "curl localhost:80 >/dev/null 2>&1", "interval": "10s"}}}' | sudo tee /etc/consul/web.json
```
The first command adds a "ping" check. This check runs on a 30-second interval, invoking
the `ping -c1 google.com` command. The second command modifies our previous definition of
the `web` service to include a check. This check uses `curl` every 10 seconds to verify that
our web server is running.
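For comparison, a check can also be registered on a running agent through the HTTP API
instead of a definition file. A rough sketch of the equivalent call for the "ping" check
(field names follow the check registration endpoint; older versions take `Script`,
newer ones use `Args`):
```
$ curl -X PUT -d '{"Name": "ping", "Script": "ping -c1 google.com >/dev/null", "Interval": "30s"}' \
    http://localhost:8500/v1/agent/check/register
```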
We now restart the second agent, with the same parameters as before. We should now see the following
log lines:
```
==> Starting Consul agent...
...
[INFO] agent: Synced service 'web'
[INFO] agent: Synced check 'service:web'
[INFO] agent: Synced check 'ping'
[WARN] Check 'service:web' is now critical
```
The first few log lines indicate that the agent has synced the new checks and service updates
with the Consul servers. The last line indicates that the check we added for the `web` service
is critical. This is because we are not actually running a web server and the curl test
we've added is failing!
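If you want to watch the check recover, one option is to serve something on port 80 before
the next interval fires. For example, with Python 2 available on the node (not part of this
guide's setup, just a quick sketch):
```
$ sudo python -m SimpleHTTPServer 80
```
Within the next 10-second interval the `curl` test should succeed, and the agent will sync
the check back to passing.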
## Checking Health Status
Now that we've added some simple checks, we can use the HTTP API to inspect them. First,
we can query for any failing checks:
```
$ curl http://localhost:8500/v1/health/state/critical
[{"Node":"agent-two","CheckID":"service:web","Name":"Service 'web' check","Status":"critical","Notes":"","ServiceID":"web","ServiceName":"web"}]
```
We can see that there is only a single check in the `critical` state, which is our
`web` service check. If we try to perform a DNS lookup for the service, we will see that
we don't get any results:
```
$ dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.8.1-P1 <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35753
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
```
The DNS interface uses the health information and avoids routing to nodes that
are failing their health checks. This is all managed for us automatically.
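The same health-aware data is available over HTTP. The health endpoint returns each
instance of a service together with its list of checks, so API consumers can make the
same routing decision themselves. A sketch:
```
$ curl http://localhost:8500/v1/health/service/web
```
Each entry in the response pairs the node and service with a `Checks` array whose
`Status` fields match what we saw above.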
This section has shown how easily checks can be added. Check definitions
can be updated by changing configuration files and sending a `SIGHUP` to the agent.
Alternatively, the HTTP API can be used to add, remove, and modify checks dynamically.
The API also allows for a "dead man's switch", or [TTL based check](/docs/agent/checks.html).
TTL checks can be used to integrate an application more tightly with Consul, enabling
business logic to be evaluated as part of passing a check.
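As a sketch of that pattern (the check name here is made up, and newer Consul versions
require a `PUT` on the pass endpoint): define a check with a `ttl` instead of a script,
then have the application report in before the TTL expires:
```
$ echo '{"check": {"name": "web-heartbeat", "ttl": "30s"}}' | sudo tee /etc/consul/heartbeat.json
$ curl http://localhost:8500/v1/agent/check/pass/web-heartbeat
```
If the application fails to hit the pass endpoint within 30 seconds (after the agent has
reloaded the new definition), the check is automatically marked critical.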

@@ -30,7 +30,7 @@ and it *must* be accessible by all other nodes in the cluster. The first node
will act as our server in this cluster.
```
-$ consul agent -node=agent-one -bind=172.20.20.10
+$ consul agent -server -bootstrap -data-dir /tmp/consul -node=agent-one -serf-bind=172.20.20.10 -server-addr=172.20.20.10:8300 -advertise=172.20.20.10
...
```
@@ -40,7 +40,7 @@ as specified in the Vagrantfile. In production, you will generally want
to provide a bind address or interface as well.
```
-$ consul agent -node=agent-two -bind=172.20.20.11
+$ consul agent -data-dir /tmp/consul -node=agent-two -serf-bind=172.20.20.11 -server-addr=172.20.20.11:8300 -advertise=172.20.20.11
...
```
@@ -55,7 +55,7 @@ Now, let's tell the first agent to join the second agent by running
the following command in a new terminal:
```
-$ consul join 127.0.0.1:7947
+$ consul join 172.20.20.11
Successfully joined cluster by contacting 1 nodes.
```
@@ -66,12 +66,8 @@ know about each other:
```
$ consul members
-agent-one 127.0.0.1:7946 alive
-agent-two 127.0.0.1:7947 alive
-$ consul members -rpc-addr=127.0.0.1:7374
-agent-two 127.0.0.1:7947 alive
-agent-one 127.0.0.1:4946 alive
+agent-one 172.20.20.10:8301 alive role=consul,dc=dc1,vsn=1,vsn_min=1,vsn_max=1,port=8300,bootstrap=1
+agent-two 172.20.20.11:8301 alive role=node,dc=dc1,vsn=1,vsn_min=1,vsn_max=1
```
<div class="alert alert-block alert-info">

@@ -6,3 +6,91 @@ sidebar_current: "gettingstarted-services"
# Registering Services
In the previous page, we created a simple cluster. Although the cluster members
could see each other, there were no registered services. In this page, we'll
modify our client to export a service.
## Defining a Service
A service can be registered either by providing a [service definition](/docs/agent/services.html)
or by making the appropriate calls to the [HTTP API](/docs/agent/http.html). We will
start by providing a simple service definition, using the same setup as on the
[last page](/intro/getting-started/join.html). On the second node, we begin by creating a
simple configuration:
```
$ sudo mkdir /etc/consul
$ echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul/web.json
```
We now restart the second agent, providing the configuration directory and
re-joining the first node as before:
```
$ consul agent -data-dir /tmp/consul -node=agent-two -serf-bind=172.20.20.11 -server-addr=172.20.20.11:8300 -advertise=172.20.20.11 -config-dir /etc/consul/
==> Starting Consul agent...
...
[INFO] agent: Synced service 'web'
...
```
## Querying Services
Once the agent gets started, we should see a log output indicating that the `web` service
has been synced with the Consul servers. We can first check using the HTTP API:
```
$ curl http://localhost:8500/v1/catalog/service/web
[{"Node":"agent-two","Address":"172.20.20.11","ServiceID":"web","ServiceName":"web","ServiceTags":["rails"],"ServicePort":80}]
```
We can also do a simple DNS lookup for any nodes providing the `web` service:
```
$ dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.8.1-P1 <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1204
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 172.20.20.11
```
We can also filter on tags, here only requesting services matching the `rails` tag,
and specifically requesting SRV records:
```
$ dig @127.0.0.1 -p 8600 rails.web.service.consul SRV
; <<>> DiG 9.8.1-P1 <<>> @127.0.0.1 -p 8600 rails.web.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45798
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;rails.web.service.consul. IN SRV
;; ANSWER SECTION:
rails.web.service.consul. 0 IN SRV 1 1 80 agent-two.node.dc1.consul.
;; ADDITIONAL SECTION:
agent-two.node.dc1.consul. 0 IN A 172.20.20.11
```
This shows how simple it is to get started with services. Service definitions
can be updated by changing configuration files and sending a `SIGHUP` to the agent.
Alternatively, the HTTP API can be used to add, remove, and modify services dynamically.
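For example, after editing a definition file, the reload can be triggered with something
like the following (a sketch; how you locate the agent's PID will vary by setup):
```
$ sudo kill -HUP $(pgrep consul)
```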