---
layout: "intro"
page_title: "Run the Agent"
sidebar_current: "gettingstarted-agent"
description: |-
    After Consul is installed, the agent must be run. The agent can run either in server or client mode. Each datacenter must have at least one server, although 3 or 5 is recommended. A single server deployment is highly discouraged as data loss is inevitable in a failure scenario.
---

# Run the Consul Agent
After Consul is installed, the agent must be run. The agent can run either
in server or client mode. Each datacenter must have at least one server,
although 3 or 5 is recommended. A single server deployment is _**highly**_ discouraged
as data loss is inevitable in a failure scenario. [This guide](/docs/guides/bootstrapping.html)
covers bootstrapping a new datacenter. All other agents run in client mode, which
is a very lightweight process that registers services, runs health checks,
and forwards queries to servers. The agent must be run on every node that
will be part of the cluster.

## Starting the Agent

For simplicity, we'll run a single Consul agent in server mode right now:

```text
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul
==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'Armons-MacBook-Air'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 127.0.0.1 (HTTP: 8500, DNS: 8600, RPC: 8400)
      Cluster Addr: 10.1.10.38 (LAN: 8301, WAN: 8302)

==> Log data will now stream in as it occurs:

    [INFO] serf: EventMemberJoin: Armons-MacBook-Air.local 10.1.10.38
    [INFO] raft: Node at 10.1.10.38:8300 [Follower] entering Follower state
    [INFO] consul: adding server for datacenter: dc1, addr: 10.1.10.38:8300
    [ERR] agent: failed to sync remote state: rpc error: No cluster leader
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 10.1.10.38:8300 [Candidate] entering Candidate state
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 10.1.10.38:8300 [Leader] entering Leader state
    [INFO] consul: cluster leadership acquired
    [INFO] consul: New leader elected: Armons-MacBook-Air
    [INFO] consul: member 'Armons-MacBook-Air' joined, marking health alive
```
As you can see, the Consul agent has started and has output some log
data. From the log data, you can see that our agent is running in server mode,
and has claimed leadership of the cluster. Additionally, the local member has
been marked as a healthy member of the cluster.

~> **Note for OS X Users:** Consul uses your hostname as the
default node name. If your hostname contains periods, DNS queries to
that node will not work with Consul. To avoid this, explicitly set
the name of your node with the `-node` flag.
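For example (a sketch only; the name `agent-one` is arbitrary), the agent from the previous step could be started with an explicit node name:

```text
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=agent-one
```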
## Cluster Members

If you run `consul members` in another terminal, you can see the members of
the Consul cluster. You should only see one member (yourself). We'll cover
joining clusters in the next section.

```text
$ consul members
Node                Address          Status  Type    Build  Protocol
Armons-MacBook-Air  10.1.10.38:8301  alive   server  0.3.0  2
```
The output shows our own node, the address it is running on, its
health state, its role in the cluster, as well as some versioning information.
Additional metadata can be viewed by providing the `-detailed` flag.
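For instance (output omitted here, since it varies by agent configuration):

```text
$ consul members -detailed
```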
The output from the `members` command is generated based on the
[gossip protocol](/docs/internals/gossip.html) and is eventually consistent.
For a strongly consistent view of the world, use the
[HTTP API](/docs/agent/http.html), which forwards the request to the
Consul servers:

```text
$ curl localhost:8500/v1/catalog/nodes
[{"Node":"Armons-MacBook-Air","Address":"10.1.10.38"}]
```
In addition to the HTTP API, the
[DNS interface](/docs/agent/dns.html) can be used to query the node. Note
that you must point your DNS lookups at the Consul agent's DNS server,
which runs on port 8600 by default. The format of the DNS entries
(such as "Armons-MacBook-Air.node.consul") will be covered later.

```text
$ dig @127.0.0.1 -p 8600 Armons-MacBook-Air.node.consul
...
;; QUESTION SECTION:
;Armons-MacBook-Air.node.consul. IN A

;; ANSWER SECTION:
Armons-MacBook-Air.node.consul. 0 IN A 10.1.10.38
```
## Stopping the Agent

You can use `Ctrl-C` (the interrupt signal) to gracefully halt the agent.
After interrupting the agent, you should see it leave the cluster gracefully
and shut down.
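Alternatively (a brief aside; this assumes the agent started above is still running on this machine), you can ask a running agent to gracefully leave and shut down from another terminal:

```text
$ consul leave
```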
By gracefully leaving, Consul notifies other cluster members that the
node _left_. If you had forcibly killed the agent process, other members
of the cluster would have detected that the node _failed_. When a member leaves,
its services and checks are removed from the catalog. When a member fails,
its health is simply marked as critical, but it is not removed from the catalog.
Consul will automatically try to reconnect to _failed_ nodes, which allows it
to recover from certain network conditions, while _left_ nodes are no longer contacted.
Additionally, if an agent is operating as a server, a graceful leave is important
to avoid causing a potential availability outage affecting the [consensus protocol](/docs/internals/consensus.html).
See the [guides section](/docs/guides/index.html) for how to safely add and remove servers.