and a limit on the number of slot holders.
For the prefix we will be using for coordination, a good pattern is simply:
```text
service/<service name>
```
We'll abbreviate this pattern as simply `<prefix>` for the rest of this guide.
The first step is for each contender to create a session. This is done using the
[Session HTTP API](/api/session.html#session_create):
```text
curl -X PUT -d '{"Name": "db-semaphore"}' \
http://localhost:8500/v1/session/create
```
This will return a JSON object containing the session ID:

```text
{
  "ID": "<session>"
}
```
-> **Note:** Sessions by default only make use of the gossip failure detector. That is, the session is considered held by a node as long as the default Serf health check has not declared the node unhealthy. Additional checks can be specified at session creation if desired.
Next, we create a lock contender entry. Each contender creates a kv entry that is tied
to a session. This is done so that if a contender is holding a slot and fails, its session
is detached from the key, which can then be detected by the other contenders.
Create the contender key by doing an `acquire` on `<prefix>/<session>` via `PUT`.
This is something like:
curl -X PUT -d <body> http://localhost:8500/v1/kv/<prefix>/<session>?acquire=<session>
```
The `<session>` value is the ID returned by the call to
[`/v1/session/create`](/api/session.html#session_create).
`body` can be used to associate a meaningful value with the contender. This is opaque
to Consul but can be useful for human operators.
The call will either return `true` or `false`. If `true`, the contender entry has been
created. If `false`, the contender entry was not created; this likely indicates
a session invalidation.
The next step is to creat e a single key to coordinate which holders are currently
reserving a slot. A good choice for this lock key is simply `<prefix>/.lock`. We will
refer to this special coordinating key as `<lock>` .
This is done with:
```text
curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=0
```
Since the lock is being created, a `cas` index of 0 is used so that the key is only put if it does not exist.
`body` should contain both the intended slot limit for the semaphore and the session IDs
of the current holders (initially only the creator's). A simple JSON body like the following works:
```text
{
"Limit": 2,
"Holders": [
"<session>"
]
}
```
The current state of the semaphore is read by doing a `GET` on the entire `<prefix>`:
```text
curl http://localhost:8500/v1/kv/<prefix>?recurse
```
Within the list of the entries, we should find two keys: the `<lock>` and the
contender key `<prefix>/<session>`.
```text
[
{
"LockIndex": 0,
"Key": "<lock>",
"Flags": 0,
"Value": "eyJMaW1pdCI6IDIsIkhvbGRlcnMiOlsiPHNlc3Npb24+Il19",
"Session": "",
"CreateIndex": 898,
"ModifyIndex": 901
},
{
"LockIndex": 1,
"Key": "<prefix>/<session>",
"Flags": 0,
"Value": null,
"Session": "<session>",
"CreateIndex": 897,
"ModifyIndex": 897
}
]
```
Note that the `Value` we embedded into `<lock>` is Base64 encoded when returned by the API.
When the `<lock>` is read and its `Value` is decoded, we can verify the `Limit` agrees with the `Holders` count.
This is used to detect a potential conflict. The next step is to determine which of the current
slot holders are still alive. As part of the results of the `GET` , we also have all the contender
entries. By scanning those entries, we create a set of all the `Session` values. Any of the
`Holders` that are not in that set are pruned. In effect, we are creating a set of live contenders
based on the list results and doing a set difference with the `Holders` to detect and prune
any potentially failed holders. In this example `<session>` is present in `Holders` and
is attached to the key `<prefix>/<session>`, so no pruning is required.
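The decode-and-prune step above can be sketched in Python. This is a minimal illustration working against the example `GET` response shown earlier; the entry structures mirror that listing, and the helper logic is not part of any Consul client library:

```python
import base64
import json

# Example KV listing, as returned by GET /v1/kv/<prefix>?recurse:
# the <lock> entry plus one contender entry, mirroring the listing above.
entries = [
    {"Key": "<lock>", "Session": "",
     "Value": "eyJMaW1pdCI6IDIsIkhvbGRlcnMiOlsiPHNlc3Npb24+Il19"},
    {"Key": "<prefix>/<session>", "Session": "<session>", "Value": None},
]

# The lock's Value is Base64 encoded; decode it to get Limit and Holders.
lock_entry = next(e for e in entries if e["Key"] == "<lock>")
lock = json.loads(base64.b64decode(lock_entry["Value"]))

# Live sessions are those still attached to a contender key.
live = {e["Session"] for e in entries if e["Key"] != "<lock>" and e["Session"]}

# Prune any holder whose session no longer appears on a contender key.
lock["Holders"] = [h for h in lock["Holders"] if h in live]

print(lock)  # {'Limit': 2, 'Holders': ['<session>']}
```

Here `<session>` is both in `Holders` and attached to its contender key, so the set difference is empty and nothing is pruned.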
If the number of holders after pruning is less than the limit, a contender attempts acquisition
by adding its own session to the `Holders` list and doing a Check-And-Set update of the `<lock>`.
This performs an optimistic update.
This is done with:
```text
curl -X PUT -d <Updated Lock Body> http://localhost:8500/v1/kv/<lock>?cas=<lock-modify-index>
```
`lock-modify-index` is the latest `ModifyIndex` value known for `<lock>`, 901 in this example.
If this succeeds with `true` , the contender now holds a slot in the semaphore. If this fails
with `false` , then likely there was a race with another contender to acquire the slot.
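The acquisition decision itself is a pure function of the pruned lock state. A hedged sketch, with the function name and shape purely illustrative (not part of Consul's API):

```python
def try_acquire(lock, session):
    """Return the updated lock body to CAS back, or None if the semaphore is full.

    `lock` is the decoded <lock> value after pruning dead holders;
    `session` is this contender's session ID. Illustrative helper only.
    """
    if session in lock["Holders"]:
        return lock  # already holding a slot; nothing to update
    if len(lock["Holders"]) >= lock["Limit"]:
        return None  # semaphore is full; wait for a change and retry
    return {"Limit": lock["Limit"], "Holders": lock["Holders"] + [session]}
```

The returned body is what gets `PUT` with `?cas=<lock-modify-index>`; a `false` response means another contender raced ahead, and the state must be re-read before retrying.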
To re-attempt the acquisition, we watch for changes on `<prefix>` . This is because a slot
may be released, a node may fail, etc. Watching for changes is done via a blocking query
against `/kv/<prefix>?recurse` .
Slot holders **must** continuously watch for changes to `<prefix>` since their slot can be
released by an operator or automatically released due to a false positive in the failure detector.
On changes to `<prefix>` the lock’s `Holders` list must be re-checked to ensure the slot
is still held. Additionally, if the watch fails to connect, the slot should be considered lost.
This semaphore system is purely *advisory*. Therefore it is up to the client to verify
that a slot is held before (and during) execution of some critical operation.
Lastly, if a slot holder ever wishes to release its slot voluntarily, it should do so with a
Check-And-Set operation against `<lock>` that removes its session from the `Holders` list.
Once that is done, both its contender key `<prefix>/<session>` and session should be deleted.
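The release update can be sketched the same way (again an illustrative helper, not Consul API):

```python
def release(lock, session):
    """Build the lock body that drops `session` from the Holders list.

    The result is written back with a Check-And-Set PUT against <lock>;
    afterwards the contender key and the session itself are deleted.
    """
    return {
        "Limit": lock["Limit"],
        "Holders": [h for h in lock["Holders"] if h != session],
    }
```

Once the CAS succeeds, the contender key is removed with a `DELETE` on `/v1/kv/<prefix>/<session>` and the session is destroyed via `PUT` to `/v1/session/destroy/<session>`.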