diff --git a/website/source/docs/guides/semaphore.html.md b/website/source/docs/guides/semaphore.html.md
index 7fe1cd7de8..4ff177e755 100644
--- a/website/source/docs/guides/semaphore.html.md
+++ b/website/source/docs/guides/semaphore.html.md
@@ -33,16 +33,16 @@ and a limit on the number of slot holders.

For the prefix we will be using for coordination, a good pattern is simply:

```text
-service/<service name>/lock/
+service/<service name>
```

We'll abbreviate this pattern as simply `<prefix>` for the rest of this guide.

-The first step is to create a session. This is done using the
+The first step is for each contender to create a session. This is done using the
[Session HTTP API](/api/session.html#session_create):

```text
-curl -X PUT -d '{"Name": "dbservice"}' \
+curl -X PUT -d '{"Name": "db-semaphore"}' \
    http://localhost:8500/v1/session/create
```

@@ -54,9 +54,11 @@ This will return a JSON object containing the session ID:
}
```

-Next, we create a contender entry. Each contender creates an entry that is tied
-to a session. This is done so that if a contender is holding a slot and fails,
-it can be detected by the other contenders.
+-> **Note:** Sessions by default only make use of the gossip failure detector. That is, the session is considered held by a node as long as the default Serf health check has not declared the node unhealthy. Additional checks can be specified at session creation if desired.
+
+Next, we create a lock contender entry. Each contender creates a kv entry that is tied
+to a session. This is done so that if a contender is holding a slot and fails, its session
+is detached from the key, which can then be detected by the other contenders.

Create the contender key by doing an `acquire` on `<prefix>/<session>` via `PUT`.
This is something like:

@@ -65,75 +67,108 @@ This is something like:

```text
curl -X PUT -d <body> http://localhost:8500/v1/kv/<prefix>/<session>?acquire=<session>
```

+`body` can be used to associate a meaningful value with the contender, such as its node's name. 
+This body is opaque to Consul but can be useful for human operators.
+
The `<session>` value is the ID returned by the call to
[`/v1/session/create`](/api/session.html#session_create).

-`body` can be used to associate a meaningful value with the contender. This is opaque
-to Consul but can be useful for human operators.
-
The call will either return `true` or `false`. If `true`, the contender entry has been
created. If `false`, the contender node was not created; it's likely that this indicates
a session invalidation.

-The next step is to use a single key to coordinate which holders are currently
+The next step is to create a single key to coordinate which holders are currently
reserving a slot. A good choice for this lock key is simply `<prefix>/.lock`. We will
refer to this special coordinating key as `<lock>`.

-The current state of the semaphore is read by doing a `GET` on the entire `<prefix>`:
+This is done with:

```text
-curl http://localhost:8500/v1/kv/<prefix>?recurse
+curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=0
```

-Within the list of the entries, we should find the `<lock>`. That entry should hold
-both the slot limit and the current holders. A simple JSON body like the following works:
+Since the lock is being created, a `cas` index of 0 is used so that the key is only put if it does not exist.
+
+`body` should contain both the intended slot limit for the semaphore and the session IDs
+of the current holders (initially only of the creator). A simple JSON body like the following works:

```text
{
-  "Limit": 3,
-  "Holders": {
-    "4ca8e74b-6350-7587-addf-a18084928f3c": true,
-    "adf4238a-882b-9ddc-4a9d-5b6758e4159e": true
-  }
+  "Limit": 2,
+  "Holders": [
+    "<session>"
+  ]
}
```

-When the `<lock>` is read, we can verify the remote `Limit` agrees with the local value. This
-is used to detect a potential conflict. The next step is to determine which of the current
-slot holders are still alive. 
As part of the results of the `GET`, we have all the contender
+The current state of the semaphore is read by doing a `GET` on the entire `<prefix>`:
+
+```text
+curl http://localhost:8500/v1/kv/<prefix>?recurse
+```
+
+Within the list of the entries, we should find two keys: the `<lock>` and the
+contender key `<prefix>/<session>`.
+
+```text
+[
+  {
+    "LockIndex": 0,
+    "Key": "<lock>",
+    "Flags": 0,
+    "Value": "eyJMaW1pdCI6IDIsIkhvbGRlcnMiOlsiPHNlc3Npb24+Il19",
+    "Session": "",
+    "CreateIndex": 898,
+    "ModifyIndex": 901
+  },
+  {
+    "LockIndex": 1,
+    "Key": "<prefix>/<session>",
+    "Flags": 0,
+    "Value": null,
+    "Session": "<session>",
+    "CreateIndex": 897,
+    "ModifyIndex": 897
+  }
+]
+```
+Note that the `Value` we embedded into `<lock>` is Base64 encoded when returned by the API.
+
+When the `<lock>` is read and its `Value` is decoded, we can verify the `Limit` agrees with the `Holders` count.
+This is used to detect a potential conflict. The next step is to determine which of the current
+slot holders are still alive. As part of the results of the `GET`, we also have all the contender
entries. By scanning those entries, we create a set of all the `Session` values. Any of the
`Holders` that are not in that set are pruned. In effect, we are creating a set of live contenders
based on the list results and doing a set difference with the `Holders` to detect and prune
-any potentially failed holders.
+any potentially failed holders. In this example `<session>` is present in `Holders` and
+is attached to the key `<prefix>/<session>`, so no pruning is required.

-If the number of holders (after pruning) is less than the limit, a contender attempts acquisition
-by adding its own session to the `Holders` and doing a Check-And-Set update of the `<lock>`. This
-performs an optimistic update.
+If the number of holders after pruning is less than the limit, a contender attempts acquisition
+by adding its own session to the `Holders` list and doing a Check-And-Set update of the `<lock>`.
+This performs an optimistic update. 
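The pruning and acquisition check described above can be sketched in Python. This is an illustrative aside rather than part of the guide; the helper names are hypothetical, and it assumes the lock's `Value` decodes to the `{"Limit": ..., "Holders": [...]}` body shown earlier:

```python
import base64
import json

def live_holders(lock_entry, contender_sessions):
    """Prune holders whose sessions no longer appear on any contender key.

    lock_entry is the lock's KV entry from the recursive GET; its "Value"
    field is the Base64-encoded JSON semaphore body.
    """
    lock = json.loads(base64.b64decode(lock_entry["Value"]))
    live = set(contender_sessions)
    # Set difference: keep only holders that still have a live contender session.
    holders = [s for s in lock["Holders"] if s in live]
    return lock["Limit"], holders

def try_build_acquire_body(limit, holders, my_session):
    """Return the new lock body to submit with the Check-And-Set, or None if no slot is free."""
    if my_session in holders:
        return None  # we already hold a slot
    if len(holders) >= limit:
        return None  # semaphore is full; watch the prefix and retry later
    return json.dumps({"Limit": limit, "Holders": holders + [my_session]})
```

If `try_build_acquire_body` returns a body, it is submitted with the `cas` update below; a `false` result from Consul means another contender won the race, and the pruning and check are simply repeated.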
-This is done by:
+This is done with:

```text
-curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=<index>
+curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=<lock-modify-index>
```

+`lock-modify-index` is the latest `ModifyIndex` value known for `<lock>`, 901 in this example.
+
+If this request succeeds with `true`, the contender now holds a slot in the semaphore.
+If this fails with `false`, then likely there was a race with another contender to acquire the slot.

-If this succeeds with `true`, the contender now holds a slot in the semaphore. If this fails
-with `false`, then likely there was a race with another contender to acquire the slot.
-Both code paths now go into an idle waiting state. In this state, we watch for changes
-on `<lock>`. This is because a slot may be released, a node may fail, etc.
-Slot holders must also watch for changes since the slot may be released by an operator
-or automatically released due to a false positive in the failure detector.
+To re-attempt the acquisition, we watch for changes on `<lock>`. This is because a slot
+may be released, a node may fail, etc. Watching for changes is done via a blocking query
+against `/kv/<prefix>?recurse`.

-Note that the session by default makes use of only the gossip failure detector. That
-is, the session is considered held by a node as long as the default Serf health check
-has not declared the node unhealthy. Additional checks can be specified if desired.
+Slot holders **must** continuously watch for changes to `<lock>` since their slot can be
+released by an operator or automatically released due to a false positive in the failure detector.
+On changes to `<lock>`, the lock's `Holders` list must be re-checked to ensure the slot
+is still held. Additionally, if the watch fails to connect, the slot should be considered lost.

-Watching for changes is done via a blocking query against `<lock>`. If a contender
-holds a slot, then on any change the `<lock>` should be re-checked to ensure the slot is
-still held. 
If no slot is held, then the same acquisition logic is triggered to check
-and potentially re-attempt acquisition. This allows a contender to steal the slot from
-a failed contender or one that has voluntarily released its slot.
+This semaphore system is purely *advisory*. Therefore it is up to the client to verify
+that a slot is held before (and during) execution of some critical operation.

-If a slot holder ever wishes to release voluntarily, this should be done by doing a
+Lastly, if a slot holder ever wishes to release its slot voluntarily, it can do so with a
Check-And-Set operation against `<lock>` to remove its session from the `Holders` object.
-Once that is done, the contender entry at `<prefix>/<session>` should be deleted. Finally,
-the session should be destroyed.
+Once that is done, both its contender key `<prefix>/<session>` and session should be deleted.
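The release bookkeeping can be sketched the same way. This is an illustrative Python aside with a hypothetical helper name; the actual release is the Check-And-Set update described above, followed by a `DELETE` on the contender's KV key and a `PUT` to `/v1/session/destroy/<session>`:

```python
import json

def build_release_body(lock_value, my_session):
    """Return the lock body with our session removed from Holders.

    The caller submits this body via Check-And-Set on the lock key, then
    deletes its contender key and destroys its session.
    """
    lock = json.loads(lock_value)
    # Drop only our own session; other holders keep their slots.
    lock["Holders"] = [s for s in lock["Holders"] if s != my_session]
    return json.dumps(lock)
```

As with acquisition, a `false` result from the Check-And-Set means the lock changed underneath us, and the read-modify-write is simply retried with the latest `ModifyIndex`.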