Also, fix problems in shutdown.
Starting to serve and shutting down still have to be cleaned up properly.
It's a mess.
Change-Id: I51061db12064e434066446e6fceac32741c4f84c
- Always spell out the time unit (e.g. milliseconds instead of ms).
- Remove "_total" from the names of metrics that are not counters.
- Make use of the "Namespace" and "Subsystem" fields in the options
  (see the sketch after this list).
- Remove the "capacity" facet from all metrics about channels/queues.
These are all fixed via command-line flags and will never change
during the runtime of a process. Also, they should not be part of
the same metric family. I have added separate metrics for the
capacity of queues as a convenience. (They will never change and
are only set once.)
- I left "metric_disk_latency_microseconds" unchanged, although that
metric measures the latency of the storage device, even if it is not
a spinning disk. "SSD" is read by many as "solid state disk", so
it's not too far off. (It should be "solid state drive", of course,
but "metric_drive_latency_microseconds" is probably confusing.)
- Brian suggested not mixing "failure" and "success" outcomes in the
same metric family (distinguished by labels). For now, I have left
it as it is. We are touching on a bigger issue here, especially as
other parts of the Prometheus ecosystem follow the same
principle. We still need to come to terms here and then change
things consistently everywhere.
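For illustration, here is a minimal client_golang sketch of the points
above. The metric name and capacity value are hypothetical; the point is
that "Namespace" and "Subsystem" are joined to "Name" with underscores,
and that a queue-capacity gauge is set exactly once at startup:

    package main

    import "github.com/prometheus/client_golang/prometheus"

    // Exported as "prometheus_notifications_queue_capacity" because the
    // Namespace, Subsystem, and Name fields are joined by underscores.
    var queueCapacity = prometheus.NewGauge(prometheus.GaugeOpts{
        Namespace: "prometheus",
        Subsystem: "notifications",
        Name:      "queue_capacity",
        Help:      "The capacity of the alert notifications queue.",
    })

    func init() {
        prometheus.MustRegister(queueCapacity)
        queueCapacity.Set(100) // Set once at startup; never changes afterwards.
    }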
Change-Id: If799458b450d18f78500f05990301c12525197d3
Metrics whose timestamps are inconsistently spaced when things fail
will make it harder to write correct rules.
Update the status page; this requires some refactoring to insert a function.
Change-Id: Ie1c586cca53b8f3b318af8c21c418873063738a8
Now that the subtle bug in matttproud/golang_protobuf_extensions is
fixed, we no longer need to copy the bytes of a scrape into a buffer
before starting to parse them.
Change-Id: Ib73ecae16173ddd219cda56388a8f853332f8853
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:
- there could be time.Time values that were outside the range/precision of the
time type we persist to disk, thereby causing incorrectly ordered keys.
One bug caused by this was:
https://github.com/prometheus/prometheus/issues/367
It would be good to use a timestamp type that's more closely aligned with
what the underlying storage supports.
- sizeof(time.Time) is 192 bits, while Prometheus should be OK with a single
64-bit Unix timestamp (possibly even a 32-bit one). Since we store samples in large
numbers, this seriously affects memory usage. Furthermore, copying/working
with the data will be faster if it's smaller.
*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with 1 timeseries and
100,000 samples show roughly a 13% decrease in total (VIRT) memory usage. In my
tests, this advantage decreased somewhat as the number of samples in the
timeseries grew (to 5-7% for millions of samples). I can't fully explain
this, but garbage-collection behavior may be involved.
*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps (a sketch of the type follows the lists below).
For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
passed into Target.Scrape()).
When to still use time.Time:
- for measuring durations/times not related to sample timestamps, such as
durations for telemetry exporting, timers that indicate how frequently
to execute some action, etc.
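As a rough sketch of the approach (the actual methods and constants on
clientmodel.Timestamp may differ), a millisecond-precision Unix timestamp
held in a single int64:

    package clientmodel

    import "time"

    // Timestamp is milliseconds since the Unix epoch in a single int64,
    // instead of the much larger time.Time struct.
    type Timestamp int64

    const msPerSecond = int64(time.Second / time.Millisecond) // 1000

    // TimestampFromTime converts a time.Time at the system boundary.
    func TimestampFromTime(t time.Time) Timestamp {
        return Timestamp(t.UnixNano() / int64(time.Millisecond))
    }

    // Time converts back to time.Time for display and non-sample uses.
    func (t Timestamp) Time() time.Time {
        return time.Unix(int64(t)/msPerSecond,
            (int64(t)%msPerSecond)*int64(time.Millisecond))
    }

    // Comparisons on a plain int64 are cheap and match the ordering of
    // the keys we persist to disk.
    func (t Timestamp) Before(o Timestamp) bool { return t < o }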
*NOTE ON OPERATOR OPTIMIZATION TESTS*
We don't use operator optimization code anymore, but it still lives in
the code as dead code. It still has tests, but I couldn't get all of them to
pass with the new timestamp format. I commented out the failing cases for now,
but we should probably remove the dead code soon. I just didn't want to do that
in the same change as this.
Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
This adds search-domain support by trying to resolve a name with each
search domain configured in /etc/resolv.conf appended in turn, until a
query succeeds (NOERROR) and has at least one answer.
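A minimal sketch of that loop using github.com/miekg/dns; the helper name
resolveWithSearch is illustrative rather than the actual function:

    package main

    import (
        "fmt"

        "github.com/miekg/dns"
    )

    // resolveWithSearch tries the bare name first, then the name
    // qualified with each search domain from /etc/resolv.conf, until a
    // query returns NOERROR with at least one answer.
    func resolveWithSearch(name string, qtype uint16) (*dns.Msg, error) {
        conf, err := dns.ClientConfigFromFile("/etc/resolv.conf")
        if err != nil {
            return nil, err
        }
        client := &dns.Client{}
        server := conf.Servers[0] + ":" + conf.Port

        for _, suffix := range append([]string{""}, conf.Search...) {
            qname := name
            if suffix != "" {
                qname = name + "." + suffix
            }
            msg := &dns.Msg{}
            msg.SetQuestion(dns.Fqdn(qname), qtype)
            resp, _, err := client.Exchange(msg, server)
            if err != nil {
                return nil, err
            }
            // Only NOERROR responses that actually carry answers count;
            // otherwise fall through to the next search domain.
            if resp.Rcode == dns.RcodeSuccess && len(resp.Answer) > 0 {
                return resp, nil
            }
        }
        return nil, fmt.Errorf("could not resolve %q in any search domain", name)
    }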
Change-Id: Ibdc5138c5d8cc049e11fab90c3d5243d5a06852c
This reverts commit e3bc6fc9dc, reversing
changes made to 1cf9e5840a.
Conflicts:
retrieval/target_provider.go
Change-Id: Icb6e98fb30419e9e2fe9b686c243702ced372014
This includes the refactorings required to make the HTTP client
replaceable (for testing) and to move the NotificationReq type
definitions into the "notifications" package, so that this package no
longer needs to depend on "rules" and can instead use a representation
of the required data that includes only the necessary fields.
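A sketch of the shape this enables; the fields and the httpPoster
interface are illustrative, not the actual definitions:

    package notifications

    import (
        "io"
        "net/http"
    )

    // NotificationReq carries only the fields the sender needs, so this
    // package no longer has to import "rules".
    type NotificationReq struct {
        Summary     string
        Description string
        Labels      map[string]string
        Value       float64
    }

    // httpPoster is the minimal client surface we use; *http.Client
    // satisfies it, and tests can substitute a fake.
    type httpPoster interface {
        Post(url, contentType string, body io.Reader) (*http.Response, error)
    }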
This commit forces the extraction framework to read the entire response
payload into a buffer before attempting to decode it, because the
underlying Protocol Buffer message readers do not block on partial
messages.
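A sketch of the pattern, not the exact extraction code: buffer the whole
body, then decode from memory (decodeAll and the process callback are
illustrative):

    package extraction

    import (
        "bytes"
        "io"

        "github.com/matttproud/golang_protobuf_extensions/pbutil"
        dto "github.com/prometheus/client_model/go"
    )

    // decodeAll reads the entire payload first, because the delimited
    // protobuf reader does not block waiting for the remainder of a
    // partially received message.
    func decodeAll(body io.Reader, process func(*dto.MetricFamily)) error {
        buf, err := io.ReadAll(body)
        if err != nil {
            return err
        }
        r := bytes.NewReader(buf)
        for {
            mf := &dto.MetricFamily{}
            if _, err := pbutil.ReadDelimited(r, mf); err != nil {
                if err == io.EOF {
                    return nil // clean end of stream
                }
                return err
            }
            process(mf)
        }
    }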
If the metrics exported by a process already contain any of a target's
base labels (such as "job" or "instance", but also any manually assigned
target-group label), don't overwrite that label, but instead add a new
label consisting of the original label name prepended with "exporter_".
This is to accommodate intermediate exporter jobs, which might indicate,
e.g., the jobs and instances for which they are exporting data.
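A simplified sketch of the rule; the real code operates on label-set
types rather than plain string maps, and mergeBaseLabels is a made-up
name:

    package main

    import "fmt"

    // mergeBaseLabels applies a target's base labels to a scraped
    // metric. On a clash, the exporter's label wins and the target's
    // value is kept under an "exporter_"-prefixed name.
    func mergeBaseLabels(metric, base map[string]string) {
        for name, value := range base {
            if _, exists := metric[name]; exists {
                metric["exporter_"+name] = value
            } else {
                metric[name] = value
            }
        }
    }

    func main() {
        metric := map[string]string{"job": "mysql"} // set by the intermediate exporter
        base := map[string]string{"job": "mysql-exporter", "instance": "db1:9104"}
        mergeBaseLabels(metric, base)
        fmt.Println(metric) // job:mysql exporter_job:mysql-exporter instance:db1:9104
    }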
This commit updates the documentation, Makefiles, formatting, and
code semantics to support the Go 1.1 runtime, which includes:
1. ``make advice``,
2. ``make format``, and
3. ``go fix`` on various targets.
This commit employs explicit memory freeing for the in-memory storage
arenas. Secondarily, we take advantage of smaller channel buffer sizes
in the test.
Instead of externally handling timeouts when scraping a target, we set
timeouts on the HTTP connection. This ensures that we don't leak
goroutines on timeouts.
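A minimal modern-Go sketch of the idea; the exact timeout fields the
original change sets may differ:

    package main

    import (
        "net"
        "net/http"
        "time"
    )

    // newScrapeClient puts the deadline on the HTTP client itself rather
    // than racing the scrape against an external timer, so an abandoned
    // request cannot leave a goroutine blocked on the connection.
    func newScrapeClient(timeout time.Duration) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DialContext: (&net.Dialer{
                    Timeout: timeout, // connection establishment
                }).DialContext,
                ResponseHeaderTimeout: timeout, // waiting for response headers
            },
            Timeout: timeout, // end-to-end cap, including reading the body
        }
    }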
[fixes #181]
Primary changes:
* Strictly typed unmarshalling of metric values
* Schema types are contained by the processor (no "type entity002")
Minor changes:
* Added ProcessorFunc type for expressing processors as simple
functions.
* Added non-destructive `Merge` method to `model.LabelSet`
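Sketches of those two minor additions; the Process signature is
simplified relative to the real interface:

    package extraction

    import "io"

    // ProcessorFunc adapts a plain function to the Processor interface,
    // mirroring the http.HandlerFunc pattern.
    type ProcessorFunc func(io.Reader) error

    func (f ProcessorFunc) Process(r io.Reader) error { return f(r) }

    // LabelSet maps label names to values.
    type LabelSet map[string]string

    // Merge layers other over the receiver and returns a fresh LabelSet;
    // neither input is modified (non-destructive).
    func (ls LabelSet) Merge(other LabelSet) LabelSet {
        out := make(LabelSet, len(ls)+len(other))
        for k, v := range ls {
            out[k] = v
        }
        for k, v := range other {
            out[k] = v
        }
        return out
    }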
ProcessorForRequestHeader now looks first for a header like
`Content-Type: application/json; schema="prometheus/telemetry";
version="0.0.1"` before falling back to checking
`X-Prometheus-API-Version`.
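A sketch of that negotiation using the standard library's mime package;
schemaVersion and the fallback plumbing are illustrative:

    package extraction

    import "mime"

    // schemaVersion picks the processor version from the Content-Type
    // parameters, falling back to the legacy X-Prometheus-API-Version
    // header value when they are absent.
    func schemaVersion(contentType, legacyVersion string) string {
        mediatype, params, err := mime.ParseMediaType(contentType)
        if err == nil && mediatype == "application/json" &&
            params["schema"] == "prometheus/telemetry" {
            return params["version"] // e.g. "0.0.1"
        }
        return legacyVersion
    }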
We're currently timestamping samples with the time at the end of a scrape
iteration. It makes more sense to use a timestamp from the beginning of the
scrape (see the sketch after this list) for two reasons:
a) this time is more relevant to the scraped values than the time at the
end of the HTTP-GET + JSON decoding work.
b) it reduces sample timestamp jitter if we measure at the beginning, and
not at the completion of a scrape.
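A sketch of the ordering; the sample type and fetch callback are
illustrative:

    package retrieval

    import "time"

    type sample struct {
        Value     float64
        Timestamp time.Time
    }

    // scrape captures the timestamp before the HTTP GET and decoding
    // work, then stamps every sample with it.
    func scrape(fetch func() ([]sample, error)) ([]sample, error) {
        start := time.Now() // taken at the *beginning* of the iteration
        samples, err := fetch()
        if err != nil {
            return nil, err
        }
        for i := range samples {
            samples[i].Timestamp = start
        }
        return samples, nil
    }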
In the current /status implementation, we cannot divine what a
target's state is; we only get an integer constant for it. This
commit stringifies the constants.
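A sketch of the change; the constant names are illustrative:

    package retrieval

    type TargetState int

    const (
        UNKNOWN TargetState = iota
        ALIVE
        UNREACHABLE
    )

    // String lets /status render "ALIVE" instead of the bare integer 1.
    func (s TargetState) String() string {
        switch s {
        case ALIVE:
            return "ALIVE"
        case UNREACHABLE:
            return "UNREACHABLE"
        default:
            return "UNKNOWN"
        }
    }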
This roughly comprises the following changes (a sketch of the first two
follows the list):
- index target pools by job instead of scrape interval
- make targets within a pool exchangeable while preserving their
existing health state
- allow exchanging targets via HTTP API (PUT)
- show target lists in /status (experimental, for own debug use)
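A sketch of the first two points; all names are illustrative:

    package retrieval

    type Target struct {
        Address string
        health  int // accumulated health state
    }

    // Pools are now keyed by job name instead of scrape interval.
    type TargetManager struct {
        pools map[string]*TargetPool
    }

    type TargetPool struct {
        targets map[string]*Target // keyed by target address
    }

    // ReplaceTargets swaps in a new target list, keeping the existing
    // Target value, and thus its health state, where addresses match.
    func (p *TargetPool) ReplaceTargets(newTargets []*Target) {
        merged := make(map[string]*Target, len(newTargets))
        for _, t := range newTargets {
            if old, ok := p.targets[t.Address]; ok {
                t = old // preserve health state across the exchange
            }
            merged[t.Address] = t
        }
        p.targets = merged
    }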
Right now, futureState is only used to give hints to the health scheduler, but
nowhere is this future state persisted into the target's state field, so we
don't actually track a target's state over time.
We have an open question of how long it takes for each target pool to
retrieve the state from all participating elements. This commit starts
by providing insight into this.
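A sketch of such instrumentation with client_golang; the metric name is
hypothetical:

    package retrieval

    import (
        "time"

        "github.com/prometheus/client_golang/prometheus"
    )

    var poolRetrieveDuration = prometheus.NewSummary(prometheus.SummaryOpts{
        Namespace: "prometheus",
        Subsystem: "target_pool",
        Name:      "retrieve_time_milliseconds",
        Help:      "Time needed to retrieve state from all pool members.",
    })

    func init() {
        prometheus.MustRegister(poolRetrieveDuration)
    }

    // timedRetrieve measures one full state-retrieval pass over a pool.
    func timedRetrieve(retrieve func()) {
        start := time.Now()
        retrieve()
        poolRetrieveDuration.Observe(float64(time.Since(start) / time.Millisecond))
    }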
client_golang was updated to support full label-oriented telemetry,
which introduced interface incompatibilities with the previous
version of Prometheus. To alleviate this, a general fetching and
processing dispatch system has been created, which discriminates
between input versions and processes each accordingly.
Future tests around ``TargetPool``, ``TargetManager``, and friends will
be a lot easier once the concrete behaviors of ``Target`` are extracted
out. Plus, each ``Target``, I suspect, will have its own resolution and
query strategy.
``Target`` will be refactored down the road to support various
nuanced endpoint types. Thus, incorporating the scheduling behavior
within it would be problematic. To that end, the scheduling
behavior has been moved into a separate assistance type to improve
conciseness and testability.
``make format`` was also run.