This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular, this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it, as it needs access to label values.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
Accepting alert_rule_test without alertname is confusing as it will
always pass with empty exp_alerts, and never with non-empty exp_alerts.
Signed-off-by: Raphael Bauduin <raphael.bauduin@tessares.net>
- Remove unrelated changes
- Refactor code out of the API module - that is already getting pretty crowded.
- Don't track reference for AddFast in remote write. This has the potential to consume unlimited server-side memory if a malicious client pushes a different label set for every series. For now, it's easier and safer to always use the 'slow' path.
- Return 400 on out of order samples.
- Use remote.DecodeWriteRequest in the remote write adapters.
- Put this behind the 'remote-write-server' feature flag
- Add some (very) basic docs.
- Use a named return & add a test for commit error propagation
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
When printing the error, we still need access to the mmapped byte array
of the file. Therefore, we make sure that we run it before closing the
file.
I could have done something more complex, like using a defer, or not closing
the file at all, knowing that we would exit the program anyway. However, I
think that in case we extend this in the future, or this is copy/pasted
elsewhere, we should keep closing the file. As the function is small enough, I
went for calling it 3 times instead of playing
with a defer.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This commit adds `@ <timestamp>` modifier as per this design doc: https://docs.google.com/document/d/1uSbD3T2beM-iX4-Hp7V074bzBRiRNlqUdcWP6JTDQSs/edit.
An example query:
```
rate(process_cpu_seconds_total[1m])
and
topk(7, rate(process_cpu_seconds_total[1h] @ 1234))
```
which ranks based on the last 1h rate as of unix timestamp 1234 but actually plots the 1m rate.
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Backfilling: optimize for non-consecutive blocks
When you have missing data for > 2 hours, you spend a lot of time
re-reading the complete file. That is not optimal.
This introduces a fastpath for this scenario.
Second, we used to parse the metric even when we knew, from its
timestamp, that we would not use it. Now the metric is only computed
once we know its timestamp is in range, as sketched below.
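A minimal sketch of that lazy-parse idea (helper names hypothetical, not the actual promtool code):
```go
package backfill

import (
	"strconv"
	"strings"
)

// lastField cheaply extracts the final whitespace-separated field of an
// exposition line, which carries the timestamp in the backfill input.
func lastField(line string) string {
	line = strings.TrimSpace(line)
	if i := strings.LastIndexByte(line, ' '); i >= 0 {
		return line[i+1:]
	}
	return line
}

// shouldParse decides whether the expensive metric parse is needed at all:
// only when the sample's timestamp falls inside the block being written.
func shouldParse(line string, blockMint, blockMaxt int64) bool {
	ts, err := strconv.ParseInt(lastField(line), 10, 64)
	if err != nil {
		return true // cannot tell cheaply; fall back to a full parse
	}
	return ts >= blockMint && ts < blockMaxt
}
```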
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
I initially thought I could somehow rescue the current column layout
by recycling the tabwriter, but flushing completely blanks
it. However, by setting a minimum width of 13, we get a slightly
broader DURATION column but otherwise nice formatting, unless numbers
get really big, but that's OK, I guess.
Before:
```
BLOCK ULID MIN TIME MAX TIME DURATION NUM SAMPLES NUM CHUNKS NUM SERIES SIZE
01ETN0KGNP5WWK9T5QMQGBG9F1 2020-11-19 07:39:17 +0000 UTC 2020-11-19 07:44:17 +0000 UTC 5m0.001s 8 2 2 624B
01ETN0KGQSFF0AB2QDZVQG3CWC 2020-11-19 10:25:57 +0000 UTC 2020-11-19 10:30:57 +0000 UTC 5m0.001s 8 2 2 622B
01ETN0KGSW8KYP3YPG4X20P60Z 2020-11-19 13:12:37 +0000 UTC 2020-11-19 13:17:37 +0000 UTC 5m0.001s 8 2 2 625B
```
After:
```
BLOCK ULID MIN TIME MAX TIME DURATION NUM SAMPLES NUM CHUNKS NUM SERIES SIZE
01ETN0R72SXN9A1FG732P7KFFN 2020-11-19 07:39:17 +0000 UTC 2020-11-19 07:44:17 +0000 UTC 5m0.001s 8 2 2 624B
01ETN0R74Y9AG1A1MKN4MZK7WM 2020-11-19 10:25:57 +0000 UTC 2020-11-19 10:30:57 +0000 UTC 5m0.001s 8 2 2 622B
01ETN0R76KXZ5VQECMDNES49J6 2020-11-19 13:12:37 +0000 UTC 2020-11-19 13:17:37 +0000 UTC 5m0.001s 8 2 2 625B
```
After without the `-r` flag:
```
BLOCK ULID MIN TIME MAX TIME DURATION NUM SAMPLES NUM CHUNKS NUM SERIES SIZE
01ETN0RFFJ42274NWR1GH0RTV6 1605771557000 1605771857001 5m0.001s 8 2 2 624
01ETN0RFJ1MZCHHS2SBZS8XC27 1605781557000 1605781857001 5m0.001s 8 2 2 622
01ETN0RFM98N3V4KD2DZXFGHGN 1605791557000 1605791857001 5m0.001s 8 2 2 625
```
Signed-off-by: beorn7 <beorn@grafana.com>
* Testify: move to require
Moving testify to require to fail tests early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* MultiError: Refactored MultiError for more concise and safe usage.
* Fewer lines
* The GoLand IDE was marking every usage of the old MultiError as a "potential nil" error
* It was easy to forget to use Err() when an error was returned; now that is safely ensured at compile time.
NOTE: Potentially I would rename the package to merrors. (: In a different PR.
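A minimal sketch of the pitfall and the pattern (not the exact tsdb code):
```go
package merrors

import "strings"

type multiError []error

func (es multiError) Error() string {
	msgs := make([]string, 0, len(es))
	for _, e := range es {
		msgs = append(msgs, e.Error())
	}
	return strings.Join(msgs, "; ")
}

// Err is the only way callers obtain an error: it returns a true nil when
// nothing was collected, avoiding the non-nil-interface trap below.
func (es multiError) Err() error {
	if len(es) == 0 {
		return nil
	}
	return es
}

func badClose() error {
	var errs multiError
	return errs // BUG: non-nil error interface wrapping an empty slice
}

func goodClose() error {
	var errs multiError
	return errs.Err() // nil when no errors were collected
}
```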
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed review comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix after rebase.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Refactor test assertions
This pull request gets rid of assert.True where possible to use
fine-grained assertions.
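For example, in a hypothetical test:
```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSum(t *testing.T) {
	got, want := 2+2, 4
	assert.True(t, got == want) // old: a failure only says "should be true"
	require.Equal(t, want, got) // new: a failure prints expected vs actual and stops the test
}
```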
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* promtool: Calculate mint and maxt per test
Previously a single test that used a later eval time would make all
other tests in the file share the [mint, maxt] and potentially evaluate
far more samples than needed.
Fixes: #8019
Signed-off-by: David Leadbeater <dgl@dgl.cx>
In #7399, an early validity check of the config was introduced to
prevent the scenario where an invalid config is only detected after a
possibly very long startup procedure. However, the respective success
metrics were not updated after that initial validation, so the
success metrics suggested an invalid config. If the startup procedure,
like replaying the WAL, really takes very long, alerts about an invalid
config would fire.
This commit sets the success metrics after the initial validation. They
will be set again after the "real" config (re-)load, but that
shouldn't be a problem. The metric now truthfully represents whether
the config was successfully loaded, no matter if the result was then
thrown away (because it was just for validation) or actually used.
Signed-off-by: beorn7 <beorn@grafana.com>
If an alert test had a failing test, then any other alert test interval
specified after that point would result in the test exiting early.
This made debugging some tests more difficult than needed.
Now only exit early for evaluation failures.
Signed-off-by: David Leadbeater <dgl@dgl.cx>
This also fixes a bug in query_log_file, which is now resolved relative to the config file like all other paths.
Signed-off-by: Andy Bursavich <abursavich@gmail.com>
* storage: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
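The mechanical shape of these changes, sketched on a hypothetical struct:
```go
package example

import "go.uber.org/atomic"

// Before: a bare int64 plus sync/atomic calls, where nothing stops a
// caller from accessing the field without the atomic package.
//
//	samplesIn int64
//	atomic.AddInt64(&q.samplesIn, 1)
//
// After: the field's type itself enforces atomic access.
type queue struct {
	samplesIn atomic.Int64
}

func (q *queue) record(n int64) int64 {
	q.samplesIn.Add(n)
	return q.samplesIn.Load()
}
```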
* tsdb: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* cmd: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* scripts: Verify that we are not using restricted packages
It checks that we are not directly importing 'sync/atomic'.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Reorganise imports in blocks
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier/test: Apply PR suggestions
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* storage/remote: avoid storing references on newEntry
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Revert "scripts: Verify that we are not using restricted packages"
This reverts commit 278d32748e.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Group imports accordingly
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* promql: Removed global and added the ability to have a better interval for subqueries if not specified
## Changes
* Refactored tests for better hints testing
* Added various TODOs in places to enhance later.
* Moved the DefaultEvalInterval global to opts, with a func(rangeMillis int64) int64 function instead
Motivation: At Thanos we would love to have better control over the subqueries step/interval.
This is important to choose a proper resolution. I think having a proper step also does no harm for
Prometheus and remote read users. Especially on a stateless querier we do not know the evaluation interval,
and in fact assuming a global can even be wrong for Prometheus.
I think ideally we could try to have at least 3 samples within the range, the same
way the Prometheus UI and Grafana assume.
Anyway, this interface allows deciding that per PromQL user; see the sketch below.
Open question: Is taking parent interval a smart move?
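A sketch of wiring such a function into the engine options (field name assumed from the description above; see the PR for the final API):
```go
package example

import "github.com/prometheus/prometheus/promql"

func newEngine() *promql.Engine {
	return promql.NewEngine(promql.EngineOpts{
		// Called for subqueries without an explicit step; aim for at
		// least three samples within the range, the same way the
		// Prometheus UI and Grafana assume.
		NoStepSubqueryIntervalFn: func(rangeMillis int64) int64 {
			return rangeMillis / 3
		},
	})
}
```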
Motivation for removing the global: I spent 1h fighting with:
=== RUN TestEvaluations
TestEvaluations: promql_test.go:31: unexpected error: error evaluating query "absent_over_time(rate(nonexistant[5m])[5m:])" (line 687): unexpected error: runtime error: integer divide by zero
--- FAIL: TestEvaluations (0.32s)
FAIL
In the end I found that this fails on most of the versions, including this master, if you run this test alone. If run together with many
other tests it passes. This is due to the SetDefaultEvaluationInterval(1 * time.Minute)
in a test that is run before TestEvaluations. Thanks to globals (:
Let's fix it by dropping this global.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added issue links for TODOs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Removed irrelevant changes.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
Loading ALERT_FOR_STATE only requires the `storage.Queryable` interface,
so this patch uses that narrower interface to perform the loading.
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger; removed a tiny interface by reusing VerticalSeriesMergeFunc as the overlapping algorithm for
both chunks and series, for both querying and compacting (!), and made sure duplicates are merged.
* Added ErrChunkSeriesSet
* Added a Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecated the non-chunk-seriesSet-based StreamChunkedReadResponses and added a chunk-based one.
* Improved tests.
* Split the remote client into Write (old storage) and Read clients.
* The queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config-based options to sampleAndChunkQueryableClient to avoid boilerplate.
In the next commit: changes for TSDB.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.
Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look up its m-mapped chunks in the map created before and attach them to that series.
If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL. Hence a corruption in the m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which includes the offsets of the chunks in the chunks file. For example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
For m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating over all m-mapped chunks as stated above, matching the series ID in the chunk header with the offset of that chunk in that file.
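Per the linked head_chunks format doc, each m-mapped chunk record carries the series reference inline; roughly:
```go
package example

// One record in a head chunk file, per the head_chunks format doc linked
// above:
//
//	series ref <8b> | mint <8b> | maxt <8b> | encoding <1b> | len <uvarint> | data | CRC32 <4b>
//
// The inline series reference is what lets WAL replay rebuild the
// in-memory series-to-chunk-offset map by scanning these files.
type mmappedChunkRecord struct {
	seriesRef  uint64 // inline, since there is no index mapping series to chunks
	mint, maxt int64
	encoding   byte
	data       []byte // chunk bytes; a CRC32 checksum follows on disk
}
```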
**Prombench results**
_WAL Replay_
1h WAL:
- 30% less WAL replay time: 4m31s vs 3m36s
2h WAL:
- 20% less WAL replay time: 8m16s vs 7m
_Memory During WAL Replay_
High churn:
- 10-15% less RAM: 32GB vs 28GB
- 20% less RAM after compaction: 34GB vs 27GB
No churn:
- 20-30% less RAM: 23GB vs 18GB
- 40% less RAM after compaction: 32.5GB vs 20GB
Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
time.Unix attaches the local timezone, which can then
leak out (e.g. in the alert json). While this is harmless,
we should be consistent.
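A one-line sketch of the fix (millisecond helper hypothetical):
```go
package example

import "time"

// time.Unix attaches the local time zone; convert to UTC immediately so
// the zone cannot leak into rendered output such as the alert JSON.
func fromUnixMillis(ms int64) time.Time {
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond)).UTC()
}
```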
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* [comments] change word ‘wheter’ to ‘whether’
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All TODOs I added will be fixed in follow-up PRs.
* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports were adjusted accordingly.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help performance for queries with functions like offset.
* Added Seek to deletedIterator, plus a test.
* storage/tsdb was removed, as it was only unnecessary glue with incompatible structs.
No logic was changed, only a different source of abstractions, so no need for benchmarks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Make lookbackDelta an option of QueryEngine
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
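A minimal sketch of the resulting configuration (assuming the EngineOpts field is named LookbackDelta):
```go
package example

import (
	"time"

	"github.com/prometheus/prometheus/promql"
)

// The delta is injected where the engine is built, not read from a global.
func newEngine() *promql.Engine {
	return promql.NewEngine(promql.EngineOpts{
		LookbackDelta: 5 * time.Minute, // the backwards-compatible default
	})
}
```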
* julius' suggestion
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* remove trivial getter
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Assume lookback delta is always > 0
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* add debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* don't expose lookback delta
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Specify that lookback delta is also used in federation
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix federation test
While we have added some logic to the promql engine to keep it backwards
compatible, with a 5-minute lookback by default, the web/ package is
likely to really be internal to Prometheus and we should not add the
same kind of heuristics here.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* lookback delta: Fix debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
A data race can happen if we call t.Log after the test t is done, which
in this case is quite likely because of the use of subtests and the
fact that we call t.Log in a goroutine.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Since we use the ActiveQueryTracker to check for concurrency in
d992c36b3a, it does not make sense to keep
the MaxConcurrent value as an option of the PromQL engine.
This pull request removes it from the PromQL engine options, sets the
max concurrent metric to -1 if there is no active query tracker, and uses
the value from the active query tracker otherwise.
It removes dead code and will also inform people who import the promql
package that we made this change, as it breaks the EngineOpts struct.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This change makes sure that nearly-identical Alertmanager configurations
aren't merged together.
The config's identifier was the MD5 hash of the configuration serialized
to JSON but because `relabel.Regexp` has no public field and doesn't
implement the JSON.Marshaler interface, it was always serialized to
"{}".
In practice, the identifier can be based on the index of the
configuration in the list.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Add time units to storage.tsdb.retention.size flag
In an effort to reduce confusion with the `m` option of the
`ParseDuration()` function, this commit adds the available time units to
the `storage.tsdb.retention.time` flag to help showcase that there is no
option for months (which could be assumed to be `m`).
If someone were looking to set the retention to six months, they may
mistakenly do so with `6m`, which would reduce their retention to six
minutes.
Signed-off-by: Brooks Swinnerton <bswinnerton@gmail.com>
Alert rules do not use the Record field, so any alerts with the same
labels but different names would be counted as duplicates.
Promtool will now consider either field when finding duplicates.
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
This patch loops through the warnings returned while querying labels and writes
them to stderr.
Fixes #5885
Signed-off-by: Sayan Chowdhury <sayan.chowdhury2012@gmail.com>
* Update go.mod dependencies before release
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Add issue for showing query warnings in promtool
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Revert json-iterator back to 1.1.6
It produced errors when marshaling Point values with special float
values.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Fix expected step values in promtool tests after client_golang update
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Update generated protobuf code after proto dep updates
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* Added query logging for prometheus.
Options added:
1) active.queries.filepath: the file where active queries will be recorded
2) active.queries.filesize: the size of the file where active queries will be recorded.
Functionality added:
All active queries are now logged to a file. If Prometheus crashes unexpectedly, these queries are also printed to stdout on the next run.
Queries are written concurrently to an mmapped file and removed once they are done; their positions in the file are reused. They are written in JSON format. However, due to the dynamic nature of the application, the JSON has an extra comma after the last query and is missing the closing ']'. There may also be null bytes at the tail of the file.
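Given that format, a consumer can still recover the entries with a small repair step; a sketch (assuming the file opens with '['), not code Prometheus itself ships:
```go
package example

import (
	"bytes"
	"encoding/json"
)

func parseActiveQueryLog(raw []byte) ([]map[string]interface{}, error) {
	raw = bytes.TrimRight(raw, "\x00") // drop trailing null bytes
	raw = bytes.TrimSpace(raw)
	raw = bytes.TrimRight(raw, ",") // drop the dangling comma
	raw = append(raw, ']')          // close the array

	var entries []map[string]interface{}
	return entries, json.Unmarshal(raw, &entries)
}
```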
Signed-off-by: Advait Bhatwadekar <advait123@ymail.com>
This is mostly to create consistency, not because the one or the other
way would be wrong. A few actual corrections are also included.
Signed-off-by: beorn7 <bjoern@rabenste.in>
Like `promtool check config <path/to/foo.yaml>`, which resolves relative
paths inside foo.yaml to be relative to `path/to`, this now makes
`promtool test rules <path/to/test.yaml>` do the same thing.
Signed-off-by: David Symonds <dsymonds@gmail.com>
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of fmt package for errors in favour of pkg/errors
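For instance, at a hypothetical call site:
```go
package example

import (
	"os"

	"github.com/pkg/errors"
)

// (ii) no formatting directives, so errors.New rather than fmt.Errorf:
var errNoConfig = errors.New("no config file given")

func openConfig(path string) (*os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		// (i)/(iii) wrap with context via pkg/errors instead of
		// fmt.Errorf("open config file %s: %v", path, err).
		return nil, errors.Wrapf(err, "open config file %s", path)
	}
	return f, nil
}
```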
Signed-off-by: tariqibrahim <tariq181290@gmail.com>
* Display correct values for the retention in the flags web gui.
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
* adding a log entry
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
* added the retention info to the runtime status page
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
* simplify the retention display
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
* discovery/kubernetes: fix support for password_file
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Create and pass custom RoundTripper to Kubernetes client
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Use inline HTTPClientConfig
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
fixes https://github.com/prometheus/prometheus/issues/5213
Now that we have both time- and size-based retention, time-based retention should not have a default value. A default is set only when neither the time nor the size flag is set.
This change will not affect current installations that rely on the default time-based value, and it will avoid confusion when only the size retention is set and the default time-based setting is expected to no longer be in place.
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
This change switches the remote_write API to use the TSDB WAL. This should reduce memory usage and prevent sample loss when the remote end point is down.
We use the new LiveReader from TSDB to tail WAL segments. Logic for finding the tracking segment is included in this PR. The WAL is tailed once for each remote_write endpoint specified. Reading from the segment is based on a ticker rather than relying on fsnotify write events, which were found to be complicated and unreliable in early prototypes.
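A high-level sketch of the ticker-driven tailing loop (wiring hypothetical; the real logic lives in storage/remote):
```go
package example

import (
	"context"
	"time"

	"github.com/prometheus/tsdb/wal"
)

func tailSegment(ctx context.Context, r *wal.LiveReader, send func(rec []byte)) {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// Drain whatever was appended since the last tick instead
			// of depending on fsnotify write events.
			for r.Next() {
				send(r.Record())
			}
		}
	}
}
```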
Enqueuing a sample for sending via remote_write can now block, to provide back pressure. Queues are still required to achieve parallelism and batching. We have updated the queue config based on new defaults for queue capacity and pending samples values - much smaller values are now possible. The remote_write resharding code has been updated to prevent deadlocks, and extra tests have been added for these cases.
As part of this change, we attempt to guarantee that samples are not lost; however, this initial version doesn't guarantee this across Prometheus restarts or non-retryable errors from the remote end (e.g. 400s).
This change also includes the following optimisations:
- only marshal the proto request once, not once per retry
- maintain a single copy of the labels for given series to reduce GC pressure
Other minor tweaks:
- only reshard if we've also successfully sent recently
- add pending samples, latest sent timestamp, WAL events processed metrics
Co-authored-by: Chris Marchbanks <csmarchbanks.com> (initial prototype)
Co-authored-by: Tom Wilkie <tom.wilkie@gmail.com> (sharding changes)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
This no longer waits for all of the scrape reload to complete
before getting a list of AMs again.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Add flag for size based retention
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* Deprecate the old retention flag for a new one.
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* Add ability to take a suffix for size flag
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* Address feedback
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
* *: use latest release of staticcheck
It also fixes a couple of things in the code flagged by the additional
checks.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Use official release of staticcheck
Also run 'go list' before staticcheck to avoid failures when downloading packages.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
Add WALSegmentSize as an option, and the corresponding flag "storage.tsdb.wal-segment-size", to tune the max size of WAL segment files.
The base problem being addressed is reducing the disk space used by WAL segment files: on a Raspberry Pi, for instance, we often want to reduce the write load on the SD card, so the WAL directory is mounted on a memory-backed (space-limited) partition.
The default segment max file size pushed the size of the directory to 128 MB per segment, which is too much RAM consumption on a Pi.
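On the library side this surfaces as a field in the TSDB options; a sketch, assuming the field is named WALSegmentSize like the flag:
```go
package example

import "github.com/prometheus/tsdb"

// Cap WAL segments at 16 MB instead of the 128 MB default, e.g. for a
// memory-backed WAL partition on a Raspberry Pi.
var opts = tsdb.Options{
	WALSegmentSize: 16 * 1024 * 1024,
}
```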
the initial discussion is at https://github.com/prometheus/tsdb/pull/450
* update promlog to latest version
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* Update api tests, fix main setup
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* tidy go.sum
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* revendor prometheus/common
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* only initialize config; use kingpin for remote_storage_adapter
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* actually parse the flags
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* clean up imports
Signed-off-by: Alex Yu <yu.alex96@gmail.com>
* web: added ability to set page title through flag.
Signed-off-by: Andrew Chiu <andrew.chiu2@baesystems.com>
* Reformatted variable names and Flag description for readability.
Signed-off-by: Andrew Chiu <andrew.chiu2@baesystems.com>
* assets_vfsdata.go
Signed-off-by: Andrew Chiu <andrew.chiu2@baesystems.com>
* Flag name changed from web.ui-title to web.page-title
Signed-off-by: Andrew Chiu <andrew.chiu2@baesystems.com>
* make assets
Signed-off-by: Andrew Chiu <andrew.chiu2@baesystems.com>
* Support writing output as json
Oftentimes I'll want to execute something based on
the output from promtool, and supporting json
makes it easy to pull out values with a supporting
tool such as jq.
Signed-off-by: stuart nelson <stuartnelson3@gmail.com>
According to the GoDoc for os.Signal [0]:
> Package signal will not block sending to c: the caller must ensure that
> c has sufficient buffer space to keep up with the expected signal rate.
> For a channel used for notification of just one signal value, a buffer
> of size 1 is sufficient.
[0] https://golang.org/pkg/os/signal/#Notify
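Hence the buffered channel:
```go
package example

import (
	"os"
	"os/signal"
)

func waitForTerm() {
	// Buffer of 1 so a signal delivered before we start receiving is not
	// dropped, per the os/signal documentation quoted above.
	term := make(chan os.Signal, 1)
	signal.Notify(term, os.Interrupt)
	<-term
}
```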
Signed-off-by: Lucas Serven <lserven@gmail.com>
* Limit the number of samples remote read can return.
- Return 413 entity too large.
- Limit can be set by a flag. Allow 0 to mean no limit.
- Include limit in error message.
- Set default limit to 50M (* 16 bytes = 800MB).
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
To make local debugging with `go run` easier, this moves all files into a
dedicated package `runtime`.
This allows running Prometheus with just `go run main.go` instead of
passing many files like `go run main.go limits_default.go ...`
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
Sorry, I used GitHub's web-based merge-conflict-resolution editor on
https://github.com/prometheus/prometheus/pull/4308 and it didn't show me
test errors afterwards, but maybe they didn't run again or I should have
waited or something.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
`debug all` - all information
`debug metrics` - metrics information
`debug pprof` - profiling information
the final result is compressed in a `tar.gz` file
Signed-off-by: chyeh <chyeh.taiwan@gmail.com>
Start rule manager only after tsdb and config is loaded.
Stop rule manager before tsdb to avoid writing to closed storage.
Wait for any in-progress reloads to complete before shutting
down rule manager, so that rule manager doesn't get updated after
being shut down.
Remove an incorrect comment around shutting down the query engine.
Log when the config reload is completed.
Fixes #4133. Fixes #4262.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
This adds a parameter to the storage selection interface which allows
query engine(s) to pass information about the operations surrounding a
data selection.
This can for example be used by remote storage backends to infer the
correct downsampling aggregates that need to be provided.
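The parameter is a small struct; roughly this shape (fields assumed from the description):
```go
package example

// SelectParams describes the operations surrounding a data selection.
type SelectParams struct {
	Step int64  // evaluation step in milliseconds; 0 for instant queries
	Func string // surrounding function or aggregation, e.g. "rate"
}

// A remote storage backend can inspect Func and Step to serve a matching
// pre-computed downsampling aggregate instead of raw samples.
```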
* refactor: move targetGroup struct and CheckOverflow() to their own package
* refactor: move auth and security related structs to a utility package, fix import error in utility package
* refactor: Azure SD, remove SD struct from config
* refactor: DNS SD, remove SD struct from config into dns package
* refactor: ec2 SD, move SD struct from config into the ec2 package
* refactor: file SD, move SD struct from config to file discovery package
* refactor: gce, move SD struct from config to gce discovery package
* refactor: move HTTPClientConfig and URL into util/config, fix import error in httputil
* refactor: consul, move SD struct from config into consul discovery package
* refactor: marathon, move SD struct from config into marathon discovery package
* refactor: triton, move SD struct from config to triton discovery package, fix test
* refactor: zookeeper, move SD structs from config to zookeeper discovery package
* refactor: openstack, remove SD struct from config, move into openstack discovery package
* refactor: kubernetes, move SD struct from config into kubernetes discovery package
* refactor: notifier, use targetgroup package instead of config
* refactor: tests for file, marathon, triton SD - use targetgroup package instead of config.TargetGroup
* refactor: retrieval, use targetgroup package instead of config.TargetGroup
* refactor: storage, use config util package
* refactor: discovery manager, use targetgroup package instead of config.TargetGroup
* refactor: use HTTPClient and TLS config from configUtil instead of config
* refactor: tests, use targetgroup package instead of config.TargetGroup
* refactor: fix targetgroup.Group pointers that were removed by mistake
* refactor: openstack, kubernetes: drop prefixes
* refactor: remove import aliases forced due to vscode bug
* refactor: move main SD struct out of config into discovery/config
* refactor: rename configUtil to config_util
* refactor: rename yamlUtil to yaml_config
* refactor: kubernetes, remove prefixes
* refactor: move the TargetGroup package to discovery/
* refactor: fix order of imports
Users are starting to use these mistakenly, thinking they'll help
with issues, which causes some confusion.
Thus hide them and make it clear that they're only there for testing
reasons.
Currently all read queries are simply pushed to remote read clients.
This is fine, except for remote storage, for which it is inefficient and
makes queries slower even when remote read is unnecessary.
Instead, we need to compare the oldest timestamp in primary/local
storage with the query range's lower boundary. If the oldest timestamp
is older than the mint parameter, then there is no need for a remote read.
This is an optional behavior per remote read client; the gate is sketched below.
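The gate reduces to a timestamp comparison; a sketch with hypothetical names:
```go
package example

// needRemoteRead reports whether a remote read is required for a query
// starting at mint, given the oldest timestamp held in local storage.
func needRemoteRead(readRecent bool, localOldest, mint int64) bool {
	if readRecent {
		return true // this client opted out of the optimization
	}
	// Local storage already covers the whole range: skip the remote read.
	return localOldest > mint
}
```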
Signed-off-by: Thibault Chataigner <t.chataigner@criteo.com>
Instead of only printing the help message, which is not always helpful.
For example, when upgrading from prometheus v1, the retention time value
format has changed and now only accepts one unit (e.g. "15d") where it
previously allowed more complex strings (e.g. "360h0m0s").
This commit provides the error message as an explanation for the parsing
failure.
The usage of govalidator is redundant with the call to url.Parse for
URL validation. Removing it has the following benefits:
- The explicit error message is displayed instead of just a generic
valid/invalid message
- Slightly smaller code with one fewer external dependency
- Speed improvement by removing the duplicate call to url.Parse (inside
govalidator.IsURL())
- Resolves issue #2717
The only potential drawback of removing govalidator is that certain
URLs will be considered valid which were previously invalid. For example:
- URLs with hostnames that start and/or end with an underscore (http://_example.com_)
- URLs with hostnames that contain some special characters (http://foo&*bar.org)
These are valid URIs according to RFC 3986 and valid domain names per RFC 2181,
however they are not valid hostnames per RFC 952.
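A sketch of what url.Parse-only validation looks like (the exact scheme/host checks are assumptions):
```go
package example

import (
	"fmt"
	"net/url"
)

// The parser's own error is surfaced, rather than govalidator's generic
// valid/invalid verdict.
func validateURL(raw string) (*url.URL, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return nil, err
	}
	if u.Scheme == "" || u.Host == "" {
		return nil, fmt.Errorf("%q is not a valid URL", raw)
	}
	return u, nil
}
```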
Whenever a route prefix is applied, the router prepends the prefix to
the URL path on the request. For most handlers, this is not an issue
because the request's path is only used for routing and is not actually
needed by the handler itself. However, Prometheus delegates the handling
of the /debug/* endpoints to the http.DefaultServeMux, which has its own
routing logic that depends on the url.Path. As a result, whenever a
prefix is applied, the prefixed URL is passed to the DefaultServeMux
which has no awareness of the prefix and returns a 404.
This change fixes the issue by creating a new serveDebug handler which
routes /debug/* requests to the appropriate net/http/pprof handler,
and by removing the net/http/pprof import in cmd/prometheus since it is no
longer necessary.
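A sketch of such a handler (routing details simplified):
```go
package example

import (
	"net/http"
	"net/http/pprof"
	"strings"
)

// serveDebug dispatches /debug/* itself, so a route prefix never reaches
// http.DefaultServeMux's path-sensitive routing.
func serveDebug(prefix string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		switch p := strings.TrimPrefix(r.URL.Path, prefix); {
		case p == "/debug/pprof/cmdline":
			pprof.Cmdline(w, r)
		case p == "/debug/pprof/profile":
			pprof.Profile(w, r)
		case strings.HasPrefix(p, "/debug/pprof/"):
			pprof.Index(w, r) // also serves the named profiles
		default:
			http.NotFound(w, r)
		}
	}
}
```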
Fixes #2183.
* Move fingerprint to Hash()
* Move away from tsdb.MultiError
* 0777 -> 0666 for files
* checkOverflow of extra fields
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
* Print uname on prom startup
* Make uname file linux-only
* Add missing license headers
Add missing license headers
* Print OS when uname is not available
* Print only OS name when uname not available
* Remove extra space, fix cmd/prometheus/main.go license header
* Add fix for int8 and uint8 systems
* Better formatting for build tags in cmd/prometheus/uname files
* Remove newline
This is a fairly easy attempt to dynamically evict chunks based on the
heap size. A target heap size has to be set as a command-line flag,
so that users can essentially say "utilize 4GiB of RAM, and please
don't OOM".
The -storage.local.max-chunks-to-persist and
-storage.local.memory-chunks flags are deprecated by this
change. Backwards compatibility is provided by ignoring
-storage.local.max-chunks-to-persist and using
-storage.local.memory-chunks to set the new
-storage.local.target-heap-size to a reasonable (and conservative)
value (both with a warning).
This also makes the metrics instrumentation more consistent (in
naming and implementation) and cleans up a few quirks in the tests.
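For intuition, the eviction trigger boils down to comparing live heap bytes against the target; a sketch, not the actual eviction code:
```go
package example

import "runtime"

// overTarget reports whether chunk eviction should kick in, using the
// well-defined number discussed below: the sum of the sizes of live
// heap objects (runtime MemStats.HeapAlloc).
func overTarget(targetHeapBytes uint64) bool {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return ms.HeapAlloc > targetHeapBytes
}
```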
Answers to anticipated comments:
There is a chance that Go 1.9 will allow programs better control over
the Go memory management. I don't expect those changes to be in
contradiction with the approach here, but I do expect them to
complement it and allow it to be more precise and controlled. In
any case, once those Go changes are available, this code has to be
revisited.
One might be tempted to let the user specify an estimated value for
the RSS usage, and then internally set a target heap size of a certain
fraction of that. (In my experience, 2/3 is a fairly safe bet.)
However, investigations have shown that RSS size and its relation to
the heap size is really, really complicated. It depends on so many
factors that I wouldn't even start listing them in a commit
description. It depends on many circumstances, not least on the
risk trade-off of each individual user between RAM utilization and the
probability of OOMing during a RAM usage peak. To not add even more to
the confusion, we need to stick to the well-defined number we also use
in the targeting here: the sum of the sizes of heap objects.
Currently, if a series ceases to exist, its head chunk will be kept
open for an hour. That prevents it from being persisted. Which
prevents it from being evicted. Which prevents the series from being
archived.
Most of the time, once no sample has been added to a series within the
staleness limit, we can be pretty confident that this series will not
receive samples anymore. The whole chain as described above can be
started after 5m instead of 1h. In the relaxed case, this doesn't
change a lot as the head chunk timeout is only checked during series
maintenance, and usually, a series is only maintained every six
hours. However, there is the typical scenario where a large service is
deployed, the deploy turns out to be bad, and then it is deployed
again within minutes, and quite quickly the number of time series has
tripled. That's the point where the Prometheus server is stressed and
switches (rightfully) into rushed mode. In that mode, time series are
processed as quickly as possible, but all of that is in vain if all of
those recently ended time series cannot be persisted yet for another
hour. In that scenario, this change will help most, and it's exactly
the scenario where help is most desperately needed.
Rationale: The default value for GOGC is 100, i.e. a garbage collection
is initiated once as much heap space has been allocated as was in
use after the last GC was done. This ratio doesn't make a lot of sense
in Prometheus, as typically about 60% of the heap is allocated for
long-lived memory chunks (most of which are around for many hours if
not days). Thus, short-lived heap objects are accumulated for quite
some time until they finally match the large amount of memory used by
bulk memory chunks and a gigantic GC cycle is invoked. With GOGC=40, we
are essentially reinstating "normal" GC behavior by acknowledging that
about 60% of the heap is used for long-term bulk storage.
The median Prometheus production server at SoundCloud runs a GC cycle
every 90 seconds. With GOGC=40, a GC cycle is run every 35 seconds
(which is still not very often). However, the effective RAM usage is
now reduced by about 30%. If settings are updated to utilize more RAM,
the time between GC cycles goes up again (as the heap size is larger
with more long-lived memory chunks, but the frequency of creating
short-lived heap objects does not change). On a quite busy large
Prometheus server, the timing changed from one GC run every 20s to one
GC run every 12s.
In the former case (just changing GOGC, leaving everything else as it
is), the CPU usage increases by about 10% (on a mid-size reference
server from 8.1 to 8.9 cores). If settings are adjusted, the CPU
consumption increases more drastically (from 8 cores to 13 cores on a
large reference server), despite GCs happening more rarely, presumably
because a 50% larger set of memory chunks is managed now. Having more
memory chunks is good in many regards, and most servers are running
out of memory long before they run out of CPU cycles, so the tradeoff
is overwhelmingly positive in most cases.
Power users can still set the GOGC environment variable as usual, as
the implementation in this commit honors an explicitly set variable.
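A sketch of how such a default can honor an explicitly set variable:
```go
package example

import (
	"os"
	"runtime/debug"
)

func init() {
	// Only apply the new default when the user has not set GOGC
	// themselves.
	if os.Getenv("GOGC") == "" {
		debug.SetGCPercent(40)
	}
}
```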
This removes legacy support for specific remote storage systems in favor
of only offering the generic remote write protocol. An example bridge
application that translates from the generic protocol to each of those
legacy backends is still provided at:
documentation/examples/remote_storage/remote_storage_bridge
See also https://github.com/prometheus/prometheus/issues/10
The next step in the plan is to re-add support for multiple remote
storages.