* promtool: backfill: allow configuring block duration
When backfilling large amounts of data across long periods of time, it
may in certain circumstances be useful to use a longer block duration to
increase the efficiency and speed of the backfilling process. This patch
adds a --block-duration-power flag that allows a user to choose the power N,
where the block duration is 2^(N+1)h.
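For illustration, a minimal sketch of that mapping (only the --block-duration-power flag comes from this patch; the helper is made up):
```
package main

import (
	"fmt"
	"time"
)

// blockDuration is an illustrative helper: for a flag value n, the
// resulting block duration is 2^(n+1) hours, so n=0 keeps the default
// 2h blocks and each increment doubles the block size.
func blockDuration(n int) time.Duration {
	return time.Duration(1<<(n+1)) * time.Hour
}

func main() {
	for _, n := range []int{0, 1, 5} {
		fmt.Printf("--block-duration-power=%d -> %s\n", n, blockDuration(n))
	}
}
```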
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* promtool: use sub-tests in backfill testing
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* backfill: add messages to tests for clarity
When someone new breaks a test, seeing "expected: false, got: true" is
really not useful. A nice message helps here.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* backfill: test long block durations
A test that uses a long block duration to write bigger blocks is added.
The check to make sure all blocks are the default duration is removed.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
When using the backfill command to add data to an ephemeral/test
Prometheus instance, it is not important to see which data was added as
it is often generated ahead of time and mostly irrelevant to the
use-case. The current approach prints information about each block that
is written, but does so in a generally inefficient and costly manner.
This patch adds a `--quiet` flag that allows a user to opt out of this
behavior.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* Added walreplay API endpoint
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added starting page to react-ui
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Documented the new endpoint
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed typos
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
* Removed logo
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed isResponding to isUnexpected
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed width of progress bar
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed width of progress bar
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added DB stats object
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Updated starting page to work with new fields
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 2)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 3)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (and also implementing a method this time) (pt. 4)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (and also implementing a method this time) (pt. 5)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed const to let
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Passing nil (pt. 6)
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Remove SetStats method
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Added comma
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed api
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed to triple equals
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed data response types
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Don't return pointer
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Changed version
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed interface issue
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed pointer
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fixed copying lock value error
Signed-off-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
* Update Go modules for 2.26
Bump all Go modules to the latest upstream.
Signed-off-by: Ben Kochie <superq@gmail.com>
* Fix promtool for new client_golang
LabelValues now requires a list of string matchers.
Signed-off-by: Ben Kochie <superq@gmail.com>
By default, the same value that was previously hardcoded is used, but with
the new flag the number of scrapes can be increased to any value.
Signed-off-by: Pau Freixes <pfreixes@gmail.com>
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular, this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it, as it needs access to label values.
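A rough sketch of the cached-ref pattern (the interface and import path here are illustrative, not the exact storage.Appender of any particular release):
```
package sketch

import "github.com/prometheus/prometheus/pkg/labels"

// Appender sketches the cached-ref pattern: the first Add for a series
// returns a reference, and subsequent appends pass that cached reference
// to AddFast so the label lookup inside TSDB happens only once.
type Appender interface {
	Add(l labels.Labels, t int64, v float64) (uint64, error)
	AddFast(ref uint64, t int64, v float64) error
}
```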
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
Accepting alert_rule_test without an alertname is confusing, as it will
always pass with empty exp_alerts and never with non-empty exp_alerts.
Signed-off-by: Raphael Bauduin <raphael.bauduin@tessares.net>
- Remove unrelated changes
- Refactor code out of the API module - that is already getting pretty crowded.
- Don't track reference for AddFast in remote write. This has the potential to consume unlimited server-side memory if a malicious client pushes a different label set for every series. For now, it's easier and safer to always use the 'slow' path.
- Return 400 on out of order samples.
- Use remote.DecodeWriteRequest in the remote write adapters (see the sketch below).
- Put this behind the 'remote-write-server' feature flag
- Add some (very) basic docs.
- Used named return & add test for commit error propagation
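A minimal sketch of an adapter built on remote.DecodeWriteRequest (the endpoint, port, and logging-only "storage" are made up for illustration):
```
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/prometheus/storage/remote"
)

// handler decodes a remote-write request; real adapters would forward
// the samples to a backend instead of just logging them.
func handler(w http.ResponseWriter, r *http.Request) {
	req, err := remote.DecodeWriteRequest(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for _, ts := range req.Timeseries {
		log.Printf("received series with %d samples", len(ts.Samples))
	}
}

func main() {
	http.HandleFunc("/receive", handler)
	log.Fatal(http.ListenAndServe(":9201", nil))
}
```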
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
When printing the error, we still need access to the mmapped byte array
of the file. Therefore, we make sure that we run it before closing the
file.
I could have done something more complex, like a defer, or not closing
the file, knowing that we would exit the program anyway. However, I
think that in case we extend this in the future, or this is copy/pasted
elsewhere, we should continue closing the file. As it is small enough, I
went for calling the function 3 times instead of playing with a defer.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This commit adds `@ <timestamp>` modifier as per this design doc: https://docs.google.com/document/d/1uSbD3T2beM-iX4-Hp7V074bzBRiRNlqUdcWP6JTDQSs/edit.
An example query:
```
rate(process_cpu_seconds_total[1m])
and
topk(7, rate(process_cpu_seconds_total[1h] @ 1234))
```
which ranks based on the last 1h rate as of Unix timestamp 1234 but actually plots the 1m rate.
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Backfilling: optimize for non-consecutive blocks
When you have missing data for more than 2 hours, you spend a lot of
time re-reading the complete file, which is not optimal.
This introduces a fastpath for that scenario.
Additionally, we used to parse the metric even when we already knew,
based on its timestamp, that we would not use it. Now the metric is only
computed once we know its timestamp is in range.
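A sketch of the idea with made-up input and names (not the promtool code): the timestamp is read first, and the line is only parsed fully when it falls inside the block being written.
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Hypothetical input lines in "metric value timestamp" form.
var lines = []string{
	"http_requests_total 5 1000",
	"http_requests_total 7 9000",
	"http_requests_total 9 2000",
}

// backfillBlock only fully parses a line once its trailing timestamp is
// known to fall inside the [minT, maxT) window of the block being written.
func backfillBlock(minT, maxT int64) {
	for _, l := range lines {
		// Cheap check: read only the trailing timestamp field.
		i := strings.LastIndexByte(l, ' ')
		ts, err := strconv.ParseInt(l[i+1:], 10, 64)
		if err != nil || ts < minT || ts >= maxT {
			continue // outside this block: skip parsing the metric itself
		}
		fmt.Println("would parse and append:", l) // the expensive part
	}
}

func main() {
	backfillBlock(0, 7200)
}
```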
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
I initially thought I could somehow rescue the current column layout
by recycling the tabwriter, but flushing completely blanks
it. However, by setting a minimum width of 13, we get a slightly
broader DURATION column but otherwise nice formatting, unless numbers
get really big, but that's OK, I guess.
Before:
```
BLOCK ULID                  MIN TIME                       MAX TIME                       DURATION  NUM SAMPLES  NUM CHUNKS  NUM SERIES  SIZE
01ETN0KGNP5WWK9T5QMQGBG9F1  2020-11-19 07:39:17 +0000 UTC  2020-11-19 07:44:17 +0000 UTC  5m0.001s  8            2           2           624B
01ETN0KGQSFF0AB2QDZVQG3CWC  2020-11-19 10:25:57 +0000 UTC  2020-11-19 10:30:57 +0000 UTC  5m0.001s  8            2           2           622B
01ETN0KGSW8KYP3YPG4X20P60Z  2020-11-19 13:12:37 +0000 UTC  2020-11-19 13:17:37 +0000 UTC  5m0.001s  8            2           2           625B
```
After:
```
BLOCK ULID                  MIN TIME                       MAX TIME                       DURATION     NUM SAMPLES  NUM CHUNKS   NUM SERIES   SIZE
01ETN0R72SXN9A1FG732P7KFFN  2020-11-19 07:39:17 +0000 UTC  2020-11-19 07:44:17 +0000 UTC  5m0.001s     8            2            2            624B
01ETN0R74Y9AG1A1MKN4MZK7WM  2020-11-19 10:25:57 +0000 UTC  2020-11-19 10:30:57 +0000 UTC  5m0.001s     8            2            2            622B
01ETN0R76KXZ5VQECMDNES49J6  2020-11-19 13:12:37 +0000 UTC  2020-11-19 13:17:37 +0000 UTC  5m0.001s     8            2            2            625B
```
After without the `-r` flag:
```
BLOCK ULID                  MIN TIME       MAX TIME       DURATION     NUM SAMPLES  NUM CHUNKS   NUM SERIES   SIZE
01ETN0RFFJ42274NWR1GH0RTV6  1605771557000  1605771857001  5m0.001s     8            2            2            624
01ETN0RFJ1MZCHHS2SBZS8XC27  1605781557000  1605781857001  5m0.001s     8            2            2            622
01ETN0RFM98N3V4KD2DZXFGHGN  1605791557000  1605791857001  5m0.001s     8            2            2            625
```
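For reference, a minimal text/tabwriter sketch of the minimum-width setting described above (the column subset is illustrative):
```
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// A minimum column width of 13 gives the DURATION column a bit of
	// extra room while keeping the other columns nicely aligned.
	tw := tabwriter.NewWriter(os.Stdout, 13, 0, 2, ' ', 0)
	fmt.Fprintln(tw, "BLOCK ULID\tDURATION\tNUM SAMPLES\tSIZE")
	fmt.Fprintln(tw, "01ETN0R72SXN9A1FG732P7KFFN\t5m0.001s\t8\t624B")
	tw.Flush()
}
```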
Signed-off-by: beorn7 <beorn@grafana.com>
* Testify: move to require
Moving testify to require to fail tests early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* MultiError: Refactored MultiError for more concise and safe usage.
* Fewer lines
* The GoLand IDE was marking every usage of the old MultiError as a potential nil error
* It was easy to forget to use Err() when an error was returned; now this is safely assured at compile time (see the sketch below)
NOTE: Potentially I would rename the package to merrors. (: In a different PR.
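A minimal sketch of the pattern (not the actual implementation):
```
package sketch

import "fmt"

// multiError collects errors; the nil-safety comes from only ever
// handing callers the result of Err(), never the struct itself.
type multiError struct{ errs []error }

// Add appends err if it is non-nil.
func (m *multiError) Add(err error) {
	if err != nil {
		m.errs = append(m.errs, err)
	}
}

// Err returns nil when no error was recorded, so callers can safely
// write `return merr.Err()` without risking a non-nil interface value
// wrapping an empty error.
func (m *multiError) Err() error {
	if len(m.errs) == 0 {
		return nil
	}
	return fmt.Errorf("%d errors: %v", len(m.errs), m.errs)
}
```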
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed review comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix after rebase.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Refactor test assertions
This pull request gets rid of assert.True where possible to use
fine-grained assertions.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* promtool: Calculate mint and maxt per test
Previously a single test that used a later eval time would make all
other tests in the file share the [mint, maxt] and potentially evaluate
far more samples than needed.
Fixes: #8019
Signed-off-by: David Leadbeater <dgl@dgl.cx>
In #7399, an early validity check of the config was introduced to
prevent the scenario where an invalid config is only detected after a
possibly very long startup procedure. However, the respective success
metrics are not updated after the initial validation so that the
success metrics suggest an invalid config. If the startup procedure,
like replaying the WAL, really takes very long, alerts about invalid
config will trigger.
This commit sets the success metrics after initial validation. They
will be set again after the "real" config (re-)load, but that
shouldn't be a problem. The metric now truthfully represents whether
the config was successfully loaded, no matter if the result was then
thrown away (because it was just for validation) or actually used.
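A sketch of the fix (the metric name is the real one; the surrounding function is illustrative and registration is elided):
```
package sketch

import "github.com/prometheus/client_golang/prometheus"

var configSuccess = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "prometheus_config_last_reload_successful",
	Help: "Whether the last configuration reload attempt was successful.",
})

// validateConfig sketches the fix: once the early validation pass
// succeeds, record that success instead of leaving the metric at its
// zero (failure) value for the whole WAL-replay window.
func validateConfig(load func() error) error {
	if err := load(); err != nil {
		configSuccess.Set(0)
		return err
	}
	configSuccess.Set(1)
	return nil
}
```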
Signed-off-by: beorn7 <beorn@grafana.com>
If an alert test had a failure, then any other alert test interval
specified after that point would cause the test to exit early.
This made debugging some tests more difficult than needed.
Now we only exit early on evaluation failures.
Signed-off-by: David Leadbeater <dgl@dgl.cx>
This also fixes a bug in query_log_file, which is now resolved relative to the config file, like all other paths.
Signed-off-by: Andy Bursavich <abursavich@gmail.com>
* storage: Replace usage of sync/atomic with uber-go/atomic
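For context, a small example of the replacement (go.uber.org/atomic makes the atomicity part of the type):
```
package main

import (
	"fmt"

	"go.uber.org/atomic"
)

func main() {
	// With sync/atomic, a plain int64 field is only race-free if every
	// access remembers atomic.LoadInt64/StoreInt64. go.uber.org/atomic
	// bakes the atomicity into the type instead.
	samples := atomic.NewInt64(0)
	samples.Inc()
	samples.Add(41)
	fmt.Println(samples.Load()) // 42
}
```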
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* tsdb: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* cmd: Replace usage of sync/atomic with uber-go/atomic
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* scripts: Verify that we are not using restricted packages
It checks that we are not directly importing 'sync/atomic'.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Reorganise imports in blocks
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* notifier/test: Apply PR suggestions
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* storage/remote: avoid storing references on newEntry
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* Revert "scripts: Verify that we are not using restricted packages"
This reverts commit 278d32748e.
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* web: Group imports accordingly
Signed-off-by: Javier Palomo <javier.palomo.almena@gmail.com>
* promql: Removed global and added the ability to have a better interval for subqueries if not specified
## Changes
* Refactored tests for better hints testing
* Added various TODOs in places to enhance
* Moved the DefaultEvalInterval global to opts with a func(rangeMillis int64) int64 function instead
Motivation: At Thanos we would love to have better control over the subqueries' step/interval.
This is important to choose a proper resolution. I think having a proper step also does not hurt
Prometheus and remote-read users. Especially on a stateless querier we do not know the evaluation
interval, and in fact assuming a global can be wrong even for Prometheus.
I think ideally we could try to have at least 3 samples within the range, the same
way the Prometheus UI and Grafana assume.
Anyway, this interface allows deciding on a per-PromQL-user basis.
Open question: Is taking the parent interval a smart move?
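To illustrate the func(rangeMillis int64) int64 shape, a sketch of the "at least 3 samples" idea (not an actual shipped default):
```
package sketch

// subqueryInterval sketches the option replacing the global: pick a
// subquery step that yields at least 3 samples within the range.
func subqueryInterval(rangeMillis int64) int64 {
	step := rangeMillis / 3
	if step < 1 {
		step = 1
	}
	return step
}
```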
Motivation for removing global: I spent 1h fighting with:
=== RUN TestEvaluations
TestEvaluations: promql_test.go:31: unexpected error: error evaluating query "absent_over_time(rate(nonexistant[5m])[5m:])" (line 687): unexpected error: runtime error: integer divide by zero
--- FAIL: TestEvaluations (0.32s)
FAIL
At the end I found that this fails on most versions, including this master, if you run this test alone. If run together with many
other tests it passes. This is due to SetDefaultEvaluationInterval(1 * time.Minute)
in a test that is run before TestEvaluations. Thanks to globals (:
Let's fix it by dropping this global.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added issue links for TODOs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Removed irrelevant changes.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
To load ALERT_FOR_STATE, only the `storage.Queryable` interface is required,
so this patch uses that narrower interface to perform the loading.
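A sketch of the narrowing (the function name and body are illustrative; the Queryable signature is as of this era):
```
package sketch

import (
	"context"

	"github.com/prometheus/prometheus/storage"
)

// loadAlertForState only needs to open a Querier, so it can take the
// narrow storage.Queryable instead of the full storage.Storage.
func loadAlertForState(ctx context.Context, q storage.Queryable, mint, maxt int64) error {
	querier, err := q.Querier(ctx, mint, maxt)
	if err != nil {
		return err
	}
	defer querier.Close()
	// ... read the ALERTS_FOR_STATE series from querier ...
	return nil
}
```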
Signed-off-by: Frederic Branczyk <fbranczyk@gmail.com>
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger, removed tiny interface by reusing VerticalSeriesMergeFunc for overlapping algorithm for
both chunks and series, for both querying and compacting (!) + made sure duplicates are merged.
* Added ErrChunkSeriesSet
* Added Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecated the non-chunked-SeriesSet-based StreamChunkedReadResponses and added a chunk-based one.
* Improved tests.
* Split remote client into Write (old storage) and read.
* The queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config-based options to sampleAndChunkQueryableClient to avoid boilerplate.
In next commit: Changes for TSDB.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory-mapped) to free up memory.
Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map of series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look for its m-mapped chunks in the map created before and add them to that series.
If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL. This means that a corruption in m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk written for m-mapping now also includes the series reference, because there is no index for mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed from the index, which includes the offsets for the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in the chunk header with the offset of that chunk in that file.
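A sketch of the replay flow with hypothetical types (not the actual TSDB code):
```
package sketch

// memChunk is a hypothetical stand-in for one m-mapped chunk reference:
// the file it lives in, its offset there, and the time range it covers.
type memChunk struct {
	fileNo           int
	offset           int64
	minTime, maxTime int64
}

// replay sketches the startup flow described above: first iterate the
// m-mapped chunk files to build a map from series reference to chunks,
// then replay the WAL, attaching those chunks as each series is created.
func replay(iterMmapped func(visit func(seriesRef uint64, c memChunk)), replayWAL func(chunks map[uint64][]memChunk)) {
	chunks := map[uint64][]memChunk{}
	iterMmapped(func(ref uint64, c memChunk) {
		chunks[ref] = append(chunks[ref], c)
	})
	replayWAL(chunks)
}
```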
**Prombench results**
_WAL Replay_
1h WAL replay time:
30% less WAL replay time - 4m31s vs 3m36s
2h WAL replay time:
20% less WAL replay time - 8m16s vs 7m
_Memory During WAL Replay_
High Churn:
10-15% less RAM - 32gb vs 28gb
20% less RAM after compaction 34gb vs 27gb
No Churn:
20-30% less RAM - 23gb vs 18gb
40% less RAM after compaction 32.5gb vs 20gb
Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
time.Unix attaches the local timezone, which can then
leak out (e.g. in the alert JSON). While this is harmless,
we should be consistent.
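A small example of the behaviour (standard time package semantics):
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Unix attaches the local time zone; calling UTC() on the
	// result keeps timestamps consistent wherever they are rendered.
	local := time.Unix(1605771557, 0)
	utc := time.Unix(1605771557, 0).UTC()
	fmt.Println(local.Location(), utc.Location()) // e.g. "Local UTC"
}
```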
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* [comments] change word ‘wheter’ to ‘whether’
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>
* [comments] change word ‘wheter’ to ‘whether’
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow up PRs.
* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports were adjusted accordingly.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help in terms of performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.
No logic was changed, only a different source of abstractions, so no need for benchmarks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Make lookbackDelta an option of QueryEngine
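A minimal sketch of the new option (other EngineOpts fields elided; values are illustrative):
```
package main

import (
	"time"

	"github.com/prometheus/prometheus/promql"
)

func main() {
	// The lookback delta is now configured per engine rather than via a
	// global.
	engine := promql.NewEngine(promql.EngineOpts{
		MaxSamples:    50000000,
		Timeout:       2 * time.Minute,
		LookbackDelta: 5 * time.Minute,
	})
	_ = engine
}
```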
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Julius' suggestion
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* remove trivial getter
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Assume lookback delta is always > 0
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* add debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* don't expose lookback delta
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Specify that lookback delta is also used in federation
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Fix federation test
While we have added some logic to the promql engine to keep it backwards
compatible and have a 5 minute lookback by default, the web/ package is
likely to really be internal to Prometheus and we should not add the
same kind of heuristics here.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* lookback delta: Fix debug log
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
A data race can happen if we run t.Log after the test t is done -- which
in this case is highly possible because of the use of subtests and the
fact that we call t.Log in a goroutine.
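A sketch of the hazard and the fix (an illustrative test, not the one in this repository):
```
package sketch

import "testing"

// TestGoroutineLog shows the hazard: without the wait at the end, the
// goroutine may call t.Log after the subtest has returned, which is a
// data race.
func TestGoroutineLog(t *testing.T) {
	t.Run("sub", func(t *testing.T) {
		done := make(chan struct{})
		go func() {
			defer close(done)
			t.Log("logging from a goroutine")
		}()
		<-done // wait so t.Log cannot run after this subtest finishes
	})
}
```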
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Since we use ActiveQueryTracker to check for concurrency in
d992c36b3a, it does not make sense to keep
the MaxConcurrent value as an option of the PromQL engine.
This pull request removes it from the PromQL engine options, sets the
max concurrent metric to -1 if there is no active query tracker, and uses
the value of the active query tracker otherwise.
It removes dead code and also will inform people who import the promql
package that we made that change, as it breaks the EngineOpts struct.
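A sketch of the resulting wiring (values and paths are illustrative; import paths as of this era):
```
package main

import (
	"os"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/promql"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)
	// Concurrency is now governed solely by the tracker: the 20 here
	// plays the role the removed EngineOpts.MaxConcurrent used to play.
	engine := promql.NewEngine(promql.EngineOpts{
		MaxSamples:         50000000,
		Timeout:            2 * time.Minute,
		ActiveQueryTracker: promql.NewActiveQueryTracker("data/", 20, logger),
	})
	_ = engine
}
```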
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>