* Fix race during head compaction
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Comment out the test
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Skip test instead of commenting it out
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Optimized label regex matcher with literal prefix and/or suffix
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added more test cases with newlines
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Restored deleted test
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger and removed a tiny interface by reusing VerticalSeriesMergeFunc as the overlapping algorithm for
both chunks and series, for both querying and compacting (!) + made sure duplicates are merged.
* Added ErrChunkSeriesSet
* Added Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecated the non-ChunkSeriesSet-based StreamChunkedReadResponses; added a ChunkSeriesSet-based one.
* Improved tests.
* Split the remote client into write (old storage) and read.
* Queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config-based options to sampleAndChunkQueryableClient to avoid boilerplate (see the sketch below).
In next commit: Changes for TSDB.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
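For orientation, a minimal sketch of the combined queryable shape described above, written against the storage.Queryable and storage.ChunkQueryable interfaces; this is an illustrative approximation, not the exact upstream definition:

```go
package sketch

import "github.com/prometheus/prometheus/storage"

// sampleAndChunkQueryable approximates the split described above: a
// single remote-read client shape that can serve both sample-based
// queries (old storage) and chunk-based ones. Illustrative only.
type sampleAndChunkQueryable interface {
	storage.Queryable
	storage.ChunkQueryable
}
```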
* Fixed bstream test license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Simplified bstreamReader.loadNextBuffer()
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Fixed date in license
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Add errors and Warnings to SeriesSet
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Change Querier interface and refactor accordingly
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor promql/engine to propagate warnings at eval stage
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Make sure all the series from all Selects are pre-advanced
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Separate merge series sets
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Clean
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor merge querier failure handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.
* Secondary logic is hidden, replacing the odd failed-series-set logic we had before.
* Fanout is well commented
* Fanout closing records all errors
* MergeQuerier improved API (clearer)
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (next = false).
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix formatting
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix CI issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Added final tests for error handling.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Calling Select after the first Next is done will panic.
NOTE: in lazySeriesSet we could in theory just panic; however, I think we can
simply return an error, as it will panic in expand anyway.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
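A minimal, runnable sketch of the sync.Once pattern described above, with hypothetical names (this is not the actual secondaryQuerier code):

```go
package main

import (
	"fmt"
	"sync"
)

// secondary wraps a fallible source. The first Next decides, exactly
// once, whether the whole result set is kept or dropped: all-or-nothing
// partial response. Hypothetical names for illustration.
type secondary struct {
	once sync.Once
	ok   bool
	err  error
	data []string
	i    int
}

func (s *secondary) Next() bool {
	s.once.Do(func() {
		// On any error from the underlying source, return no results at
		// all; the error is surfaced as a warning elsewhere.
		s.ok = s.err == nil
	})
	if !s.ok || s.i >= len(s.data) {
		return false
	}
	s.i++
	return true
}

func (s *secondary) At() string { return s.data[s.i-1] }

func main() {
	s := &secondary{data: []string{"up", "go_goroutines"}}
	for s.Next() {
		fmt.Println(s.At())
	}
}
```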
* Utilize errWithWarnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix recently introduced expansion issue
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add tests for secondary querier error handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Implement lazy merge
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add name to test cases
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Reorganize
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Remove redundant warnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix rebase mistake
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Moves the 64-bit atomically accessed field to the top of the struct.
Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
* Fixing up go fmt formatting issues.
Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
Co-authored-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
* Track open appenders in a doubly-linked list to make lowWatermark O(1) (see the sketch below).
* Use RW locks.
* Added BenchmarkIsolationWithState.
Signed-off-by: Peter Štibraný <peter.stibrany@grafana.com>
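A minimal sketch of the idea, using the standard library's container/list (illustrative; the real code lives in tsdb/isolation.go and differs in detail):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// isolation tracks open append IDs in arrival order. Because IDs are
// handed out monotonically, the oldest open appender is always at the
// front of the list, so lowWatermark is O(1).
type isolation struct {
	mtx    sync.RWMutex
	open   *list.List // elements hold uint64 append IDs, oldest first
	nextID uint64
}

func newIsolation() *isolation {
	return &isolation{open: list.New(), nextID: 1}
}

// newAppendID registers a new open appender and returns its ID plus the
// list element used to deregister it on close.
func (i *isolation) newAppendID() (uint64, *list.Element) {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	id := i.nextID
	i.nextID++
	return id, i.open.PushBack(id)
}

// closeAppend removes a finished appender in O(1).
func (i *isolation) closeAppend(e *list.Element) {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	i.open.Remove(e)
}

// lowWatermark returns the oldest still-open append ID in O(1).
func (i *isolation) lowWatermark() uint64 {
	i.mtx.RLock()
	defer i.mtx.RUnlock()
	if f := i.open.Front(); f != nil {
		return f.Value.(uint64)
	}
	return 0
}

func main() {
	iso := newIsolation()
	a, ea := iso.newAppendID()
	b, _ := iso.newAppendID()
	fmt.Println(a, b, iso.lowWatermark()) // 1 2 1
	iso.closeAppend(ea)
	fmt.Println(iso.lowWatermark()) // 2
}
```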
* add time range params to labelNames api
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* evaluate min/max time range when reading labels from the head
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* add time range params to labelValues api
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test, add docs
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* add a test for head min max range
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test to match comment
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* address CR comments
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* combine vars only used once
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* fix test
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* restart ci
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
* use range expectedLabelNames instead of range actualLabelNames in test
Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
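For reference, the new optional time range parameters on the label endpoints can be exercised like this (illustrative snippet; the server address is a placeholder):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"time"
)

func main() {
	// Restrict label names to those observed in the last hour.
	// http://localhost:9090 is a placeholder Prometheus address.
	end := time.Now()
	start := end.Add(-time.Hour)

	q := url.Values{}
	q.Set("start", strconv.FormatInt(start.Unix(), 10))
	q.Set("end", strconv.FormatInt(end.Unix(), 10))

	resp, err := http.Get("http://localhost:9090/api/v1/labels?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```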
* Callbacks for the lifecycle of series in TSDB (see the sketch below)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
* Add more comments
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
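A rough sketch of the callback surface this adds, with method names paraphrased from the PR (they may not match the final upstream interface exactly):

```go
package sketch

import "github.com/prometheus/prometheus/pkg/labels"

// seriesLifecycleCallback paraphrases the hooks this commit introduces
// around series creation and deletion in the head. Approximation only.
type seriesLifecycleCallback interface {
	// PreCreation is called before a series is created; returning an
	// error rejects the creation.
	PreCreation(labels.Labels) error
	// PostCreation is called after a series has been created.
	PostCreation(labels.Labels)
	// PostDeletion is called after series have been removed from the head.
	PostDeletion(...labels.Labels)
}
```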
When appending to the head and a chunk is full, it is flushed to disk and m-mapped (memory mapped) to free up memory.
Prometheus startup now happens in these stages:
- Iterate the m-mapped chunks from disk and keep a map from series reference to its slice of m-mapped chunks.
- Iterate the WAL as usual. Whenever we create a new series, look up its m-mapped chunks in the map created before and add them to that series.
If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in the m-mapped files results in NO data loss.
[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that an m-mapped chunk now also includes the series reference, because there is no index mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed via the index, which includes the offsets of the chunks in the chunk files - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in the chunk header and the offset of that chunk in that file.
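A minimal sketch of the first startup stage (building the series-reference-to-m-mapped-chunks map); the types and the iteration callback are hypothetical stand-ins for the real chunk-disk-mapper code:

```go
package sketch

// mmappedChunk records where a flushed head chunk lives on disk plus
// its time range. Hypothetical shape for illustration only.
type mmappedChunk struct {
	ref              uint64 // reference encoding file number and offset
	minTime, maxTime int64
	numSamples       uint16
}

// buildMmappedChunksMap iterates every on-disk head chunk once and
// buckets it by series reference, so WAL replay can attach the chunks
// when it recreates each series. iterate stands in for the disk
// mapper's iteration callback.
func buildMmappedChunksMap(
	iterate func(f func(seriesRef, chunkRef uint64, mint, maxt int64, numSamples uint16) error) error,
) (map[uint64][]*mmappedChunk, error) {
	bySeries := map[uint64][]*mmappedChunk{}
	err := iterate(func(seriesRef, chunkRef uint64, mint, maxt int64, numSamples uint16) error {
		bySeries[seriesRef] = append(bySeries[seriesRef], &mmappedChunk{
			ref: chunkRef, minTime: mint, maxTime: maxt, numSamples: numSamples,
		})
		return nil
	})
	return bySeries, err
}
```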
**Prombench results**
_WAL Replay_
1h WAL replay time
30% less WAL replay time - 4m31 vs 3m36
2h WAL replay time
20% less WAL replay time - 8m16 vs 7m
_Memory During WAL Replay_
High Churn:
10-15% less RAM - 32GB vs 28GB
20% less RAM after compaction - 34GB vs 27GB
No Churn:
20-30% less RAM - 23GB vs 18GB
40% less RAM after compaction - 32.5GB vs 20GB
Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
Prior to this commit we could have situations where we create an
appenderId but never create an appender to go with it, thereby
blocking the low watermark.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Previously we were keeping up to around 6 hours of WAL around by
removing only 1/3 of it at each truncation. This was excessive, so
switch to removing 2/3, which will keep up to around 3 hours of WAL around.
This will roughly halve the size of the WAL and halve startup time for
those who are I/O bound. This may increase the checkpoint size for
those with certain churn patterns, but by much less than we're saving
from the segments.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
time.Unix attaches the local timezone, which can then
leak out (e.g. in the alert JSON). While this is harmless,
we should be consistent.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
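The standard fix pattern (illustrative snippet):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Unix returns a Time carrying the local timezone...
	local := time.Unix(1589000000, 0)
	// ...so normalize to UTC before the value can leak out, e.g. into
	// serialized alert JSON.
	utc := time.Unix(1589000000, 0).UTC()
	fmt.Println(local.Location(), utc.Location()) // e.g. "Local UTC"
}
```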
* storage: Added Chunks{Queryable/Querier/SeriesSet/Series/Iterable}. Added generic Merge{SeriesSet/Querier} implementation.
## Rationales:
In many places (e.g. chunked remote read, Thanos Receive fetching chunks from TSDB), we operate on encoded chunks, not samples.
This means that we unnecessarily decode/encode, wasting CPU, time and memory.
This PR adds chunk iterator interfaces and makes the merge code reusable between both series sets.
I will make use of it in a following PR inside TSDB itself. For now, fanout and the mergers implement it.
All merges now also allow passing series mergers. This opens the door for custom deduplication other than the TSDB vertical one (e.g. the offline one we have in Thanos).
## Changes
* Added Chunk versions of all iterating methods. It all starts in Querier/ChunkQuerier. The plan is that
Storage will implement both chunked and sample-based querying.
* Added Seek to chunks.Iterator interface for iterating over chunks.
* NewMergeChunkQuerier was added; both this and NewMergeQuerier now use genericMergeQuerier to share the code. Generic code was added.
* Improved tests.
* Added some TODO for further simplifications in next PRs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
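A paraphrase of the chunk-level iteration path this commit describes (ChunkQuerier, then ChunkSeriesSet, then a per-series chunk iterator); names and signatures are approximations, not the upstream definitions:

```go
package sketch

// chunkMeta stands in for tsdb/chunks.Meta: one encoded chunk plus its
// time range, passed around without decoding samples.
type chunkMeta struct {
	MinTime, MaxTime int64
	Bytes            []byte
}

// chunkIterator iterates over a series' encoded chunks; Seek lets
// callers skip ahead by time, as added in this commit.
type chunkIterator interface {
	Next() bool
	Seek(t int64) bool
	At() chunkMeta
	Err() error
}

// chunkSeries is the chunk-level counterpart of a Series.
type chunkSeries interface {
	Iterator() chunkIterator
}

// chunkSeriesSet is the chunk-level counterpart of SeriesSet.
type chunkSeriesSet interface {
	Next() bool
	At() chunkSeries
	Err() error
}
```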
* Addressed Brian's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Moved s/Labeled/SeriesLabels as per Krasi's suggestion.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Krasi's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Second iteration of Krasi comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Another round of comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>