Commit Graph

1240 Commits (9aa6e041d3b6cc9b7041c93d1bf34b3b11d2294e)

Author SHA1 Message Date
Ganesh Vernekar 7a763ff61e
Avoid empty mmap files by using .tmp files to write headers
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-14 14:59:28 +05:30
Bartlomiej Plotka 823b218e1b
Fixed race between compact (gc, populate) and head append causing unknown symbol error. (#7560)
* Fixed race between compact (gc, populate) and head append causing unknown symbol error.

Fixes https://github.com/prometheus/prometheus/issues/7373

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-07-14 09:36:22 +01:00
Bartlomiej Plotka 492061b24c
Revert "Fix unknown symbol error during head compaction (#7526)" (#7556)
This reverts commit 30505a202a.
2020-07-11 22:37:16 +05:30
Ganesh Vernekar 30505a202a
Fix unknown symbol error during head compaction (#7526)
* Fix race during head compaction

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Comment out the test

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Skip test instead of commenting it out

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-07 17:29:09 +05:30
Marco Pracucci 2f6bf7de4c
Optimise labels regex matchers containing a literal within the pattern (#7503)
* Added a fast path to label regex matchers for literals within the regex

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-07-07 09:38:04 +01:00
Harkishen Singh f32307b656
Increments WAL corruption metric on WAL corruption during checkpointing (#7491)
* Increments wal corruption metric on error during checkpointing

Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>

* check for wal corruption error

Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
2020-07-05 11:25:42 +05:30
Ganesh Vernekar e65e2e0dac
Fix panic from db metrics (#7501)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-07-05 10:11:42 +05:30
Bartlomiej Plotka 1861bf38f5
tombstones: Fixed Add method in order to support trimming time series; Simplified the algo. (#7471)
* tombstones: Fixed Add method in order to support edge trimming; Simplified the algo.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Removed duplicated test case.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fixed comment, removed "edge" mention.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Removed trimming word.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-06-29 17:00:22 +01:00
Marco Pracucci cef4dd6fff
Optimized label regex matcher with literal prefix and/or suffix (#7453)
* Optimized label regex matcher with literal prefix and/or suffix

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Added license

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Added more tests cases with newlines

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Restored deleted test

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-06-26 15:19:09 +05:30
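To illustrate the optimisation in the entry above: when a label regex matcher has a literal prefix and/or suffix, cheap string checks can reject most candidate values before the regexp engine runs. A minimal, self-contained Go sketch with the literals passed in explicitly (the real tsdb code derives them from the parsed pattern; all names here are illustrative, not the actual implementation):
```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// fastRegexMatcher: reject values with cheap prefix/suffix checks before
// falling back to the anchored regexp.
type fastRegexMatcher struct {
	re             *regexp.Regexp
	prefix, suffix string
}

func newFastRegexMatcher(pattern, prefix, suffix string) (*fastRegexMatcher, error) {
	re, err := regexp.Compile("^(?:" + pattern + ")$") // label matchers are fully anchored
	if err != nil {
		return nil, err
	}
	return &fastRegexMatcher{re: re, prefix: prefix, suffix: suffix}, nil
}

func (m *fastRegexMatcher) MatchString(s string) bool {
	if m.prefix != "" && !strings.HasPrefix(s, m.prefix) {
		return false // rejected without running the regexp engine
	}
	if m.suffix != "" && !strings.HasSuffix(s, m.suffix) {
		return false
	}
	return m.re.MatchString(s)
}

func main() {
	m, err := newFastRegexMatcher("api_.+_errors", "api_", "_errors")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.MatchString("api_http_errors"))  // true
	fmt.Println(m.MatchString("node_cpu_seconds")) // false, regexp never evaluated
}
```
The point of the fast path is that, on high-cardinality label values, most non-matching values never reach the regexp engine.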
Ganesh Vernekar 082c17b691
Introduce SortedLabelValues/LabelValues to speed up queries for high cardinality (#7448)
* Introduce LabelValuesUnsorted to speed up queries for high cardinality

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add sort check

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-06-25 14:10:29 +01:00
Bartlomiej Plotka b788986717
storage: Adjusted fully storage layer support for chunk iterators: Remote read client, readyStorage, fanout. (#7059)
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger and removed the tiny interface by reusing VerticalSeriesMergeFunc as the overlapping algorithm for
both chunks and series, for both querying and compacting (!), and made sure duplicates are merged.
* Added ErrChunkSeriesSet.
* Added Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecated the non chunk-seriesset based StreamChunkedReadResponses and added a chunk-based one.
* Improved tests.
* Split remote client into Write (old storage) and Read.
* Queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config-based options to sampleAndChunkQueryableClient to avoid boilerplate.

In next commit: Changes for TSDB.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-06-24 14:41:52 +01:00
Joe Lei 74a73ba1cf
fix analyze limit not working as expected (#7430)
Signed-off-by: joelei <thezero12@hotmail.com>
2020-06-22 10:38:10 +01:00
Ganesh Vernekar b7c46a8c79
Merge remote-tracking branch 'upstream/master' into merge-release-2.19
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-06-19 12:40:29 +05:30
Ganesh Vernekar 48fae12b89
Fix unsequential m-map files (#7414)
* Fix unsequential m-map files

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-06-18 19:24:58 +05:30
Marco Pracucci 3b529ddbce
Cleanup bstream_test.go based on post-merge feedback received on #7390 (#7413)
* Fixed bstream test license

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Simplified bstreamReader.loadNextBuffer()

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed date in license

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-06-18 14:49:39 +05:30
Simon Pasquier d634785944
tsdb/docs: fix head chunks directory + link from README (#7309)
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2020-06-17 20:38:21 +05:30
Simon Pasquier 2f12049371
tsdb: improve logs when encountering corruption (#7308)
* tsdb: improve logs when encountering corruption

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* Wrap corrupted block errors

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* Add file path to head chunks

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2020-06-17 16:40:00 +02:00
Marco Pracucci f42ed03dc5
Optimized bstream reader used by XORChunk iterator (#7390)
* Optimized bstream reader

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed linter

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Added license to new file

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed type cast

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Changed comments

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Improved comments and rolledback no-op changes

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed race condition

Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-06-15 16:44:40 +01:00
Julien Pivotto f893786153
Fix TSDB test failure (#7394)
PR #7338 was not rebased on top of master and interface had changed.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-06-14 22:07:23 +05:30
Krasimir Georgiev ab6203b7c7
add head compaction test (#7338) 2020-06-12 13:29:26 +03:00
Ganesh Vernekar 9593b64ce6
Merge branch 'master' into to-merge-release-2.19 2020-06-10 20:01:25 +05:30
Kemal Akkoyun 66dfb951c4
*: Consistent Error/Warning handling for SeriesSet iterator: Allowing Async Select (#7251)
* Add errors and Warnings to SeriesSet

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Change Querier interface and refactor accordingly

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactor promql/engine to propagate warnings at eval stage

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Make sure all the series from all Selects are pre-advanced

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Separate merge series sets

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Clean

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactor merge querier failure handling

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.

* Secondary logic is hidden, instead of weird failed series set logic we had.
* Fanout is well commented
* Fanout closing record all errors
* MergeQuerier improved API (clearer)
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (next = false).

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Fix formatting

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix CI issues

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Added final tests for error handling.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Select after first Next is done will panic.

NOTE: in lazySeriesSet in theory we could just panic, I think however we can
totally just return error, it will panic in expand anyway.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Utilize errWithWarnings

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix recently introduced expansion issue

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Add tests for secondary querier error handling

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Implement lazy merge

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Add name to test cases

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Reorganize

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review comments

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Address review comments

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Remove redundant warnings

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

* Fix rebase mistake

Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-06-09 17:57:31 +01:00
Ganesh Vernekar 1627d234da
Moves the atomically accessed member to the top of the struct (#7365)
* Moves the 64bit atomically accessed field to the top of the struct.

Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>

* Moves the 64bit atomically accessed field to the top of the struct.

Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>

* Fixing up go fmt formatting issues.

Signed-off-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>

Co-authored-by: Bryan Varner <1652015+bvarner@users.noreply.github.com>
2020-06-09 10:55:43 +05:30
Peter Štibraný ff80690a6e
Optimise lowWatermark in Isolation (#7332)
* Track open appenders in doubly-linked list to make lowWatermark O(1).
* Use RW locks.
* Added BenchmarkIsolationWithState.

Signed-off-by: Peter Štibraný <peter.stibrany@grafana.com>
2020-06-03 20:09:05 +02:00
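A minimal sketch of the data-structure change described in the entry above: appendIDs are handed out monotonically, so keeping open appenders in a doubly-linked list means the oldest open append is always at the front and lowWatermark becomes an O(1) read instead of a scan. Names and semantics are simplified, not the actual tsdb isolation code:
```go
package main

import (
	"fmt"
	"sync"
)

// openAppend is a node in a doubly-linked list of in-flight appends, ordered
// by appendID (IDs are monotonic, so new nodes always go to the back).
type openAppend struct {
	appendID   uint64
	prev, next *openAppend
}

// isolation tracks open appends; the oldest one is always at the front.
type isolation struct {
	mtx          sync.Mutex
	lastAppendID uint64
	head, tail   *openAppend
}

func (i *isolation) newAppendID() *openAppend {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	i.lastAppendID++
	a := &openAppend{appendID: i.lastAppendID}
	if i.tail == nil {
		i.head, i.tail = a, a
	} else {
		a.prev, i.tail.next = i.tail, a
		i.tail = a
	}
	return a
}

func (i *isolation) closeAppend(a *openAppend) {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	if a.prev != nil {
		a.prev.next = a.next
	} else {
		i.head = a.next
	}
	if a.next != nil {
		a.next.prev = a.prev
	} else {
		i.tail = a.prev
	}
}

// lowWatermark is the oldest still-open appendID, or the last one handed out
// if nothing is in flight. O(1): just look at the front of the list.
func (i *isolation) lowWatermark() uint64 {
	i.mtx.Lock()
	defer i.mtx.Unlock()
	if i.head != nil {
		return i.head.appendID
	}
	return i.lastAppendID
}

func main() {
	iso := &isolation{}
	a1 := iso.newAppendID()
	a2 := iso.newAppendID()
	fmt.Println(iso.lowWatermark()) // 1: a1 is still open
	iso.closeAppend(a1)
	fmt.Println(iso.lowWatermark()) // 2: a2 is now the oldest open append
	iso.closeAppend(a2)
	fmt.Println(iso.lowWatermark()) // 2: nothing open, last assigned ID
}
```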
Jess G fdc49fae5b
Added time range parameters to labelNames API (#7288)
* add time range params to labelNames api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* evaluate min/max time range when reading labels from the head

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelValues api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test, add docs

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add a test for head min max range

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test to match comment

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* address CR comments

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* combine vars only used once

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelNames api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* evaluate min/max time range when reading labels from the head

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add time range params to labelValues api

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test, add docs

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* add a test for head min max range

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test to match comment

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* address CR comments

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* combine vars only used once

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* fix test

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* restart ci

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>

* use range expectedLabelNames instead of range actualLabelNames in test

Signed-off-by: jessicagreben <Jessica.greben1+github@gmail.com>
2020-05-30 13:50:09 +01:00
Ganesh Vernekar a1355eb7c7
Remove time based m-map file creation (#7314)
* Remove time based m-map file creation

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-29 20:08:41 +05:30
Ganesh Vernekar 83619aa9ac
Preallocate m-map file only for Windows (#7306)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-28 20:24:19 +05:30
Guangwen Feng 2393d6137b
Add unit test case for func Type in record.go (#7082)
Signed-off-by: Guangwen Feng <fenggw-fnst@cn.fujitsu.com>
2020-05-27 12:08:33 +05:30
Krasimir Georgiev f4dd45609a
Use min and maxt of the range head when creating a block (#7282)
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
2020-05-22 17:00:06 +05:30
Krasimir Georgiev 09df8d94e0
More explicit chunks and head error handling. (#7277) 2020-05-22 12:03:23 +03:00
Ganesh Vernekar 1c99adb9fd
Callbacks for lifecycle of series in TSDB (#7159)
* Callbacks for lifecycle of series in TSDB

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add more comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-20 18:52:08 +05:30
Ganesh Vernekar d4b9fe801f
M-map full chunks of Head from disk (#6679)
When appending to the head and a chunk is full, it is flushed to the disk and m-mapped (memory mapped) to free up memory.

Prom startup now happens in these stages:
 - Iterate the m-mapped chunks from disk and keep a map of series reference to its slice of m-mapped chunks.
 - Iterate the WAL as usual. Whenever we create a new series, look for its m-mapped chunks in the map created before and add them to that series.

If a head chunk is corrupted, the corrupted one and all chunks after it are deleted, and the data after the corruption is recovered from the existing WAL, which means that a corruption in m-mapped files results in NO data loss.

[M-mapped chunks format](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/head_chunks.md) - the main difference is that a chunk for m-mapping now also includes the series reference, because there is no index for mapping series to chunks.
[The block chunks](https://github.com/prometheus/prometheus/blob/master/tsdb/docs/format/chunks.md) are accessed from the index, which includes the offsets of the chunks in the chunks file - for example, the chunks of a series ID have offsets 200, 500, etc. in the chunk files.
In the case of m-mapped chunks, the offsets are stored in memory and accessed from there. During WAL replay, these offsets are restored by iterating all m-mapped chunks as stated above, matching the series ID present in the chunk header with the offset of that chunk in that file.

**Prombench results**

_WAL Replay_

1h WAL replay time
30% less WAL replay time - 4m31 vs 3m36
2h WAL replay time
20% less WAL replay time - 8m16 vs 7m

_Memory During WAL Replay_

High Churn:
10-15% less RAM -  32gb vs 28gb
20% less RAM after compaction 34gb vs 27gb
No Churn:
20-30% less RAM -  23gb vs 18gb
40% less RAM after compaction 32.5gb vs 20gb

Screenshots are in [this comment](https://github.com/prometheus/prometheus/pull/6679#issuecomment-621678932)


Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-05-06 21:00:00 +05:30
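A small, self-contained Go sketch of the first startup stage described in the entry above: iterate the chunk headers in the on-disk head-chunks files and build the map from series reference to m-mapped chunks that WAL replay later consults. Types and the chunk-header decoding are illustrative stand-ins, not the actual tsdb implementation:
```go
package main

import "fmt"

// mmappedChunk records where a flushed head chunk lives on disk; the chunk
// header in the head-chunks file carries the series reference, which is what
// lets startup rebuild this mapping without any index.
type mmappedChunk struct {
	fileNo     int
	offset     int64
	minT, maxT int64
}

// chunkHeader is what iterating the on-disk head-chunks files would yield per
// chunk (sketched; the real reader decodes the binary format).
type chunkHeader struct {
	seriesRef  uint64
	fileNo     int
	offset     int64
	minT, maxT int64
}

// buildSeriesChunkMap is stage 1 of startup: map every series reference to its
// m-mapped chunks so WAL replay (stage 2) can attach them to series records as
// they are recreated, without re-reading any samples.
func buildSeriesChunkMap(headers []chunkHeader) map[uint64][]mmappedChunk {
	m := make(map[uint64][]mmappedChunk, len(headers))
	for _, h := range headers {
		m[h.seriesRef] = append(m[h.seriesRef], mmappedChunk{
			fileNo: h.fileNo, offset: h.offset, minT: h.minT, maxT: h.maxT,
		})
	}
	return m
}

func main() {
	headers := []chunkHeader{
		{seriesRef: 7, fileNo: 1, offset: 8, minT: 0, maxT: 1000},
		{seriesRef: 7, fileNo: 1, offset: 512, minT: 1001, maxT: 2000},
		{seriesRef: 9, fileNo: 2, offset: 8, minT: 0, maxT: 1500},
	}
	bySeries := buildSeriesChunkMap(headers)
	// During WAL replay, a freshly created series with ref 7 would pick up its
	// two m-mapped chunks from this map instead of rebuilding them from samples.
	fmt.Println(len(bySeries[7]), len(bySeries[9])) // 2 1
}
```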
Bartlomiej Plotka 532f7bbac9
Merge pull request #7204 from prometheus/release-2.18
[Merge Without Squash] Merge release-2.18 back to master.
2020-05-05 18:58:45 +01:00
Ben Ye 1e4e37144d
Fixed wrongly handled not ready TSDB on web and API. (#7182)
* fix federate endpoint panic

Signed-off-by: yeya24 <yb532204897@gmail.com>

* Fixed all cases of not ready TSDB being wrongly handled.

* Fixed issue for federation.
* Ensured this will never happen again thanks to interfaces
* Fixes same issue for stats.
* Added tests for readiness.
* Fixed bug in stats. It was:
   status.MaxTime = db.Head().MaxTime()
   status.MinTime = db.Head().MaxTime()


Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-04-29 17:16:14 +01:00
ga 05038b48bd
Goroutine: Fix ambiguous variable (#7175)
Signed-off-by: Gaurav Singh <gaurav1086@gmail.com>
2020-04-28 11:02:26 +01:00
Goutham Veeramachaneni 84b4d079c8
Make sure deleted intervals are excluded from Seek (#6980)
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2020-04-23 10:00:30 +01:00
Julien Pivotto fc3fb3265a
Merge pull request #7145 from prometheus/release-2.17
Backport release 2.17 into master
2020-04-20 14:08:12 +02:00
Julien Pivotto ed1852ab95
TSDB: Isolation: avoid creating appenderId's without appender (#7135)
Prior to this commit we could have situations where we are creating an
appenderId but never creating an appender to go with it, therefore
blocking the low watermark.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-04-17 20:51:03 +02:00
ZouYu 2b7437d60e
Fix some warnings: 'redundant type from array, slice, or map composite literal' (#7109)
Signed-off-by: ZouYu <zouy.fnst@cn.fujitsu.com>
2020-04-15 11:17:41 +01:00
Marek Slabicki 8224ddec23
Capitalizing first letter of all log lines (#7043)
Signed-off-by: Marek Slabicki <thaniri@gmail.com>
2020-04-11 09:22:18 +01:00
Brian Brazil cd73b3d33e
Reduce how much old WAL we keep around. (#7098)
Previously we were keeping up to around 6 hours of WAL around by
removing 1/3 every hour. This was excessive, so switch to removing 2/3,
which will keep up to around 3 hours of WAL around.

This will roughly halve the size of the WAL and halve startup time for
those who are I/O bound. This may increase the checkpoint size for
those with certain churn patterns, but by much less than we're saving
from the segments.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-04-07 15:55:57 +05:30
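A tiny sketch of the retention arithmetic described in the entry above, assuming segments below a cutoff are folded into the checkpoint and only the newest third is kept as raw WAL (segment numbers are illustrative):
```go
package main

import "fmt"

// checkpointCutoff: of the WAL segments currently on disk, checkpoint (and
// then delete) the oldest two thirds and keep only the newest third, instead
// of keeping two thirds as before.
func checkpointCutoff(first, last int) int {
	// Segments in [first, cutoff) go into the checkpoint; [cutoff, last] stay
	// as raw WAL.
	return first + (last-first)*2/3
}

func main() {
	// With segments 100..190 on disk, removing 1/3 (old behaviour) would
	// checkpoint up to segment 130 and keep ~60 segments of raw WAL;
	// removing 2/3 (new behaviour) checkpoints up to 160 and keeps ~30.
	fmt.Println(checkpointCutoff(100, 190)) // 160
}
```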
Brad Walker 3348930df5
Replace fileutil.ReadDir with ioutil.ReadDir (#7029) (#7033)
* tsdb: Replace fileutil.ReadDir with ioutil.ReadDir (#7029)

Signed-off-by: Brad Walker <brad@bradmwalker.com>

* tsdb: Remove fileutil.ReadDir (#7029)

Signed-off-by: Brad Walker <brad@bradmwalker.com>
2020-04-06 19:04:20 +05:30
MengZeLee a7982ffc0f
Fix typo (#7068)
Fix typo.

Signed-off-by: MengZn <adnt587@gmail.com>
2020-03-30 13:18:34 +05:30
Brian Brazil 7646cbca32
Use .UTC everywhere we use time.Unix (#7066)
time.Unix attaches the local timezone, which can then
leak out (e.g. in the alert json). While this is harmless,
we should be consistent.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-03-29 17:35:39 +01:00
Julien Pivotto 9057decce2
Merge pull request #7060 from prometheus/release-2.17
Release 2.17
2020-03-27 15:57:07 +01:00
Julien Pivotto ceef10cee4 Reset comment
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-26 00:17:56 +01:00
Julien Pivotto 73228b1b68 Those links should not be reverted
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-25 20:37:26 +01:00
Julien Pivotto 653f343547 Revert head posting optimization
This reverts commit 52630ad0c7.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-25 20:19:33 +01:00
Bartlomiej Plotka d5c33877f9
storage: Added Chunks{Queryable/Querier/SeriesSet/Series/Iteratable. Added generic Merge{SeriesSet/Querier} implementation. (#7005)
* storage: Added Chunks{Queryable/Querier/SeriesSet/Series/Iteratable. Added generic Merge{SeriesSet/Querier} implementation.

## Rationales:

In many places (e.g. chunked remote read, Thanos Receive fetching chunks from TSDB), we operate on encoded chunks, not samples.
This means that we unnecessarily decode/encode, wasting CPU, time and memory.
This PR adds chunk iterator interfaces and makes the merge code reusable between both series sets.

I will make use of it in a following PR inside tsdb itself. For now fanout and the mergers implement it.

All merges now also allow passing series mergers. This opens the door for custom deduplications other than the TSDB vertical ones (e.g. the offline one we have in Thanos).

## Changes

* Added Chunk versions of all iterating methods. It all starts in Querier/ChunkQuerier. The plan is that
Storage will implement both chunked and sample-based access.
* Added Seek to the chunks.Iterator interface for iterating over chunks.
* NewMergeChunkQuerier was added; both this and NewMergeQuerier now use genericMergeQuerier to share the code. Generic code was added.
* Improved tests.
* Added some TODOs for further simplifications in next PRs.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Brian's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Moved s/Labeled/SeriesLabels as per Krasi suggestion.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Addressed Krasi's comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Second iteration of Krasi comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>

* Another round of comments.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-24 20:15:47 +00:00
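A self-contained Go sketch of the idea behind the chunk-level interfaces described in the entry above: a chunk iterator lets callers (remote read, Thanos Receive, compaction) forward already-encoded chunks without decoding samples only to re-encode them. The types are deliberately simplified and are not the exact storage package API:
```go
package main

import "fmt"

// chunkMeta stands in for an already-encoded chunk plus its time range.
type chunkMeta struct {
	minT, maxT int64
	data       []byte // encoded chunk bytes, passed through as-is
}

// chunkIterator iterates encoded chunks of one series in time order.
type chunkIterator struct {
	chunks []chunkMeta
	i      int
}

func (it *chunkIterator) Next() bool    { it.i++; return it.i <= len(it.chunks) }
func (it *chunkIterator) At() chunkMeta { return it.chunks[it.i-1] }

// forwardChunks copies every chunk from the iterator into an output slice
// without ever touching the samples inside: no decode, no re-encode.
func forwardChunks(it *chunkIterator) []chunkMeta {
	var out []chunkMeta
	for it.Next() {
		out = append(out, it.At())
	}
	return out
}

func main() {
	it := &chunkIterator{chunks: []chunkMeta{
		{minT: 0, maxT: 119, data: []byte{0x01, 0x02}},
		{minT: 120, maxT: 239, data: []byte{0x03}},
	}}
	fmt.Println(len(forwardChunks(it))) // 2
}
```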
Ben Kochie fac7a4a050
Merge pull request #7037 from prometheus/bjk/golint
Enable golint in CI
2020-03-24 09:20:08 +01:00
Ben Kochie 269e7c8091
Fix golint issues.
Signed-off-by: Ben Kochie <superq@gmail.com>
2020-03-23 20:38:43 +01:00
Ganesh Vernekar 6fdc852813
Fix TestHeadDeleteSimple to test reloaded Head too (#7021)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-03-23 16:55:25 +02:00
Ganesh Vernekar e64a149984
Close Head in DBReadOnly.FlushWAL (#7022)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-03-23 14:49:44 +05:30
zhulongcheng e813f60fd6
tsdb: fix sequence check for WAL segments (#7032)
Signed-off-by: zhulongcheng <zhulongcheng.dev@gmail.com>
2020-03-23 13:16:28 +05:30
zhulongcheng dbb8f5861d
tsdb: add tombstonesHeaderSize constant (#7028)
Signed-off-by: zhulongcheng <zhulongcheng.dev@gmail.com>
2020-03-22 12:59:35 +05:30
Julien Pivotto f1984bb007
Merge pull request #7025 from prometheus/release-2.17
Merge release 2.17 into master
2020-03-21 21:39:07 +01:00
beorn7 526cff39b9 Fix tests that were broken by #7009
Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-20 21:22:58 +01:00
Bartlomiej Plotka c4eefd1b3a storage: Removed SelectSorted method; Simplified interface; Added requirement for remote read to sort response.
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that in
Prometheus we rely on remote read being sorted. This is because we use selected data from remote reads in MergeSeriesSet,
which relies on sorting.

I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.

Also, I don't know what TestSelectSorted was testing, but now it's testing sorting.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-20 21:14:43 +01:00
Callum Styan f802f1e8ca
Fix bug with WAL watcher and Live Reader metrics usage. (#6998)
* Fix bug with WAL watcher and Live Reader metrics usage.

Calling NewXMetrics when creating a Watcher or LiveReader results in a
registration error, which we're ignoring, and as a result, other than for the
first Watcher/Reader created, we had no metrics for either. So we would
only have metrics like Watcher Records Read for the first remote write
config in a user's config file.

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2020-03-20 17:34:15 +01:00
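A sketch of the fix pattern implied by the entry above: construct and register the watcher/reader metrics once and share them across instances, instead of re-registering per instance and ignoring the resulting duplicate-registration error. Metric and type names are illustrative, not the actual Prometheus ones:
```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// readerMetrics groups the metrics shared by every watcher instance. The bug
// pattern: each instance registered its own copies, registration failed for
// all but the first, the error was ignored, and later instances incremented
// unregistered metrics. The fix: register once, share, label per queue.
type readerMetrics struct {
	recordsRead *prometheus.CounterVec
}

func newReaderMetrics(reg prometheus.Registerer) *readerMetrics {
	m := &readerMetrics{
		recordsRead: prometheus.NewCounterVec(prometheus.CounterOpts{
			Name: "wal_watcher_records_read_total",
			Help: "Records read from the WAL, per remote-write queue.",
		}, []string{"queue"}),
	}
	reg.MustRegister(m.recordsRead) // registered exactly once
	return m
}

type watcher struct {
	queue   string
	metrics *readerMetrics // shared, not re-created per watcher
}

func (w *watcher) readRecord() {
	w.metrics.recordsRead.WithLabelValues(w.queue).Inc()
}

func main() {
	reg := prometheus.NewRegistry()
	metrics := newReaderMetrics(reg)
	for _, q := range []string{"remote-a", "remote-b"} {
		w := &watcher{queue: q, metrics: metrics}
		w.readRecord()
	}
	mfs, _ := reg.Gather()
	fmt.Println(len(mfs)) // 1 metric family, with one series per queue
}
```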
Bartlomiej Plotka 8fa4ada9ae
Merge pull request #7010 from prometheus/beorn7/fix-test
Fix tests that were broken by #7009
2020-03-19 17:41:02 +00:00
Ganesh Vernekar e50fdbc70c
Live m-mapping of chunks on disk (#6830)
* Live m-mapping of chunks on disk

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments Part 2

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments Part 3

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments Part 4

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Attempt to fix windows bug

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-03-19 22:03:44 +05:30
beorn7 c0ecbb38af Fix tests that were broken by #7009
Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-19 16:28:23 +01:00
Björn Rabenstein 1da83305be
Merge pull request #7009 from prometheus/release-2.17
Merge release-2.17 into master
2020-03-19 13:46:28 +01:00
zhulongcheng 5f5c7a4477
tsdb: sort checkpoints by segment number (#6987)
Signed-off-by: zhulongcheng <zhulongcheng.dev@gmail.com>
2020-03-18 20:40:41 +05:30
Julien Pivotto 8907ba6235 Make TSDB use storage errors
This fixes #6992, which was introduced by #6777. There was an
intermediate component which translated TSDB errors into storage errors,
but that component was deleted and this bug went unnoticed, until we
looked at the Prombench results. Without this, scrape will fail
instead of dropping samples or using "Add" when the series have been
garbage collected.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-17 22:24:25 +01:00
Julien Pivotto 0f9e78bd88
tsdb: fix races around head chunks (#6985)
* tsdb: fix races around head chunks

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-16 13:59:22 +01:00
Björn Rabenstein d80b0810c1
Move crucial actions to defer (#6918)
With defer having less of a performance penalty, there is no reason
not to do those crucial operations via defer.

Context: With isolation in place, if we forget to Commit/Rollback, the
low watermark will get stuck forever.

The current code should not have any bugs, but moving to defer helps
to avoid future bugs.

This is also moving the `closeAppend` in the `Commit` implementation
itself to defer. If logging to the WAL fails, we would have missed the
`closeAppend`.

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-13 20:54:47 +01:00
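A minimal sketch of the defer pattern described in the entry above: the cleanup that must not be skipped (closing the append, which unblocks the isolation low watermark) runs via defer, so an error while logging to the WAL cannot skip it. Names are illustrative, not the actual tsdb appender:
```go
package main

import (
	"errors"
	"fmt"
)

type appender struct {
	appendID uint64
	closed   bool
}

// closeAppend stands in for the crucial cleanup (iso.closeAppend, returning
// buffers) that previously could be skipped on an early error return.
func (a *appender) closeAppend() { a.closed = true }

func (a *appender) Commit(walLog func() error) error {
	defer a.closeAppend() // runs even if logging to the WAL fails below
	if err := walLog(); err != nil {
		return fmt.Errorf("write to WAL: %w", err)
	}
	// ...apply the buffered samples to the head...
	return nil
}

func main() {
	a := &appender{appendID: 42}
	err := a.Commit(func() error { return errors.New("disk full") })
	fmt.Println(err, a.closed) // write to WAL: disk full true
}
```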
Bartlomiej Plotka 9d9c45588e Addressed Goutham's review.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-13 19:18:31 +00:00
Bartlomiej Plotka cd9516316a Addressed Brian's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-13 14:37:47 +00:00
Bartlomiej Plotka fe802f29c9 storage: Removed SelectSorted method; Simplified interface; Added requirement for remote read to sort response.
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that in
Prometheus we rely on remote read being sorted. This is because we use selected data from remote reads in MergeSeriesSet,
which relies on sorting.

I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.

Also, I don't know what TestSelectSorted was testing, but now it's testing sorting.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-03-13 13:06:25 +00:00
Björn Rabenstein 6029fa0b1b
Merge pull request #6951 from prometheus/beorn7/tsdb
tsdb: Do a full rollback upon commit error
2020-03-10 18:48:57 +01:00
beorn7 f6f4fd6556 tsdb: Do a full rollback upon commit error
I think the previous behavior is problematic as it will leave
`memSeries` around that still have `pendingCommit` set to `true`.

The only case where this can happen in this code path is a failure to
write to the WAL, in which case we are probably in trouble anyway. I
believe, however, we should still try to do the right thing and do the
full rollback. This will implicitly try to write to the WAL again, but
this time without samples, which may even succeed. (But we propagate
the previous error in any case.)

This also adds `a.head.putSeriesBuffer(a.sampleSeries)` to Rollback,
which was previously missing.

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-10 14:54:41 +01:00
李国忠 261cbab8e9
remove Unused parameter 'reg' in wal.Open function (#6941)
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>
2020-03-10 11:01:47 +05:30
johncming df14bca643
tsdb: fix typo for wrong metric name (#6938)
Signed-off-by: johncming <johncming@yahoo.com>
2020-03-09 08:25:31 +00:00
beorn7 0193b746b1 Defer call to iso.closeAppend
This is taken from #6918. Since we probably won't merge #6918 before
the release, we have to do this bit of it as it fixes an actual bug
(iso.closeAppend is not called if the append fails because of an error
logging to the WAL).

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-04 23:33:30 +01:00
Ganesh Vernekar 2cc32c332d Log WAL replay duration
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-03-03 18:00:04 +01:00
Julien Pivotto ed623f69e2
tsdb: don't allow ingesting empty labelsets (#6891)
* tsdb: don't allow ingesting empty labelsets

When we ingest an empty labelset in the head, further blocks can not be
compacted, with the error:

```
level=error ts=2020-02-27T21:26:58.379Z caller=db.go:659 component=tsdb
msg="compaction failed" err="persist head block: write compaction:
add series: out-of-order series added with label set \"{}\" / prev:
\"{}\""
```

We should therefore reject those invalid empty labelsets upfront.

This can be reproduced with the following:

```
cat << END > prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 1s
    basic_auth:
      username: test
      password: test
    metric_relabel_configs:
    - regex: ".*"
      action: labeldrop

    static_configs:
    - targets:
      - 127.0.1.1:9090
END
./prometheus --storage.tsdb.min-block-duration=1m
```
And wait a few minutes.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-02 07:18:05 +00:00
beorn7 d9af11e636 Do not attempt isolation for appendID == 0
Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-01 02:48:35 +01:00
beorn7 7f30b0984d Implement isolation
This has been ported from https://github.com/prometheus/tsdb/pull/306.

Original implementation by @brian-brazil, explained in detail in the
2nd half of this talk:
https://promcon.io/2017-munich/talks/staleness-in-prometheus-2-0/

The implementation was then processed by @gouthamve into the PR linked
above. Relevant slide deck:
https://docs.google.com/presentation/d/1-ICg7PEmDHYcITykD2SR2xwg56Tzf4gr8zfz1OerY5Y/edit?usp=drivesdk

Signed-off-by: beorn7 <beorn@grafana.com>
Co-authored-by: Brian Brazil <brian.brazil@robustperception.io>
Co-authored-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2020-02-28 14:18:39 +01:00
beorn7 6b8181370f Fix punctuation nits
Signed-off-by: beorn7 <beorn@grafana.com>
2020-02-28 14:17:33 +01:00
Peter Štibraný 1d396b96dc
Specify that returned samples must be ordered by timestamp. (#6877)
Signed-off-by: Peter Štibraný <peter.stibrany@grafana.com>
2020-02-26 13:11:55 +00:00
Marco Pracucci c391b6ca43
Use a cryptographically random generator for ULID
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-02-25 12:12:03 +01:00
Julien Pivotto 52630ad0c7 Make head Postings only return series in time range
benchmark                                                old ns/op      new ns/op      delta
BenchmarkQuerierSelect/Head/1of1000000-8                 405805161      120436132      -70.32%
BenchmarkQuerierSelect/Head/10of1000000-8                403079620      120624292      -70.07%
BenchmarkQuerierSelect/Head/100of1000000-8               404678647      120923522      -70.12%
BenchmarkQuerierSelect/Head/1000of1000000-8              403145813      118636563      -70.57%
BenchmarkQuerierSelect/Head/10000of1000000-8             405020046      125716206      -68.96%
BenchmarkQuerierSelect/Head/100000of1000000-8            426305002      175808499      -58.76%
BenchmarkQuerierSelect/Head/1000000of1000000-8           619002108      567013003      -8.40%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           1276316086     120281094      -90.58%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          1282631170     121836526      -90.50%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         1325824787     121174967      -90.86%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        1271386268     121025117      -90.48%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       1280223345     130838948      -89.78%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      1271401620     243635515      -80.84%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     1360256090     1307744674     -3.86%
BenchmarkQuerierSelect/Block/1of1000000-8                748183120      707888498      -5.39%
BenchmarkQuerierSelect/Block/10of1000000-8               741084129      716317249      -3.34%
BenchmarkQuerierSelect/Block/100of1000000-8              722157273      735624256      +1.86%
BenchmarkQuerierSelect/Block/1000of1000000-8             727587744      731981838      +0.60%
BenchmarkQuerierSelect/Block/10000of1000000-8            727518578      726860308      -0.09%
BenchmarkQuerierSelect/Block/100000of1000000-8           765577046      757382386      -1.07%
BenchmarkQuerierSelect/Block/1000000of1000000-8          1126722881     1084779083     -3.72%

benchmark                                                old allocs     new allocs     delta
BenchmarkQuerierSelect/Head/1of1000000-8                 4000018        24             -100.00%
BenchmarkQuerierSelect/Head/10of1000000-8                4000036        82             -100.00%
BenchmarkQuerierSelect/Head/100of1000000-8               4000216        625            -99.98%
BenchmarkQuerierSelect/Head/1000of1000000-8              4002016        6028           -99.85%
BenchmarkQuerierSelect/Head/10000of1000000-8             4020016        60037          -98.51%
BenchmarkQuerierSelect/Head/100000of1000000-8            4200016        600047         -85.71%
BenchmarkQuerierSelect/Head/1000000of1000000-8           6000016        6000016        +0.00%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           4000055        28             -100.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          4000073        87             -100.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         4000253        630            -99.98%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        4002053        6036           -99.85%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       4020053        60054          -98.51%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      4200053        600074         -85.71%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     6000053        6000053        +0.00%
BenchmarkQuerierSelect/Block/1of1000000-8                6000021        6000021        +0.00%
BenchmarkQuerierSelect/Block/10of1000000-8               6000057        6000057        +0.00%
BenchmarkQuerierSelect/Block/100of1000000-8              6000417        6000417        +0.00%
BenchmarkQuerierSelect/Block/1000of1000000-8             6004017        6004017        +0.00%
BenchmarkQuerierSelect/Block/10000of1000000-8            6040017        6040017        +0.00%
BenchmarkQuerierSelect/Block/100000of1000000-8           6400017        6400017        +0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-8          10000018       10000018       +0.00%

benchmark                                                old bytes     new bytes     delta
BenchmarkQuerierSelect/Head/1of1000000-8                 176001177     1392          -100.00%
BenchmarkQuerierSelect/Head/10of1000000-8                176002329     4368          -100.00%
BenchmarkQuerierSelect/Head/100of1000000-8               176013849     33520         -99.98%
BenchmarkQuerierSelect/Head/1000of1000000-8              176129056     321456        -99.82%
BenchmarkQuerierSelect/Head/10000of1000000-8             177281049     3427376       -98.07%
BenchmarkQuerierSelect/Head/100000of1000000-8            188801049     35055408      -81.43%
BenchmarkQuerierSelect/Head/1000000of1000000-8           304001059     304001049     -0.00%
BenchmarkQuerierSelect/SortedHead/1of1000000-8           229192188     2488          -100.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-8          229193340     5568          -100.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-8         229204860     35536         -99.98%
BenchmarkQuerierSelect/SortedHead/1000of1000000-8        229320060     345104        -99.85%
BenchmarkQuerierSelect/SortedHead/10000of1000000-8       230472060     3894672       -98.31%
BenchmarkQuerierSelect/SortedHead/100000of1000000-8      241992060     40511632      -83.26%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-8     357192060     357192060     +0.00%
BenchmarkQuerierSelect/Block/1of1000000-8                227201516     227201506     -0.00%
BenchmarkQuerierSelect/Block/10of1000000-8               227203057     227203041     -0.00%
BenchmarkQuerierSelect/Block/100of1000000-8              227217161     227217165     +0.00%
BenchmarkQuerierSelect/Block/1000of1000000-8             227358279     227358289     +0.00%
BenchmarkQuerierSelect/Block/10000of1000000-8            228769485     228769475     -0.00%
BenchmarkQuerierSelect/Block/100000of1000000-8           242881487     242881477     -0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-8          384001705     384001705     +0.00%

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-02-20 22:41:46 +01:00
Brian Brazil cebe36c7d5 Make head Postings only return series in time range
Series() will fetch all the metadata for a series,
even if it's going to be filtered later due to time ranges.

For 1M series we save ~1.1s if you only needed some of the data, but take an
extra ~.2s if you did want everything.

benchmark                                  old ns/op      new ns/op      delta
BenchmarkHeadSeries/1of1000000-4           1443715987     131553480      -90.89%
BenchmarkHeadSeries/10of1000000-4          1433394040     130730596      -90.88%
BenchmarkHeadSeries/100of1000000-4         1437444672     131360813      -90.86%
BenchmarkHeadSeries/1000of1000000-4        1438958659     132573137      -90.79%
BenchmarkHeadSeries/10000of1000000-4       1438061766     145742377      -89.87%
BenchmarkHeadSeries/100000of1000000-4      1455060948     281659416      -80.64%
BenchmarkHeadSeries/1000000of1000000-4     1633524504     1803550153     +10.41%

benchmark                                  old allocs     new allocs     delta
BenchmarkHeadSeries/1of1000000-4           4000055        28             -100.00%
BenchmarkHeadSeries/10of1000000-4          4000073        87             -100.00%
BenchmarkHeadSeries/100of1000000-4         4000253        630            -99.98%
BenchmarkHeadSeries/1000of1000000-4        4002053        6036           -99.85%
BenchmarkHeadSeries/10000of1000000-4       4020053        60054          -98.51%
BenchmarkHeadSeries/100000of1000000-4      4200053        600074         -85.71%
BenchmarkHeadSeries/1000000of1000000-4     6000053        6000094        +0.00%

benchmark                                  old bytes     new bytes     delta
BenchmarkHeadSeries/1of1000000-4           229192184     2488          -100.00%
BenchmarkHeadSeries/10of1000000-4          229193336     5568          -100.00%
BenchmarkHeadSeries/100of1000000-4         229204856     35536         -99.98%
BenchmarkHeadSeries/1000of1000000-4        229320056     345104        -99.85%
BenchmarkHeadSeries/10000of1000000-4       230472056     3894673       -98.31%
BenchmarkHeadSeries/100000of1000000-4      241992056     40511632      -83.26%
BenchmarkHeadSeries/1000000of1000000-4     357192056     402380440     +12.65%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-02-20 22:18:42 +01:00
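A toy sketch of the behaviour described in the entry above: filter postings by each series' cheap min/max timestamps before materialising any per-series metadata, so series with no data in the queried range are skipped entirely. Types are illustrative, not the actual head structures:
```go
package main

import "fmt"

// memSeries carries only what the filter needs: the series' overall time range.
type memSeries struct {
	ref        uint64
	minT, maxT int64
}

// postingsInRange drops series whose data lies entirely outside [mint, maxt],
// before labels or chunk metadata are ever fetched for them.
func postingsInRange(series map[uint64]*memSeries, postings []uint64, mint, maxt int64) []uint64 {
	out := make([]uint64, 0, len(postings))
	for _, ref := range postings {
		s := series[ref]
		if s == nil || s.maxT < mint || s.minT > maxt {
			continue // no data in the queried range: skip the series entirely
		}
		out = append(out, ref)
	}
	return out
}

func main() {
	series := map[uint64]*memSeries{
		1: {ref: 1, minT: 0, maxT: 999},
		2: {ref: 2, minT: 5000, maxT: 5999},
	}
	fmt.Println(postingsInRange(series, []uint64{1, 2}, 4500, 6000)) // [2]
}
```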
Bartlomiej Plotka 48ead578a0 Moved tsdbconfig to main.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-18 11:25:36 +00:00
Bartlomiej Plotka a20bebf7eb Moved readyStorage to main.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 8a775bc468 Moved unit agnostic options to separate pkg.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 59c9d6ef45 Addressed Brian's comments, moved metrics to main.go
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 5d84e5d895 Make chunkenc.Iterator.At behaviour unspecified without Next done.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka ad51c649b5 Fixed float conversion difference.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka cfba92a133 Addressed comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka c0a9ab0829 Close db properly in tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka fb79f515fc Fixed second bug.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka aadffd1360 Finally found a fix for the bug I was chasing for 2h...
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 849faa407b Minor fixes.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 2cf637fbf5 Addressed comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:57 +00:00
Bartlomiej Plotka 34426766d8 Unify Iterator interfaces. All point to storage now.
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow up PRs.

* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports that.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods I adjusted min max
based on that. This should help in terms of performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only a unnecessary glue with incompatible structs.

No logic was changed, only different source of abstractions, so no need for benchmarks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:54 +00:00
Bartlomiej Plotka 88af973663
Merge pull request #6820 from codesome/break-compact
Break DB.Compact and DB.CompactHead and DB.CompactBlocks
2020-02-17 13:20:21 +00:00
Zhou Hao e628fd7735
fix comments spelling (#6829)
Signed-off-by: Zhou Hao <zhouhao@cn.fujitsu.com>
2020-02-17 12:45:11 +01:00
johncming c30abf1e2b tsdb/wal: remove unused argument.
Signed-off-by: johncming <johncming@yahoo.com>
2020-02-15 21:54:09 +08:00
Ganesh Vernekar 6f1d2ec73e
Break DB.Compact and DB.compactHead and DB.compactBlocks. Add DB.CompactHead.
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-02-14 20:33:26 +05:30
Ganesh Vernekar bc6fd96979
Refactor TestGCChunkAccess and TestGCSeriesAccess to create chunks by appending samples
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-02-06 18:26:18 +05:30
Bartlomiej Plotka 641676b397
Merge pull request #6687 from pracucci/reduce-tsdb-head-inuse-memory
Trim TSDB head chunks after being cut, to reduce inuse memory
2020-02-06 10:55:33 +00:00
Ganesh Vernekar 0a27df92f0
Refactor tsdb/chunks/chunks.go for future PRs (#6754)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-02-05 19:09:40 +05:30
Ganesh Vernekar 56bf0ee4dc
Add OpenMmapFileWithSize method (#6753)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-02-05 19:08:30 +05:30
Marco Pracucci 699f3e8f4d
Added comments to the Chunk interface
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-02-05 13:07:41 +01:00
Marco Pracucci 0703dae7cc
Compact TSDB head chunks after being cut, to reduce inuse memory
Signed-off-by: Marco Pracucci <marco@pracucci.com>
2020-02-05 13:00:39 +01:00
Ben Ye 492414542e
fix matcher for regex (#6540)
Signed-off-by: yeya24 <yb532204897@gmail.com>
2020-02-05 10:53:12 +00:00
LongKB 1ebcf3d5e4
Fix some typo in comments (#6730)
Signed-off-by: Kim Bao Long <longkb@vn.fujitsu.com>
2020-01-31 12:11:52 +05:30
Callum Styan 83601202d9
Add missing param to NewHead call in test. (#6725)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2020-01-30 11:45:14 -08:00
Thor 17d8c49919
made stripe size configurable (#6644)
Signed-off-by: Thor <thansen@digitalocean.com>
2020-01-30 12:42:43 +05:30
Brian Brazil 61262159c4 Simplify benchmark given the new API
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-28 14:38:09 +00:00
Brian Brazil 38d32e0686 Don't sort postings if we only have one block.
Sorting the head's postings can be quite slow.
We only need sorted series when merging with another
querier, so only sort then.
This will make big queries that only touch the head faster,
though queries that touch both the head and a block will still
be the same speed. This probably won't help much with graphing
unless the range is under an hour, however it should make most
recording rules faster.

Add a guarantee that remote read streaming produces sorted series.

PromQL benchmarks for histograms show only 2-3% improvement, but
they're only over 1k series.

benchmark                                                old ns/op      new ns/op      delta
BenchmarkQuerierSelect/Head/1of1000000-4                 1375486282     507657736      -63.09%
BenchmarkQuerierSelect/Head/10of1000000-4                1387859004     507769850      -63.41%
BenchmarkQuerierSelect/Head/100of1000000-4               1387087935     506029110      -63.52%
BenchmarkQuerierSelect/Head/1000of1000000-4              1386869064     504521986      -63.62%
BenchmarkQuerierSelect/Head/10000of1000000-4             1386213685     505210422      -63.55%
BenchmarkQuerierSelect/Head/100000of1000000-4            1392754988     529842406      -61.96%
BenchmarkQuerierSelect/Head/1000000of1000000-4           1569414722     725059506      -53.80%
BenchmarkQuerierSelect/SortedHead/1of1000000-4           1381019902     1370495863     -0.76%
BenchmarkQuerierSelect/SortedHead/10of1000000-4          1375696209     1366789468     -0.65%
BenchmarkQuerierSelect/SortedHead/100of1000000-4         1386009422     1364519297     -1.55%
BenchmarkQuerierSelect/SortedHead/1000of1000000-4        1377700532     1364486191     -0.96%
BenchmarkQuerierSelect/SortedHead/10000of1000000-4       1383539536     1369545314     -1.01%
BenchmarkQuerierSelect/SortedHead/100000of1000000-4      1410089163     1394731339     -1.09%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-4     1634744148     1581554956     -3.25%
BenchmarkQuerierSelect/Block/1of1000000-4                881741242      879839470      -0.22%
BenchmarkQuerierSelect/Block/10of1000000-4               880381562      882846038      +0.28%
BenchmarkQuerierSelect/Block/100of1000000-4              887519357      881016916      -0.73%
BenchmarkQuerierSelect/Block/1000of1000000-4             902194205      883433524      -2.08%
BenchmarkQuerierSelect/Block/10000of1000000-4            892321964      885130170      -0.81%
BenchmarkQuerierSelect/Block/100000of1000000-4           938604466      933527150      -0.54%
BenchmarkQuerierSelect/Block/1000000of1000000-4          1313510845     1295881124     -1.34%

benchmark                                                old allocs     new allocs     delta
BenchmarkQuerierSelect/Head/1of1000000-4                 4000056        4000018        -0.00%
BenchmarkQuerierSelect/Head/10of1000000-4                4000074        4000036        -0.00%
BenchmarkQuerierSelect/Head/100of1000000-4               4000254        4000216        -0.00%
BenchmarkQuerierSelect/Head/1000of1000000-4              4002054        4002016        -0.00%
BenchmarkQuerierSelect/Head/10000of1000000-4             4020054        4020016        -0.00%
BenchmarkQuerierSelect/Head/100000of1000000-4            4200054        4200016        -0.00%
BenchmarkQuerierSelect/Head/1000000of1000000-4           6000054        6000016        -0.00%
BenchmarkQuerierSelect/SortedHead/1of1000000-4           4000071        4000071        +0.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-4          4000089        4000089        +0.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-4         4000269        4000269        +0.00%
BenchmarkQuerierSelect/SortedHead/1000of1000000-4        4002069        4002069        +0.00%
BenchmarkQuerierSelect/SortedHead/10000of1000000-4       4020069        4020069        +0.00%
BenchmarkQuerierSelect/SortedHead/100000of1000000-4      4200069        4200069        +0.00%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-4     6000069        6000069        +0.00%
BenchmarkQuerierSelect/Block/1of1000000-4                6000023        6000022        -0.00%
BenchmarkQuerierSelect/Block/10of1000000-4               6000059        6000058        -0.00%
BenchmarkQuerierSelect/Block/100of1000000-4              6000419        6000418        -0.00%
BenchmarkQuerierSelect/Block/1000of1000000-4             6004019        6004018        -0.00%
BenchmarkQuerierSelect/Block/10000of1000000-4            6040019        6040018        -0.00%
BenchmarkQuerierSelect/Block/100000of1000000-4           6400019        6400018        -0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-4          10000020       10000019       -0.00%

benchmark                                                old bytes     new bytes     delta
BenchmarkQuerierSelect/Head/1of1000000-4                 229192200     176001176     -23.21%
BenchmarkQuerierSelect/Head/10of1000000-4                229193352     176002328     -23.21%
BenchmarkQuerierSelect/Head/100of1000000-4               229204872     176013848     -23.21%
BenchmarkQuerierSelect/Head/1000of1000000-4              229320072     176129048     -23.20%
BenchmarkQuerierSelect/Head/10000of1000000-4             230472072     177281048     -23.08%
BenchmarkQuerierSelect/Head/100000of1000000-4            241992072     188801048     -21.98%
BenchmarkQuerierSelect/Head/1000000of1000000-4           357192072     304001048     -14.89%
BenchmarkQuerierSelect/SortedHead/1of1000000-4           229193928     229193928     +0.00%
BenchmarkQuerierSelect/SortedHead/10of1000000-4          229195080     229195080     +0.00%
BenchmarkQuerierSelect/SortedHead/100of1000000-4         229206600     229206600     +0.00%
BenchmarkQuerierSelect/SortedHead/1000of1000000-4        229321800     229321800     +0.00%
BenchmarkQuerierSelect/SortedHead/10000of1000000-4       230473800     230473800     +0.00%
BenchmarkQuerierSelect/SortedHead/100000of1000000-4      241993800     241993800     +0.00%
BenchmarkQuerierSelect/SortedHead/1000000of1000000-4     357193800     357193800     +0.00%
BenchmarkQuerierSelect/Block/1of1000000-4                227201516     227201500     -0.00%
BenchmarkQuerierSelect/Block/10of1000000-4               227202924     227202908     -0.00%
BenchmarkQuerierSelect/Block/100of1000000-4              227217036     227217020     -0.00%
BenchmarkQuerierSelect/Block/1000of1000000-4             227358156     227358140     -0.00%
BenchmarkQuerierSelect/Block/10000of1000000-4            228769356     228769340     -0.00%
BenchmarkQuerierSelect/Block/100000of1000000-4           242881356     242881340     -0.00%
BenchmarkQuerierSelect/Block/1000000of1000000-4          384001616     384001600     -0.00%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-28 09:14:56 +00:00
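A toy sketch of the behaviour described in the entry above: the head querier only pays for sorting its result when the caller will merge it with another querier (which yields label-sorted series), so head-only queries skip the sort entirely. Types are illustrative:
```go
package main

import (
	"fmt"
	"sort"
)

// headQuerier holds series (identified here by their label-set string) in
// head insertion order.
type headQuerier struct {
	series []string
}

func (q *headQuerier) Select(sortSeries bool) []string {
	out := append([]string(nil), q.series...)
	if sortSeries {
		// Merging with block queriers requires label-sorted series; a query
		// that only touches the head can skip this cost.
		sort.Strings(out)
	}
	return out
}

func main() {
	q := &headQuerier{series: []string{`{job="c"}`, `{job="a"}`, `{job="b"}`}}
	fmt.Println(q.Select(false)) // head-only query: insertion order, no sort cost
	fmt.Println(q.Select(true))  // will be merged with a block querier: sorted
}
```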
Brian Brazil d682731efc Extend BenchmarkQuerierSelect to use multiple blocks.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-28 09:14:56 +00:00
Ganesh Vernekar 21a5cf5d1d
Bring back tombstones to Head block (#6542)
* Bring back tombstones to Head block

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add test cases

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Cleanup

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-01-20 21:08:00 +05:30
Julien Pivotto 46d18112a3 tsdb: error on series with duplicate labels (#6664)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-20 11:05:27 +00:00
Ganesh Vernekar e0733a99e3
Expose DB.Compact()
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2020-01-20 12:59:49 +05:30
John McBride 669592a2c4 Exports metric for WAL write errors (#6647)
* Exports metric for WAL write errors

Signed-off-by: John McBride <jpmmcbride@gmail.com>

* Correct name for counter

Signed-off-by: John McBride <jpmmcbride@gmail.com>

* Move WAL write failure to wal.go

Signed-off-by: John McBride <jpmmcbride@gmail.com>

* WAL write fail metric moved to Log for external consumers

Signed-off-by: John McBride <jpmmcbride@gmail.com>
2020-01-17 12:12:04 -08:00
Ganesh Vernekar bc42cf6806
Merge pull request #6559 from yeya24/tomb-test
Add a tomb interval test case
2020-01-16 11:42:28 +05:30
Julien Pivotto 201491fd1b Remove old checkpoint dir if it still exists (#6621)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-14 09:35:24 +00:00
Julien Pivotto 0cff72d756 tsdb: writePostingsToTmpFiles returns nil instead of err (#6618)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-13 22:40:12 +00:00
Julien Pivotto 398bd84d6f small tsdb fixes (#6616)
* tsdb: register compactions_skipped_total

That metric was not registered.

I also reordered the metrics in the list.

* tsdb: display correct error when WAL can't be read

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-13 22:15:45 +00:00
Bartlomiej Plotka 1e64d757f7
Merge pull request #6585 from prometheus/fix-symbols-lookup
tsdb: Fixed Symbol Lookup edge case; Added tests.
2020-01-10 11:44:23 +00:00
Guangwen Feng 5e906cc09a Fix typo in comment for func AddPadding
Signed-off-by: Guangwen Feng <fenggw-fnst@cn.fujitsu.com>
2020-01-10 18:11:30 +08:00
Bartlomiej Plotka 59ea320d96 tsdb: Fixed Symbol Lookup edge case; Added tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-01-09 11:35:58 +00:00
Bartlomiej Plotka 863613300d Exposed FileWriter. (#6589)
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-01-09 11:28:10 +00:00
Nguyen Hai Truong b7376f3bff Remove duplicated words (#6569)
Although it is a spelling mistake, it might affect reading.

Signed-off-by: Nguyen Hai Truong <truongnh@fujitsu.com>
2020-01-08 08:43:27 +00:00
yeya24 06fd7cf3ba add tomb interval testcase
Signed-off-by: yeya24 <yb532204897@gmail.com>
2020-01-07 20:24:46 +00:00
Brian Brazil 4708915ac6 Replace StringTuples with []string
Benchmarks show slight cpu/allocs improvements.

benchmark                                                               old ns/op      new ns/op      delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               269978625      235305110      -12.84%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       129739974      121646193      -6.24%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       123826274      122056253      -1.43%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      126962188      130038235      +2.42%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             6423653989     5991126455     -6.73%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             6934647521     7033370634     +1.42%
BenchmarkPostingsForMatchers/Head/i=~""-4                               1177781285     1121497736     -4.78%
BenchmarkPostingsForMatchers/Head/i!=""-4                               7033680256     7246094991     +3.02%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               293702332      287440212      -2.13%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        307628268      307039964      -0.19%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         512247746      480003862      -6.29%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 361199794      367066917      +1.62%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               478863761      476037784      -0.59%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              103394659      102902098      -0.48%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        482552781      475453903      -1.47%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      559257389      589297047      +5.37%
BenchmarkPostingsForMatchers/Block/n="1"-4                              36492          37012          +1.42%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      557788         611903         +9.70%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      554443         573814         +3.49%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     553227         553826         +0.11%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113855090      111707221      -1.89%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            133994674      136520728      +1.89%
BenchmarkPostingsForMatchers/Block/i=~""-4                              38138091       36299898       -4.82%
BenchmarkPostingsForMatchers/Block/i!=""-4                              28861213       27396723       -5.07%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112699941      110853868      -1.64%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       113198026      111389742      -1.60%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        28994069       27363804       -5.62%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                29709406       28589223       -3.77%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              134695119      135736971      +0.77%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             26783286       25826928       -3.57%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       134733254      134116739      -0.46%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     160713937      158802768      -1.19%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               36             36             +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       38             38             +0.00%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       38             38             +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      42             40             -4.76%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             61             59             -3.28%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             100088         100087         -0.00%
BenchmarkPostingsForMatchers/Head/i=~""-4                               100053         100051         -0.00%
BenchmarkPostingsForMatchers/Head/i!=""-4                               100087         100085         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               44             42             -4.55%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        50             48             -4.00%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         100076         100074         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 100077         100075         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               100077         100074         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              11167          11165          -0.02%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        100082         100080         -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      111265         111261         -0.00%
BenchmarkPostingsForMatchers/Block/n="1"-4                              6              6              +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     15             13             -13.33%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            12             10             -16.67%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            100040         100038         -0.00%
BenchmarkPostingsForMatchers/Block/i=~""-4                              100045         100043         -0.00%
BenchmarkPostingsForMatchers/Block/i!=""-4                              100041         100039         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              17             15             -11.76%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       23             21             -8.70%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        100046         100044         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                100050         100048         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              100049         100047         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             11150          11148          -0.02%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       100055         100053         -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     111238         111234         -0.00%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Head/n="1"-4                               10887816      10887817      +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j="foo"-4                       5456648       5456648       +0.00%
BenchmarkPostingsForMatchers/Head/j="foo",n="1"-4                       5456648       5456648       +0.00%
BenchmarkPostingsForMatchers/Head/n="1",j!="foo"-4                      5456792       5456712       -0.00%
BenchmarkPostingsForMatchers/Head/i=~".*"-4                             258254408     258254328     -0.00%
BenchmarkPostingsForMatchers/Head/i=~".+"-4                             273912888     273912904     +0.00%
BenchmarkPostingsForMatchers/Head/i=~""-4                               17266680      17266600      -0.00%
BenchmarkPostingsForMatchers/Head/i!=""-4                               273912416     273912336     -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",j="foo"-4               7062578       7062498       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".*",i!="2",j="foo"-4        7062770       7062690       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!=""-4                         28152346      28152266      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i!="",j="foo"-4                 22721178      22721098      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",j="foo"-4               22721336      22721224      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~"1.+",j="foo"-4              3623804       3623733       -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!="2",j="foo"-4        22721480      22721400      -0.00%
BenchmarkPostingsForMatchers/Head/n="1",i=~".+",i!~"2.*",j="foo"-4      24816652      24816444      -0.00%
BenchmarkPostingsForMatchers/Block/n="1"-4                              296           296           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     1544          1464          -5.18%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1606114       1606045       -0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            17264709      17264629      -0.00%
BenchmarkPostingsForMatchers/Block/i=~""-4                              17264780      17264696      -0.00%
BenchmarkPostingsForMatchers/Block/i!=""-4                              17264680      17264600      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1606253       1606165       -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1606445       1606348       -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        17264808      17264728      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                17264936      17264856      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              17264965      17264885      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3148262       3148182       -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       17265141      17265061      -0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     20416944      20416784      -0.00%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-07 12:20:03 +00:00
Brian Brazil e9ede51753 Remove last vestiges of never used composite index code.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-07 12:20:03 +00:00
Brian Brazil 000e306f2f
Handle V1 indexes, some of which have unsorted posting offset tables. (#6564)
Fixes #6535

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-06 14:06:11 +00:00
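For illustration, a minimal sketch of the idea above in Go: when a V1 index is detected, sort the posting offset table entries after reading them so the rest of the code can assume ordered iteration. The `postingOffset` type here is hypothetical, not the actual Prometheus structure.

```
package sketch

import "sort"

// postingOffset is a hypothetical entry of the posting offset table:
// a (label name, label value) pair and the file offset of its postings list.
type postingOffset struct {
	name, value string
	off         int64
}

// normalizeV1 sorts entries read from a V1 index, which may be unsorted on
// disk, so later lookups can rely on ordered iteration as V2 guarantees.
func normalizeV1(entries []postingOffset) {
	sort.Slice(entries, func(i, j int) bool {
		if entries[i].name != entries[j].name {
			return entries[i].name < entries[j].name
		}
		return entries[i].value < entries[j].value
	})
}
```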
Brian Brazil 536d416299
Fix tsdb code and tests to work on Windows. (#6547)
Add back Windows CI; we lost it when tsdb was merged into the prometheus
repo. There are many tests failing outside tsdb, so only test tsdb for
now.

Fixes #6513

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-01-04 14:55:02 +00:00
Josh Soref 91d76c8023 Spelling (#6517)
* spelling: alertmanager

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: attributes

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: autocomplete

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: bootstrap

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: caught

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: chunkenc

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: compaction

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: corrupted

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: deletable

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: expected

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: fine-grained

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: initialized

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: iteration

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: javascript

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: multiple

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: number

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: overlapping

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: possible

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: postings

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: procedure

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: programmatic

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: queuing

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: querier

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: repairing

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: received

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: reproducible

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: retention

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: sample

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: segments

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: semantic

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: software [LICENSE]

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: staging

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: timestamp

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: unfortunately

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: uvarint

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: subsequently

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: ressamples

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-01-02 15:54:09 +01:00
Bartlomiej Plotka 3b8ef6386c
Fixed race in Chunks method. (#6515)
Added regression test.

Fixes #6512

Before (the result was not deterministic due to concurrency):
```
=== RUN   TestChunkReader_ConcurrentRead
--- FAIL: TestChunkReader_ConcurrentRead (0.01s)
    db_test.go:2702: unexpected error: checksum mismatch expected:597ad276, actual:597ad276
    db_test.go:2702: unexpected error: checksum mismatch expected:dd0cdbc2, actual:dd0cdbc2
FAIL
```

After the fix, the test succeeds on multiple runs.


Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2019-12-24 22:55:22 +01:00
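As a hedged illustration of what such a regression test looks like, the sketch below hammers a reader from several goroutines; the `chunkAt` callback and `refs` values are hypothetical stand-ins, not the actual test in db_test.go.

```
package sketch

import (
	"sync"
	"testing"
)

// testConcurrentRead reads the same chunk references from many goroutines at
// once; with the race detector enabled, a data race in the reader fails the test.
func testConcurrentRead(t *testing.T, chunkAt func(ref uint64) error, refs []uint64) {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for _, ref := range refs {
				if err := chunkAt(ref); err != nil {
					t.Errorf("unexpected error: %v", err) // Errorf is safe from goroutines
				}
			}
		}()
	}
	wg.Wait()
}
```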
Brian Brazil 2b653ee230 Write label indices based on the posting offset table.
This avoids having to build it up in RAM, and means that all variable
memory usage for compactions is now 0.25 bytes per symbol plus a few
O(labelnames) structures. So in practice, pretty close to constant
memory for compactions.

benchmark                                                                                   old ns/op       new ns/op       delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        662974828       667162981       +0.63%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       2459590377      2131168138      -13.35%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       3808280548      3919290378      +2.91%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       8513884311      8738099339      +2.63%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      1898843003      1944131966      +2.39%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     5601478437      6031391658      +7.67%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     11225096097     11359624463     +1.20%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     23994637282     23919583343     -0.31%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               891042098       826898358       -7.20%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               915949138       902555676       -1.46%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               955138431       879067946       -7.96%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               991447640       958785968       -3.29%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               1068729356      980249080       -8.28%

benchmark                                                                                   old allocs     new allocs     delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        470778         470556         -0.05%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       791429         791225         -0.03%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       1111514        1111257        -0.02%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       2111498        2111369        -0.01%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      841433         841220         -0.03%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     1911469        1911202        -0.01%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     3041558        3041328        -0.01%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     6741534        6741382        -0.00%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               824856         820873         -0.48%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               887220         885180         -0.23%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               905253         901539         -0.41%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               925148         913632         -1.24%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               1019141        978727         -3.97%

benchmark                                                                                   old bytes      new bytes      delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        35694744       41523836       +16.33%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       53405264       59499056       +11.41%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       74160320       78151568       +5.38%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       120878480      135364672      +11.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      203832448      209925504      +2.99%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     341029208      346551064      +1.62%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     580217176      582345224      +0.37%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     1356872288     1363495368     +0.49%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               119535672      94815920       -20.68%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               115352280      95980776       -16.79%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               119472320      98724460       -17.37%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               111979312      94325456       -15.77%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               116628584      98566344       -15.49%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-18 12:05:42 +00:00
Brian Brazil 7d1aad46b8 Put postings in a temporary file during index writing.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-18 00:55:29 +00:00
Brian Brazil dee6981a6c Move writing of index label indices into IndexWriter.
Now you only need to provide symbols and series to IndexWriter.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 22:15:35 +00:00
Brian Brazil 1733724e30 Factor out index file writing code.
Now that we have more than one file open at a time, deduplicate a bit.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 21:54:13 +00:00
Brian Brazil 85964ce567 During compaction spool postings offset table on the side.
This avoids building it up in memory.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 21:20:10 +00:00
Brian Brazil d782387f81
Stream symbols during compaction. (#6468)
Rather than buffering up symbols in RAM, handle them one by one
during compaction. Then use the reader's symbol handling
for symbol lookups during the rest of the index write.

There is some slowdown in compaction, due to having to look through a file
rather than doing a hash lookup. This is noise compared to the overall cost of
compacting series with thousands of samples, though.

benchmark                                                                                   old ns/op       new ns/op       delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        539917175       675341565       +25.08%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       2441815993      2477453524      +1.46%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       3978543559      3922909687      -1.40%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       8430219716      8586610007      +1.86%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      1786424591      1909552782      +6.89%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     5328998202      6020839950      +12.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     10085059958     11085278690     +9.92%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     25497010155     27018079806     +5.97%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               2427391406      2817217987      +16.06%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               2592965497      2538805050      -2.09%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               2437388343      2668012858      +9.46%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               2317095324      2787423966      +20.30%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               2600239857      2096973860      -19.35%

benchmark                                                                                   old allocs     new allocs     delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        500851         470794         -6.00%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       821527         791451         -3.66%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       1141562        1111508        -2.63%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       2141576        2111504        -1.40%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      871466         841424         -3.45%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     1941428        1911415        -1.55%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     3071573        3041510        -0.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     6771648        6741509        -0.45%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               731493         824888         +12.77%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               793918         887311         +11.76%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               811842         905204         +11.50%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               832244         925081         +11.16%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               921553         1019162        +10.59%

benchmark                                                                                   old bytes      new bytes      delta
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4        40532648       35698276       -11.93%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4       60340216       53409568       -11.49%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4       81087336       72065552       -11.13%
BenchmarkCompaction/type=normal,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4       142485576      120878544      -15.16%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=101-4      208661368      203831136      -2.31%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=1001-4     347345904      340484696      -1.98%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=2001-4     585185856      576244648      -1.53%
BenchmarkCompaction/type=vertical,blocks=4,series=10000,samplesPerSeriesPerBlock=5001-4     1357641792     1358966528     +0.10%
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                               126486664      119666744      -5.39%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                               122323192      115117224      -5.89%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                               126404504      119469864      -5.49%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                               119047832      112230408      -5.73%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                               136576016      116634800      -14.60%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 19:49:54 +00:00
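Streaming works because every block's symbol table is already sorted, so compaction only needs a merge of sorted streams with duplicates dropped. A minimal sketch of that merge, assuming plain sorted string slices as inputs (the real code iterates over index readers):

```
// mergeSortedSymbols merges already-sorted symbol lists, drops duplicates, and
// calls add for each distinct symbol in order, so no full table is buffered.
func mergeSortedSymbols(inputs [][]string, add func(string) error) error {
	pos := make([]int, len(inputs))
	last, first := "", true
	for {
		best, bestIdx := "", -1
		for i, in := range inputs {
			if pos[i] < len(in) && (bestIdx == -1 || in[pos[i]] < best) {
				best, bestIdx = in[pos[i]], i
			}
		}
		if bestIdx == -1 {
			return nil // all inputs exhausted
		}
		pos[bestIdx]++
		if first || best != last {
			if err := add(best); err != nil {
				return err
			}
			last, first = best, false
		}
	}
}
```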
Brian Brazil 767fa704b6 Load only some offsets into the symbol table into memory.
Rather than keeping the entire symbol table in memory, keep every nth
offset and walk from there to the entry we need. This ends up slightly
slower, ~360ms per 1M series returned from PostingsForMatchers which is
not much considering the rest of the CPU such a query would go on to
use.

Make LabelValues use the postings tables, rather than having
to do symbol lookups. Use yoloString, as PostingsForMatchers
doesn't need the strings to stick around and adjust the API
call to keep the Querier open until it's all marshalled.

Remove allocatedSymbols memory optimisation, we no longer keep all the
symbol strings in heap memory. Remove LabelValuesFor and LabelIndices,
they're dead code. Ensure we've still tests for label indices,
and add missing test that we can work with old V1 Format index files.

PostingForMatchers performance is slightly better, with a big drop in
allocation counts due to using yoloString for LabelValues:

benchmark                                                               old ns/op     new ns/op     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              36698         36681         -0.05%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      522786        560887        +7.29%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      511652        537680        +5.09%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     522102        564239        +8.07%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113689911     111795919     -1.67%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            135825572     132871085     -2.18%
BenchmarkPostingsForMatchers/Block/i=~""-4                              40782628      38038181      -6.73%
BenchmarkPostingsForMatchers/Block/i!=""-4                              31267869      29194327      -6.63%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112733329     111568823     -1.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       112868153     111232029     -1.45%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        31338257      29349446      -6.35%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                32054482      29972436      -6.50%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              136504654     133968442     -1.86%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             27960350      27264997      -2.49%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       136765564     133860724     -2.12%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     163714583     159453668     -2.60%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              6              6              +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      11             11             +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     17             15             -11.76%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            100012         12             -99.99%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            200040         100040         -49.99%
BenchmarkPostingsForMatchers/Block/i=~""-4                              200045         100045         -49.99%
BenchmarkPostingsForMatchers/Block/i!=""-4                              200041         100041         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              100017         17             -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       100023         23             -99.98%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        200046         100046         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                200050         100050         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              200049         100049         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             111150         11150          -89.97%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       200055         100055         -49.99%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     311238         111238         -64.26%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              296           296           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      424           424           +0.00%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     552           1544          +179.71%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1600482       1606125       +0.35%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            17259065      17264709      +0.03%
BenchmarkPostingsForMatchers/Block/i=~""-4                              17259150      17264780      +0.03%
BenchmarkPostingsForMatchers/Block/i!=""-4                              17259048      17264680      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1600610       1606242       +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1600813       1606434       +0.35%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        17259176      17264808      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                17259304      17264936      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              17259333      17264965      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3142628       3148262       +0.18%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       17259509      17265141      +0.03%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     20405680      20416944      +0.06%

However, overall Select performance is down and involves more allocs, because
resolving a symbol now takes more than a simple map lookup and all the strings
returned are allocated:

benchmark                                           old ns/op     new ns/op      delta
BenchmarkQuerierSelect/Block/1of1000000-4           506092636     862678244      +70.46%
BenchmarkQuerierSelect/Block/10of1000000-4          505638968     860917636      +70.26%
BenchmarkQuerierSelect/Block/100of1000000-4         505229450     882150048      +74.60%
BenchmarkQuerierSelect/Block/1000of1000000-4        515905414     862241115      +67.13%
BenchmarkQuerierSelect/Block/10000of1000000-4       516785354     874841110      +69.29%
BenchmarkQuerierSelect/Block/100000of1000000-4      540742808     907030187      +67.74%
BenchmarkQuerierSelect/Block/1000000of1000000-4     815224288     1181236903     +44.90%

benchmark                                           old allocs     new allocs     delta
BenchmarkQuerierSelect/Block/1of1000000-4           4000020        6000020        +50.00%
BenchmarkQuerierSelect/Block/10of1000000-4          4000038        6000038        +50.00%
BenchmarkQuerierSelect/Block/100of1000000-4         4000218        6000218        +50.00%
BenchmarkQuerierSelect/Block/1000of1000000-4        4002018        6002018        +49.97%
BenchmarkQuerierSelect/Block/10000of1000000-4       4020018        6020018        +49.75%
BenchmarkQuerierSelect/Block/100000of1000000-4      4200018        6200018        +47.62%
BenchmarkQuerierSelect/Block/1000000of1000000-4     6000018        8000019        +33.33%

benchmark                                           old bytes     new bytes     delta
BenchmarkQuerierSelect/Block/1of1000000-4           176001468     227201476     +29.09%
BenchmarkQuerierSelect/Block/10of1000000-4          176002620     227202628     +29.09%
BenchmarkQuerierSelect/Block/100of1000000-4         176014140     227214148     +29.09%
BenchmarkQuerierSelect/Block/1000of1000000-4        176129340     227329348     +29.07%
BenchmarkQuerierSelect/Block/10000of1000000-4       177281340     228481348     +28.88%
BenchmarkQuerierSelect/Block/100000of1000000-4      188801340     240001348     +27.12%
BenchmarkQuerierSelect/Block/1000000of1000000-4     304001340     355201616     +16.84%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 18:56:58 +00:00
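The yoloString mentioned above is a zero-copy []byte-to-string conversion; the conventional form of such a helper is sketched below. It is only safe while the backing bytes are neither mutated nor reused, which is why it suits short-lived lookups inside PostingsForMatchers.

```
package sketch

import "unsafe"

// yoloString reinterprets b as a string without copying. The caller must make
// sure b stays immutable and alive for as long as the returned string is used.
func yoloString(b []byte) string {
	return *((*string)(unsafe.Pointer(&b)))
}
```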
Brian Brazil 1d1732bc25 Add benchmark for Querier.Select over blocks and head.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-17 18:56:58 +00:00
Brian Brazil 0482d93fe6 Add contexts to index writer to fix test races.
With recent speed improvements to populate block,
the cancellation test now fails regularly on CI.
Use contexts to get the index writer to shut down
much faster, and that allows us to make the cancellation
test faster too.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-16 17:28:29 +00:00
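Threading a context through the index writer boils down to checking for cancellation inside the long write loops; a minimal sketch with hypothetical names:

```
package sketch

import (
	"context"
	"io"
)

// writeAll is a cancellable write loop: it polls ctx periodically so that a
// cancelled compaction stops promptly instead of finishing the whole write.
func writeAll(ctx context.Context, w io.Writer, records [][]byte) error {
	for i, rec := range records {
		if i%1024 == 0 {
			if err := ctx.Err(); err != nil {
				return err
			}
		}
		if _, err := w.Write(rec); err != nil {
			return err
		}
	}
	return nil
}
```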
Brian Brazil cf76daed2f Avoid WriteAt for Postings.
Flushing buffers and doing a pwrite per posting is expensive
time-wise, so go back to the old way for those. This doubles
our memory usage, but that's still small, as it's only
~8 bytes per time series in the index. This is 30-40% faster.

benchmark                                                         old ns/op      new ns/op     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     1101429174     724362123     -34.23%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     1074466374     720977022     -32.90%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     1166510282     677702636     -41.90%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     1075013071     696855960     -35.18%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     1231673790     829328610     -32.67%

benchmark                                                         old allocs     new allocs     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     832571         731435         -12.15%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     894875         793823         -11.29%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     912931         811804         -11.08%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     933511         832366         -10.83%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     1022791        921554         -9.90%

benchmark                                                         old bytes     new bytes     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     129063496     126472364     -2.01%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     124154888     122300764     -1.49%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     128790648     126394856     -1.86%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     120570696     118946548     -1.35%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     138754288     136317432     -1.76%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-16 15:30:49 +00:00
Brian Brazil 971dafdfbe Coalesce series reads where we can.
When compacting, rather than doing a read of all
series in the index per label name, read many at once,
but only when it won't use (much) more RAM than writing the
special 'all' index does.

original in-memory postings:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1202383447 ns/op        158936496 B/op   1031511 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1141792706 ns/op        154453408 B/op   1093453 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1169288829 ns/op        161072336 B/op   1110021 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        1115700103 ns/op        149480472 B/op   1129180 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1        1283813141 ns/op        162937800 B/op   1202771 allocs/op

before:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1145195941 ns/op        131749984 B/op    834400 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1233526345 ns/op        127889416 B/op    897033 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1821942296 ns/op        131665648 B/op    914836 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        8035568665 ns/op        123811832 B/op    934312 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1       71325926267 ns/op        140722648 B/op   1016824 allocs/op

after:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1101429174 ns/op        129063496 B/op    832571 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1074466374 ns/op        124154888 B/op    894875 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1166510282 ns/op        128790648 B/op    912931 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        1075013071 ns/op        120570696 B/op    933511 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1        1231673790 ns/op        138754288 B/op   1022791 allocs/op

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-12 08:38:14 +00:00
Brian Brazil 373a1fdfbf Reread index series rather than storing in memory.
Rather than building up a 2nd copy of all the posting
tables, construct it from the data we've already written
to disk. This takes more time, but saves memory.

Current benchmark numbers have this as slightly faster, but that's
likely due to the synthetic data not having many label names.
Memory usage is roughly halved for the relevant bits.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 22:23:39 +00:00
Brian Brazil 48d25e6fe7 Reduce memory used by postings offset table.
Rather than keeping the offset of each postings list, keep only
every nth offset. As postings list offsets have always been sorted,
we can then get to the closest entry before the one we want and
iterate forwards.

I haven't done much tuning of the number 32; it was chosen to try
not to read through more than a 4k page of data.

Switch to a bulk interface for fetching postings. Use it to avoid having
to re-read parts of the posting offset table when querying lots of it.

For an index like the one BenchmarkHeadPostingForMatchers uses, RAM
for r.postings drops from 3.79MB to 80.19kB, or about 48x.
Bytes allocated go down by 30%, and surprisingly CPU usage drops by
4-6% for typical queries too.

benchmark                                                               old ns/op      new ns/op      delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              35231          36673          +4.09%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      563380         540627         -4.04%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      536782         534186         -0.48%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     533990         541550         +1.42%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113374598      117969608      +4.05%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            146329884      139651442      -4.56%
BenchmarkPostingsForMatchers/Block/i=~""-4                              50346510       44961127       -10.70%
BenchmarkPostingsForMatchers/Block/i!=""-4                              41261550       35356165       -14.31%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112544418      116904010      +3.87%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       112487086      116864918      +3.89%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        41094758       35457904       -13.72%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                41906372       36151473       -13.73%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              147262414      140424800      -4.64%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             28615629       27872072       -2.60%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       147117177      140462403      -4.52%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     175096826      167902298      -4.11%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              4              6              +50.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      7              11             +57.14%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      7              11             +57.14%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     15             17             +13.33%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            100010         100012         +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            200069         200040         -0.01%
BenchmarkPostingsForMatchers/Block/i=~""-4                              200072         200045         -0.01%
BenchmarkPostingsForMatchers/Block/i!=""-4                              200070         200041         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              100013         100017         +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       100017         100023         +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        200073         200046         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                200075         200050         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              200074         200049         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             111165         111150         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       200078         200055         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     311282         311238         -0.01%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              264           296           +12.12%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      360           424           +17.78%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      360           424           +17.78%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     520           552           +6.15%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1600461       1600482       +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            24900801      17259077      -30.69%
BenchmarkPostingsForMatchers/Block/i=~""-4                              24900836      17259151      -30.69%
BenchmarkPostingsForMatchers/Block/i!=""-4                              24900760      17259048      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1600557       1600621       +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1600717       1600813       +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        24900856      17259176      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                24900952      17259304      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              24900993      17259333      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3788311       3142630       -17.04%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       24901137      17259509      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     28693086      20405680      -28.88%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 19:59:31 +00:00
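Keeping only every nth entry of the sorted offset table turns a lookup into a binary search over the in-memory sample followed by a short forward scan on disk. A hedged sketch of the in-memory side, with hypothetical types:

```
package sketch

import "sort"

// sampledEntry is one retained (every nth) entry of the sorted offset table.
type sampledEntry struct {
	key string // label name and value in table order
	off int64  // file offset of the corresponding full table entry
}

// seekOffset returns the offset of the last sampled entry whose key is <= key.
// The caller then reads table entries sequentially from that offset until it
// reaches (or passes) the key it is looking for.
func seekOffset(sample []sampledEntry, key string) (int64, bool) {
	i := sort.Search(len(sample), func(i int) bool { return sample[i].key > key })
	if i == 0 {
		return 0, false // key sorts before every sampled entry
	}
	return sample[i-1].off, true
}
```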
Brian Brazil aff9f7a9e8 Extend PostingsForMatchers benchmark to cover Blocks too.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 19:59:31 +00:00
Brian Brazil 4df814f509
Don't buffer up tables in memory on compaction. (#6422)
We can instead write it as we go, and then go back and write in the
length at the end.

Also fix the compaction benchmark, which indicates no changes.

For the benchmark, this brings maximum memory usage of the buffers 
from ~200kB down to 128B.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 12:49:13 +00:00
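"Write it as we go, and then go back and write in the length at the end" is the usual placeholder-and-patch pattern; a sketch against an *os.File, with hypothetical encoding details:

```
package sketch

import (
	"encoding/binary"
	"io"
	"os"
)

// writeTable streams entries straight to f and then patches the 4-byte length
// placeholder written up front, so the table is never buffered in memory.
func writeTable(f *os.File, entries [][]byte) error {
	start, err := f.Seek(0, io.SeekCurrent)
	if err != nil {
		return err
	}
	if _, err := f.Write(make([]byte, 4)); err != nil { // length placeholder
		return err
	}
	var n uint32
	for _, e := range entries {
		if _, err := f.Write(e); err != nil {
			return err
		}
		n += uint32(len(e))
	}
	var lenBuf [4]byte
	binary.BigEndian.PutUint32(lenBuf[:], n)
	_, err = f.WriteAt(lenBuf[:], start) // go back and fill in the real length
	return err
}
```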
johncming 0ae048ff78 tsdb: add error details in log. (#6415)
Signed-off-by: johncming <johncming@yahoo.com>
2019-12-09 10:37:01 +00:00
Ben Ye c0250cf120 Fix some typos (#6423)
Signed-off-by: yeya24 <yb532204897@gmail.com>
2019-12-08 19:16:46 +00:00
Krasimir Georgiev 549164f252
return err instead of panic for corrupted chunk (#6040)
* Fix tsdb panic when querying corrupted chunks:
check that the chunk segment has enough data to read all chunk pieces.
* Refactor, simplify and add tests.
* Simplify WriteChunks implementation

Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
2019-12-04 09:37:49 +02:00
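Returning an error instead of panicking comes down to validating lengths before slicing into the mmapped segment; a minimal sketch with hypothetical names:

```
package sketch

import "fmt"

// chunkData returns the chunk bytes at off within segment, or an error when the
// segment is truncated, rather than letting an out-of-range slice panic.
func chunkData(segment []byte, off, length int) ([]byte, error) {
	if off < 0 || length < 0 || off+length > len(segment) {
		return nil, fmt.Errorf("segment doesn't include enough bytes: need %d, have %d", off+length, len(segment))
	}
	return segment[off : off+length], nil
}
```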
Callum Styan 5830e03691
Merge pull request #6337 from cstyan/rw-log-replay
Log the start and end of the WAL replay within the WAL watcher.
2019-12-03 13:24:26 -08:00
Callum Styan 6a24eee340 Simplify duration check for watcher WAL replay.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2019-11-26 16:53:11 -08:00
Dipack P Panjabi e2dd5b61ef Added CreateBlock and CreateHead functions to new file (#6331)
* Added CreateBlock and CreateHead functions to a new file to make them reusable across packages.

Signed-off-by: Dipack P Panjabi <dipack.panjabi@gmail.com>
2019-11-21 19:10:25 +07:00
Callum Styan 2d3ce3916c Log the start and end of the WAL replay within the WAL watcher.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2019-11-19 14:21:13 -08:00
Tim Bart 2e77f3a52b Fix typo in posting stats. (#6343)
Signed-off-by: Tim Bart <tbart@cloudflare.com>
2019-11-19 21:03:24 +00:00
Tom Wilkie de0a772b8e Port tsdb to use pkg/labels. (#6326)
* Port tsdb to use pkg/labels.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Get tests passing.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Remove useless cast.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Appease linters.

Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-11-18 11:53:33 -08:00
naivewong 23c0299d85 [tsdb] Improve mergeSeriesSet (#5920)
* refactor and simplify mergeSeriesSet

Signed-off-by: naivewong <867245430@qq.com>
2019-11-15 21:45:29 +07:00
Dipack P Panjabi ce7bab04dd Compute WAL size and use it during retention size checks (#5886)
* Compute WAL size and account for it when applying the retention settings.

Signed-off-by: Dipack P Panjabi <dpanjabi@hudson-trading.com>
2019-11-12 09:40:16 +07:00
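Accounting for the WAL during size-based retention means summing the size of the files under the WAL directory; a sketch using only the standard library (not the exact helper in the tree):

```
package sketch

import (
	"os"
	"path/filepath"
)

// dirSize returns the total size in bytes of all regular files under dir,
// e.g. the WAL directory, so retention can include it in its size budget.
func dirSize(dir string) (int64, error) {
	var size int64
	err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			size += info.Size()
		}
		return nil
	})
	return size, err
}
```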
Chris Marchbanks c5b3f0221f Decode WAL in Separate Goroutine (#6230)
* Make WAL replay benchmark more representative

Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>

* Move decoding records from the WAL into goroutine

Decoding the WAL records accounts for a significant amount of time on
startup, and can be done in parallel with creating series/samples to
speed up startup. However, records still must be handled in order, so
only a single goroutine can do the decoding.

benchmark
old ns/op     new ns/op     delta
BenchmarkLoadWAL/batches=10,seriesPerBatch=100,samplesPerSeries=7200-8
481607033     391971490     -18.61%
BenchmarkLoadWAL/batches=10,seriesPerBatch=10000,samplesPerSeries=50-8
836394378     629067006     -24.79%
BenchmarkLoadWAL/batches=10,seriesPerBatch=1000,samplesPerSeries=480-8
348238658     234218667     -32.74%

Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
2019-11-07 17:26:45 +01:00
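The decode/process split keeps a single decoder goroutine feeding records over a channel, so ordering is preserved while decoding overlaps with series and sample creation. A hedged sketch with a placeholder record type:

```
package sketch

// record stands in for the decoded WAL record types (series, samples, ...).
type record interface{}

// replay decodes raw WAL records in one goroutine and processes them, still in
// order, in the caller's goroutine, overlapping decoding with record handling.
func replay(raw [][]byte, decode func([]byte) (record, error), process func(record) error) error {
	type result struct {
		rec record
		err error
	}
	decoded := make(chan result, 128)
	go func() {
		defer close(decoded)
		for _, b := range raw {
			rec, err := decode(b)
			decoded <- result{rec, err}
			if err != nil {
				return
			}
		}
	}()
	var firstErr error
	for r := range decoded {
		if firstErr != nil {
			continue // keep draining so the decoder goroutine can finish
		}
		if r.err != nil {
			firstErr = r.err
			continue
		}
		if err := process(r.rec); err != nil {
			firstErr = err
		}
	}
	return firstErr
}
```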
Sharad Gaur e94503ff5c Head Cardinality Status Page (#6125)
* Adding TSDB Head Stats like cardinality to Status Page

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Moving mutex to Head

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Renaming variables

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Renaming variables and html

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Removing unwanted whitespaces

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Adding Tests, Benchmarks and Max Heap for Postings Stats

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Adding more tests for postingstats and web handler

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Adding more tests for postingstats and web handler

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Remove generated asset file that is no longer used

Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>

* Changing comment and variable name for more readability

Signed-off-by: Sharad Gaur <sgaur@splunk.com>

* Using time.Duration in postings status function and removing refresh button from web page

Signed-off-by: Sharad Gaur <sgaur@splunk.com>
2019-11-04 19:06:13 -07:00
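The "Max Heap for Postings Stats" is a top-N selection over label and label-value cardinalities. One straightforward way to keep the N largest entries (not necessarily the shape of the code in the tree) is a bounded heap ordered by count:

```
package sketch

import "container/heap"

// stat is a hypothetical (name, count) pair, e.g. a label value and the number
// of series that use it.
type stat struct {
	Name  string
	Count uint64
}

// statHeap is a min-heap on Count: the smallest retained entry sits at the
// root, ready to be evicted when a larger one arrives.
type statHeap []stat

func (h statHeap) Len() int            { return len(h) }
func (h statHeap) Less(i, j int) bool  { return h[i].Count < h[j].Count }
func (h statHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *statHeap) Push(x interface{}) { *h = append(*h, x.(stat)) }
func (h *statHeap) Pop() interface{} {
	old := *h
	it := old[len(old)-1]
	*h = old[:len(old)-1]
	return it
}

// keepTopN records s, retaining only the n largest stats seen so far.
func keepTopN(h *statHeap, n int, s stat) {
	if h.Len() < n {
		heap.Push(h, s)
		return
	}
	if s.Count > (*h)[0].Count {
		(*h)[0] = s
		heap.Fix(h, 0)
	}
}
```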
Goutham Veeramachaneni 53a99530d6
Edit TSDB README badges
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2019-10-24 15:35:47 +05:30
yuxiaobo 7850f1b35c Fix a word spelling mistake
Signed-off-by: yuxiaobo <yuxiaobogo@163.com>
2019-10-17 19:09:54 +08:00
Björn Rabenstein 3c70284802
Merge pull request #6137 from gouthamve/propose-ganesh-tsdb
Step down and propose Ganesh as TSDB maintainer
2019-10-15 12:00:58 +02:00
Goutham Veeramachaneni b035c55a96
Step down and propose Ganesh as TSDB maintainer
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2019-10-15 12:42:20 +05:30
Krasi Georgiev b05b5f9a30
remove debug fmt.Println in tombstones. (#6135)
Signed-off-by: Krasi Georgiev <8903888+krasi-georgiev@users.noreply.github.com>
2019-10-14 14:45:26 +03:00
yuxiaobo 47e51c8b2b Correct spelling mistakes
Signed-off-by: yuxiaobo <yuxiaobogo@163.com>
2019-10-10 18:46:27 +08:00
Krasi Georgiev 81d284f806
Merge the 2.13 release branch to master (#6117) 2019-10-09 17:41:46 +02:00
Björn Rabenstein fc945c4e8a
Merge pull request #6021 from prometheus/reload-checkpoint-async
Garbage collect asynchronously in the WAL Watcher
2019-10-08 11:01:13 +02:00
Ganesh Vernekar 4dfc7a2e77
Remove unnecessary lock in loadWAL (#6107)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-10-08 11:17:40 +05:30
Chris Marchbanks 8df4bca470
Garbage collect asynchronously in the WAL Watcher
The WAL Watcher replays a checkpoint after it is created in order to
garbage collect series that no longer exist in the WAL. Currently the
garbage collection process is done serially with reading from the tip of
the WAL which can cause large delays in writing samples to remote
storage just after compaction occurs.

This also fixes a memory leak where dropped series are not cleaned up as
part of the SeriesReset process.

Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
2019-10-07 14:36:10 -06:00
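Moving garbage collection off the read path means kicking it off in its own goroutine whenever a new checkpoint appears, while the watcher keeps reading from the WAL tip. A rough sketch only; the real watcher does more bookkeeping:

```
package sketch

import "sync"

// watcher is a stripped-down stand-in for the WAL watcher.
type watcher struct {
	gcWG sync.WaitGroup
}

// onNewCheckpoint starts checkpoint replay / series GC asynchronously so that
// reading the WAL tip, and therefore remote-write progress, is not blocked.
func (w *watcher) onNewCheckpoint(dir string, gc func(dir string)) {
	w.gcWG.Add(1)
	go func() {
		defer w.gcWG.Done()
		gc(dir) // replay the checkpoint and drop series no longer in the WAL
	}()
}
```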
Ganesh Vernekar 53ea6d6390
Allocate the shards only once while reading WAL (#6093)
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-10-03 15:21:39 +05:30
Ganesh Vernekar 493ef2f771
Benchmark for loading WAL (#6081)
* Benchmark for loading WAL

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add more cases

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-10-03 11:49:55 +05:30
陈谭军 103f26d188 fix the wrong word (#6069)
Signed-off-by: chentanjun <2799194073@qq.com>
2019-09-30 09:54:55 -06:00
AllenZMC 1e62435960 fix wrong spellings in live_reader.go (#5899)
* fix wrong spellings in live_reader.go
* fix wrong spellings in lex.go

Signed-off-by: czm <zhongming.chang@daocloud.io>
2019-09-21 16:36:33 +03:00
johncming 612f9cb361 tsdb/wal: pull out wal metrics separately as tsdb.DB (#5957)
Signed-off-by: johncming <johncming@yahoo.com>
2019-09-19 14:24:34 +03:00
zhulongcheng e081406b5b tsdb: update chunks format (#6033)
Signed-off-by: zhulongcheng <zhulongcheng.dev@gmail.com>
2019-09-19 13:56:32 +03:00
Callum Styan 3344bb5c33 Move WAL watcher code to tsdb/wal package. (#5999)
* Move WAL watcher code to tsdb/wal package.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Fix tests after moving WAL watcher code.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Lint fixes.

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2019-09-19 14:45:41 +05:30
Yao Zengzeng d1f21552b9 some refactoring to make PostingsForMatchers more readable (#5897)
Signed-off-by: YaoZengzeng <yaozengzeng@zju.edu.cn>
2019-09-13 16:10:35 +01:00
Lucas Servén Marín 8ab628b354 tsdb: allow readonly DB to create flush WAL (#6006)
This PR gives the readonly DB the ability to create blocks from the WAL.
In order to implement this, we modify DBReadOnly.Blocks() to return an
empty slice and no error if no blocks are found.
xref: https://github.com/prometheus/tsdb/issues/346#issuecomment-520786524

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2019-09-13 11:25:21 +01:00
zhulongcheng 937cc1a52a tsdb: add block meta version constant (#5994)
Signed-off-by: zhulongcheng <zhulongcheng.dev@gmail.com>
2019-09-09 12:28:01 +03:00
Erfan Besharat 9336c01dfd Add methods to fetch page's buf data in tsdb WAL (#5967)
* move the WAL page buf reset in its own func

Signed-off-by: Erfan Besharat <erbesharat@gmail.com>
2019-08-30 11:38:36 +03:00
陈谭军 50d453b3c3 fix-up tsdb-typo (#5954)
Signed-off-by: chentanjun <2799194073@qq.com>
2019-08-28 14:43:02 +01:00
li mengyang 1c6d2194c4 fix spelling mistakes in docs (#5952)
Signed-off-by: hwdef <hwdef97@gmail.com>
2019-08-27 11:33:40 -06:00
johncming 7d43feb03f tsdb/wal: some small refactoring for easier reading (#5930)
Signed-off-by: johncming <johncming@yahoo.com>
2019-08-22 16:12:59 +03:00
Chris Marchbanks 5e36aa1491 Add test for MemPostings.Delete (#5910)
Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
2019-08-22 15:19:12 +03:00
Bartek Plotka f0863a604e Removed extra tsdb/testutil after merge.
Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
2019-08-14 10:12:32 +01:00
Ganesh Vernekar 5ecef3542d
Cleanup after merging tsdb into prometheus
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 14:04:14 +05:30
Ganesh Vernekar 7cf09b0395
Moving tsdb into its own subdirectory
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 13:58:49 +05:30