Commit Graph

18 Commits (e1bfdd22571d874e1f9154d1a35ce3a8569ded36)

Author SHA1 Message Date
Bryan Boreham 31c5760551
Neater string vs byte-slice conversions (#14425)
unsafe.Slice was added in Go 1.17; unsafe.String and unsafe.StringData in Go 1.20

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-09-21 12:19:21 +02:00
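
A minimal sketch of the zero-copy conversions this commit refers to, using the Go 1.20 `unsafe` helpers; the function names are illustrative, not Prometheus's:

```go
package main

import (
	"fmt"
	"unsafe"
)

// byteSliceToString reinterprets b as a string without copying.
// Only safe if b is never mutated afterwards.
func byteSliceToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// stringToByteSlice reinterprets s as a byte slice without copying.
// The result must never be written to.
func stringToByteSlice(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	fmt.Println(byteSliceToString([]byte("hello"))) // hello
	fmt.Println(stringToByteSlice("world"))         // [119 111 114 108 100]
}
```
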
Bryan Boreham 0a4f130b39 [Comment] Correct the comment on Decbuf.UvarintBytes
The value is valid when returned, but can become invalid later.

Return to previous wording.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2024-08-30 09:40:18 +01:00
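
A toy decoder (not the real tsdb/encoding API) illustrating why such a returned value is only valid when returned: it aliases the decoding buffer rather than copying, so it goes stale if the backing memory is reused or unmapped.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decbuf is a toy stand-in for tsdb/encoding.Decbuf.
type decbuf struct{ b []byte }

// uvarintBytes returns the next length-prefixed byte string. The result
// aliases d.b: it stays valid only as long as the backing buffer does
// (e.g. until an mmapped file is unmapped or the buffer is reused).
func (d *decbuf) uvarintBytes() []byte {
	l, n := binary.Uvarint(d.b)
	v := d.b[n : n+int(l)]
	d.b = d.b[n+int(l):]
	return v
}

func main() {
	buf := append(binary.AppendUvarint(nil, 5), []byte("hello")...)
	d := decbuf{b: buf}
	v := d.uvarintBytes()
	fmt.Printf("%s\n", v)  // hello
	copy(buf[1:], "XXXXX") // mutating the backing buffer...
	fmt.Printf("%s\n", v)  // ...silently changes v too: XXXXX
}
```
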
beorn7 0f760f63dd lint: Revamp our linting rules, mostly around doc comments
Several things done here:

- Set `max-issues-per-linter` to 0 so that we actually see all linter
  warnings and not just 50 per linter. (As we also set
  `max-same-issues` to 0, I assume this was the intention from the
  beginning.)

- Stop using the golangci-lint default excludes (by setting
  `exclude-use-default: false`). Those are too generous and don't match
  our style conventions. (I have re-added some of the excludes
  explicitly in this commit. See below.)

- Re-add the `errcheck` exclusion we have used so far via the
  defaults.

- Exclude the signature requirement `govet` has for `Seek` methods
  because we use non-standard `Seek` methods a lot. (But we keep other
  requirements, while the default excludes completely disabled the
  check for common method signatures.)

- Exclude warnings about missing doc comments on exported symbols. (We
  used to be pretty adamant about doc comments, but stopped that at
  some point in the past. By now, we have about 500 missing doc
  comments. We may consider reintroducing this check, but that's
  outside of the scope of this commit. The default excludes of
  golangci-lint essentially ignore doc comments completely.)

- By no longer using the default excludes, we now get warnings back on
  malformed doc comments. That's the most impactful change in this
  commit. It does not enforce doc comments (again), but _if_ there is
  a doc comment, it has to have the recommended form. (Most of the
  changes in this commit are fixing this form.)

- Improve wording/spelling of some comments in .golangci.yml, and
  remove an outdated comment.

- Leave `package-comments` inactive, but add a TODO asking if we
  should change that.

- Add a new sub-linter `comment-spacings` (and fix corresponding
  comments), which flags missing spaces after the leading `//`.

Signed-off-by: beorn7 <beorn@grafana.com>
2024-08-22 17:36:11 +02:00
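
An assumed before/after showing the two comment shapes those revived checks care about, the recommended doc-comment form and the `comment-spacings` space after `//`; the functions are made up for illustration:

```go
package demo

//badDocComment: no space after the slashes, and the text does not start
//with the symbol name, so both revived checks would flag this comment.
func Add(a, b int) int { return a + b }

// Mul multiplies a by b. This is the recommended form: a space after
// the leading slashes, and the comment starts with the symbol's name.
func Mul(a, b int) int { return a * b }
```
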
Matthieu MOREL 4d6d3c1715
tsdb/encoding: use Go standard errors package
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2023-11-11 19:01:11 +01:00
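
A sketch of the migration pattern this implies, assuming the usual move from github.com/pkg/errors to the standard library; names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// errInvalidSize is a sentinel error, declared with the standard package.
var errInvalidSize = errors.New("invalid size")

// checkSize wraps with fmt.Errorf and %w where errors.Wrapf was used before.
func checkSize(got, want int) error {
	if got != want {
		return fmt.Errorf("read segment (got %d bytes, want %d): %w", got, want, errInvalidSize)
	}
	return nil
}

func main() {
	err := checkSize(3, 4)
	// errors.Is replaces errors.Cause-style unwrapping.
	fmt.Println(errors.Is(err, errInvalidSize)) // true
}
```
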
Jesus Vazquez e934d0f011 Merge 'main' into sparsehistogram
Signed-off-by: Jesus Vazquez <jesus.vazquez@grafana.com>
2022-10-05 22:14:49 +02:00
Abirdcfly 314aa45c2c
chore: remove duplicate word in comments (#11225)
Signed-off-by: Abirdcfly <fp544037857@gmail.com>
2022-08-27 22:21:41 +02:00
beorn7 5d4db805ac Merge branch 'main' into sparsehistogram 2021-11-17 19:57:31 +01:00
Mateusz Gozdek 1a6c2283a3 Format Go source files using 'gofumpt -w -s -extra'
Part of #9557

Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
2021-11-02 19:52:34 +01:00
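
Representative (assumed) examples of what `gofumpt -w -s -extra` normalizes on top of plain gofmt; these are not diffs taken from this commit:

```go
package demo

type point struct{ x, y int }

// Before: gofmt accepts a redundant inner type and a stray blank line.
//
//	func pointsBefore() []point {
//
//		return []point{point{1, 2}}
//	}

// After: -s drops the redundant composite-literal type, and gofumpt
// removes the empty line at the start of the function body.
func pointsAfter() []point {
	return []point{{1, 2}}
}
```
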
Ganesh Vernekar f0688c21d6
Log sparse histograms into WAL and replay from it (#9191)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-11 17:38:48 +05:30
Ganesh Vernekar 095f572d4a
Sync sparsehistogram branch with main (#9189)
* Fix `kuma_sd` targetgroup reporting (#9157)

* Bundle all xDS targets into a single group

Signed-off-by: austin ce <austin.cawley@gmail.com>

* Snapshot in-memory chunks on shutdown for faster restarts (#7229)

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Rename links

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Remove Individual Data Type Caps in Per-shard Buffering for Remote Write (#8921)

* Moved everything to nPending buffer

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Simplify exemplar capacity addition

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Added pre-allocation

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Don't allocate if not sending exemplars

Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Avoid deadlock when processing duplicate series record (#9170)

* Avoid deadlock when processing duplicate series record

`processWALSamples()` needs to be able to send on its output channel
before it can read the input channel, so the caller reads from the
output channel to make room whenever it is full (see the sketch after
this entry).

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* processWALSamples: update comment

Previous text seems to relate to an earlier implementation.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Optimise WAL loading by removing extra map and caching min-time (#9160)

* BenchmarkLoadWAL: close WAL after use

So that goroutines are stopped and resources released

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* BenchmarkLoadWAL: make series IDs co-prime with #workers

Series are distributed across workers by taking the modulus of the
ID with the number of workers, so multiples of 100 are a poor choice.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* BenchmarkLoadWAL: simulate mmapped chunks

Real Prometheus cuts chunks every 120 samples, then skips those samples
when re-reading the WAL. Simulate this by creating a single mapped chunk
for each series, since the max time is all the reader looks at.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Fix comment

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Remove series map from processWALSamples()

The locks that the map was commented as reducing contention on are now
sharded 32,000 ways, so they won't be contended. Removing the map saves
memory and is just as fast.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* loadWAL: Cache the last mmapped chunk time

So we can skip calling append() for samples it will reject.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Improvements from code review

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Full stops and capitals on comments

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Cache max time in both places mmappedChunks is updated

Including refactor to extract function `setMMappedChunks`, to reduce
code duplication.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Update head min/max time when mmapped chunks added

This ensures we have the correct values if no WAL samples are added for
that series.

Note that `mSeries.maxTime()` was always `math.MinInt64` before, since
that function doesn't consider mmapped chunks.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* Split Go and React Tests (#8897)

* Added go-ci and react-ci

Co-authored-by: Julien Pivotto <roidelapluie@inuits.eu>
Signed-off-by: Levi Harrison <git@leviharrison.dev>

* Remove search keymap from new expression editor (#9184)

Signed-off-by: Julius Volz <julius.volz@gmail.com>

Co-authored-by: Austin Cawley-Edwards <austin.cawley@gmail.com>
Co-authored-by: Levi Harrison <git@leviharrison.dev>
Co-authored-by: Julien Pivotto <roidelapluie@inuits.eu>
Co-authored-by: Bryan Boreham <bjboreham@gmail.com>
Co-authored-by: Julius Volz <julius.volz@gmail.com>
2021-08-11 15:43:17 +05:30
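
A generic sketch of the deadlock-avoidance pattern from the "duplicate series record" fix in the entry above: the coordinator must drain worker output while feeding worker input, since both channels are bounded. This shows the shape of the fix, not Prometheus's actual loadWAL code.

```go
package main

import "fmt"

// feed hands jobs to a worker while draining its results, so neither
// side can block forever on a full buffered channel.
func feed(input chan<- int, output <-chan int, jobs []int) []int {
	var results []int
	for _, j := range jobs {
		for sent := false; !sent; {
			select {
			case input <- j: // worker accepted the job
				sent = true
			case r := <-output: // output was full: make room first
				results = append(results, r)
			}
		}
	}
	close(input)
	for r := range output { // collect whatever is left
		results = append(results, r)
	}
	return results
}

func main() {
	input, output := make(chan int, 1), make(chan int, 1)
	go func() { // worker: reads input, sends processed results
		defer close(output)
		for j := range input {
			output <- j * j
		}
	}()
	fmt.Println(feed(input, output, []int{1, 2, 3, 4}))
}
```
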
Ganesh Vernekar ee7e0071d1
Snapshot in-memory chunks on shutdown for faster restarts (#7229)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
2021-08-06 17:51:01 +01:00
Bryan Boreham dea37853d9
tsdb: use dennwc/varint to speed up WAL decoding (#9106)
* tsdb: use dennwc/varint to speed up decoding

This is a tiny library, MIT-licensed, which unrolls the loop to go
about twice as fast.

Needed to copy the sign-inverting logic inline, previously provided by
the `binary` package.

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>

* More comments to explain varint decoding

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
2021-07-27 10:02:57 +05:30
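
The sign-inverting step mentioned above, as a self-contained sketch: dennwc/varint supplies a faster drop-in Uvarint, and the caller inlines the zig-zag conversion that encoding/binary.Varint normally performs internally.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// varintFrom applies the zig-zag sign inversion to a decoded uvarint,
// matching what encoding/binary.Varint does internally. A faster
// Uvarint (e.g. github.com/dennwc/varint) can be swapped in transparently.
func varintFrom(ux uint64) int64 {
	x := int64(ux >> 1)
	if ux&1 != 0 {
		x = ^x
	}
	return x
}

func main() {
	buf := binary.AppendVarint(nil, -123456)
	ux, _ := binary.Uvarint(buf) // decode as unsigned...
	fmt.Println(varintFrom(ux))  // ...then invert the sign: -123456
}
```
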
Brian Brazil cf76daed2f Avoid WriteAt for Postings.
Flushing buffers and doing a pwrite per posting is expensive
time-wise, so go back to the old way for those. This doubles
our memory usage, but that's still small as it's only
~8 bytes per time series in the index. This is 30-40% faster.

benchmark                                                         old ns/op      new ns/op     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     1101429174     724362123     -34.23%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     1074466374     720977022     -32.90%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     1166510282     677702636     -41.90%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     1075013071     696855960     -35.18%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     1231673790     829328610     -32.67%

benchmark                                                         old allocs     new allocs     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     832571         731435         -12.15%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     894875         793823         -11.29%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     912931         811804         -11.08%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     933511         832366         -10.83%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     1022791        921554         -9.90%

benchmark                                                         old bytes     new bytes     delta
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4     129063496     126472364     -2.01%
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4     124154888     122300764     -1.49%
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4     128790648     126394856     -1.86%
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4     120570696     118946548     -1.35%
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4     138754288     136317432     -1.76%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-16 15:30:49 +00:00
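
A toy contrast of the two strategies weighed above: batching costs ~8 bytes per entry in memory but replaces one pwrite per posting with a single sequential write. Assumed shapes, not the index writer's real code.

```go
package main

import (
	"encoding/binary"
	"os"
)

// writeEach issues one WriteAt (a pwrite) per posting offset: cheap on
// memory, expensive on syscalls and flushes.
func writeEach(f *os.File, base int64, offsets []uint64) error {
	var b [8]byte
	for i, o := range offsets {
		binary.BigEndian.PutUint64(b[:], o)
		if _, err := f.WriteAt(b[:], base+int64(8*i)); err != nil {
			return err
		}
	}
	return nil
}

// writeBatched buffers all offsets (~8 bytes per series) in memory and
// writes them with a single sequential Write.
func writeBatched(f *os.File, offsets []uint64) error {
	buf := make([]byte, 0, 8*len(offsets))
	for _, o := range offsets {
		buf = binary.BigEndian.AppendUint64(buf, o)
	}
	_, err := f.Write(buf)
	return err
}

func main() {
	f, _ := os.CreateTemp("", "postings")
	defer os.Remove(f.Name())
	offsets := []uint64{16, 64, 4096}
	_ = writeBatched(f, offsets)
	_ = writeEach(f, 24, offsets)
}
```
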
Brian Brazil 971dafdfbe Coalesce series reads where we can.
When compacting, rather than doing a read of all
series in the index per label name, do many at once,
but only when it won't use (much) more RAM than writing the
special "all" postings index does.

original in-memory postings:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1202383447 ns/op        158936496 B/op   1031511 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1141792706 ns/op        154453408 B/op   1093453 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1169288829 ns/op        161072336 B/op   1110021 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        1115700103 ns/op        149480472 B/op   1129180 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1        1283813141 ns/op        162937800 B/op   1202771 allocs/op

before:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1145195941 ns/op        131749984 B/op    834400 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1233526345 ns/op        127889416 B/op    897033 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1821942296 ns/op        131665648 B/op    914836 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        8035568665 ns/op        123811832 B/op    934312 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1       71325926267 ns/op        140722648 B/op   1016824 allocs/op

after:
BenchmarkCompactionFromHead/labelnames=1,labelvalues=100000-4                  1        1101429174 ns/op        129063496 B/op    832571 allocs/op
BenchmarkCompactionFromHead/labelnames=10,labelvalues=10000-4                  1        1074466374 ns/op        124154888 B/op    894875 allocs/op
BenchmarkCompactionFromHead/labelnames=100,labelvalues=1000-4                  1        1166510282 ns/op        128790648 B/op    912931 allocs/op
BenchmarkCompactionFromHead/labelnames=1000,labelvalues=100-4                  1        1075013071 ns/op        120570696 B/op    933511 allocs/op
BenchmarkCompactionFromHead/labelnames=10000,labelvalues=10-4                  1        1231673790 ns/op        138754288 B/op   1022791 allocs/op

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-12 08:38:14 +00:00
Brian Brazil 373a1fdfbf Reread index series rather than storing in memory.
Rather than building up a 2nd copy of all the posting
tables, construct it from the data we've already written
to disk. This takes more time, but saves memory.

Current benchmark numbers have this as slightly faster, but that's
likely due to the synthetic data not having many label names.
Memory usage is roughly halved for the relevant bits.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 22:23:39 +00:00
Brian Brazil 48d25e6fe7 Reduce memory used by postings offset table.
Rather than keeping the offset of each postings list, keep only every
nth offset. As postings list offsets have always been sorted, we can
then get to the closest entry before the one we want and iterate
forwards.

I haven't done much tuning on the number 32; it was chosen to try
not to read through more than a 4k page of data.

Switch to a bulk interface for fetching postings. Use it to avoid having
to re-read parts of the posting offset table when querying lots of it.

For an index like the one BenchmarkHeadPostingForMatchers uses, RAM
for r.postings drops from 3.79MB to 80.19kB, or about 48x.
Bytes allocated go down by 30%, and surprisingly CPU usage drops by
4-6% for typical queries too.

benchmark                                                               old ns/op      new ns/op      delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              35231          36673          +4.09%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      563380         540627         -4.04%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      536782         534186         -0.48%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     533990         541550         +1.42%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            113374598      117969608      +4.05%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            146329884      139651442      -4.56%
BenchmarkPostingsForMatchers/Block/i=~""-4                              50346510       44961127       -10.70%
BenchmarkPostingsForMatchers/Block/i!=""-4                              41261550       35356165       -14.31%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              112544418      116904010      +3.87%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       112487086      116864918      +3.89%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        41094758       35457904       -13.72%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                41906372       36151473       -13.73%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              147262414      140424800      -4.64%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             28615629       27872072       -2.60%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       147117177      140462403      -4.52%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     175096826      167902298      -4.11%

benchmark                                                               old allocs     new allocs     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              4              6              +50.00%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      7              11             +57.14%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      7              11             +57.14%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     15             17             +13.33%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            100010         100012         +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            200069         200040         -0.01%
BenchmarkPostingsForMatchers/Block/i=~""-4                              200072         200045         -0.01%
BenchmarkPostingsForMatchers/Block/i!=""-4                              200070         200041         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              100013         100017         +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       100017         100023         +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        200073         200046         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                200075         200050         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              200074         200049         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             111165         111150         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       200078         200055         -0.01%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     311282         311238         -0.01%

benchmark                                                               old bytes     new bytes     delta
BenchmarkPostingsForMatchers/Block/n="1"-4                              264           296           +12.12%
BenchmarkPostingsForMatchers/Block/n="1",j="foo"-4                      360           424           +17.78%
BenchmarkPostingsForMatchers/Block/j="foo",n="1"-4                      360           424           +17.78%
BenchmarkPostingsForMatchers/Block/n="1",j!="foo"-4                     520           552           +6.15%
BenchmarkPostingsForMatchers/Block/i=~".*"-4                            1600461       1600482       +0.00%
BenchmarkPostingsForMatchers/Block/i=~".+"-4                            24900801      17259077      -30.69%
BenchmarkPostingsForMatchers/Block/i=~""-4                              24900836      17259151      -30.69%
BenchmarkPostingsForMatchers/Block/i!=""-4                              24900760      17259048      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",j="foo"-4              1600557       1600621       +0.00%
BenchmarkPostingsForMatchers/Block/n="1",i=~".*",i!="2",j="foo"-4       1600717       1600813       +0.01%
BenchmarkPostingsForMatchers/Block/n="1",i!=""-4                        24900856      17259176      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i!="",j="foo"-4                24900952      17259304      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",j="foo"-4              24900993      17259333      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~"1.+",j="foo"-4             3788311       3142630       -17.04%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!="2",j="foo"-4       24901137      17259509      -30.69%
BenchmarkPostingsForMatchers/Block/n="1",i=~".+",i!~"2.*",j="foo"-4     28693086      20405680      -28.88%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 19:59:31 +00:00
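
A minimal sketch (assumed, not the actual index reader) of the sampling scheme described above: keep every 32nd entry of the sorted offset table in memory, binary-search to the closest preceding sample, and iterate forwards on disk from there.

```go
package main

import (
	"fmt"
	"sort"
)

// sampledOffsets holds every 32nd (key, offset) pair of a sorted
// on-disk postings offset table.
type sampledOffsets struct {
	keys    []string // sorted sample keys
	offsets []int64  // file offset of each sampled entry
}

// start returns the offset at which a forward scan for key should
// begin: the last sampled entry that is <= key.
func (s *sampledOffsets) start(key string) int64 {
	i := sort.SearchStrings(s.keys, key) // first sample >= key
	if i == len(s.keys) || s.keys[i] > key {
		i-- // step back to the closest preceding sample
	}
	if i < 0 {
		i = 0
	}
	return s.offsets[i]
}

func main() {
	t := sampledOffsets{
		keys:    []string{"a", "m", "t"}, // every 32nd key in reality
		offsets: []int64{0, 4096, 8192},
	}
	// Scan forwards from 4096 until the exact entry for "p" is found.
	fmt.Println(t.start("p")) // 4096
}
```
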
Brian Brazil 4df814f509
Don't buffer up tables in memory on compaction. (#6422)
We can instead write it as we go, and then go back and write in the
length at the end.

Also fix the compaction benchmark, which indicates no changes.

For the benchmark, this brings maximum memory usage of the buffers 
from ~200kB down to 128B.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-12-11 12:49:13 +00:00
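
A sketch of the write-as-you-go technique described above: reserve space for the length, stream the entries, then seek back and patch the length in. Assumed layout, not the real table format.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// writeTable streams entries to f, then backfills the 4-byte length
// prefix, so no table-sized buffer is ever held in memory.
func writeTable(f *os.File, entries [][]byte) error {
	start, err := f.Seek(0, io.SeekCurrent)
	if err != nil {
		return err
	}
	if _, err := f.Write(make([]byte, 4)); err != nil { // placeholder
		return err
	}
	var n uint32
	for _, e := range entries {
		if _, err := f.Write(e); err != nil {
			return err
		}
		n += uint32(len(e))
	}
	var lenBuf [4]byte
	binary.BigEndian.PutUint32(lenBuf[:], n)
	_, err = f.WriteAt(lenBuf[:], start) // patch the length in place
	return err
}

func main() {
	f, _ := os.CreateTemp("", "table")
	defer os.Remove(f.Name())
	if err := writeTable(f, [][]byte{[]byte("abc"), []byte("de")}); err != nil {
		fmt.Println(err)
	}
}
```
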
Ganesh Vernekar 7cf09b0395
Moving tsdb into its own subdirectory
Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-08-13 13:58:49 +05:30