We don't use seriesShard during DB initialization, so we can use the
same 8 bytes to store mmMaxTime, saving that space for the rest of the
lifetime of the database.
This doesn't affect CPU performance.
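A minimal sketch of the idea, with an illustrative field name rather
than the actual memSeries definition:

```go
type memSeries struct {
	// During DB initialization this uint64 holds the series' shard
	// hash; once replay is done, the same 8 bytes hold the max time
	// of the most recently memory-mapped chunk.
	shardHashOrMemoryMappedMaxTime uint64
	// ... other fields ...
}
```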
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
In each case the following member requires 8-byte alignment, so moving
one beside the other shrinks memSeries from 176 to 168 bytes, when
compiled with `-tags stringlabels`.
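The effect, illustrated on a toy struct (not the actual memSeries
layout): grouping fields by alignment removes a padding hole.

```go
package main

import (
	"fmt"
	"unsafe"
)

// A small field in front of an 8-byte-aligned one forces padding.
type padded struct {
	flag bool  // 1 byte + 7 bytes of padding to align t
	t    int64 // requires 8-byte alignment
	n    int32 // 4 bytes + 4 bytes of trailing padding
}

// Moving the small fields together shrinks the struct.
type packed struct {
	t    int64
	n    int32
	flag bool // only 3 bytes of trailing padding now
}

func main() {
	fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{})) // 24 16
}
```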
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* TSDB: Document what needs locking in memSeries
* TSDB: Lock around access to series labels
So we can modify them to reset the symbol-table.
* TSDB: Make label locking conditional on build tag (see the sketch after this entry)
---------
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
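A sketch of the pattern, assuming a `dedupelabels` build tag for the
shared-symbol-table mode; the tag, file names, and accessor shown here
are illustrative, not the exact Prometheus source:

```go
// head_dedupelabels.go (illustrative file name)
//go:build dedupelabels

// With a shared symbol table, stored labels may be rewritten, so
// reads must take the series lock.
func (s *memSeries) labels() labels.Labels {
	s.Lock()
	defer s.Unlock()
	return s.lset
}

// head_other.go (illustrative file name)
//go:build !dedupelabels

// Without the tag, labels are immutable and need no lock.
func (s *memSeries) labels() labels.Labels {
	return s.lset
}
```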
* Pass affected labels to MemPostings.Delete
As suggested by @bboreham, we can track the labels of the deleted series
and avoid iterating through all the label/value combinations.
This is much faster for the MemPostings.Delete call itself. We don't
have a benchmark for stripeSeries.gc(), where we'll pay the price of
iterating the labels of each deleted series.
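A sketch of the idea with simplified types (not the real MemPostings
implementation): only the posting lists for labels the deleted series
actually carried get rewritten.

```go
type label struct{ name, value string }

type memPostings struct {
	m map[string]map[string][]uint64 // label name -> value -> series refs
}

// delete rewrites only the posting lists named in `affected`, instead
// of scanning every label/value combination in the index.
func (p *memPostings) delete(deleted map[uint64]struct{}, affected map[label]struct{}) {
	for l := range affected {
		refs := p.m[l.name][l.value]
		kept := refs[:0]
		for _, r := range refs {
			if _, ok := deleted[r]; !ok {
				kept = append(kept, r)
			}
		}
		if len(kept) == 0 {
			delete(p.m[l.name], l.value) // drop the empty posting list
		} else {
			p.m[l.name][l.value] = kept
		}
	}
}
```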
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* TSDB: Don't compact the head block when empty
Don't compact the Head block if there have not yet been any samples
appended.
Previously, the logic for determining if the head should be compacted
relied on the default values for min and max time and integer overflow
when they were checked in `Head.compactable()`. The check in
`Head.compactable()` effectively did `math.MinInt64 - math.MaxInt64`
which overflowed and wrapped to `1`. Since `1` is less than `1.5`
times the chunk range, compaction did not happen. This was the correct
behavior but relying on overflow wrapping is surprising.
This change adds a method for checking if the min and max time for the
head is unset and uses it to short-circuit compaction in that case.
It also replaces several explicit checks for the default value to
determine if the head has not yet had any samples added.
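A sketch of the old check versus the new short-circuit (names like
`initialized` are illustrative; the real check lives in
`Head.compactable()`):

```go
import "math"

// Sentinel values before any sample is appended.
const (
	noMinTime = int64(math.MaxInt64)
	noMaxTime = int64(math.MinInt64)
)

// Old check: with the sentinels above, maxT-minT wraps around to 1,
// so compaction was skipped only by accident of overflow.
func compactableOld(minT, maxT, chunkRange int64) bool {
	return maxT-minT > chunkRange/2*3
}

// New: make "no samples yet" explicit and short-circuit.
func initialized(minT, maxT int64) bool {
	return minT != noMinTime && maxT != noMaxTime
}

func compactable(minT, maxT, chunkRange int64) bool {
	return initialized(minT, maxT) && maxT-minT > chunkRange/2*3
}
```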
Signed-off-by: Nick Pillitteri <nick.pillitteri@grafana.com>
Dogfood native histograms.
Allow dependent projects to migrate to native histograms.
I took the defaults from client_golang.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
This PR is a reference implementation of the proposal described in #10420.
In addition to what is described in #10420, in this PR I've introduced labels.StableHash(). The idea is to offer a hashing function which doesn't change over time, and which query sharding uses in order to get stable behaviour over time. The implementation of labels.StableHash() is the hashing function used by Prometheus before stringlabels, and it's what Grafana Mimir uses for query sharding (because it was built before stringlabels was a thing).
Follow up work
As mentioned in #10420, if this PR is accepted I'm also open to uploading another fundamental piece used by Grafana Mimir query sharding to accelerate query execution: an optional, configurable and fast in-memory cache for the series hashes.
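A sketch of what labels.StableHash computes (this is the
pre-stringlabels labels.Hash scheme; simplified in that the real
function falls back to streaming xxhash for very long label sets):

```go
import "github.com/cespare/xxhash/v2"

type Label struct{ Name, Value string }

// stableHash hashes name/value pairs joined by a separator byte that
// cannot occur in valid UTF-8 data, and must never change across
// versions, so shard assignment stays stable over time.
func stableHash(ls []Label) uint64 {
	b := make([]byte, 0, 1024)
	for _, l := range ls {
		b = append(b, l.Name...)
		b = append(b, 0xff)
		b = append(b, l.Value...)
		b = append(b, 0xff)
	}
	return xxhash.Sum64(b)
}
```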
Signed-off-by: Marco Pracucci <marco@pracucci.com>
Mutex is 8 bytes; RWMutex is 24 bytes and much more complicated. Since
`RLock` is only used in two places, `UpdateMetadata` and `Delete`,
neither of which are hotspots, we should use the cheaper one.
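For reference, the size difference is easy to confirm:

```go
package main

import (
	"fmt"
	"sync"
	"unsafe"
)

func main() {
	fmt.Println(unsafe.Sizeof(sync.Mutex{}))   // 8
	fmt.Println(unsafe.Sizeof(sync.RWMutex{})) // 24
}
```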
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Lifting an optimisation from Agent code: `seriesHashmap.del` can use
the unique series reference and doesn't need to check Labels.
Also streamline the logic for deleting from `unique` and `conflicts` maps,
and add some comments to help the next person.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Append created timestamps (the ingestion idea is sketched after this entry).
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Log when created timestamps are ignored
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Proposed changes to Append CT PR.
Changes:
* Changed textparse Parser interface for consistency and robustness.
* Changed CT interface to be more explicit and handle validation.
* Simplified test, change scrapeManager to allow testability.
* Added TODOs.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Updates.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Addressed comments.
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Refactor head_appender test
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Fix linter issues
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
* Use model.Sample in head appender test
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
---------
Signed-off-by: Arthur Silva Sens <arthur.sens@coralogix.com>
Signed-off-by: bwplotka <bwplotka@gmail.com>
Co-authored-by: bwplotka <bwplotka@gmail.com>
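The ingestion idea referenced above, as a hedged sketch
(AppendCTZeroSample is the appender method this PR series adds; the
validation and "ignored CT" logging are elided here):

```go
import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

// appendWithCT injects a synthetic zero sample at the created
// timestamp, if one is known and older than the sample, so counter
// resets become visible to rate() and friends.
func appendWithCT(app storage.Appender, ref storage.SeriesRef,
	lset labels.Labels, ct, t int64, v float64) (storage.SeriesRef, error) {
	if ct != 0 && ct < t {
		// Errors (e.g. an out-of-order CT) are ignored here; the real
		// code logs when created timestamps are ignored.
		if r, err := app.AppendCTZeroSample(ref, lset, t, ct); err == nil {
			ref = r
		}
	}
	return app.Append(ref, lset, t, v)
}
```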
* Fix tsdb.stripeSeries.gc so it handles conflicts properly
tsdb.stripeSeries.gc needs to prune seriesHashmap.conflicts first,
otherwise seriesHashmap replaces the unique field with the first among
the conflicts. Also add a regression test.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* TestStripeSeries_gc: Support stringlabels, don't use internals
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
---------
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* Fix tsdb.seriesHashmap.set by making receiver a pointer
The method tsdb.seriesHashmap.set currently doesn't set the conflicts
field properly, due to the receiver being a non-pointer. Fix by turning
the receiver into a pointer, and add a corresponding regression test.
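The bug class, reduced to a self-contained example (illustrative
types, not the actual seriesHashmap):

```go
type hashmap struct {
	conflicts map[uint64][]string
}

// Broken: the value receiver is a copy, so the lazily allocated map
// assigned to the copy's field is lost when the method returns.
func (m hashmap) setBroken(hash uint64, s string) {
	if m.conflicts == nil {
		m.conflicts = map[uint64][]string{} // never escapes
	}
	m.conflicts[hash] = append(m.conflicts[hash], s)
}

// Fixed: the pointer receiver mutates the caller's value.
func (m *hashmap) set(hash uint64, s string) {
	if m.conflicts == nil {
		m.conflicts = map[uint64][]string{}
	}
	m.conflicts[hash] = append(m.conflicts[hash], s)
}
```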
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
* Add failing test.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Don't run OOO head garbage collection while reads are running.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add further test cases for different order of operations.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure all queriers are closed if `DB.blockChunkQuerierForRange()` fails.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure all queriers are closed if `DB.Querier()` fails.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Invert error handling in `DB.Querier()` and `DB.blockChunkQuerierForRange()` to make it clearer
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Ensure that queries that touch OOO data can't block OOO head garbage collection forever.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Address PR feedback: fix parameter name in comment
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
Signed-off-by: Charles Korn <charleskorn@users.noreply.github.com>
* Address PR feedback: use `lastGarbageCollectedMmapRef`
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Address PR feedback: ensure pending reads are cleaned up if creating an OOO querier fails
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
Signed-off-by: Charles Korn <charleskorn@users.noreply.github.com>
Co-authored-by: Jesus Vazquez <jesusvazquez@users.noreply.github.com>
* Make head block ULIDs descriptive
As far as I understand, these ULIDs aren't persisted anywhere, so it
should be safe to change them.
When debugging an issue, seeing a ULID like
`2ZBXFNYVVFDXFPGSB1CHFNYQTZ` or `33DXR7JA39CHDKMQ9C40H6YVVF` isn't very
helpful, so I propose to make them readable in their ULID string
version (illustrated after this entry).
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Set a different ULID for RangeHead
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
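The readable-ULID idea above, as a hypothetical illustration (these
are not necessarily the exact strings merged): any valid 26-character
Crockford base32 string parses, so the head ULIDs can spell out what
they are.

```go
import "github.com/oklog/ulid/v2"

var (
	headULID      = ulid.MustParse("0000000000XXXXXXXXXXXXHEAD")
	rangeHeadULID = ulid.MustParse("0000000000XXXXXXXRANGEHEAD")
)
```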
Instead of a map of slices of `*memSeries`, ready for any of them to
hold series where hash values collide, split into a map of `*memSeries`
and a map of slices which is usually empty, since hash collisions are
a one-in-a-billion thing.
The `del` method gets more complicated, to maintain the invariant that
a series is only in one of the two maps.
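A sketch of the split structure and its `del` (simplified: it matches
on the series ref, per the earlier del optimisation; not the exact
Prometheus code):

```go
type memSeries struct{ ref uint64 }

// seriesHashmap splits the common case from the rare collision case.
type seriesHashmap struct {
	unique    map[uint64]*memSeries   // one series per hash
	conflicts map[uint64][]*memSeries // only populated on collisions
}

// del keeps the invariant that a series lives in exactly one map:
// when the unique entry is removed, one conflict is promoted.
func (m *seriesHashmap) del(hash uint64, ref uint64) {
	if s, ok := m.unique[hash]; ok && s.ref == ref {
		if others := m.conflicts[hash]; len(others) > 0 {
			m.unique[hash] = others[0] // promote a conflicting series
			if len(others) == 1 {
				delete(m.conflicts, hash)
			} else {
				m.conflicts[hash] = others[1:]
			}
		} else {
			delete(m.unique, hash)
		}
		return
	}
	for i, s := range m.conflicts[hash] {
		if s.ref == ref {
			m.conflicts[hash] = append(m.conflicts[hash][:i], m.conflicts[hash][i+1:]...)
			break
		}
	}
	if len(m.conflicts[hash]) == 0 {
		delete(m.conflicts, hash)
	}
}
```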
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Extract the middle of the loop into a function, so it will be
easier to modify the `seriesHashmap` data structure.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
When Prometheus restarts it creates every series read in from the WAL,
but many of those series will be finished and never receive any more
samples. By deferring allocation of the txRing slice to when it is
first needed, we save 32 bytes per stale series.
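A sketch of the lazy allocation (a simplified ring, not the real
txRing used for isolation):

```go
type txRing struct {
	txs []uint64
	pos int
}

// add allocates the ring only on the first append, so series replayed
// from the WAL that never receive another sample pay nothing.
func (r *txRing) add(appendID uint64) {
	if r.txs == nil {
		r.txs = make([]uint64, 4) // allocated on first use only
	}
	r.txs[r.pos] = appendID
	r.pos = (r.pos + 1) % len(r.txs)
}
```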
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Additionally wrap WBL replay error
Although WBL replay is already wrapped with errLoadWbl,
there are other errors that can happen during a WBL replay.
We should not try to repair the WAL in those cases.
This commit additionally wraps the final error in Head.Init again
with errLoadWbl so that WBL replay errors can be identified properly.
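A sketch of the wrapping pattern (simplified; the real errLoadWbl
lives in tsdb's head code):

```go
import "errors"

// errLoadWbl marks any failure that happened during WBL replay.
type errLoadWbl struct{ err error }

func (e errLoadWbl) Error() string { return e.err.Error() }
func (e errLoadWbl) Unwrap() error { return e.err }

// isWBLError lets Head.Init's caller skip WAL repair for WBL failures.
func isWBLError(err error) bool {
	var e errLoadWbl
	return errors.As(err, &e)
}
```

In Head.Init the final replay error is wrapped the same way, so every
WBL failure path is identifiable.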
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Signed-off-by: Jesus Vazquez <jesusvzpg@gmail.com>
Co-authored-by: Jesus Vazquez <jesusvzpg@gmail.com>
Signed-off-by: Levi Harrison <git@leviharrison.dev>
This avoids situations where metrics are scraped before the data they
are trying to look at is initialized.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Add a chunk size limit in bytes
This creates a hard cap for XOR chunks of 1024 bytes.
The limit for histogram chunks is also 1024 bytes, but it is a soft limit: a histogram has a dynamic size, and even a single one could be larger than 1024 bytes.
This also avoids cutting a new histogram chunk if the existing chunk has fewer than 10 histograms. That way we accept "jumbo chunks" in order to get at least 10 histograms into a chunk, allowing compression to kick in.
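A sketch of the resulting cut policy (constants from the description;
function names illustrative):

```go
const (
	maxChunkBytes         = 1024 // hard cap for XOR chunks
	minHistogramsPerChunk = 10   // accept jumbo chunks below this
)

// XOR samples have a bounded size, so the cap is enforced strictly.
func shouldCutXORChunk(chunkBytes int) bool {
	return chunkBytes >= maxChunkBytes
}

// A single histogram may exceed the cap, so it is only a soft limit:
// don't cut until the chunk holds enough histograms to compress well.
func shouldCutHistogramChunk(chunkBytes, numHistograms int) bool {
	return chunkBytes >= maxChunkBytes && numHistograms >= minHistogramsPerChunk
}
```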
Signed-off-by: Justin Lei <justin.lei@grafana.com>
When a particular SeriesLifecycleCallback implementation tries to
optimize and run closer to the Head, keeping track of HeadSeriesRefs
instead of labelsets, it's impossible to handle the PostDeletion
callback properly, as there's no way to know which series refs were
deleted from the head.
This changes the callback to provide the series refs alongside the
labelsets, so the implementation can choose what to do.
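A sketch of the changed interface with simplified types (the real
callback lives in tsdb and uses chunks.HeadSeriesRef and
labels.Labels):

```go
type HeadSeriesRef uint64
type Label struct{ Name, Value string }

type SeriesLifecycleCallback interface {
	// PostDeletion now receives the labels of each deleted series
	// keyed by its head series ref, so ref-tracking implementations
	// can clean up without a reverse lookup.
	PostDeletion(deleted map[HeadSeriesRef][]Label)
}
```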
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Currently memSeries holds a single head chunk in memory and a slice of mmapped chunks.
When append() is called on memSeries it might decide that a new headChunk is needed for the given append() call.
If that happens it will first mmap the existing head chunk, and only after that will it create a new empty headChunk and continue appending
our sample to it.
Since appending samples takes a write lock on memSeries, no other read or write can happen until the append completes.
When an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
Mmapping itself uses a lock as it needs to be serialised, which means that the more chunks there are to mmap, the longer each chunk might wait
to be mmapped.
If enough chunks require mmapping, some memSeries will be locked for long enough to start affecting
queries and scrapes.
Queries might time out, since by default they have a 2 minute timeout set.
Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
and calls using rate() and such, since the time range requested in the query might have too few samples to calculate anything.
To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
while older, yet-to-be-mmapped chunks are linked to it.
Mmapping is done on a schedule by iterating all memSeries one by one. Thanks to this we control when mmapping is done, since we trigger
it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.
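A sketch of the resulting structure (simplified, not the exact
Prometheus types): cutting a new head chunk is now just a pointer
swap, and a background pass walks the list to mmap closed chunks
later.

```go
import "math"

type memChunk struct {
	minTime, maxTime int64
	data             []byte    // stands in for the real chunkenc.Chunk
	prev             *memChunk // older chunk not yet mmapped, or nil
}

type memSeries struct {
	headChunk *memChunk // the 'open' chunk receiving new samples
}

// cutNewHeadChunk no longer mmaps anything: it links the old head
// chunk so a scheduled pass can mmap it outside the append path.
func (s *memSeries) cutNewHeadChunk(mint int64) *memChunk {
	c := &memChunk{
		minTime: mint,
		maxTime: math.MinInt64, // no samples yet
		prev:    s.headChunk,   // keep the old chunk until mmapped
	}
	s.headChunk = c
	return c
}
```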
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* WIP implement WAL watcher reading via notifications over a channel from
the TSDB code
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Notify via head appenders Commit (finished all WAL logging) rather than
on each WAL Log call
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix misspelled Notify plus add a metric for dropped Write notifications (pattern sketched after this entry)
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Update tests to handle new notification pattern
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* this test maybe needs more time on windows?
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* does this test need more time on windows as well?
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* read timeout is already a time.Duration
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* remove mistakenly committed benchmark data files
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* address some review feedback
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix missed changes from previous commit
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix issues from wrapper function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* try fixing race condition in test by allowing tests to overwrite the
read ticker timeout instead of calling the Notify function
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix linting
Signed-off-by: Callum Styan <callumstyan@gmail.com>
---------
Signed-off-by: Callum Styan <callumstyan@gmail.com>
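The notification pattern from this change, sketched (names and types
illustrative; the real drop counter is exported as a Prometheus
metric):

```go
import "sync/atomic"

type writeNotifier struct {
	ch      chan struct{} // buffered with capacity 1
	dropped atomic.Int64  // count of dropped notifications
}

// Notify is called from the head appender's Commit, once per commit
// rather than per WAL Log call. The send never blocks: if the channel
// is full the watcher is already due to wake up, so dropping is fine.
func (n *writeNotifier) Notify() {
	select {
	case n.ch <- struct{}{}:
	default:
		n.dropped.Add(1)
	}
}
```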
`head.deleted` holds the WAL segment in use at the time each series was
removed from the head. At the end of `truncateWAL()` we will delete
all segments up to `last`, so we can drop any series that were last seen
in a segment at or before that point.
(same change in Prometheus Agent too)
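A sketch of the truncation step with simplified types (the real map is
keyed by the head series ref):

```go
// truncateDeleted drops tombstone entries for series whose recorded
// WAL segment is at or before the last truncated segment.
func truncateDeleted(deleted map[uint64]int, last int) {
	for ref, segment := range deleted {
		if segment <= last {
			delete(deleted, ref) // deleting while ranging is safe in Go
		}
	}
}
```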
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In the past, every sample value was a float, so it was fine to call a
variable holding such a float "value" or "sample". With native
histograms, a sample might have a histogram value. And a histogram
value is still a value. Calling a float value just "value" or "sample"
or "V" is therefore misleading. Over the last few commits, I already
renamed many variables, but this cleans up a few more places where the
changes are more invasive.
Note that we do not attempt any renaming in the JSON APIs or in the
protobufs. That would be quite a disruption. However, internally we
can name variables as we want, and we should go with the option that
avoids misunderstandings.
Signed-off-by: beorn7 <beorn@grafana.com>
* Use zeropool.Pool to work around SA6002
I built a tiny library called https://github.com/colega/zeropool to
work around the SA6002 staticcheck issue.
While searching GitHub for references to that SA6002 staticcheck issue,
one of the first results was Prometheus itself, with quite a lot of
ignores of it.
This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available (the idea is sketched after this entry).
Also added a benchmark for HeadAppender Append/Commit when series
already exist, which IMO is one of the most common cases, as I didn't
find an existing one.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Improve BenchmarkHeadAppender with more cases
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* A little copying is better than a little dependency
https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix imports order
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add license header
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Copyright should be on one of the first 3 lines
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use require.Equal for testing
I don't depend on testify in my lib, but here we have it available.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Avoid flaky test
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Also use zeropool for pointsPool in engine.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
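A condensed sketch of the zeropool idea (simplified: the real library
also recycles the *T wrappers themselves so Put doesn't allocate):

```go
import "sync"

// Pool stores only pointers in the underlying sync.Pool, so neither
// Get nor Put boxes an interface value, which is what staticcheck's
// SA6002 warns about for slice types.
type Pool[T any] struct {
	p *sync.Pool
}

func New[T any](item func() T) Pool[T] {
	return Pool[T]{
		p: &sync.Pool{New: func() any { v := item(); return &v }},
	}
}

func (z Pool[T]) Get() T  { return *z.p.Get().(*T) }
func (z Pool[T]) Put(x T) { z.p.Put(&x) }
```

Usage mirrors sync.Pool but with values: `buffers := New(func() []byte
{ return nil })`, then `b := buffers.Get()` and `buffers.Put(b[:0])`.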
Without this fix, if snapshots were enabled and the WBL went missing
between restarts, TSDB did not recognize that there were OOO mmap
chunks on disk, and we could not query them until those chunks were
compacted into blocks.
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>