The new function `rangeEvalAgg` is mostly a copy of `rangeEval`, but
without `initSeries`, which we don't need, and with the callback to
`aggregation()` inlined.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The existing aggregation function is very long and covers very different
cases.
`aggregationCountValues` is just for `count_values`, which differs from
other aggregations in that it outputs as many series per group as there
are values in the input.
Remove the top-level switch on string parameter type; use the same `Op`
check there as elsewhere.
Pull parameter checking out to the caller, where it is executed only once.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The strings produced by these tests can run to thousands of characters,
which makes test logs difficult to read.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* promql: include more details in error message when creating test query fails
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Include more details when an unexpected metric is returned
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
The definition of histograms in the test framework may create
histograms in a non-compact form. Since histogram comparison relies on
exact equality of the bucket layout, we have to compact the histograms
created by the test framework language before comparing them to
histograms returned from the PromQL engine.
Signed-off-by: beorn7 <beorn@grafana.com>
The size of histogram points is now bigger by 24 bytes due to the
custom values slice.
When histograms are loaded into partial results in vector selectors
we use HPoint type where the size is calculated as
(size of histogram + 8 for timestamp)/16.
a3d1a46eda/promql/value.go (L176)
When histograms are put into Sample type in range evaluations, the
Sample has more overhead and the size is calculated differently:
(size of histogram / 16) + 1 for the timestamp.
a3d1a46eda/promql/engine.go (L1928)
When the size of the histogram is 16k, then the first calculation gives k
but the second gives k+1 for the sample count.
If the histogram size is 16k+8, then both would give k+1.
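As a sketch of the arithmetic (hypothetical helper names, in Go):
```
// hPointSamples: sample count charged for an HPoint in vector
// selectors, per the first formula above.
func hPointSamples(histogramSize int64) int64 {
	return (histogramSize + 8) / 16 // histogram size + 8 bytes of timestamp
}

// sampleSamples: sample count charged for a Sample in range
// evaluations, per the second formula above.
func sampleSamples(histogramSize int64) int64 {
	return histogramSize/16 + 1 // histogram size, then +1 for the timestamp
}

// For size 16k:   hPointSamples gives k, sampleSamples gives k+1.
// For size 16k+8: both give k+1.
```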
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Restrict the capacity of the first argument to `append()` to force an allocation.
This is for the slice implementation only.
Signed-off-by: Domantas Jadenkus <djadenkus@gmail.com>
* Extract method to make it easier to test.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Remove superfluous interface definition.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add test cases for existing instant query functionality.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add support for testing range queries
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Expand test coverage for instant queries and clarify error when a float is returned but a histogram is expected (or vice versa)
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Improve error message formatting
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add test case for instant query command with invalid timestamp
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Fix linting warning.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Remove superfluous print statement and expected result
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Fix linting warning.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Add note about ordered range eval commands.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
* Check that matrix results are always sorted by labels.
Signed-off-by: Charles Korn <charles.korn@grafana.com>
---------
Signed-off-by: Charles Korn <charles.korn@grafana.com>
Using testify outside of unit tests results in panics rather than a
useful error for the user.
Fixes #13703
Signed-off-by: David Leadbeater <dgl@dgl.cx>
This is a bit tough to explain, but I'll try:
`rate` & friends have a sophisticated extrapolation algorithm.
Usually, we extrapolate the result to the total interval specified in
the range selector. However, if the first sample within the range is
too far away from the beginning of the interval, or if the last sample
within the range is too far away from the end of the interval, we
assume the series has started just half a sampling interval before the
first sample, or ended half a sampling interval after the last sample,
respectively, and shorten the extrapolation interval correspondingly. We calculate the sampling
interval by looking at the average time between samples within the
range, and we define "too far away" as "more than 110% of that
sampling interval".
However, if this algorithm leads to an extrapolated starting value
that is negative, we limit the start of the extrapolation interval to
the point where the extrapolated starting value is zero.
At least that was the intention.
What we actually implemented is the following: If extrapolating all
the way to the beginning of the total interval would lead to an
extrapolated negative value, we would only extrapolate to the zero
point as above, even if the algorithm above would have selected a
starting point that is just half a sampling interval before the first
sample and that starting point would not have an extrapolated negative
value. In other words: What was meant as a _limitation_ of the
extrapolation interval yielded a _longer_ extrapolation interval in
this case.
There is an exception to the case just described: If the increase of
the extrapolation interval is more than 110% of the sampling interval,
we suddenly drop back to only extrapolate to half a sampling interval.
This behavior can be nicely seen in the testcounter_zero_cutoff test,
where the rate goes up all the way to 0.7 and then jumps back to 0.6.
This commit changes the behavior to what was (presumably) intended
from the beginning: The extension of the extrapolation interval is
only limited if actually needed to prevent extrapolation to negative
values, but the "limitation" never leads to _more_ extrapolation
anymore.
The difference is subtle, and probably it never bothered anyone.
However, if you calculate a rate of a classic histogram, the old
behavior might create non-monotonic histograms as a result (because of
the jumps you can see nicely in the old version of the
testcounter_zero_cutoff test). With this fix, that doesn't happen
anymore.
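A hedged sketch of the corrected ordering (variable names only approximate those in the engine's extrapolatedRate):
```
// startAdjustment returns how far before the first sample we extrapolate.
func startAdjustment(durationToStart, avgSampleInterval, sampledInterval,
	firstValue, increase float64, isCounter bool) float64 {
	// "Too far away": more than 110% of the average sampling interval.
	if durationToStart >= avgSampleInterval*1.1 {
		durationToStart = avgSampleInterval / 2
	}
	// Zero cutoff, applied strictly as a minimum: it can only shorten the
	// extrapolation interval, never lengthen it as the old code did.
	if isCounter && increase > 0 && firstValue >= 0 {
		durationToZero := sampledInterval * (firstValue / increase)
		if durationToZero < durationToStart {
			durationToStart = durationToZero
		}
	}
	return durationToStart
}
```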
Signed-off-by: beorn7 <beorn@grafana.com>
* add custom buckets to native histogram model
* simple copy for custom bounds
* return errors for unsupported add/sub operations
* add test cases for string and update appendhistogram in scrape to account for new schema
* check fields which are supposed to be unused but may affect results in equals
* allow appending custom buckets histograms regardless of max schema
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Aggregations discard the metric name, so don't try to
include it in the error message.
Add a test that generates this warning.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This adds support for the new grammar of `{"metric_name", "l1"="val"}` to promql and some of the exposition formats.
This grammar will also be valid for non-UTF-8 names.
UTF-8 names will not be considered valid unless model.NameValidationScheme is changed.
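For illustration, these two selectors are equivalent under the new grammar (metric and label names made up):
```
my_metric{env="prod"}
{"my_metric", "env"="prod"}
```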
This does not update the go expfmt parser in text_parse.go, which will be addressed by https://github.com/prometheus/common/issues/554/.
Part of https://github.com/prometheus/prometheus/issues/13095
Signed-off-by: Owen Williams <owen.williams@grafana.com>
Fixes #11708.
If a range vector is fixed in time with the @ modifier, it still gets
moved around for different steps in a range query. Since no additional
points are retrieved from the TSDB, this leads to steadily emptying
the range, leading to the weird behavior described in issue #11708.
This only happens for functions listed in `AtModifierUnsafeFunctions`,
and the only one of those that takes a range vector is `predict_linear`,
which is the reason why we see it only for this particular function.
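An illustrative query shape that triggers the bug when run as a range query (metric name made up):
```
predict_linear(some_gauge[1h] @ end(), 3600)
```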
Signed-off-by: beorn7 <beorn@grafana.com>
These functions act on the labels only, so don't need to go step by step
over the samples in a range query.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Reusing the points slice from the previous series when the slice is underutilized
* Adding comments on the bench test
Signed-off-by: Alan Protasio <alanprot@gmail.com>
The last_over_time function retains a histogram sample without making a copy.
This sample is now coming from the buffered iterator used for windowing functions,
and can be reused for reading subsequent samples as the iterator progresses.
I would propose copying the sample in the last_over_time function, similar to
how it is done for rate, sum_over_time and others.
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
This function is called very frequently when executing PromQL functions,
and we can do it much more efficiently inside Labels.
In the common case that `__name__` comes first in the labels, we simply
re-point to start at the next label, which is nearly free.
`DropMetricName` is now so cheap I removed the cache - benchmarks show
everything still goes faster.
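A minimal sketch of the fast path in Go, assuming a plain sorted slice of labels (the real code works on the internal representation of `Labels`):
```
type Label struct{ Name, Value string }

func dropMetricName(ls []Label) []Label {
	// Common case: labels are sorted, so "__name__" comes first and we
	// simply re-point the slice past it, which is nearly free.
	if len(ls) > 0 && ls[0].Name == "__name__" {
		return ls[1:]
	}
	// Rare case: the name sits elsewhere; copy around it.
	for i, l := range ls {
		if l.Name == "__name__" {
			return append(ls[:i:i], ls[i+1:]...)
		}
	}
	return ls // no metric name present
}
```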
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Optimize histogram iterators
Histogram iterators allocate new objects in the AtHistogram and
AtFloatHistogram methods, which makes calculating rates over long
ranges expensive.
In #13215 we allowed an existing object to be reused
when converting an integer histogram to a float histogram. This commit follows
the same idea and allows injecting an existing object in the AtHistogram and
AtFloatHistogram methods. When the injected value is nil, iterators allocate
new histograms, otherwise they populate and return the injected object.
The commit also adds a CopyTo method to Histogram and FloatHistogram which
is used in the BufferedIterator to overwrite items in the ring instead of making
new copies.
Note that a specialized HPoint pool is needed for all of this to work
(`matrixSelectorHPool`).
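A sketch of the injection pattern (the iterator type and fields are made up; `CopyTo` is the method this commit adds):
```
import "github.com/prometheus/prometheus/model/histogram"

type histIterator struct {
	t   int64
	cur *histogram.FloatHistogram
}

func (it *histIterator) AtFloatHistogram(fh *histogram.FloatHistogram) (int64, *histogram.FloatHistogram) {
	if fh == nil {
		// No object injected: allocate, as before this change.
		return it.t, it.cur.Copy()
	}
	it.cur.CopyTo(fh) // populate the injected object instead of allocating
	return it.t, fh
}
```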
---------
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
Co-authored-by: George Krajcsovits <krajorama@users.noreply.github.com>
Add warnings for histogramRate applied with isCounter not matching counter/gauge histogram
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Restore more efficient version of NewPossibleNonCounterInfo annotation
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Conditions are ANDed inside the same matcher, but matchers are ORed.
Including unit tests for "promtool tsdb dump".
Refactor some matcher scraping utils.
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
This reverts commit 2ddb3596ef.
Various tests are failing in CI after this change; reverting to free up
other work.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In https://github.com/prometheus/prometheus/pull/13276 we started reusing float histogram objects to reduce allocations in PromQL.
That PR introduced a bug where histogram pointers get copied to the beginning of the histograms slice,
but are still kept at the end of the slice. When a new histogram is read into the last element,
it can overwrite a previous element because the pointer is the same.
This commit fixes the issue by moving outdated points to the end of the slice
so that we don't end up with duplicate pointers in the same buffer. In other words,
the slice gets rotated so that old objects can get reused.
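A sketch of the rotation on a plain pointer slice (the actual fix operates on the engine's histogram buffer):
```
// rotate moves the outdated entries in buf[:firstValid] to the tail
// instead of overwriting them, so no pointer ends up in the buffer twice
// and the old objects remain available for reuse.
func rotate[T any](buf []*T, firstValid int) {
	old := append([]*T(nil), buf[:firstValid]...)
	n := copy(buf, buf[firstValid:])
	copy(buf[n:], old)
}
```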
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
This commit reduces the memory needed to query native histogram objects
by reusing existing HPoint instances.
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
The 'ToFloat' method on integer histograms currently allocates new memory
each time it is called.
This commit adds an optional *FloatHistogram parameter that can be used
to reuse span and bucket slices. It is up to the caller to make sure the
input float histogram is not used anymore after the call.
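A usage sketch (the loop is illustrative):
```
import "github.com/prometheus/prometheus/model/histogram"

func totalCount(hs []*histogram.Histogram) float64 {
	var (
		fh  *histogram.FloatHistogram
		sum float64
	)
	for _, h := range hs {
		fh = h.ToFloat(fh) // a non-nil fh has its span and bucket slices reused
		sum += fh.Count
	}
	return sum
}
```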
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
promql: Improve histogram_quantile calculation for classic buckets
Tiny differences between classic buckets are most likely caused by floating point precision issues. With this commit, relative changes below a certain threshold are ignored. This makes the result of histogram_quantile more meaningful, and also avoids triggering the _input to histogram_quantile needed to be fixed for monotonicity_ annotations in unactionable cases.
This commit also adds explanation of the new adjustment and of the monotonicity annotation to the documentation of `histogram_quantile`.
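A sketch of the tolerance check (the threshold constant is illustrative, not necessarily the value used in the quantile code):
```
const smallDeltaTolerance = 1e-12

// ignoreFloatNoise reports whether a tiny dip from prev to cur in the
// cumulative classic-bucket counts should be treated as floating point
// jitter rather than real non-monotonicity.
func ignoreFloatNoise(prev, cur float64) bool {
	return cur < prev && prev-cur <= prev*smallDeltaTolerance
}
```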
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
* Add benchmark for native histograms
This commit adds a PromQL benchmark for queries on native histograms.
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
This PR adds an Experimental flag to the functions.
This can be used by https://github.com/prometheus/prometheus/pull/13059
but also xrate and other future functions.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
This function is useful to analyze PromQL queries. We want to use this in Mimir to record the time range which the query touches.
I also chose to remove the `Engine` receiver because it was unnecessary, and it makes it easier to use, but happy to refactor that if you disagree.
The function is untested on its own. If you prefer to have unit tests now that it's exported, I can look into adding some.
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>
* Remove NewPossibleNonCounterInfo until it can be made more efficient, and avoid creating empty annotations as much as possible
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Return annotations (warnings and infos) from PromQL queries
This generalizes the warnings we have already used before (but only for problems with remote read) as "annotations".
Annotations can be warnings or infos (the latter could be false positives). We do not treat them differently in the API for now and return them all as "warnings". It would be easy to distinguish them and return infos separately, should that appear useful in the future.
The new annotations are then used to create a lot of warnings or infos during PromQL evaluations. Partially these are things we have wanted for a long time (e.g. inform the user that they have applied `rate` to a metric that doesn't look like a counter), but the new native histograms have created even more needs for those annotations (e.g. if a query tries to aggregate float numbers with histograms).
The annotations added here are not yet complete. A prominent example would be a warning about a range too short for a rate calculation. But such a warning is trickier to create with good fidelity, and we will tackle it later.
Another TODO is to take annotations into account when evaluating recording rules.
---------
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
Make it more likely that contributors will run the benchmark suite.
count_values needs more than 2GB at 1,000 steps, so just run it for 100.
And remove 10-step variant because it doesn't add much to 100 and
1000-step benchmarks.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Otherwise we have a highly unusual situation of over 100 chunks
in the headChunks list of each series, which heavily skews
performance.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
promql: Extend testing framework to support native histograms
This includes both the internal testing framework as well as the rules unit test feature of promtool.
This also adds a bunch of basic tests. Many of the code level tests can now be converted to tests within the framework, and more tests can be added easily.
---------
Signed-off-by: Harold Dost <h.dost@criteo.com>
Signed-off-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
Signed-off-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Harold Dost <h.dost@criteo.com>
Co-authored-by: Stephen Lang <stephen.lang@grafana.com>
Co-authored-by: Gregor Zeitlinger <gregor.zeitlinger@grafana.com>
So far, `ValidateHistogram` would not detect if the count did not
include the count in the zero bucket. This commit fixes the problem
and updates all the tests that have been undetected offenders so far.
Note that this problem would only ever create false negatives, so we
never falsely rejected a histogram because of it.
On the other hand, `ValidateFloatHistogram` has been too strict in
requiring the count to be at least as large as the sum of the counts in
all the buckets. Float precision issues could create false positives
here, e.g. in products of PromQL evaluations; it's actually quite hard
to put an upper limit on the floating point imprecision. Users could
produce the weirdest expressions, maxing out float precision problems.
Therefore, this commit simply removes that particular check from
`ValidateFloatHistogram`.
Signed-off-by: beorn7 <beorn@grafana.com>
promql engine: check unique labels using existing map
`ContainsSameLabelset` constructs a map with the same hash key as
the one used to compile the output of `rangeEval`, so we can use that
one and save work.
Need to hold the timestamp so we can be sure we saw the same series
in the same evaluation.
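A sketch of the bookkeeping (names illustrative):
```
type Series struct{ /* labels, points */ }

// Keyed by the same label hash already used to compile rangeEval output.
type seriesAndTimestamp struct {
	series Series
	ts     int64 // timestamp at which this labelset was last seen
}

// Seeing the same hash again with the same ts within one evaluation
// means a duplicate labelset.
func isDuplicate(m map[uint64]seriesAndTimestamp, hash uint64, ts int64) bool {
	s, ok := m[hash]
	return ok && s.ts == ts
}
```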
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The operator changes the meaning of the metric, so the metric name should
be dropped. Technically this would be a breaking change, but it's also very
obviously a bug and not likely that anyone depends on it.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Convert QueryOpts to an interface so that downstream projects like
https://github.com/thanos-community/promql-engine could extend the query
options with engine-specific options that are not in the original
engine.
Will be used to enable query analysis per-query.
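A hedged sketch of the resulting shape (the method set approximates the engine's options):
```
import "time"

type QueryOpts interface {
	// EnablePerStepStats reports whether per-step statistics are tracked.
	EnablePerStepStats() bool
	// LookbackDelta returns the per-query lookback delta, if overridden.
	LookbackDelta() time.Duration
}
```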
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Handle more arithmetic operators and aggregators for native histograms
This includes operators for multiplication (formerly known as scaling), division, and subtraction. Plus aggregations for average and the avg_over_time function.
Stdvar and stddev will (for now) ignore histograms properly (rather than counting them but adding a 0 for them).
Signed-off-by: Jeanette Tan <jeanette.tan@grafana.com>
So far, if a target exposes a histogram with both classic and native
buckets, a native-histogram enabled Prometheus would ignore the
classic buckets. With the new scrape config option
`scrape_classic_histograms` set, both buckets will be ingested,
creating all the series of a classic histogram in parallel to the
native histogram series. For example, a histogram `foo` would create a
native histogram series `foo` and classic series called `foo_sum`,
`foo_count`, and `foo_bucket`.
This feature can be used in a migration strategy from classic to
native histograms, where it is desired to have a transition period
during which both native and classic histograms are present.
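For illustration, the option in a scrape config (job name made up):
```
scrape_configs:
  - job_name: migrating-service
    scrape_classic_histograms: true
```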
Note that two bugs in classic histogram parsing were found and fixed
as a byproduct of testing the new feature:
1. Series created from classic _gauge_ histograms didn't get the
_sum/_count/_bucket suffix set.
2. Values of classic _float_ histograms weren't parsed properly.
Signed-off-by: beorn7 <beorn@grafana.com>
Wiser coders than myself have come to the conclusion that a `switch`
statement is almost always superior to a statement that includes any
`else if`.
The exceptions that I have found in our codebase are just these two:
* The `if else` is followed by an additional statement before the next
condition (separated by a `;`).
* The whole thing is within a `for` loop and `break` statements are
used. In this case, using `switch` would require tagging the `for`
loop, which probably tips the balance.
Why are `switch` statements more readable?
For one, fewer curly braces. But more importantly, the conditions all
have the same alignment, so the whole thing follows the natural flow
of going down a list of conditions. With `else if`, in contrast, all
conditions but the first are "hidden" behind `} else if `, harder to
spot and (for no good reason) presented differently from the first
condition.
I'm sure the aforementioned wise coders can list even more reasons.
In any case, I like it so much that I have found myself recommending
it in code reviews. I would like to make it a habit in our code base,
without making it a hard requirement that we would test on the CI. But
for that, there has to be a role model, so this commit eliminates all
`if else` occurrences, unless it is autogenerated code or fits one of
the exceptions above.
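A small before/after illustration:
```
// With else if, every condition after the first hides behind "} else if".
func signIf(x float64) int {
	if x < 0 {
		return -1
	} else if x > 0 {
		return 1
	}
	return 0
}

// With switch, all conditions share the same alignment.
func signSwitch(x float64) int {
	switch {
	case x < 0:
		return -1
	case x > 0:
		return 1
	default:
		return 0
	}
}
```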
Signed-off-by: beorn7 <beorn@grafana.com>
We haven't updated golangci-lint in our CI yet, but this commit prepares
for that.
There are a lot of new warnings, and it is mostly because the "revive"
linter got updated. I agree with most of the new warnings, mostly
around not naming unused function parameters (although naming them is
justified in some cases for documentation purposes, while things like
mocks are a good example where not naming the parameter is clearer).
I'm pretty upset that the "empty block" warning now includes `for`
loops. It's such a common pattern to do something in the head of the
`for` loop and then have an empty block. There is still an open issue
about this: https://github.com/mgechev/revive/issues/810 I have
disabled "revive" altogether in files where empty blocks are used
excessively, and I have made the effort to add individual
`// nolint:revive` where empty blocks are used just once or twice.
It's borderline noisy, but let's go with it for now.
I should mention that none of the "empty block" warnings for `for`
loop bodies were legitimate.
Signed-off-by: beorn7 <beorn@grafana.com>
In the past, every sample value was a float, so it was fine to call a
variable holding such a float "value" or "sample". With native
histograms, a sample might have a histogram value. And a histogram
value is still a value. Calling a float value just "value" or "sample"
or "V" is therefore misleading. Over the last few commits, I already
renamed many variables, but this cleans up a few more places where the
changes are more invasive.
Note that we do not attempt renaming in the JSON APIs or in the
protobufs. That would be quite a disruption. However, internally, we
can call variables as we want, and we should go with the option of
avoiding misunderstandings.
Signed-off-by: beorn7 <beorn@grafana.com>
In other words: Instead of having a “polymorphous” `Point` that can
either contain a float value or a histogram value, use an `FPoint` for
floats and an `HPoint` for histograms.
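The resulting pair of types looks roughly like this (field layout per promql/value.go at the time):
```
import "github.com/prometheus/prometheus/model/histogram"

// FPoint represents a single float data point at a given timestamp.
type FPoint struct {
	T int64
	F float64
}

// HPoint represents a single histogram data point at a given timestamp.
type HPoint struct {
	T int64
	H *histogram.FloatHistogram
}
```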
This seemingly small change has a _lot_ of repercussions throughout
the codebase.
The idea here is to avoid the increase in size of `Point` arrays that
happened after native histograms had been added.
The higher-level data structures (`Sample`, `Series`, etc.) are still
“polymorphous”. The same idea could be applied to them, but at each
step the trade-offs needed to be evaluated.
The idea with this change is to do the minimum necessary to get back
to pre-histogram performance for functions that do not touch
histograms. Here are comparisons for the `changes` function. The test
data doesn't include histograms yet. Ideally, there would be no change
in the benchmark result at all.
First runtime v2.39 compared to directly prior to this commit:
```
name old time/op new time/op delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16 391µs ± 2% 542µs ± 1% +38.58% (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16 452µs ± 2% 617µs ± 2% +36.48% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16 1.12ms ± 1% 1.36ms ± 2% +21.58% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16 7.83ms ± 1% 8.94ms ± 1% +14.21% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16 2.98ms ± 0% 3.30ms ± 1% +10.67% (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16 3.66ms ± 1% 4.10ms ± 1% +11.82% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16 10.5ms ± 0% 11.8ms ± 1% +12.50% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16 77.6ms ± 1% 87.4ms ± 1% +12.63% (p=0.000 n=9+9)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16 30.4ms ± 2% 32.8ms ± 1% +8.01% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16 37.1ms ± 2% 40.6ms ± 2% +9.64% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16 105ms ± 1% 117ms ± 1% +11.69% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16 783ms ± 3% 876ms ± 1% +11.83% (p=0.000 n=9+10)
```
And then runtime v2.39 compared to after this commit:
```
name old time/op new time/op delta
RangeQuery/expr=changes(a_one[1d]),steps=1-16 391µs ± 2% 547µs ± 1% +39.84% (p=0.000 n=9+8)
RangeQuery/expr=changes(a_one[1d]),steps=10-16 452µs ± 2% 616µs ± 2% +36.15% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_one[1d]),steps=100-16 1.12ms ± 1% 1.26ms ± 1% +12.20% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_one[1d]),steps=1000-16 7.83ms ± 1% 7.95ms ± 1% +1.59% (p=0.000 n=10+8)
RangeQuery/expr=changes(a_ten[1d]),steps=1-16 2.98ms ± 0% 3.38ms ± 2% +13.49% (p=0.000 n=9+10)
RangeQuery/expr=changes(a_ten[1d]),steps=10-16 3.66ms ± 1% 4.02ms ± 1% +9.80% (p=0.000 n=10+9)
RangeQuery/expr=changes(a_ten[1d]),steps=100-16 10.5ms ± 0% 10.8ms ± 1% +3.08% (p=0.000 n=8+10)
RangeQuery/expr=changes(a_ten[1d]),steps=1000-16 77.6ms ± 1% 78.1ms ± 1% +0.58% (p=0.035 n=9+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1-16 30.4ms ± 2% 33.5ms ± 4% +10.18% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=10-16 37.1ms ± 2% 40.0ms ± 1% +7.98% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=100-16 105ms ± 1% 107ms ± 1% +1.92% (p=0.000 n=10+10)
RangeQuery/expr=changes(a_hundred[1d]),steps=1000-16 783ms ± 3% 775ms ± 1% -1.02% (p=0.019 n=9+9)
```
In summary, the runtime doesn't really improve with this change for
queries with just a few steps. For queries with many steps, this
commit essentially reinstates the old performance. This is good
because the many-step queries are the ones that matter most (longest
absolute runtime).
In terms of allocations, though, this commit doesn't make a dent at
all (numbers not shown). The reason is that most of the allocations
happen in the sampleRingIterator (in the storage package), which has
to be addressed in a separate commit.
Signed-off-by: beorn7 <beorn@grafana.com>
query_samples_total is a counter that tracks the total number of samples loaded by all queries.
The goal with this metric is to be able to see the amount of 'work' done by Prometheus to service queries.
At the moment we have metrics with the number of queries, plus more detailed metrics showing how much time each step of a query takes.
While those metrics do help they don't show us the whole picture.
Queries that load more samples are (in general) more expensive than queries that load fewer samples.
This means that looking only at the number of queries doesn't tell us how much 'work' Prometheus received.
Adding a counter that tracks the total number of samples loaded allows us to see if there was a spike in the cost of queries, not just the number of them.
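A sketch of registering such a counter (the namespace/subsystem are assumptions; the commit only names query_samples_total):
```
import "github.com/prometheus/client_golang/prometheus"

var querySamplesTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "prometheus",
	Subsystem: "engine",
	Name:      "query_samples_total",
	Help:      "The total number of samples loaded by all queries.",
})

// Query evaluation then adds the number of samples it loaded:
// querySamplesTotal.Add(float64(loaded))
```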
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Use zeropool.Pool to work around SA6002
I built a tiny library called https://github.com/colega/zeropool to
work around the SA6002 staticcheck issue.
While searching GitHub for references to that SA6002 staticcheck
issue, one of the first results was Prometheus itself, with quite a
lot of ignores of it.
This changes the usages of `sync.Pool` to `zeropool.Pool[T]` where a
pointer is not available.
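A usage sketch of the pattern (slice type illustrative; API per the zeropool repository):
```
import "github.com/colega/zeropool"

var pool = zeropool.New(func() []int64 { return make([]int64, 0, 1024) })

func process() {
	buf := pool.Get()       // a plain slice, not a pointer, so no SA6002
	defer pool.Put(buf[:0]) // reset the length before returning it
	// ... use buf ...
}
```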
Also added a benchmark for HeadAppender Append/Commit when series
already exist, which is one of the most common cases IMO, as I didn't
find an existing one.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Improve BenchmarkHeadAppender with more cases
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* A little copying is better than a little dependency
https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix imports order
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add license header
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Copyright should be on one of the first 3 lines
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use require.Equal for testing
I don't depend on testify in my lib, but here we have it available.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Avoid flaky test
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Also use zeropool for pointsPool in engine.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
---------
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
It took a `Labels` where the memory could be re-used, but in practice
this hardly ever provided a benefit. Especially after converting `relabel.Process`
to `relabel.ProcessBuilder`.
Comparing the parameter to `nil` was a bug; `EmptyLabels` is not `nil`
so the slice was reallocated multiple times by `append`.
Lastly, `Builder.Labels()` now estimates the final size, taking into
account labels added and deleted.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In Thanos we would like to start experimenting with custom functions that are
currently not part of the PromQL spec. We would do this by adding an implementation
for those functions in the Thanos engine: https://github.com/thanos-community/promql-engine and allow
users to decide which engine they want to use on a per-query basis.
Since we use the PromQL parser from Prometheus, injecting functions in the global `Functions` variable
would mean they also become available for the Prometheus engine. To avoid this side-effect, this commit
exposes a Parser interface in which the supported functions can be injected as an option. If no functions
are injected, the parser implementation will default to the functions defined in the global Functions variable.
Signed-off-by: Filip Petkovski <filip.petkovsky@gmail.com>
TestConcurrentRangeQueries runs many queries, up to 4 at the same time,
to try to expose any race conditions.
This change stops four of them from running with a thousand or more steps:
`holt_winters(a_X[1d], 0.3, 0.3)`
`changes(a_X[1d])`
`rate(a_X[1d])`
`absent_over_time(a_X[1d])`
Particularly when the test runs with `-race` in CI, this reduces the
time and resources required.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This is a bit more conservative than we could be. As long as a chunk
isn't the first in a block, we can be pretty sure that the previous
chunk won't disappear. However, the incremental gain of returning
NotCounterReset in these cases is probably very small and might not be
worth the code complications.
With this, we now also pay attention to an explicitly set counter
reset during ingestion. While the case doesn't show up in practice
yet, there could be scenarios where the metric source knows there was
a counter reset even if it might not be visible from the values in the
histogram. It is also useful for testing.
Signed-off-by: beorn7 <beorn@grafana.com>
Extends the Appender.AppendHistogram function to accept the FloatHistogram. TSDB now supports appending, querying, and WAL replay for this new type of histogram.
Signed-off-by: Marc Tudurí <marctc@protonmail.com>
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
We have a LabelBuilder in EvalNodeHelper; use it instead of creating a new one at every step.
Need to take some care that different uses of enh.lb do not overlap.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* promql: refactor BenchmarkRangeQuery so we can re-use test cases
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* promql: add test for race conditions in query engine
Note we skip large count_values queries -
`count_values` allocates a slice per unique value in the output, and
this test has unique values on every step of every series so it adds up
to a lot of slices. Add Go runtime overhead for checking `-race`, and
it chews up many gigabytes.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* TestConcurrentRangeQueries: wait before starting goroutine
Instead of starting 100 goroutines which just wait for the semaphore.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
We use `labels.Builder` to parse metrics, to avoid depending on the
internal implementation. This is not efficient, but the feature is only
used in tests. It wasn't efficient previously either - calling `Sort()`
after adding each label.
`createLabelsForAbsentFunction` also uses a Builder now, and gets
an extra `map` to replace the previous `Has()` usage.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Fix up promql to compile with changes to Labels
Re-use previous memory if it is already of the correct type.
In `NewListSeries` we hoist the conversion to an interface value out
so it only allocates once.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Patterned after `Chunk.Iterator()`: pass the old iterator in so it
can be re-used to avoid allocating a new object.
(This commit does not do any re-use; it is just changing all the method
signatures so re-use is possible in later commits.)
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Switch from 'sanity' to more inclusive language
"Removing ableist language in code is important; it helps to create and
maintain an environment that welcomes all developers of all backgrounds,
while emphasizing that we as developers select the most articulate,
precise, descriptive language we can rather than relying on metaphors.
The phrase sanity check is ableist, and unnecessarily references mental
health in our code bases. It denotes that people with mental illnesses
are inferior, wrong, or incorrect, and the phrase sanity continues to be
used by employers and other individuals to discriminate against these
people."
From https://gist.github.com/seanmhanson/fe370c2d8bd2b3228680e38899baf5cc
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The bucket receiving math.MaxFloat64 observations now has
math.MaxFloat64 as upper bound, while the bucket after it (the last
possible bucket) has +Inf.
This also adds a test for getBound and moves the getBound code to
generic.go (where it should have been in the first place).
Signed-off-by: beorn7 <beorn@grafana.com>
* histogram: Simplify iterators
We don't really need currLower and currUpper and can calculate it when
needed (as already done for the floatBucketIterator). The calculation
is cheap, while keeping those extra variables around costs RAM
(potentially a lot with many iterators).
* histogram: Convert Bucket/FloatBucket to one generic type
* histogram: Move some bucket iterator code into generic base iterator
* histogram: Remove cumulative iterator for FloatHistogram
We added it in the past for completeness (Histogram has one), but it
has never been used. Plus, even the cumulative iterator for Histogram
is only there for test reasons.
We can always add it back, and then maybe even using generics.
Signed-off-by: beorn7 <beorn@grafana.com>
If we are populating series for a subquery then set the interval
parameter accordingly so that downstream users could use that
information.
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Use new experimental package `golang.org/x/exp/slices`.
slices.Sort works on values that are directly comparable, like ints,
so avoids the overhead of an interface call to `.Less()`.
Left tests unchanged, because they don't need the speed, and they serve
as a cross-check that slices.Sort gives the same answer.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
And use the new method to compact Histograms during
parsing. This happens for both `Histogram` and `FloatHistogram`. In
this way, if targets decide to optimize the exposition size by merging
spans with empty buckets in between, we still get a normalized
result. It will also normalize away any valid but weird
representations like empty spans, spans with offset zero, and empty
buckets at the start or end of a span.
The implementation seemed easy at first as it just turns the
`compactBuckets` helper into a generic function (which now got its own
file). However, the integer Histograms have delta buckets instead of
absolute buckets, which had to be treated specially in the generic
`compactBuckets` function. To make sure it works, I have added plenty
of explicit tests for `Histogram` in addition to the `FloatHistogram`
tests.
I have also updated the doc comment for the `Compact` method.
Based on the insights now expressed in the doc comment, compacting
with a maxEmptyBuckets > 0 is rarely useful. Therefore, this commit
also sets the value to 0 in the two cases we were using 3 so far. We
might still want to reconsider, so I don't want to remove the
maxEmptyBuckets parameter right now.
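A usage sketch (per the doc comment's insight that maxEmptyBuckets > 0 is rarely useful):
```
import "github.com/prometheus/prometheus/model/histogram"

func normalize(h *histogram.Histogram) *histogram.Histogram {
	// 0: keep no empty buckets to join spans; merging and splitting of
	// spans happens inside Compact as needed.
	return h.Compact(0)
}
```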
Signed-off-by: beorn7 <beorn@grafana.com>
And a few cases of `EmptyLabels()`.
Replacing code which assumes the internal structure of `Labels`.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Update go to 1.19, set min version to 1.18
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Update golangci-lint
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* model/relabel: Add benchmark
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use Builder across relabels
Saves memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* labels.Builder: allow re-use of result slice
This reduces memory allocations where the caller has a suitable slice available.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use source values slice
To reduce memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Unwind one change causing test failures
Restore original behaviour in PopulateLabels, where we must not overwrite the input set.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* relabel: simplify values optimisation
Use a stack-based array for up to 16 source labels, which will be the
vast majority of cases.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* lint
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add histogram validation
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Correct negative offset validation
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Address review comments
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Validation benchmark
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Add more checks
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Attempt to fix tests
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Fix stuff
Signed-off-by: Levi Harrison <git@leviharrison.dev>
* Prettifier: Add spaces with non-callable keywords
I prefer to have a difference between, on one side, function calls like end() and start(), and on the other side the keywords with, without, ignoring, by, group_right, and group_left.
The reasoning is that the latter are not calls, while the former are
functions. Additionally, it matches the examples in our documentation.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
* Fix tests
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
We print the stacktrace of a panic when a query causes one, but there's no
information about the query itself, which makes it harder to debug and
reproduce the issue.
This adds the 'expr' string to the logged panic.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Shortcut Matrix.ContainsSameLabelset()
It's quite common to execute this check on a Matrix that has zero or only
one series. There's no need to allocate a map for those cases.
There's also a one-liner for the two-series case, so why not use it?
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add license header
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Optimize Vector.ContainsSameLabelset
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Implement Pretty() function for AST nodes.
Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
This commit adds .Pretty() for all nodes of the PromQL AST.
Each .Pretty() prettifies the node it belongs to, and under
no circumstance is the parent or child node touched/prettified.
Read more in the "Approach" part in `prettier.go`
* Refactor functions between printer.go & prettier.go
Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
This commit removes redundancy between printer.go and prettier.go
by taking out the common code into separate private functions.
* Add more unit tests for Prettier.
Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
* Add support for splitting function calls with 1 arg & unary expressions.
Signed-off-by: Harkishen-Singh <harkishensingh@hotmail.com>
This commit does 2 things:
1. It adds support for splitting function calls that have 1 arg and exceed the max_characters_per_line
across multiple lines.
2. Splits Unary expressions that exceed the max_characters_per_line. This is done by formatting the child node
and then removing the prefix indent, which is already applied before the unary operator.
This follows a simple function-based approach to access the count and
sum fields of a native Histogram. It might be more elegant to
implement “accessors” via the dot operator, as considered in the
brainstorming doc [1]. However, that would require the introduction of
a whole new concept in PromQL. For the PoC, we should be fine with the
function-based approach. Even the obvious inefficiencies (rate'ing a
whole histogram twice when we only want to rate each the count and the
sum once) could be optimized behind the scenes.
Note that the function-based approach elegantly solves the problem of
detecting counter resets in the sum of observations in the case of
negative observations. (Since the whole native Histogram is rate'd,
the counter reset is detected for the Histogram as a whole.)
We will decide later if an “accessor” approach is really needed. It
would change the example expression for average duration in
functions.md from
histogram_sum(rate(http_request_duration_seconds[10m]))
/
histogram_count(rate(http_request_duration_seconds[10m]))
to
rate(http_request_duration_seconds.sum[10m])
/
rate(http_request_duration_seconds.count[10m])
[1]: https://docs.google.com/document/d/1ch6ru8GKg03N02jRjYriurt-CZqUVY09evPg6yKTA1s/edit
Signed-off-by: beorn7 <beorn@grafana.com>
Essentially, this mirrors the existing behavior for negative buckets:
If a histogram has only negative buckets, the upper bound of the zero
bucket is assumed to be zero.
Furthermore, it makes sure that the zero bucket boundaries are not
modified if a histogram has no buckets at all but has samples in the
zero bucket.
Also, add a TODO to vet whether we really want this behavior.
Signed-off-by: beorn7 <beorn@grafana.com>
* Labels: create signature with/without labels
Instead of creating a new Labels slice then converting to signature,
go directly to the signature and save time.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Labels: refactor Builder tests
Have one test with a range of cases, and have them check the final
output rather than checking the internal structure of the Builder.
Also add a couple of cases where the value is "", which should be
interpreted as 'delete'.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Labels: add 'Keep' function to Builder
This lets us replace `Labels.WithLabels` with the more general `Builder`.
In `engine.resultMetric()` we can call `Keep()` instead of checking
and calling `Del()`.
Avoid calling `Sort()` in `Builder.Labels()` if we didn't add anything,
so that `Keep()` has the same performance as `WithLabels()`.
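A usage sketch of the new method (label names made up):
```
import "github.com/prometheus/prometheus/model/labels"

func groupingLabels(metric labels.Labels) labels.Labels {
	b := labels.NewBuilder(metric)
	b.Keep("job", "instance") // delete every label except the listed ones
	return b.Labels()
}
```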
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
While empty buckets can make sense in the internal representation (by
joining spans that would otherwise need more overhead for separate
representation), there are no spans in the JSON rendering. Therefore,
the JSON should not contain any empty buckets, since any buckets not
included in the output count as empty anyway.
This changes both the inefficient MarshalJSON implementation as well
as the jsoniter implementation.
Signed-off-by: beorn7 <beorn@grafana.com>
* refactor: move from io/ioutil to io and os packages
* use fs.DirEntry instead of os.FileInfo after os.ReadDir
Signed-off-by: MOREL Matthieu <matthieu.morel@cnp.fr>
This now even enables jsoniter marshaling of Points in an instant
query (which previously used the traditional JSON marshaling).
Signed-off-by: beorn7 <beorn@grafana.com>
For conventional histograms, we need to gather all the individual
bucket timeseries at a data point to do the quantile calculation. The
code so far mirrored this behavior for the new native
histograms. However, since a single data point contains all the
buckets already, that's actually not needed. This PR simplifies the
code while still detecting a mix of conventional and native
histograms.
The weird signature calculation for the conventional histograms is
getting even weirder because of that. If this PR turns out to do the
right thing, I will implement a proper fix for the signature
calculation upstream.
Signed-off-by: beorn7 <beorn@grafana.com>
This commit ensures 64-bit integers are used in various tests that
otherwise fail on 32-bit architectures.
It also adds support for int64 and uint64 types in the template.convertToFloat
function to support the test changes.
Closes: #10481
Signed-off-by: Martina Ferrari <tina@debian.org>
This exactly corresponds to the statistic compared against MaxSamples
during the course of query execution, so users can see how close their
queries are to a limit.
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
We always track total samples queried and add those to the standard set
of stats queries can report.
We also allow optionally tracking per-step samples queried. This must be
enabled both at the engine and query level to be tracked and rendered.
The engine flag is exposed via a Prometheus feature flag, while the
query flag is set when stats=all.
Co-authored-by: Alan Protasio <approtas@amazon.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
* Run gofumpt on all files
Getting golangci-lint errors when building on my laptop, possibly because I have a newer version of gofumpt than what it was formatted with.
Run gofumpt -w -extra on all files, as it will be needed in the future anyway.
* Update golangci-lint to v1.44.2
v1.44.0 upgraded gofumpt, so bumping the version in CI will help keep formatting correct for everyone.
* Address golangci-lint error
Getting 'error-strings: error strings should not be capitalized or end with punctuation or a newline' from revive here.
Drop the newline.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Improve error logging for missing config and query-log dir
Currently, when Prometheus can't open its config file or the query
logging dir under the data dir, it only logs the path it was given by
default or via the command line/config. Depending on the environment this can be
less than helpful, since the working directory may be unclear to the
user. I have specifically kept the existing error messages as intact as
possible to a) still log the parameter as given and b) cause as little
disruption for log-parsers/-analyzers as possible.
So in case of the config file or the data dir being non-absolute paths,
I use os.Getwd to find the working dir and assemble an absolute path for
error logging purposes. If os.Getwd fails, we just log "unknown", as
recovering from an error there would be a very complex measure, likely
not worth the code/effort.
Example errors:
```
$ ./prometheus
ts=2022-02-06T16:00:53.034Z caller=main.go:445 level=error msg="Error loading config (--config.file=prometheus.yml)" fullpath=/home/klausman/src/prometheus/prometheus.yml err="open prometheus.yml: no such file or directory"
$ touch prometheus.yml
$ ./prometheus
[...]
ts=2022-02-06T16:01:00.992Z caller=query_logger.go:99 level=error component=activeQueryTracker msg="Error opening query log file" file=data/queries.active fullpath=/home/klausman/src/prometheus/data/queries.active err="open data/queries.active: permission denied"
panic: Unable to create mmap-ed active query log
[...]
$
```
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>
* Replace our own logic with just using filepath.Abs()
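A sketch of the simplification (filepath.Abs already resolves relative paths against the working directory):
```
import "path/filepath"

func fullPath(p string) string {
	abs, err := filepath.Abs(p)
	if err != nil {
		return "unknown" // recovering here isn't worth the effort
	}
	return abs
}
```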
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>
* Further simplification
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>
* Review edits
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>
* Review edits
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>
* Review edits
Signed-off-by: Tobias Klausmann <klausman@schwarzvogel.de>