We use `labels.Builder` to parse metrics, to avoid depending on the
internal implementation. This is not efficient, but the feature is only
used in tests. It wasn't efficient previously either, since it called
`Sort()` after adding each label.
`createLabelsForAbsentFunction` also uses a Builder now, and gets
an extra `map` to replace the previous `Has()` usage.
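A minimal sketch of the shape (the exact `Builder` method signatures have varied
across Prometheus versions, so treat this as illustrative rather than the parser
code itself):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	// Hypothetical name/value pairs, as a test-metric parser might produce them.
	pairs := [][2]string{
		{"__name__", "up"},
		{"instance", "localhost:9100"},
		{"job", "node"},
	}

	// Build the label set through the public Builder API instead of
	// appending to a Labels slice and calling Sort() after every label.
	b := labels.NewBuilder(labels.EmptyLabels())
	for _, p := range pairs {
		b.Set(p[0], p[1])
	}
	fmt.Println(b.Labels())
}
```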
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Fix up promql to compile with changes to Labels
Patterned after `Chunk.Iterator()`: pass the old iterator in so it
can be re-used to avoid allocating a new object.
(This commit does not do any re-use; it is just changing all the method
signatures so re-use is possible in later commits.)
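The shape of the new signatures, as a simplified sketch with made-up types (not
the real promql/storage interfaces):

```go
package sketch

// Iterator is a stand-in for the real iterator interface.
type Iterator interface {
	Next() bool
	At() (t int64, v float64)
}

type sample struct {
	t int64
	v float64
}

type listIterator struct {
	samples []sample
	idx     int
}

func (it *listIterator) Next() bool { it.idx++; return it.idx < len(it.samples) }
func (it *listIterator) At() (int64, float64) {
	s := it.samples[it.idx]
	return s.t, s.v
}

type series struct{ samples []sample }

// Iterator mirrors the Chunk.Iterator() pattern: the caller passes its old
// iterator back in, and if it has a suitable concrete type we reset and
// return it instead of allocating a new one.
func (s *series) Iterator(reuse Iterator) Iterator {
	if it, ok := reuse.(*listIterator); ok {
		it.samples, it.idx = s.samples, -1
		return it
	}
	return &listIterator{samples: s.samples, idx: -1}
}
```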
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
If we are populating series for a subquery, then set the interval
parameter accordingly so that downstream users can use that
information.
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@vinted.com>
Use new experimental package `golang.org/x/exp/slices`.
slices.Sort works on values that are directly comparable, like ints,
so it avoids the overhead of an interface call to `.Less()`.
Tests are left unchanged because they don't need the speed, and keeping
them as-is cross-checks that slices.Sort gives the same answer.
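For illustration, the difference between the two approaches (a standalone
sketch, not code from this change):

```go
package main

import (
	"fmt"
	"sort"

	"golang.org/x/exp/slices"
)

func main() {
	a := []uint64{42, 7, 19}
	slices.Sort(a) // comparisons compile down to a direct <, no interface dispatch

	b := sort.IntSlice{42, 7, 19}
	sort.Sort(b) // every comparison is an interface call to Less(i, j)

	fmt.Println(a, b)
}
```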
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: Add benchmark
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use Builder across relabels
Saves memory allocations.
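Roughly the pattern, as a sketch: one `Builder` for the whole batch, reset per
label set, with a single `Set` standing in for the configured relabel actions
(this assumes the `Reset` method on `Builder`):

```go
package sketch

import "github.com/prometheus/prometheus/model/labels"

// relabelAll sketches the re-use: one Builder for the whole batch, Reset
// per label set, instead of calling labels.NewBuilder inside the loop.
func relabelAll(targets []labels.Labels) []labels.Labels {
	out := make([]labels.Labels, 0, len(targets))
	lb := labels.NewBuilder(labels.EmptyLabels())
	for _, lset := range targets {
		lb.Reset(lset)
		lb.Set("env", "prod") // stand-in for the configured relabel actions
		out = append(out, lb.Labels())
	}
	return out
}
```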
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* labels.Builder: allow re-use of result slice
This reduces memory allocations where the caller has a suitable slice available.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* model/relabel: re-use source values slice
To reduce memory allocations.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Unwind one change causing test failures
Restore original behaviour in PopulateLabels, where we must not overwrite the input set.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* relabel: simplify values optimisation
Use a stack-based array for up to 16 source labels, which will be the
vast majority of cases.
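The trick, sketched with simplified types (the real code lives in the relabel
package and may differ in detail):

```go
package sketch

import (
	"strings"

	"github.com/prometheus/prometheus/model/labels"
)

// sourceValues concatenates the values of the configured source labels.
// A fixed-size array backs the slice, so the common case (<= 16 source
// labels) needs no heap allocation for it.
func sourceValues(lset labels.Labels, sourceLabels []string, sep string) string {
	var buf [16]string
	values := buf[:0]
	if len(sourceLabels) > cap(values) {
		values = make([]string, 0, len(sourceLabels))
	}
	for _, name := range sourceLabels {
		values = append(values, lset.Get(name))
	}
	return strings.Join(values, sep)
}
```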
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* lint
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
We print the stacktrace of a panic when a query causes one, but there's no
information about the query itself, which makes it harder to debug and
reproduce the issue.
This adds the 'expr' string to the logged panic.
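In spirit, the recovery path now looks something like this (a generic sketch,
not the exact engine code):

```go
package sketch

import (
	"fmt"
	"log"
	"runtime/debug"
)

// evalWithRecover shows the idea: if evaluation panics, log the query
// expression next to the stack trace so the failure can be reproduced.
func evalWithRecover(expr string, eval func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("panic during query evaluation: %v\nexpr: %q\n%s", r, expr, debug.Stack())
			err = fmt.Errorf("unexpected panic: %v", r)
		}
	}()
	return eval()
}
```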
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
* Labels: create signature with/without labels
Instead of creating a new Labels slice then converting to signature,
go directly to the signature and save time.
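Conceptually (a hedged sketch that hashes with xxhash directly; the real
helpers live on `labels.Labels` and differ in detail):

```go
package sketch

import (
	"github.com/cespare/xxhash/v2"
	"github.com/prometheus/prometheus/model/labels"
)

// hashForNames hashes only the requested label names and values, without
// first materialising an intermediate Labels slice as the old code did.
func hashForNames(lset labels.Labels, names ...string) uint64 {
	d := xxhash.New()
	for _, name := range names {
		_, _ = d.WriteString(name)
		_, _ = d.WriteString("\xff") // separator to avoid ambiguous concatenation
		_, _ = d.WriteString(lset.Get(name))
		_, _ = d.WriteString("\xff")
	}
	return d.Sum64()
}
```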
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Labels: refactor Builder tests
Have one test with a range of cases, and have them check the final
output rather than checking the internal structure of the Builder.
Also add a couple of cases where the value is "", which should be
interpreted as 'delete'.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Labels: add 'Keep' function to Builder
This lets us replace `Labels.WithLabels` with the more general `Builder`.
In `engine.resultMetric()` we can call `Keep()` instead of checking
and calling `Del()`.
Avoid calling `Sort()` in `Builder.Labels()` if we didn't add anything,
so that `Keep()` has the same performance as `WithLabels()`.
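For example (a sketch; `Keep` is the Builder method added here):

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	lset := labels.FromStrings("__name__", "up", "instance", "localhost:9100", "job", "node")

	// Keep drops every label not listed, replacing the old Labels.WithLabels.
	out := labels.NewBuilder(lset).Keep("job", "instance").Labels()
	fmt.Println(out) // {instance="localhost:9100", job="node"}
}
```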
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
For conventional histograms, we need to gather all the individual
bucket timeseries at a data point to do the quantile calculation. The
code so far mirrored this behavior for the new native
histograms. However, since a single data point contains all the
buckets already, that's actually not needed. This PR simplifies the
code while still detecting a mix of conventional and native
histograms.
The weird signature calculation for the conventional histograms is
getting even weirder because of that. If this PR turns out to do the
right thing, I will implement a proper fix for the signature
calculation upstream.
Signed-off-by: beorn7 <beorn@grafana.com>
This exactly corresponds to the statistic compared against MaxSamples
during the course of query execution, so users can see how close their
queries are to a limit.
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
We always track total samples queried and add those to the standard set
of stats that queries can report.
We also allow optionally tracking per-step samples queried. This must be
enabled both at the engine and query level to be tracked and rendered.
The engine flag is exposed via a Prometheus feature flag, while the
query flag is set when stats=all.
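Roughly how the engine-level switch looks (a sketch; the per-query side is
requested with `stats=all` on the HTTP API):

```go
package sketch

import (
	"time"

	"github.com/prometheus/prometheus/promql"
)

// newEngine shows the engine-level switch: per-step stats are only
// collected when the engine opts allow it (exposed via a feature flag)
// *and* the individual query asks for them (stats=all).
func newEngine() *promql.Engine {
	return promql.NewEngine(promql.EngineOpts{
		MaxSamples:         50000000,
		Timeout:            2 * time.Minute,
		EnablePerStepStats: true, // gated behind a Prometheus feature flag
	})
}
```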
Co-authored-by: Alan Protasio <approtas@amazon.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
This follows the line of argument that the invariant of not looking
ahead of the query time was merely emergent behavior and not a
documented stable feature. Any query that looks ahead of the query
time was simply invalid before the introduction of the negative offset
and the @ modifier.
Signed-off-by: beorn7 <beorn@grafana.com>
This can happen if the aggregation starts with a float and later
encounters a histogram. In that case, the newly encountered histogram
would have been added to a nil histogram.
This should be tested, of course, but that's best done within the
PromQL testing framework, which we still need to enable for histograms
(for which we have a TODO in the code and now also a card in the GH
project).
Signed-off-by: beorn7 <beorn@grafana.com>
* MergeFloatBucketIterator for []FloatBucketIterator
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* histogram_quantile for histograms
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix histogram_quantile
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Unit test and enhancements
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Iterators to iterate buckets in reverse and all buckets together including zero bucket
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Consider all buckets for histogram_quantile and fix the implementation
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Remove unneeded code
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
* Fix lint
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
`BufferedSeriesIterator` and `MemoizedSeriesIterator` use a method
called `Values` for exactly the purpose for which all other iterators
of the same kind use a method called `At`. That alone is confusing,
but on top of that, the `Values` method only returns a single sample,
not multiple values. I assume the naming has historical reasons. This
commit makes it more consistent. It is now easier to read, and now
`BufferedSeriesIterator` and `MemoizedSeriesIterator` implement
`chunkenc.Iterator` like many other iterators, too.
Signed-off-by: beorn7 <beorn@grafana.com>
- Pick the At... method via the return value of Next/Seek (see the sketch below).
- Do not clobber returned buckets.
- Add partial FloatHistogram support.
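The dispatch-on-Next/Seek pattern, sketched against `chunkenc.Iterator` (note
that `AtFloatHistogram` has since gained a re-use argument in later Prometheus
versions):

```go
package sketch

import "github.com/prometheus/prometheus/tsdb/chunkenc"

// consume walks an iterator and dispatches on the value type returned by
// Next(), rather than tracking separately whether the series holds floats
// or histograms.
func consume(it chunkenc.Iterator) (floats, histograms int) {
	for {
		switch it.Next() {
		case chunkenc.ValNone:
			return floats, histograms
		case chunkenc.ValFloat:
			_, _ = it.At() // timestamp and float value
			floats++
		case chunkenc.ValHistogram, chunkenc.ValFloatHistogram:
			// PromQL only deals in FloatHistograms, so integer histograms
			// are read through AtFloatHistogram as well.
			_, _ = it.AtFloatHistogram()
			histograms++
		}
	}
}
```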
Note that the promql package is now _only_ dealing with
FloatHistograms, following the idea that PromQL only knows float
values.
As a byproduct, I have removed the histogramSeries metric. In my
understanding, series can have both float and histogram samples, so
that metric doesn't make sense anymore.
As another byproduct, I have converged the sampleBuf and the
histogramSampleBuf in memSeries into one. The sample type stored in
the sampleBuf has been extended to also contain histograms even before
this commit.
Signed-off-by: beorn7 <beorn@grafana.com>
This is to avoid copying the many fields of a histogram.Histogram all
the time.
This also fixes a bunch of formerly broken tests.
Signed-off-by: beorn7 <beorn@grafana.com>
* Add test case to showcase the problem in #9590
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
* Don't unwrap ParenExpr in newStepInvariantExpr
Fixes #9590
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
This creates a new `model` directory and moves all data-model related
packages over there:
exemplar labels relabel rulefmt textparse timestamp value
All the others are more or less utilities and have been moved to `util`:
gate logging modtimevfs pool runtime
Signed-off-by: beorn7 <beorn@grafana.com>
* promql: copy data when short-circuiting
Because the range query loop re-uses the output buffer each time round,
we must copy results into the buffer rather than using input as output.
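The hazard in miniature (a generic sketch, not the engine code): returning the
input slice as the result aliases a buffer that the range loop re-uses on the
next step, so the samples must be copied into the dedicated output buffer.

```go
package sketch

type sample struct {
	t int64
	v float64
}

// shortCircuit returns lhs unchanged when rhs is empty. Returning lhs
// directly would alias a buffer that is overwritten on the next step of
// the range loop, so the samples are copied into out instead.
func shortCircuit(lhs, rhs, out []sample) []sample {
	if len(rhs) != 0 {
		// ... the general, non-short-circuit path ...
		return out
	}
	out = out[:0]
	out = append(out, lhs...) // copy, don't alias
	return out
}
```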
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add benchmark case for many-to-one join
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* query_range: compute join signatures just once
For an expression like `a + on(p,q) b`, extract the `p,q` part from each
series once, instead of re-computing at every step of the range.
Although there was a cache, computing the key by concatenating all
labels was expensive.
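Structurally, the change hoists the signature computation out of the per-step
loop, roughly like this (illustrative only):

```go
package sketch

// evalRange sketches the optimisation: the per-series join signature is
// computed once, and every step of the range loop just looks it up.
func evalRange(steps int, series []string, signature func(string) uint64) {
	sigs := make([]uint64, len(series))
	for i, s := range series {
		sigs[i] = signature(s) // computed once, not at every step
	}
	for step := 0; step < steps; step++ {
		for i := range series {
			_ = sigs[i] // match the two sides of the join by signature
		}
	}
}
```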
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add benchmark for query_range with topk
Modify sample data so values within a metric differ
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Optimise topk where k==1
In this case we don't need a heap to keep track of values; just a single
slot is fine.
Simplify the initialization of the heap: since all cases start off as a
single-item heap we can just assign the value directly.
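The k==1 special case, roughly (a sketch on plain floats; the real code
operates on the engine's sample and heap types):

```go
package sketch

// topK keeps the k largest values. For k == 1 a single slot replaces the
// heap: just remember the current maximum.
func topK(values []float64, k int) []float64 {
	if len(values) == 0 {
		return nil
	}
	if k == 1 {
		best := values[0]
		for _, v := range values[1:] {
			if v > best {
				best = v
			}
		}
		return []float64{best}
	}
	// General case: maintain a min-heap of size k (omitted here).
	return nil
}
```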
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Allow at least one slot in results for topk, quantile
k isn't set for quantile, but we need space to start collecting values
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>