promql: correctly handle unary negation of native histograms and add tests for multiplication and division of native histograms by negative scalars
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
The linear interpolation (assuming that observations are uniformly
distributed within a bucket) is a solid and simple assumption in the
absence of any other information. However, the exponential bucketing
used by standard schemas of native histograms has been chosen to cover
the whole range of observations in a way that spreads the bucket
populations out reasonably for typical distributions encountered in
real-world scenarios.
This is the origin of the idea implemented here: If we divide a given
bucket into two (or more) smaller exponential buckets, we "most
naturally" expect that the samples in the original buckets will split
among those smaller buckets in a more or less uniform fashion. With
this assumption, we end up with an "exponential interpolation", which
therefore appears to be a better match for histograms with exponential
bucketing.
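To illustrate the difference (a minimal Go sketch, not the actual
PromQL engine code; `lower` and `upper` are the bucket boundaries, and
`fraction` is the rank of the quantile within the bucket):
```go
package main

import (
	"fmt"
	"math"
)

// interpolateLinearly assumes that observations are uniformly
// distributed between the bucket boundaries.
func interpolateLinearly(lower, upper, fraction float64) float64 {
	return lower + fraction*(upper-lower)
}

// interpolateExponentially assumes that a subdivision of the bucket
// into smaller exponential buckets would receive a uniform share of
// the population each, which boils down to linear interpolation on a
// logarithmic scale.
func interpolateExponentially(lower, upper, fraction float64) float64 {
	logLower := math.Log2(lower)
	logUpper := math.Log2(upper)
	return math.Exp2(logLower + fraction*(logUpper-logLower))
}

func main() {
	// Median within the (schema 0) bucket (2, 4]:
	fmt.Println(interpolateLinearly(2, 4, 0.5))      // 3
	fmt.Println(interpolateExponentially(2, 4, 0.5)) // 2.8284… (= 2*sqrt(2))
}
```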
This commit leaves the linear interpolation in place for NHCB, but
changes the interpolation for exponential native histograms to
exponential. This affects `histogram_quantile` and
`histogram_fraction` (because the latter is more or less the inverse
of the former).
The zero bucket has to be treated specially because the assumption
above would lead to an "interpolation to zero" (the bucket density
approaches infinity around zero, and with the postulated uniform usage
of buckets, we would end up with an estimate of zero for all quantiles
ending up in the zero bucket). We simply fall back to linear
interpolation within the zero bucket.
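Continuing the sketch from above, the fallback could look like this
(again illustrative only; `zeroThreshold` is the upper boundary of the
zero bucket, assuming a histogram with only positive observations):
```go
// interpolateZeroBucket interpolates linearly within the zero bucket
// [0, zeroThreshold]. Exponential interpolation would be degenerate
// here: Log2(0) is -Inf, so every quantile ending up in this bucket
// would be estimated as 0.
func interpolateZeroBucket(zeroThreshold, fraction float64) float64 {
	return fraction * zeroThreshold
}
```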
At the same time, this commit makes the call to stick with the
assumption that the zero bucket only contains positive observations
for native histograms without negative buckets (and vice versa). (This
is an assumption relevant for interpolation. It is a mostly academic
point, as the zero bucket is supposed to be very small anyway.
However, in cases where it _is_ relevantly broad, the assumption helps
a lot in practice.)
This commit also updates and completes the documentation to reflect
both of the interpolation details described above.
As a more high-level note: The approach here attempts to strike a
balance between a more simplistic approach without any assumptions,
and a more involved approach with more sophisticated assumptions. I
will briefly describe both for reference:
The "zero assumption" approach would be to not interpolate at all, but
_always_ return the harmonic mean of the bucket boundaries of the
bucket the quantile ends up in. This has the advantage of minimizing
the maximum possible relative error of the quantile estimation.
(Depending on the exact definition of the relative error of an
estimation, there is also an argument to return the arithmetic mean of
the bucket boundaries.) While limiting the maximum possible relative
error is a good property, this approach would throw away the
information of whether a quantile is closer to the upper or lower end of the
population within a bucket. This can be valuable trending information
in a dashboard. With any kind of interpolation, the maximum possible
error of a quantile estimation increases to the full width of a bucket
(i.e. it more than doubles for the harmonic mean approach, and
precisely doubles for the arithmetic mean approach). However, in
return the _expectation value_ of the error decreases. The increase of
the theoretical maximum only has practical relevance for pathological
distributions. For example, if there are a thousand observations within
a bucket, they could _all_ be at the upper bound of the bucket. If the
quantile calculation picks the 1st observation in the bucket as the
relevant one, an interpolation will yield a value close to the lower
bucket boundary, while the true quantile value is close to the upper
boundary.
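A worked example of these error bounds (a small Go sketch; the
relative error here is taken with respect to the true value):
```go
package main

import (
	"fmt"
	"math"
)

// maxRelativeError returns the worst-case relative error when a value
// somewhere in [lower, upper] is estimated by est. The worst case is
// always at one of the bucket boundaries.
func maxRelativeError(est, lower, upper float64) float64 {
	atLower := (est - lower) / lower
	atUpper := (upper - est) / upper
	return math.Max(atLower, atUpper)
}

func main() {
	lower, upper := 1.0, 2.0
	harmonic := 2 * lower * upper / (lower + upper) // 4/3
	arithmetic := (lower + upper) / 2               // 3/2
	fmt.Println(maxRelativeError(harmonic, lower, upper))   // 0.333… (minimal)
	fmt.Println(maxRelativeError(arithmetic, lower, upper)) // 0.5
}
```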
The "fancy interpolation" approach would be one that analyses the
_actual_ distribution of samples in the histogram. A lot of statistics
could be applied based on the information we have available in the
histogram. This would include the population of neighboring (or even
all) buckets in the histogram. In general, the resolution of a native
histogram should be quite high, and therefore, those "fancy"
approaches would increase the computational cost quite a bit with very
little practical benefit (i.e. just tiny corrections of the estimated
quantile value). The results are also much harder to reason about.
Signed-off-by: beorn7 <beorn@grafana.com>
Fixes #14859, although we'll have to see about a long-term fix. Hopefully it'll
be fixed upstream with a follow-up version.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
Instead of a 2-bit write followed by a 14-bit write, do two 8-bit
writes, which goes much faster since it avoids looping.
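The idea, sketched with a minimal bit writer loosely modeled on
chunkenc's `bstream` (illustrative only, not the actual code):
```go
package main

import "fmt"

// bwriter is a minimal bit writer (a sketch, not the real bstream).
type bwriter struct {
	stream []byte // written data; the last byte may be partially filled
	count  uint8  // free bits in the last byte
}

// writeByte emits 8 bits with two fixed shift/or operations, whatever
// the current bit alignment is, i.e. no per-bit looping required.
func (b *bwriter) writeByte(byt byte) {
	if b.count == 0 {
		b.stream = append(b.stream, 0)
		b.count = 8
	}
	i := len(b.stream) - 1
	b.stream[i] |= byt >> (8 - b.count) // fill up the current byte
	b.stream = append(b.stream, byt<<b.count)
	// count is unchanged: exactly 8 bits were consumed.
}

func main() {
	b := &bwriter{}
	v := uint16(0b10<<14 | 0x1234) // 2-bit prefix plus a 14-bit payload
	// Instead of a 2-bit write followed by a 14-bit write:
	b.writeByte(byte(v >> 8))
	b.writeByte(byte(v))
	fmt.Printf("%08b\n", b.stream) // trailing byte is the empty current byte
}
```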
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
Benchmarks must do the same work N times.
Run 3 cases, where the values are constant, vary a bit, and vary a lot.
Also aim for 120 samples, the same as the TSDB default.
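A hypothetical shape for such a benchmark (all names here are
illustrative, not the actual code):
```go
package chunkenc_test

import (
	"math"
	"testing"
)

// appender is a stand-in for the real chunk appender (hypothetical).
type appender struct{ sum float64 }

func (a *appender) Append(t int64, v float64) { a.sum += v }

func BenchmarkAppend(b *testing.B) {
	for _, c := range []struct {
		name  string
		value func(i int) float64
	}{
		{"constant", func(i int) float64 { return 42 }},
		{"small variation", func(i int) float64 { return 42 + float64(i%10) }},
		{"large variation", func(i int) float64 { return math.Exp2(float64(i % 32)) }},
	} {
		b.Run(c.name, func(b *testing.B) {
			for n := 0; n < b.N; n++ {
				// Identical work on every iteration: append 120
				// samples, matching the TSDB default per chunk.
				app := &appender{}
				for i := 0; i < 120; i++ {
					app.Append(int64(i), c.value(i))
				}
			}
		})
	}
}
```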
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Make rate's possible-non-counter annotation consistent
Previously, a PossibleNonCounterInfo annotation would be left in cases
where a range vector selects one float data point, even if no further
points are selected from which a rate could be calculated.
This change ensures an output float exists before emitting such an
annotation.
This fixes an inconsistency where a series with mixed data (i.e., a float
and a native histogram) would emit an annotation without any points.
For example,
```
load 1m
series{label="a"} 1 {{schema:1 sum:10 count:5 buckets:[1 2 3]}}
eval instant at 1m rate(series[1m1s])
```
Would have a PossibleNonCounterInfo annotation.
Whereas
```
load 1m
series{label="a"} {{schema:1 sum:10 count:5 buckets:[1 2 3]}} {{schema:1 sum:15 count:10 buckets:[1 2 3]}}
eval instant at 1m rate(series[1m1s])
```
Would not.
---------
Signed-off-by: Joshua Hesketh <josh@nitrotech.org>
Shortcut for `.*` matches newlines as well.
Add `^(?s:` preamble change.
Add test.
Add dotAll flag for the regex.
Add and fix regex tests.
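The behavior in question, demonstrated with Go's regexp package (a
standalone illustration, not the matcher code itself):
```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The `.*` shortcut has to behave like the anchored regexp with the
	// dotAll flag set, where . matches newlines as well:
	dotAll := regexp.MustCompile(`^(?s:.*)$`)
	plain := regexp.MustCompile(`^.*$`)
	s := "foo\nbar"
	fmt.Println(dotAll.MatchString(s)) // true: (?s:...) lets . match \n
	fmt.Println(plain.MatchString(s))  // false: .* stops at the newline
}
```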
Signed-off-by: Mario Fernandez <mariofer@redhat.com>
Several regexps were coded like `"^.*$"`, which is an unnatural
formulation nobody is likely to use. Inside `NewMatcher`, `^` and `$`
are added anyway, which makes the form in the benchmark redundant.
The matcher even printed it out in the expected way.
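For illustration (assuming Prometheus's `model/labels` API; both
matchers should behave identically, because `NewMatcher` anchors the
regexp itself):
```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
)

func main() {
	// The explicit ^ and $ add nothing, as NewMatcher anchors anyway.
	natural := labels.MustNewMatcher(labels.MatchRegexp, "job", ".*")
	redundant := labels.MustNewMatcher(labels.MatchRegexp, "job", "^.*$")
	fmt.Println(natural.Matches("anything"))   // true
	fmt.Println(redundant.Matches("anything")) // true
}
```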
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
In a non-dev build (no initial double-renders), the tree lines would not be
rendered correctly from the parent of a binop to its first child, because the
first child would be rendered before the parent, and the parent's ref hadn't
been set yet at that time. Switched from a normal ref to a callback-based ref
with an explicit React state update to make sure that the child learns about
the parent's (later) rendered div element.
Signed-off-by: Julius Volz <julius.volz@gmail.com>