While empty buckets can make sense in the internal representation (by
joining spans that would otherwise need more overhead for separate
representation), there are no spans in the JSON rendering. Therefore,
the JSON should not contain any empty buckets, since any buckets not
included in the output count as empty anyway.
This changes both the inefficient MarshalJSON implementation and the
jsoniter implementation.
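A rough sketch of the intended behavior, using deliberately simplified types (not the actual Prometheus bucket representation or exact output format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bucket is a simplified stand-in for a rendered histogram bucket.
type bucket struct {
	Boundaries int     `json:"boundaries"`
	Lower      float64 `json:"lower"`
	Upper      float64 `json:"upper"`
	Count      float64 `json:"count"`
}

// marshalBuckets drops empty buckets before rendering to JSON,
// since omitted buckets count as empty anyway.
func marshalBuckets(buckets []bucket) ([]byte, error) {
	nonEmpty := make([]bucket, 0, len(buckets))
	for _, b := range buckets {
		if b.Count != 0 {
			nonEmpty = append(nonEmpty, b)
		}
	}
	return json.Marshal(nonEmpty)
}

func main() {
	out, _ := marshalBuckets([]bucket{
		{0, 1, 2, 3},
		{0, 2, 4, 0}, // empty, omitted from the output
	})
	fmt.Println(string(out))
}
```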
Signed-off-by: beorn7 <beorn@grafana.com>
This now even enables jsoniter marshaling of Points in an instant
query (which previously used the traditional JSON marshaling).
Signed-off-by: beorn7 <beorn@grafana.com>
For conventional histograms, we need to gather all the individual
bucket timeseries at a data point to do the quantile calculation. The
code so far mirrored this behavior for the new native
histograms. However, since a single data point already contains all
the buckets, that's actually not needed. This PR simplifies the
code while still detecting a mix of conventional and native
histograms.
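A minimal sketch of the simplified approach, with hypothetical types and helper names rather than the actual engine code:

```go
package main

import (
	"errors"
	"fmt"
)

// histogram stands in for a native histogram sample value.
type histogram struct{}

type sample struct {
	hasLeLabel bool       // conventional bucket series carry an "le" label
	hist       *histogram // non-nil for a native histogram sample
}

// pickHistogram decides, for one data point, whether the quantile can be
// computed from a single native histogram sample or must be gathered from
// conventional bucket series, and rejects a mix of the two.
func pickHistogram(samples []sample) (*histogram, []sample, error) {
	var (
		native       *histogram
		conventional []sample
	)
	for _, s := range samples {
		if s.hist != nil {
			native = s.hist // all buckets live in this one sample
		} else if s.hasLeLabel {
			conventional = append(conventional, s)
		}
	}
	if native != nil && len(conventional) > 0 {
		return nil, nil, errors.New("mix of conventional and native histograms at the same point")
	}
	return native, conventional, nil
}

func main() {
	_, _, err := pickHistogram([]sample{{hasLeLabel: true}, {hist: &histogram{}}})
	fmt.Println(err)
}
```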
The weird signature calculation for the conventional histograms is
getting even weirder because of that. If this PR turns out to do the
right thing, I will implement a proper fix for the signature
calculation upstream.
Signed-off-by: beorn7 <beorn@grafana.com>
* create lezer-promql module + move codemirror to a pure ESM module + unify dependencies
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* ignore test utils file and remove the type "module" in package.json
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* use jest to run the lezer-promql test
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* give an automatic way to update the ui dependencies
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* update all dependencies using make update-npm-deps
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* fix react-app test
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* remove generated file
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* remove unnecessary backslash in script
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* address review feedback
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* rewording
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* use npx to run lezer-generator
Signed-off-by: Augustin Husson <husson.augustin@gmail.com>
* tsdb: avoid slice-to-interface allocation in EnsureOrder
This pulls the `seriesRefSlice` out of the loop, so the compiler
doesn't allocate a new one on the heap every time.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* tsdb: use pointer type in Pool for EnsureOrder
As noted by staticcheck, Pool prefers the objects in the pool to have
pointer type. This is a little more fiddly to code, but avoids
allocation of a wrapper object every time a slice is put into the pool.
Removed a comment that said fixing this has a performance penalty;
that is not borne out by benchmarks.
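The general pattern looks roughly like this (a standalone sketch of the sync.Pool idiom flagged by staticcheck SA6002, not the actual tsdb code):

```go
package main

import "sync"

type seriesRef uint64

// Storing *[]seriesRef in the pool avoids allocating an interface wrapper
// for the slice header on every Put.
var refPool = sync.Pool{
	New: func() interface{} {
		refs := make([]seriesRef, 0, 1024)
		return &refs
	},
}

func process(refs []seriesRef) {
	// ... sort or deduplicate refs here ...
	_ = refs
}

func main() {
	refsPtr := refPool.Get().(*[]seriesRef)
	refs := (*refsPtr)[:0] // reuse the backing array

	refs = append(refs, 3, 1, 2)
	process(refs)

	*refsPtr = refs // keep any growth of the backing array
	refPool.Put(refsPtr)
}
```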
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This commit adds an alert to the Prometheus mixin that triggers when
scrapes fail because they exceed the sample_limit configured for the
job.
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
This commit ensures 64-bit integers are used in various tests that
otherwise fail on 32-bit architectures.
It also adds support for int64 and uint64 types in the template.convertToFloat
function to support the test changes.
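The added cases amount to something like the following (a hedged, simplified sketch, not the exact Prometheus implementation of convertToFloat):

```go
package main

import (
	"fmt"
	"strconv"
)

// convertToFloatSketch mirrors the idea behind template.convertToFloat with
// the added int64/uint64 cases.
func convertToFloatSketch(i interface{}) (float64, error) {
	switch v := i.(type) {
	case float64:
		return v, nil
	case string:
		return strconv.ParseFloat(v, 64)
	case int:
		return float64(v), nil
	case int64: // added: large constants overflow int on 32-bit platforms
		return float64(v), nil
	case uint64: // added
		return float64(v), nil
	default:
		return 0, fmt.Errorf("can't convert %T to float", v)
	}
}

func main() {
	f, err := convertToFloatSketch(int64(1) << 40)
	fmt.Println(f, err)
}
```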
Closes: 10481
Signed-off-by: Martina Ferrari <tina@debian.org>
* discovery: expose HTTP client options to discoverers
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* discovery/http: use HTTP client options for created client
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* scrape: use a list of HTTP client options instead of just dial context
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
* discovery: rephrase comment
Signed-off-by: Robert Fratto <robertfratto@gmail.com>
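The "list of HTTP client options instead of just dial context" change above follows the usual functional-options shape; a rough illustration with hypothetical names (not the actual prometheus/common API):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

// clientOption is a hypothetical functional option applied when building a client.
type clientOption func(*http.Transport)

// withDialContext wraps the old "dial context only" hook as one option among many.
func withDialContext(dial func(ctx context.Context, network, addr string) (net.Conn, error)) clientOption {
	return func(t *http.Transport) { t.DialContext = dial }
}

// newClient applies a list of options instead of accepting a single dial context.
func newClient(opts ...clientOption) *http.Client {
	t := &http.Transport{}
	for _, o := range opts {
		o(t)
	}
	return &http.Client{Transport: t}
}

func main() {
	c := newClient(withDialContext((&net.Dialer{}).DialContext))
	fmt.Printf("%T\n", c.Transport)
}
```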
Per Julien's feedback on #10369, we're choosing to be consistent with
data types inside the stats structure (ints) rather than with the points
format that is part of the normal query responses (strings). We have
this option because this data cannot be NaN/Inf.
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
This exactly corresponds to the statistic compared against MaxSamples
during the course of query execution, so users can see how close their
queries are to a limit.
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
This allows other implementations to inject their own statistics,
gathered in data linked from the context.Context. For example, Cortex
can inject its stats.Stats value under the `cortex` key.
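A minimal sketch of that kind of injection, with a hypothetical key and stats type (the actual hook and types live in the stats package and in the downstream project):

```go
package main

import (
	"context"
	"fmt"
)

// customStats is a hypothetical stand-in for an implementation's own
// statistics value (e.g. Cortex's stats.Stats).
type customStats struct {
	FetchedSeries int
}

// ctxKey is an unexported key type to avoid collisions in the context.
type ctxKey string

const cortexKey ctxKey = "cortex"

// withCustomStats links the stats value into the context so that whatever
// renders query statistics can pick it up under the "cortex" key.
func withCustomStats(ctx context.Context, s *customStats) context.Context {
	return context.WithValue(ctx, cortexKey, s)
}

func customStatsFromContext(ctx context.Context) *customStats {
	s, _ := ctx.Value(cortexKey).(*customStats)
	return s
}

func main() {
	ctx := withCustomStats(context.Background(), &customStats{FetchedSeries: 42})
	fmt.Println(customStatsFromContext(ctx).FetchedSeries)
}
```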
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>
We always track the total samples queried and add them to the standard
set of stats that queries can report.
We also optionally allow tracking the per-step samples queried. This
must be enabled at both the engine and the query level to be tracked
and rendered.
The engine flag is exposed via a Prometheus feature flag, while the
query flag is set when stats=all.
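The gating amounts to a simple conjunction, roughly like this (hypothetical names, just to illustrate the two-level opt-in):

```go
package main

import "fmt"

// Per-step sample tracking is only active when both the engine-level
// feature flag and the individual query (stats=all) opt in.
type engineOpts struct {
	enablePerStepStats bool // toggled by the Prometheus feature flag
}

type queryOpts struct {
	perStepStats bool // set when the API request carries stats=all
}

func perStepSamplesEnabled(e engineOpts, q queryOpts) bool {
	return e.enablePerStepStats && q.perStepStats
}

func main() {
	fmt.Println(perStepSamplesEnabled(engineOpts{enablePerStepStats: true}, queryOpts{perStepStats: true}))
}
```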
Co-authored-by: Alan Protasio <approtas@amazon.com>
Co-authored-by: Andrew Bloomgarden <blmgrdn@amazon.com>
Co-authored-by: Harkishen Singh <harkishensingh@hotmail.com>
Signed-off-by: Andrew Bloomgarden <blmgrdn@amazon.com>