Commit Graph

257 Commits (faf398e38059083c22c1bc7b8a725b9274148e17)

Brian Brazil f9d21f10ec
Only relabelling should apply for scrape_samples_scraped_post_relabelling. (#7342)
More consistent variable names.

Fixes #7298

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-06-04 16:00:37 +01:00
Brian Brazil c9565f08aa
Pass reference to checkAddError so appendErrors is updated. (#7294)
This was preventing the warnings from being logged.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-05-26 15:14:55 +01:00
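For illustration, a minimal sketch of the Go pass-by-value pitfall behind this fix; the struct and field names are hypothetical, not the actual scrape-loop code:

```
package main

import "fmt"

// appendErrors is a hypothetical stand-in for the scrape loop's error counters.
type appendErrors struct {
	numOutOfOrder int
}

// byValue receives a copy, so the increment is lost at the call site.
func byValue(errs appendErrors) { errs.numOutOfOrder++ }

// byReference mutates the caller's struct, so the count survives and the
// warning based on it can actually be logged.
func byReference(errs *appendErrors) { errs.numOutOfOrder++ }

func main() {
	var errs appendErrors
	byValue(errs)
	fmt.Println(errs.numOutOfOrder) // 0 - the update was silently dropped
	byReference(&errs)
	fmt.Println(errs.numOutOfOrder) // 1
}
```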
Marek Slabicki 8224ddec23
Capitalizing first letter of all log lines (#7043)
Signed-off-by: Marek Slabicki <thaniri@gmail.com>
2020-04-11 09:22:18 +01:00
Callum Styan c453def8c5
Separate scrape add error checking out into its own function. (#6930)
* Separate scrape add error checking out into its own function.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* pass sampleLimitError to checkAddError instead of returning an error

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Return bool, error from checkAddError so we can properly handle
ErrNotFound for AddFast. This should in theory never happen, but the
previous code path handled this case. Adds a test for this, which master
passes and the previous commit fails.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Address comment changes.

Signed-off-by: Callum Styan <callumstyan@gmail.com>

* Move sampleAdded inside the loop iteration within append, since that's
the only block the variable is used in.

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2020-03-25 19:31:48 -07:00
Julien Pivotto d6ad5551c9
Scrape: do not put staleness marker when cache is reused (#7011)
* Scrape: do not put staleness marker when cache is reused

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-20 17:43:26 +01:00
Julien Pivotto 8907ba6235 Make TSDB use storage errors
This fixes #6992, which was introduced by #6777. There was an
intermediate component which translated TSDB errors into storage errors,
but that component was deleted and this bug went unnoticed until we
looked at the Prombench results. Without this, scrape will fail
instead of dropping samples or using "Add" when the series have been
garbage collected.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-17 22:24:25 +01:00
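A hedged sketch of the kind of translation that had gone missing, using hypothetical sentinel errors; the point is that scrape compares against storage-level errors, so TSDB-internal ones must be mapped before they reach it:

```
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinels standing in for the storage package's errors
// and a TSDB-internal equivalent.
var (
	ErrOutOfBounds     = errors.New("out of bounds")
	errTSDBOutOfBounds = errors.New("tsdb: out of bounds")
)

// translate maps TSDB-internal errors onto the storage-level sentinels,
// so scrape code comparing against storage errors keeps working and can
// drop samples instead of failing the whole scrape.
func translate(err error) error {
	if errors.Is(err, errTSDBOutOfBounds) {
		return ErrOutOfBounds
	}
	return err
}

func main() {
	fmt.Println(errors.Is(translate(errTSDBOutOfBounds), ErrOutOfBounds)) // true
}
```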
Björn Rabenstein d80b0810c1
Move crucial actions to defer (#6918)
With defer having less of a performance penalty, there is no reason
not to do those crucial operations via defer.

Context: With isolation in place, if we forget to Commit/Rollback, the
low watermark will get stuck forever.

The current code should not have any bugs, but moving to defer helps
to avoid future bugs.

This is also moving the `closeAppend` in the `Commit` implementation
itself to defer. If logging to the WAL fails, we would have missed the
`closeAppend`.

Signed-off-by: beorn7 <beorn@grafana.com>
2020-03-13 20:54:47 +01:00
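A minimal, self-contained sketch of the defer pattern this commit adopts; the Appender interface here is a stand-in, not the actual Prometheus storage types:

```
package main

import "fmt"

// Appender is a stand-in for the storage appender used by the scrape loop.
type Appender interface {
	Commit() error
	Rollback() error
}

type noopAppender struct{}

func (noopAppender) Commit() error   { fmt.Println("commit"); return nil }
func (noopAppender) Rollback() error { fmt.Println("rollback"); return nil }

// appendSamples shows the pattern: the named return err decides in a
// single deferred block whether to Commit or Rollback, so no early
// return or failure path can leave the appender open (which, with
// isolation, would pin the low watermark forever).
func appendSamples(app Appender, parse func() error) (err error) {
	defer func() {
		if err != nil {
			app.Rollback()
			return
		}
		err = app.Commit()
	}()
	return parse()
}

func main() {
	appendSamples(noopAppender{}, func() error { return nil })                     // commit
	appendSamples(noopAppender{}, func() error { return fmt.Errorf("bad input") }) // rollback
}
```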
Brian Brazil 5da8990053
Log scrape append failures as debug rather than warn. (#6852)
This is most likely due to an endpoint not producing valid
metrics output, which we should treat the same as a failed
scrape, and thus not spam the application logs with it.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2020-03-06 00:46:03 +00:00
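A minimal go-kit log example of the effect; the filter setup is assumed to approximate Prometheus's default log level:

```
package main

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

func main() {
	// Assumed setup: filter at info level, roughly Prometheus's default.
	logger := level.NewFilter(log.NewLogfmtLogger(os.Stderr), level.AllowInfo())

	// After the change, append failures log at debug level, so under the
	// default filter they no longer spam the application logs.
	level.Debug(logger).Log("msg", "Append failed", "err", "invalid metric output")
	level.Warn(logger).Log("msg", "still visible at the default level")
}
```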
李国忠 52025bd7a9
[comments] change word ‘wheter’ to ‘whether’ (#6912)
* [comments] change word ‘wheter’ to ‘whether’
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>

* [comments] change word ‘wheter’ to ‘whether’
Signed-off-by: fuling <fuling.lgz@alibaba-inc.com>
2020-03-02 13:51:24 +05:30
Julien Pivotto ed623f69e2
tsdb: don't allow ingesting empty labelsets (#6891)
* tsdb: don't allow ingesting empty labelsets

When we ingest an empty labelset in the head, further blocks cannot be
compacted, with the error:

```
level=error ts=2020-02-27T21:26:58.379Z caller=db.go:659 component=tsdb
msg="compaction failed" err="persist head block: write compaction:
add series: out-of-order series added with label set \"{}\" / prev:
\"{}\""
```

We should therefore reject those invalid empty labelsets upfront.

This can be reproduced with the following:

```
cat << END > prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 1s
    basic_auth:
      username: test
      password: test
    metric_relabel_configs:
    - regex: ".*"
      action: labeldrop

    static_configs:
    - targets:
      - 127.0.1.1:9090
END
./prometheus --storage.tsdb.min-block-duration=1m
```
And wait a few minutes.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-03-02 07:18:05 +00:00
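A minimal sketch of the upfront rejection, with simplified stand-ins for the labels package:

```
package main

import (
	"errors"
	"fmt"
)

// Label/Labels are simplified stand-ins for Prometheus's labels package.
type Label struct{ Name, Value string }
type Labels []Label

// checkLabels sketches the upfront validation: once relabeling has
// dropped every label, the sample must be rejected rather than stored as
// the empty labelset {} that later breaks compaction.
func checkLabels(lset Labels) error {
	if len(lset) == 0 {
		return errors.New("empty labelset: dropping sample")
	}
	return nil
}

func main() {
	fmt.Println(checkLabels(Labels{}))                                 // error
	fmt.Println(checkLabels(Labels{{Name: "__name__", Value: "up"}})) // <nil>
}
```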
Bartlomiej Plotka 34426766d8 Unify Iterator interfaces. All point to storage now.
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow-up PRs.

* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
into storage's interface.go; all imports now point there.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.

No logic was changed, only different source of abstractions, so no need for benchmarks.

Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
2020-02-17 18:03:54 +00:00
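For reference, an approximation of the unified iterator interface and the Seek semantics the new tests pin down; this is a sketch, not the exact chunkenc source:

```
package sketch

// Iterator approximates the unified chunkenc.Iterator: sample iteration
// plus the newly added Seek.
type Iterator interface {
	Seek(t int64) bool    // advance to the first sample with timestamp >= t
	At() (int64, float64) // current timestamp/value
	Next() bool           // advance one sample
	Err() error           // iteration error, if any
}

// seekWithNext shows a naive Seek built from Next, the behavior the
// xor-implementation tests would pin down.
func seekWithNext(it Iterator, t int64) bool {
	for it.Next() {
		if ts, _ := it.At(); ts >= t {
			return true
		}
	}
	return false
}
```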
gotjosh 8b49c9285d
scrape: Add metrics to track bytes and entries in the metadata cache (#6675)
Signed-off-by: gotjosh <josue@grafana.com>
2020-01-29 11:13:18 +00:00
Julien Pivotto fafb7940b1 Pass over scrape cache to the next scrape (#6670)
* Pass over scrape cache to the next scrape

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2020-01-22 12:13:47 +00:00
gotjosh 05842176a6 Make the scrape.metricMetadataStore interface public
To test the implementation of our metric metadata API, we need to represent various states of metadata in the scrape metadata store. That is currently not possible as the interface and method to set the store are private.

This changes the interface, the list and get methods, and the SetMetadataStore function to be public.

Incidentally, the scrapeCache implementation needs to be renamed to match the new signature.

Signed-off-by: gotjosh <josue@grafana.com>
2019-12-05 10:29:58 +00:00
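A sketch of the kind of test double the public interface enables; struct fields and method signatures here are assumptions for illustration:

```
package sketch

// MetricMetadata and MetricMetadataStore approximate the now-public
// scrape metadata surface; the details are assumed for illustration.
type MetricMetadata struct {
	Metric string
	Type   string
	Help   string
	Unit   string
}

type MetricMetadataStore interface {
	ListMetadata() []MetricMetadata
	GetMetadata(metric string) (MetricMetadata, bool)
}

// fakeStore is the kind of test double a public interface permits: API
// tests can now inject arbitrary metadata states.
type fakeStore struct{ m map[string]MetricMetadata }

func (s fakeStore) ListMetadata() []MetricMetadata {
	out := make([]MetricMetadata, 0, len(s.m))
	for _, md := range s.m {
		out = append(out, md)
	}
	return out
}

func (s fakeStore) GetMetadata(metric string) (MetricMetadata, bool) {
	md, ok := s.m[metric]
	return md, ok
}
```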
Geoffrey Beausire 5cb7987314 Fix relabeling collision when using exported label
When using both a label and the suffix+label in the
relabel config, it's possible that Prometheus removes
the suffix+label for no obvious reason. This is due to a
collision when merging labels from the target and from
the sample.

Signed-off-by: Geoffrey Beausire <g.beausire@criteo.com>
2019-11-26 11:03:11 +01:00
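A toy reconstruction of the collision, using plain maps instead of Prometheus's labels types: with honor_labels disabled, a colliding target label rewrites the scraped value under an exported_ prefix, which can silently clobber a genuinely scraped exported_* label:

```
package main

import "fmt"

// mergeLabels sketches the target/sample label merge: when a scraped
// label collides with a target label, the scraped value is kept under an
// "exported_" prefix. The bug class here is that prefixed name itself
// colliding with another scraped label.
func mergeLabels(target, scraped map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range scraped {
		out[k] = v
	}
	for k, v := range target {
		if sv, ok := out[k]; ok {
			out["exported_"+k] = sv // may clobber a real exported_* label
		}
		out[k] = v
	}
	return out
}

func main() {
	fmt.Println(mergeLabels(
		map[string]string{"job": "node"},
		map[string]string{"job": "app", "exported_job": "billing"},
	)) // the scraped "exported_job" is silently overwritten
}
```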
Dustin Hooten ca60bf298c React UI: Implement /targets page (#6276)
* Add LastScrapeDuration to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Add Scrape job name to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Implement the /targets page in react

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Add state query param to targets endpoint

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Use state filter in api call

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* api feedback

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* pr feedback frontend

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* Implement and use localstorage hook

Signed-off-by: Dustin Hooten <dhooten@splunk.com>

* PR feedback

Signed-off-by: Dustin Hooten <dhooten@splunk.com>
2019-11-11 22:42:24 +01:00
johncming 1fa5a75a3a Ctx name (#5961)
* scrape: rename ctx name for readability

Signed-off-by: johncming <johncming@yahoo.com>

* scrape: use its own ctx instead of the parent ctx.

Signed-off-by: johncming <johncming@yahoo.com>
2019-08-28 15:55:09 +02:00
Julius Volz b5c833ca21
Update go.mod dependencies before release (#5883)
* Update go.mod dependencies before release

Signed-off-by: Julius Volz <julius.volz@gmail.com>

* Add issue for showing query warnings in promtool

Signed-off-by: Julius Volz <julius.volz@gmail.com>

* Revert json-iterator back to 1.1.6

It produced errors when marshaling Point values with special float
values.

Signed-off-by: Julius Volz <julius.volz@gmail.com>

* Fix expected step values in promtool tests after client_golang update

Signed-off-by: Julius Volz <julius.volz@gmail.com>

* Update generated protobuf code after proto dep updates

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2019-08-14 11:00:39 +02:00
Brian Brazil e62f30d497
Correctly handle empty labels from alert templates. (#5845)
Fixes https://github.com/prometheus/common/issues/36

Move logic handling this into the labels package,
so all the cases are handled in one place and we're
less likely to have this come up again.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-08-13 11:19:17 +01:00
yeya24 b7bb278e95 make targets active parallel (#5740)
Signed-off-by: yeya24 <yb532204897@gmail.com>
2019-07-29 17:08:54 +01:00
Simon Pasquier 9a1935d641
scrape: remove unused type (#5761)
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2019-07-15 08:54:22 +02:00
Brian Brazil b98e818876
Add scrape_series_added per-scrape metric. (#5546)
This is an estimate of churn, with series being added to the cache being
considered churn. This will have both false positives (e.g. series
appearing and disappearing) and false negatives (e.g. series hit
sample_limit, but still created in head block), but should be generally
useful as-is.

Relevant docs live in another repo.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-05-08 22:24:00 +01:00
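A toy sketch of how the estimate falls out of the scrape cache; types are illustrative:

```
package main

import "fmt"

// scrapeCache is a toy stand-in for the scrape loop's series cache.
type scrapeCache struct {
	series map[string]struct{}
}

// add reports whether the series was new to the cache; summing this over
// one scrape yields the scrape_series_added churn estimate described above.
func (c *scrapeCache) add(series string) bool {
	if _, ok := c.series[series]; ok {
		return false
	}
	c.series[series] = struct{}{}
	return true
}

func main() {
	c := &scrapeCache{series: map[string]struct{}{}}
	added := 0
	for _, s := range []string{`up{job="a"}`, `up{job="a"}`, `up{job="b"}`} {
		if c.add(s) {
			added++
		}
	}
	fmt.Println("scrape_series_added:", added) // 2
}
```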
Romain Baugue 95193fa027 Exhaust every request body before closing it (#5166) (#5479)
From the documentation:
> The default HTTP client's Transport may not
> reuse HTTP/1.x "keep-alive" TCP connections if the Body is
> not read to completion and closed.

This effectively enables keep-alive for the fixed requests.

Signed-off-by: Romain Baugue <romain.baugue@elwinar.com>
2019-04-18 09:50:37 +01:00
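The pattern, as a small self-contained example:

```
package main

import (
	"io"
	"io/ioutil"
	"net/http"
)

// fetch drains the response body before closing it so the underlying
// HTTP/1.x connection can be reused (keep-alive) by the transport.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// ... read what we need from resp.Body ...
	_, err = io.Copy(ioutil.Discard, resp.Body) // exhaust the remainder
	return err
}

func main() {
	_ = fetch("http://localhost:9090/metrics")
}
```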
Simon Pasquier c1682adb2f Bump prometheus/common to v0.3.0 (#5344)
* Reload certificates from disk automatically

This change bumps github.com/prometheus/common to include
https://github.com/prometheus/common/pull/173

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* scrape: close idle connections on reload/stop

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* use v0.3.0 tag

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2019-04-10 13:20:00 +01:00
Brian Brazil f7184978f4 Protect against memory exhaustion when scraping.
Now that we're not losing the scrape cache across failed
scrapes, a scrape that continually failed but had varying
series or metadata (e.g. timestamps in metric names,
plus hitting sample_limit) would grow the cache indefinitely.

Add some code to catch that, and flush the cache anyway.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-04-04 19:09:11 +01:00
Brian Brazil dd3073616c Don't lose the scrape cache on a failed scrape.
This avoids CPU usage increasing when the target comes back.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-04-04 19:09:11 +01:00
Tariq Ibrahim 8fdfa8abea refine error handling in prometheus (#5388)
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of fmt package for errors in favour of pkg/errors

Signed-off-by: tariqibrahim <tariq181290@gmail.com>
2019-03-26 00:01:12 +01:00
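A short example of points i) and iii), with a hypothetical call site:

```
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func loadConfig(path string) error {
	return errors.New("permission denied") // stand-in failure
}

func main() {
	// Wrapf annotates with context (and a stack trace); plain errors.New
	// replaces fmt.Errorf calls that had no formatting directives.
	if err := loadConfig("/etc/prometheus/prometheus.yml"); err != nil {
		err = errors.Wrapf(err, "loading config %q", "/etc/prometheus/prometheus.yml")
		fmt.Println(err) // loading config "...": permission denied
	}
}
```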
Julien Pivotto 4397916cb2 Add honor_timestamps (#5304)
Fixes #5302

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-03-15 10:04:15 +00:00
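A hedged config sketch of the new option; the target address is illustrative, and honor_timestamps defaults to true:

```
scrape_configs:
  - job_name: 'node'
    # assumption for illustration: ignore target-exposed timestamps and
    # stamp samples with the scrape time instead
    honor_timestamps: false
    static_configs:
      - targets: ['127.0.0.1:9100']
```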
xjewer 0d1a69353e scrape: Add global jitter for HA server (#5181)
* scrape: Add global jitter for HA server

Covers the issue in https://github.com/prometheus/prometheus/pull/4926#issuecomment-449039848
where the HA setup becomes a problem for targets unable to be scraped simultaneously.
The new per-server jitter relies on the hostname and external labels, which necessarily
have to be unique.

As before, the scrape offset is calculated with regard to absolute time, so even a
restart/reload doesn't change the scrape time per scrape target + Prometheus instance.

The FQDN is used if possible, otherwise it falls back to the hostname. An extra random
seed is added to the server hash calculation to distinguish machines with the same
hostname but in different DCs.

Signed-off-by: Aleksei Semiglazov <xjewer@gmail.com>
2019-03-12 10:46:15 +00:00
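A sketch of the scheme under those constraints; the FNV hash and the exact formula are illustrative, not a copy of the shipped code:

```
package main

import (
	"fmt"
	"hash/fnv"
	"os"
	"time"
)

// jitterSeed sketches the per-server seed: hash the host identity plus
// external labels so HA replicas with identical configs scrape the same
// target at different phases of the interval.
func jitterSeed(externalLabels string) uint64 {
	h := fnv.New64a()
	host, _ := os.Hostname() // the real code prefers the FQDN when available
	h.Write([]byte(host + "/" + externalLabels))
	return h.Sum64()
}

// offset keeps the target's scrape phase stable across restarts by
// anchoring on absolute time, shifted by (targetHash ^ seed) % interval.
func offset(interval time.Duration, targetHash, seed uint64) time.Duration {
	now := time.Now().UnixNano()
	base := int64(interval) - now%int64(interval)
	next := base + int64((targetHash^seed)%uint64(interval))
	if next > int64(interval) {
		next -= int64(interval)
	}
	return time.Duration(next)
}

func main() {
	fmt.Println(offset(15*time.Second, 12345, jitterSeed(`{env="prod"}`)))
}
```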
Julien Pivotto 04ce817c49 scrape: Rewrite scrape loop options as a struct (#5314)
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-03-12 10:26:18 +00:00
Simon Pasquier 12708acd15
scrape: catch errors when creating HTTP clients (#5182)
* scrape: catch errors when creating HTTP clients

This change makes sure that no scrape pool is created with a nil HTTP
client.

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* Address Tariq's comment

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* Address Brian's comment

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2019-02-13 14:24:22 +01:00
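A sketch of the fix; the constructor is a stand-in for the real HTTP client config plumbing:

```
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// newScrapePool is a stand-in for the pool constructor: client-creation
// errors are now returned to the caller instead of leaving the pool with
// a nil HTTP client that would fail on first use.
func newScrapePool(newClient func() (*http.Client, error)) (*http.Client, error) {
	client, err := newClient()
	if err != nil {
		return nil, fmt.Errorf("error creating HTTP client: %s", err)
	}
	return client, nil
}

func main() {
	_, err := newScrapePool(func() (*http.Client, error) {
		return nil, errors.New("unable to load specified CA cert")
	})
	fmt.Println(err)
}
```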
Simon Pasquier f678e27eb6
*: use latest release of staticcheck (#5057)
* *: use latest release of staticcheck

It also fixes a couple of things in the code flagged by the additional
checks.

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* Use official release of staticcheck

Also run 'go list' before staticcheck to avoid failures when downloading packages.

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2019-01-04 14:47:38 +01:00
Bartek Płotka 62c8337e77 Moved configuration into `relabel` package. (#4955)
Adapted top dir relabel to use pkg relabel structs.

Removal of this is tracked separately here: https://github.com/prometheus/prometheus/issues/3647

Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
2018-12-18 11:26:36 +00:00
Ben Kochie c6399296dc
Fix spelling/typos (#4921)
* Fix spelling/typos

Fix spelling/typos reported by codespell/misspell.
* UK -> US spelling changes.

Signed-off-by: Ben Kochie <superq@gmail.com>
2018-11-27 17:44:29 +01:00
Brian Brazil d2f0f54d68
Pass through content-type for non-compressed output. (#4912)
Fixes #4911

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2018-11-26 13:05:07 +00:00
arugaki b98a5acb57 fix buf redeclared in scrapeloop (#4873)
* buf has already been declared in scrape.go at line 785; I think it is unnecessary to declare a new variable again here.

Signed-off-by: arugakiWei <arugaki.wei@daocloud.io>

* delete the buf at line 785 because it is never used.

Signed-off-by: arugakiWei <arugaki.wei@daocloud.io>
2018-11-22 12:13:52 +08:00
Simon Pasquier ed19373a78
*: remove use of golang.org/x/net/context (#4869)
* *: remove use of golang.org/x/net/context

Signed-off-by: Simon Pasquier <spasquie@redhat.com>

* scrape: fix TestTargetScrapeScrapeCancel

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2018-11-19 12:31:16 +01:00
Brian Brazil 9c03e11c2c Hook OpenMetrics parser into scraping.
Extend metadata api to support units.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2018-10-18 13:58:00 +01:00
Brian Brazil ffe7efb411 Prepare for multiple text formats
Pass content type down to text parser.

Add layer of indirection in front of text parser,
and rename to avoid future clashes.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2018-10-18 13:58:00 +01:00
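A sketch of the dispatch this layer of indirection enables (helper name and fallback behavior are assumptions; application/openmetrics-text is the OpenMetrics media type):

```
package sketch

import "mime"

// formatFor sketches the dispatch the new indirection enables: the
// scrape loop hands the response's Content-Type down, and the parser
// layer picks a concrete implementation per format. Anything other than
// the OpenMetrics media type is assumed to fall back to the classic
// Prometheus text format.
func formatFor(contentType string) string {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err == nil && mediaType == "application/openmetrics-text" {
		return "openmetrics"
	}
	return "prometheus-text"
}
```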
Krasi Georgiev 47a673c3a0
process scrape loops reloading in parallel (#4526)
The scrape manager receiver's channel now just saves the target sets,
and another background runner updates the scrape loops every 5 seconds.
This is so that the scrape manager doesn't block the receiving channel
when it does the long background reloading of the scrape loops.

Active and dropped targets are now saved in each scrape pool instead of
the scrape manager. This is mainly to avoid races when getting the
targets via the web API.

Reloading the scrape loops now happens in parallel to reach the final
desired state faster, which also speeds up Prometheus's shutdown.

Also updated some func signatures in the web package for consistency.

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2018-09-26 12:20:56 +03:00
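A self-contained sketch of the decoupling described above; the 5-second ticker comes from the commit message, everything else (types, channels) is illustrative:

```
package main

import (
	"sync"
	"time"
)

type scrapePool struct{ name string }

func (sp *scrapePool) Sync() { time.Sleep(100 * time.Millisecond) } // stand-in for a slow reload

// reloader decouples the receive path from the expensive reload: target
// sets are only *saved* on receipt, and a background ticker applies them,
// so the receiving channel is never blocked.
func reloader(trigger <-chan map[string]*scrapePool, stop <-chan struct{}) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	var pending map[string]*scrapePool
	for {
		select {
		case pending = <-trigger: // cheap: just remember the latest state
		case <-ticker.C:
			reload(pending) // expensive: done off the receive path
			pending = nil
		case <-stop:
			return
		}
	}
}

// reload syncs every pool in parallel so one slow pool doesn't serialize
// the rest, converging on the desired state (and shutting down) faster.
func reload(pools map[string]*scrapePool) {
	var wg sync.WaitGroup
	for _, sp := range pools {
		wg.Add(1)
		go func(sp *scrapePool) {
			defer wg.Done()
			sp.Sync()
		}(sp)
	}
	wg.Wait()
}

func main() {}
```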
Julius Volz 8fbe1b5133
Handle a bunch of unchecked errors (#4461)
There are many more (mostly finalizers like Close/Stop/etc.), but most of
the others seemed like one couldn't do much about them anyway.

Signed-off-by: Julius Volz <julius.volz@gmail.com>
2018-08-17 17:24:35 +02:00
Fabian Reinartz 057a5ae2b1 Address comments
Signed-off-by: Fabian Reinartz <freinartz@google.com>
2018-06-06 11:21:17 -04:00
Fabian Reinartz ad4c33c1ff scrape,api: provide per-target metric metadata
This adds a per-target cache of scraped metadata. The metadata is only
available for the lifecycle of the attached target. An API endpoint allows
selecting metadata by metric name and a label selection of targets.

Signed-off-by: Fabian Reinartz <freinartz@google.com>
2018-06-06 05:56:10 -04:00
Fabian Reinartz 76a4a46cb0 pkg/textparse: refactor and add metadata handling
Extends the parser to allow retrieving metadata.
The lexer now yields proper tokens that are fed into a hand-written
parser on top.

Signed-off-by: Fabian Reinartz <freinartz@google.com>
2018-05-25 11:27:25 -04:00
Adam Shannon 809881d7f5 support reading basic_auth password_file for HTTP basic auth (#4077)
Issue: https://github.com/prometheus/prometheus/issues/4076

Signed-off-by: Adam Shannon <adamkshannon@gmail.com>
2018-04-25 18:19:06 +01:00
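A config sketch of the new option, with an illustrative path:

```
scrape_configs:
  - job_name: 'secured'
    basic_auth:
      username: prometheus
      # read from disk instead of inlining the secret in prometheus.yml
      password_file: /etc/prometheus/secrets/password
    static_configs:
      - targets: ['127.0.0.1:8080']
```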
Björn Rabenstein 91e470d733
Merge pull request #4096 from simonpasquier/fix-scrape-races-2.2
Fix scrape races (release-2.2 branch)
2018-04-25 15:36:29 +02:00
Simon Pasquier 2cbba4e948 scrape: fix data races
This commit avoids passing the full scrape configuration down to the
scrape loop to fix data races when the scrape configuration is being
reloaded.

Signed-off-by: Simon Pasquier <spasquie@redhat.com>
2018-04-18 11:17:31 +02:00
beorn7 94ff07b81d Merge branch 'release-2.2'
Signed-off-by: beorn7 <beorn@soundcloud.com>
2018-04-10 16:50:35 +02:00
Krasi Georgiev ddd46de6f4 Races/3994 (#4005)
Fix race by properly locking access to scrape pools. Use a separate mutex for information needed by the UI so that the UI isn't blocked when targets are being updated.
2018-04-09 15:18:25 +01:00
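A sketch of the two-mutex design (names and fields illustrative):

```
package main

import "sync"

// manager sketches the design: one mutex guards slow scrape-pool updates,
// a second guards only the target snapshots the UI reads, so a long
// reload doesn't block the /targets page.
type manager struct {
	mtxScrape  sync.Mutex // held across slow scrape-pool updates
	mtxTargets sync.Mutex // held only while swapping target snapshots
	targets    []string
}

func (m *manager) updateTargets(ts []string) {
	m.mtxTargets.Lock()
	m.targets = ts
	m.mtxTargets.Unlock()
}

func (m *manager) targetsForUI() []string {
	m.mtxTargets.Lock()
	defer m.mtxTargets.Unlock()
	return append([]string(nil), m.targets...)
}

func main() {
	m := &manager{}
	m.updateTargets([]string{"127.0.0.1:9090"})
	_ = m.targetsForUI()
}
```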
zjwzte b7a37a1604 Fix magic number. 2018-03-15 10:15:35 +08:00
Fabian Reinartz ef567ceb7d
Merge pull request #3835 from krasi-georgiev/pool-package-generalize
Pool package generalize
2018-02-28 14:30:46 +01:00
Krasi Georgiev 933ab8a34e Fix stupid return mistake, and drop the additional assertion check! 2018-02-20 13:32:23 +02:00
Brian Brazil 4addee2bee Fix honor_labels for empty labels, prune empty labels.
The semantics of honor_labels are that if a target exposes
an empty label it will override the target labels. This PR
fixes that by once again distinguishing between empty labels
and missing labels in this one use case.

Beyond that, empty labels should be pruned and not added to storage,
which this also fixes.

Fixes #3841
2018-02-20 11:05:54 +00:00
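A toy model of the fixed semantics using plain maps rather than Prometheus's labels types:

```
package main

import "fmt"

// applyHonorLabels sketches the fixed semantics: with honor_labels set,
// a label the target exposes as *empty* must still override (i.e.
// delete) the target label, which requires distinguishing "empty" from
// "missing". Empty values are then pruned before storage.
func applyHonorLabels(targetLabels, exposed map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range targetLabels {
		out[k] = v
	}
	for k, v := range exposed {
		out[k] = v // an empty value is still recorded: it overrides the target label
	}
	for k, v := range out {
		if v == "" {
			delete(out, k) // prune empty labels, never store them
		}
	}
	return out
}

func main() {
	fmt.Println(applyHonorLabels(
		map[string]string{"instance": "a", "env": "prod"},
		map[string]string{"env": ""}, // exposed empty label
	)) // map[instance:a] - env was overridden away, empties pruned
}
```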
Krasi Georgiev 9878d66484 exit early and panic for non-slice on buffer.Put 2018-02-19 14:31:04 +02:00
Krasi Georgiev f2c5913416 generalize the pool package and refactor scraper to use it. 2018-02-13 22:44:51 +02:00
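A hedged sketch of a generalized bucketed pool in the spirit of these two pool commits; the size-class scheme and constructor signature are assumptions, and Put panics on non-slice values per the commit above:

```
package main

import (
	"fmt"
	"reflect"
	"sync"
)

// Pool sketches a generalized, bucketed buffer pool: one sync.Pool per
// size class, with a user-supplied constructor. The sizing scheme is an
// assumption for illustration.
type Pool struct {
	buckets []sync.Pool
	sizes   []int
	make    func(sz int) interface{}
}

func New(sizes []int, mk func(int) interface{}) *Pool {
	return &Pool{buckets: make([]sync.Pool, len(sizes)), sizes: sizes, make: mk}
}

// Get returns a buffer of capacity >= sz from the smallest fitting bucket.
func (p *Pool) Get(sz int) interface{} {
	for i, s := range p.sizes {
		if sz <= s {
			if b := p.buckets[i].Get(); b != nil {
				return b
			}
			return p.make(s)
		}
	}
	return p.make(sz)
}

// Put returns a slice to its bucket; it exits early (panics) if handed
// anything that is not a slice.
func (p *Pool) Put(s interface{}) {
	v := reflect.ValueOf(s)
	if v.Kind() != reflect.Slice {
		panic(fmt.Sprintf("%+v is not a slice", v))
	}
	for i, size := range p.sizes {
		if v.Cap() <= size {
			p.buckets[i].Put(s)
			return
		}
	}
}

func main() {
	p := New([]int{1 << 10, 1 << 12}, func(sz int) interface{} { return make([]byte, 0, sz) })
	b := p.Get(100).([]byte)
	p.Put(b[:0])
}
```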
Krasi Georgiev 404b306fb9 Meta labels sd 3693 (#3805)
Always keep the discovered labels up to date.

add test that DiscoveredLabels are always updated
2018-02-07 10:29:27 +00:00
Krasi Georgiev b75428ec19 rename package retrieve to scrape
no functional changes, just renaming retrieval to scrape
2018-02-01 09:55:07 +00:00