---
title: Unit Testing for Rules
sort_rank: 6
---

# Unit Testing for Rules
You can use `promtool` to test your rules.

```bash
# For a single test file.
./promtool test rules test.yml

# If you have multiple test files, say test1.yml, test2.yml, test3.yml
./promtool test rules test1.yml test2.yml test3.yml
```
## Test file format
```yaml
# This is a list of rule files to consider for testing. Globs are supported.
rule_files:
  [ - <file_name> ]

[ evaluation_interval: <duration> | default = 1m ]

# The order in which group names are listed below will be the order of evaluation of
# rule groups (at a given evaluation time). The order is guaranteed only for the groups mentioned below.
# All the groups need not be mentioned below.
group_eval_order:
  [ - <group_name> ]

# All the tests are listed here.
tests:
  [ - <test_group> ]
```
### `<test_group>`
```yaml
# Series data
interval: <duration>
input_series:
  [ - <series> ]

# Name of the test group
[ name: <string> ]

# Unit tests for the above data.

# Unit tests for alerting rules. We consider the alerting rules from the input file.
alert_rule_test:
  [ - <alert_test_case> ]

# Unit tests for PromQL expressions.
promql_expr_test:
  [ - <promql_test_case> ]

# External labels accessible to the alert template.
external_labels:
  [ <labelname>: <string> ... ]

# External URL accessible to the alert template.
# Usually set using --web.external-url.
[ external_url: <string> ]
```
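To see how these fields fit together, here is a minimal sketch of a single entry under `tests:`, using a made-up metric name and values (a complete, runnable example is given at the end of this page):

```yaml
tests:
  # One test group: samples are generated at a 1m interval, and a PromQL
  # expression is checked 5 minutes into the simulated time range.
  - interval: 1m
    input_series:
      # Illustrative series: '0+10x5' expands to '0 10 20 30 40 50'.
      - series: 'some_metric{job="example"}'
        values: '0+10x5'
    promql_expr_test:
      - expr: some_metric{job="example"}
        eval_time: 5m
        exp_samples:
          - labels: 'some_metric{job="example"}'
            value: 50
```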
### `<series>`
```yaml
# This follows the usual series notation '<metric name>{<label name>=<label value>, ...}'
# Examples:
#     series_name{label1="value1", label2="value2"}
#     go_goroutines{job="prometheus", instance="localhost:9090"}
series: <string>

# This uses expanding notation.
# Expanding notation:
#     'a+bxn' becomes 'a a+b a+(2*b) a+(3*b) … a+(n*b)'
#     Read this as series starts at a, then n further samples incrementing by b.
#     'a-bxn' becomes 'a a-b a-(2*b) a-(3*b) … a-(n*b)'
#     Read this as series starts at a, then n further samples decrementing by b (or incrementing by negative b).
#     'axn' becomes 'a a a … a' (n times) - it's a shorthand for 'a+0xn'
# There are special values to indicate missing and stale samples:
#     '_' represents a missing sample from scrape
#     'stale' indicates a stale sample
# Examples:
#     1. '-2+4x3' becomes '-2 2 6 10' - series starts at -2, then 3 further samples incrementing by 4.
#     2. ' 1-2x4' becomes '1 -1 -3 -5 -7' - series starts at 1, then 4 further samples decrementing by 2.
#     3. ' 1x4' becomes '1 1 1 1 1' - shorthand for '1+0x4', series starts at 1, then 4 further samples incrementing by 0.
#     4. ' 1 _x3 stale' becomes '1 _ _ _ stale' - the missing sample cannot increment, so 3 missing samples are produced by the '_x3' expression.
#
# Native histogram notation:
# Native histograms can be used instead of floating point numbers using the following notation:
#     {{schema:1 sum:-0.3 count:3.1 z_bucket:7.1 z_bucket_w:0.05 buckets:[5.1 10 7] offset:-3 n_buckets:[4.1 5] n_offset:-5}}
# Native histograms support the same expanding notation as floating point numbers, i.e. 'axn', 'a+bxn' and 'a-bxn'.
# All properties are optional and default to 0. The order is not important. The following properties are supported:
#   - schema (int):
#       Currently valid schema numbers are -4 <= n <= 8. They are all for
#       base-2 bucket schemas, where 1 is a bucket boundary in each case, and
#       then each power of two is divided into 2^n logarithmic buckets. Or
#       in other words, each bucket boundary is the previous boundary times
#       2^(2^-n).
#   - sum (float):
#       The sum of all observations, including the zero bucket.
#   - count (non-negative float):
#       The number of observations, including those that are NaN and including the zero bucket.
#   - z_bucket (non-negative float):
#       The sum of all observations in the zero bucket.
#   - z_bucket_w (non-negative float):
#       The width of the zero bucket.
#       If z_bucket_w > 0, the zero bucket contains all observations -z_bucket_w <= x <= z_bucket_w.
#       Otherwise, the zero bucket only contains observations that are exactly 0.
#   - buckets (list of non-negative floats):
#       Observation counts in positive buckets. Each represents an absolute count.
#   - offset (int):
#       The starting index of the first entry in the positive buckets.
#   - n_buckets (list of non-negative floats):
#       Observation counts in negative buckets. Each represents an absolute count.
#   - n_offset (int):
#       The starting index of the first entry in the negative buckets.
values: <string>
```
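As an illustration of the notation above, the following sketch of an `input_series` list (metric names, labels and values are made up) combines the expanding notation, missing and stale samples, and a native histogram series:

```yaml
input_series:
  # '0+10x4' expands to '0 10 20 30 40': a counter growing by 10 per interval.
  - series: 'http_requests_total{job="api", instance="localhost:8080"}'
    values: '0+10x4'
  # A gauge with two missing scrapes ('_') followed by a stale marker.
  - series: 'queue_length{job="api", instance="localhost:8080"}'
    values: '5 5 _ _ stale'
  # A native histogram, repeated for 3 further samples via the 'axn' shorthand.
  - series: 'http_request_duration_seconds{job="api", instance="localhost:8080"}'
    values: '{{schema:0 sum:10 count:4 buckets:[1 2 1]}}x3'
```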
### `<alert_test_case>`
Prometheus allows you to use the same alert name in different alerting rules. Hence, in these unit tests, you have to list the union of all the firing alerts for that alert name under a single `<alert_test_case>`.
```yaml
# The time elapsed from time=0s when the alerts have to be checked.
eval_time: <duration>

# Name of the alert to be tested.
alertname: <string>

# List of the expected alerts firing under the given alert name at the
# given evaluation time. If you want to test that an alerting rule should
# not be firing, then you can mention the above fields and leave 'exp_alerts' empty.
exp_alerts:
  [ - <alert> ]
```
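For example, to assert that a hypothetical alert named `HighLatency` is *not* firing 10 minutes in, the test case can be written with an empty `exp_alerts` list:

```yaml
alert_rule_test:
  # 'HighLatency' is assumed to be defined in one of the rule_files;
  # an empty exp_alerts list asserts that no alert with this name is firing.
  - eval_time: 10m
    alertname: HighLatency
    exp_alerts: []
```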
### `<alert>`
```yaml
# These are the expanded labels and annotations of the expected alert.
# Note: labels also include the labels of the sample associated with the alert
# (same as what you see in `/alerts`, without the series `__name__` and `alertname`).
exp_labels:
  [ <labelname>: <string> ]
exp_annotations:
  [ <labelname>: <string> ]
```
### `<promql_test_case>`
```yaml
# Expression to evaluate
expr: <string>

# The time elapsed from time=0s when the expression has to be evaluated.
eval_time: <duration>

# Expected samples at the given evaluation time.
exp_samples:
  [ - <sample> ]
```
### `<sample>`
```yaml
# Labels of the sample in the usual series notation '<metric name>{<label name>=<label value>, ...}'
# Examples:
#     series_name{label1="value1", label2="value2"}
#     go_goroutines{job="prometheus", instance="localhost:9090"}
labels: <string>

# The expected value of the PromQL expression.
value: <number>
```
## Example
This is an example input file for unit testing which passes the test. `test.yml` is the test file which follows the syntax above and `alerts.yml` contains the alerting rules. With `alerts.yml` in the same directory, run `./promtool test rules test.yml`.
### `test.yml`
```yaml
# This is the main input for unit testing.
# Only this file is passed as command line argument.

rule_files:
  - alerts.yml

evaluation_interval: 1m

tests:
  # Test 1.
  - interval: 1m
    # Series data.
    input_series:
      - series: 'up{job="prometheus", instance="localhost:9090"}'
        values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
      - series: 'up{job="node_exporter", instance="localhost:9100"}'
        values: '1+0x6 0 0 0 0 0 0 0 0' # 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
      - series: 'go_goroutines{job="prometheus", instance="localhost:9090"}'
        values: '10+10x2 30+20x5' # 10 20 30 30 50 70 90 110 130
      - series: 'go_goroutines{job="node_exporter", instance="localhost:9100"}'
        values: '10+10x7 10+30x4' # 10 20 30 40 50 60 70 80 10 40 70 100 130

    # Unit test for alerting rules.
    alert_rule_test:
      # Unit test 1.
      - eval_time: 10m
        alertname: InstanceDown
        exp_alerts:
          # Alert 1.
          - exp_labels:
              severity: page
              instance: localhost:9090
              job: prometheus
            exp_annotations:
              summary: "Instance localhost:9090 down"
              description: "localhost:9090 of job prometheus has been down for more than 5 minutes."

    # Unit tests for promql expressions.
    promql_expr_test:
      # Unit test 1.
      - expr: go_goroutines > 5
        eval_time: 4m
        exp_samples:
          # Sample 1.
          - labels: 'go_goroutines{job="prometheus",instance="localhost:9090"}'
            value: 50
          # Sample 2.
          - labels: 'go_goroutines{job="node_exporter",instance="localhost:9100"}'
            value: 50
```
### `alerts.yml`
```yaml
# This is the rules file.

groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."

      - alert: AnotherInstanceDown
        expr: up == 0
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
```