# RabbitMQ Scraping

This is an example of how to set up RabbitMQ so Prometheus can scrape data from it.
It uses a third-party [RabbitMQ exporter](https://github.com/kbudde/rabbitmq_exporter).

Since the [RabbitMQ exporter](https://github.com/kbudde/rabbitmq_exporter) needs to
connect to the RabbitMQ management API to scrape data, and it defaults to localhost, it is
easier to simply embed the **kbudde/rabbitmq-exporter** in the same pod as RabbitMQ;
this way they share the same network.
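
Below is a minimal sketch of what such a pod could look like. The pod name, image tags,
and the exporter's `PUBLISH_PORT` setting are illustrative assumptions, chosen so the
metrics port matches the `prometheus.io/port: "9090"` annotation used later in this
example; the manifest shipped with this example may differ.

```yaml
# Sketch: RabbitMQ with kbudde/rabbitmq-exporter as a sidecar container.
# Containers in a pod share the network namespace, so the exporter can reach
# the management API on localhost:15672 using its default settings.
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq                     # illustrative name
  labels:
    app: rabbitmq
spec:
  containers:
    - name: rabbitmq
      image: rabbitmq:3-management   # management plugin enabled
      ports:
        - containerPort: 5672        # AMQP
        - containerPort: 15672       # management API (read by the exporter)
    - name: rabbitmq-exporter
      image: kbudde/rabbitmq-exporter
      env:
        - name: PUBLISH_PORT         # exporter setting; assumed here so the
          value: "9090"              # metrics port matches the annotation below
      ports:
        - containerPort: 9090        # metrics endpoint scraped by Prometheus
```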
With this pod running you will have the exporter scraping data, but Prometheus has not
yet found the exporter and is not scraping data from it.

For more details on how to use Kubernetes service discovery, take a look at the
Prometheus documentation for the `kubernetes_sd_config` scrape configuration; a minimal
sketch of such a configuration is shown below.
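
The following sketch shows the common annotation-driven discovery pattern: a
`kubernetes_sd_configs` job that keeps only endpoints whose service carries the
`prometheus.io/scrape: "true"` annotation and rewrites the scrape port from the
`prometheus.io/port` annotation. The job name is an assumption, and this is a general
illustration rather than this example's exact configuration file.

```yaml
# Sketch of an annotation-based Kubernetes service discovery scrape job.
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'   # illustrative job name
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints whose service is annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Use the prometheus.io/port annotation (e.g. "9090") as the scrape port.
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: '([^:]+)(?::\d+)?;(\d+)'
        replacement: '$1:$2'
        target_label: __address__
```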
To have Prometheus discover the exporter, you need a service that points to the pod that
is exposing metrics. To do that, as sketched after the list below, you need to define a
service that:

* Add the annotation: `prometheus.io/scrape: "true"`
* Add the annotation: `prometheus.io/port: "9090"`
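
A sketch of such a service, using an illustrative name and assuming the pod carries the
`app: rabbitmq` label from the pod sketch above, could look like this:

```yaml
# Sketch: a service exposing the exporter, carrying the two annotations that
# the discovery configuration above keys on. Names and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-exporter            # illustrative name
  annotations:
    prometheus.io/scrape: "true"     # opt these endpoints into scraping
    prometheus.io/port: "9090"       # port the exporter serves metrics on
spec:
  selector:
    app: rabbitmq                    # matches the pod label from the sketch above
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090
```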
And you should be able to see your RabbitMQ exporter being scraped on the Prometheus status page.
Since the IP that will be scraped will be the pod endpoint, it is important that the node
where Prometheus is running has access to the Kubernetes overlay network
(flannel, weave, aws, or any of the other options that Kubernetes gives you).