Details
- Type: Improvement
- Resolution: Unresolved
- Priority: Major
Description
There is a common need to correlate metrics from multiple independent exporters running on the same host/instance. Typically a job is configured with target:port, and that full target:port value is also what is stored in the instance label. For example, I want to retrieve kv_* metrics and correlate them with node exporter metrics on the same dashboard. The only way to do that is to use the same instance label, but if the Couchbase metrics have instance="host1:8091" and the node exporter metrics have instance="host1:9200", that becomes very difficult.
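The mismatch can be illustrated with a small Python sketch (metric values and the single-series label sets are hypothetical; PromQL vector matching is far richer, but it likewise requires label equality):

```python
# Prometheus matches series on label equality. With default target labels,
# the same host appears under two different instance values:
couchbase = {("host1:8091",): 42.0}   # e.g. a kv_* series, instance="host1:8091"
node = {("host1:9200",): 0.95}        # a node exporter series, instance="host1:9200"

# A join on instance finds nothing, because the label values differ:
matched = set(couchbase) & set(node)
print(matched)  # -> set()

# With the port stripped into a shared instance label, the join succeeds:
couchbase2 = {("host1",): 42.0}
node2 = {("host1",): 0.95}
print(set(couchbase2) & set(node2))  # -> {('host1',)}
```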
To make them share the same instance label while still scraping different targets, I use something similar to the following:
scrape_configs:
  - job_name: node_exporter
    metrics_path: /metrics
    relabel_configs:
      # explicitly set the instance label so it doesn't include the port
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1"
        target_label: instance
      # add the port to the address prior to scraping
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1:9200"
        target_label: __address__
    file_sd_configs:
      - files:
          - /etc/prometheus/file_sd/*.json
        refresh_interval: 5m
  - job_name: process_exporter
    metrics_path: /metrics
    relabel_configs:
      # explicitly set the instance label so it doesn't include the port
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1"
        target_label: instance
      # add the port to the address prior to scraping
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1:9256"
        target_label: __address__
    file_sd_configs:
      - files:
          - /etc/prometheus/file_sd/*.json
        refresh_interval: 5m
  - job_name: couchbase
    metrics_path: /metrics
    basic_auth:
      username: Administrator
      password: password
    relabel_configs:
      # explicitly set the instance label so it doesn't include the port
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1"
        target_label: instance
      # add the port to the address prior to scraping
      - source_labels: [__address__]
        action: replace
        regex: "(.+)"
        replacement: "$1:8091"
        target_label: __address__
    file_sd_configs:
      - files:
          - /etc/prometheus/file_sd/*.json
        refresh_interval: 5m
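The effect of the two relabel steps can be sketched in Python (this mirrors the config above with `re.sub`, it is not the actual Prometheus relabeler, which anchors the regex against the full string, but for the pattern "(.+)" the result is the same):

```python
import re

def relabel(address: str, port: str) -> dict:
    # Step 1: copy the bare (port-less) file_sd address into the instance label.
    instance = re.sub(r"(.+)", r"\1", address)            # regex "(.+)", replacement "$1"
    # Step 2: append the per-job port to __address__ before scraping.
    scrape_addr = re.sub(r"(.+)", rf"\1:{port}", address)  # replacement "$1:<port>"
    return {"instance": instance, "__address__": scrape_addr}

print(relabel("host1.cluster1.acme.com", "9200"))
# -> {'instance': 'host1.cluster1.acme.com', '__address__': 'host1.cluster1.acme.com:9200'}
```

The same host relabeled with 9256 or 8091 yields the same instance value, so series from all three jobs join cleanly.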
The file_sd files look like this:
cluster1.json
[
  {
    "targets": [
      "host1.cluster1.acme.com",
      "host2.cluster1.acme.com",
      "host3.cluster1.acme.com",
      "host4.cluster1.acme.com",
      "host5.cluster1.acme.com",
      "host6.cluster1.acme.com",
      "host7.cluster1.acme.com"
    ],
    "labels": {
      "cluster_name": "Demo Cluster 1"
    }
  }
]
cluster2.json
[
  {
    "targets": [
      "host1.cluster2.acme.com",
      "host2.cluster2.acme.com",
      "host3.cluster2.acme.com"
    ],
    "labels": {
      "cluster_name": "Demo Cluster 2"
    }
  }
]
This approach also reduces the number of file_sd configs needed, since the same target files can be reused across jobs.
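The reuse can be sketched as follows: one file_sd target list fans out into one scrape target per job, with only the appended port differing (the job-to-port mapping matches the config above; the inline JSON is a trimmed-down stand-in for cluster2.json):

```python
import json

# A shortened stand-in for one of the file_sd files above.
cluster = json.loads("""[{"targets": ["host1.cluster2.acme.com",
                                      "host2.cluster2.acme.com"],
                          "labels": {"cluster_name": "Demo Cluster 2"}}]""")

# Per-job ports, as used in the relabel_configs above.
job_ports = {"node_exporter": 9200, "process_exporter": 9256, "couchbase": 8091}

# Each host yields one scrape target per job, all sharing the same instance.
targets = [(job, host, f"{host}:{port}")
           for job, port in job_ports.items()
           for group in cluster
           for host in group["targets"]]

for job, instance, address in targets:
    print(f'{job}: instance="{instance}" scrape={address}')
```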