Closed by pflong 9 months ago
Could you try to provide the target as a query parameter?
Thanks for your reply.
Do you mean the result from the Prometheus HTTP API, like this?
curl -X GET -s -g 'localhost:9090/api/v1/query?query=mongodb_mongod_replset_my_state{instance="localhost:9216"}' | python -m json.tool
{
"data": {
"result": [
{
"metric": {
"__name__": "mongodb_mongod_replset_my_state",
"instance": "localhost:9216",
"job": "prometheus",
"set": "shard0"
},
"value": [
1702367813.46,
"2"
]
}
],
"resultType": "vector"
},
"status": "success"
}
No, like this:
curl -X GET -s localhost:9216/scrape?target=mongodb://127.0.0.1:27018
OK, it works with a specified target.
For port 27020:
curl -s localhost:9216/scrape?target=mongodb://10.1.1.1:27020 | grep mongodb_mongod_replset_member_config_version
# HELP mongodb_mongod_replset_member_config_version The configVersion value is the replica set configuration version.
# TYPE mongodb_mongod_replset_member_config_version gauge
mongodb_mongod_replset_member_config_version{name="10.1.1.3:27020",set="shard1",state="SECONDARY"} 5
mongodb_mongod_replset_member_config_version{name="10.1.1.1:27020",set="shard1",state="PRIMARY"} 5
mongodb_mongod_replset_member_config_version{name="10.1.1.2:27020",set="shard1",state="SECONDARY"} 5
For port 27019:
curl -s localhost:9216/scrape?target=mongodb://10.1.1.1:27019 | grep mongodb_mongod_replset_member_config_version
# HELP mongodb_mongod_replset_member_config_version The configVersion value is the replica set configuration version.
# TYPE mongodb_mongod_replset_member_config_version gauge
mongodb_mongod_replset_member_config_version{name="10.1.1.3:27019",set="shard0",state="SECONDARY"} 5
mongodb_mongod_replset_member_config_version{name="10.1.1.1:27019",set="shard0",state="SECONDARY"} 5
mongodb_mongod_replset_member_config_version{name="10.1.1.2:27019",set="shard0",state="PRIMARY"} 5
How can I use it correctly with multiple targets?
@pflong If you are asking about Prometheus, then this should be useful for you: https://prometheus.io/docs/guides/multi-target-exporter/#querying-multi-target-exporters-with-prometheus
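For reference, the multi-target pattern from that guide maps onto this exporter roughly as sketched below. The job name, exporter address, and target URIs are illustrative placeholders, not values confirmed in this thread:

```yaml
scrape_configs:
  - job_name: "mongodb"                # hypothetical job name
    metrics_path: /scrape              # the exporter's multi-target endpoint
    static_configs:
      - targets:
          - mongodb://10.1.1.1:27019   # shard0 member (placeholder)
          - mongodb://10.1.1.1:27020   # shard1 member (placeholder)
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # pass the MongoDB URI as ?target=
      - source_labels: [__param_target]
        target_label: instance         # keep the real target as the instance label
      - target_label: __address__
        replacement: localhost:9216    # scrape the exporter itself
```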
@BupycHuk Thanks a lot. Since it's not easy to use multiple targets with Prometheus and Grafana (it needs some relabeling work), I finally chose to run a mongodb_exporter for each instance (localhost:27017, localhost:27018, ...):
./mongodb_exporter --mongodb.uri=mongodb://localhost:27017 --collect-all --compatible-mode --web.listen-address=":9217" &
./mongodb_exporter --mongodb.uri=mongodb://localhost:27018 --collect-all --compatible-mode --web.listen-address=":9218" &
...
./mongodb_exporter --mongodb.uri=mongodb://localhost:27021 --collect-all --compatible-mode --web.listen-address=":9221" &
scrape_configs:
- job_name: "mongos"
static_configs:
- targets: ["localhost:9217"]
- job_name: "mongo-config"
static_configs:
- targets: ["localhost:9218"]
- job_name: "mongo-shard0"
static_configs:
- targets: ["localhost:9219"]
- job_name: "mongo-shard1"
static_configs:
- targets: ["localhost:9220"]
- job_name: "mongo-shard2"
static_configs:
- targets: ["localhost:9221"]
All replset metrics work well:
{
"status":"success",
"data":{
"resultType":"vector",
"result":[
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9218",
"job":"mongo-config",
"name":"10.1.1.3:27018",
"set":"config",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9218",
"job":"mongo-config",
"name":"10.1.1.1:27018",
"set":"config",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9218",
"job":"mongo-config",
"name":"10.1.1.2:27018",
"set":"config",
"state":"PRIMARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9219",
"job":"mongo-shard0",
"name":"10.1.1.3:27019",
"set":"shard0",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9219",
"job":"mongo-shard0",
"name":"10.1.1.1:27019",
"set":"shard0",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9219",
"job":"mongo-shard0",
"name":"10.1.1.2:27019",
"set":"shard0",
"state":"PRIMARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9220",
"job":"mongo-shard1",
"name":"10.1.1.3:27020",
"set":"shard1",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9220",
"job":"mongo-shard1",
"name":"10.1.1.1:27020",
"set":"shard1",
"state":"PRIMARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9220",
"job":"mongo-shard1",
"name":"10.1.1.2:27020",
"set":"shard1",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9221",
"job":"mongo-shard2",
"name":"10.1.1.3:27021",
"set":"shard2",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9221",
"job":"mongo-shard2",
"name":"10.1.1.1:27021",
"set":"shard2",
"state":"SECONDARY"
},
"value":[
1702702447.356,
"5"
]
},
{
"metric":{
"__name__":"mongodb_mongod_replset_member_config_version",
"instance":"localhost:9221",
"job":"mongo-shard2",
"name":"10.1.1.2:27021",
"set":"shard2",
"state":"PRIMARY"
},
"value":[
1702702447.356,
"5"
]
}
]
}
}
I'm closing this issue. Thank you @BupycHuk
Describe the bug
A sharded cluster with 6 shards on 3 nodes; on each node:
We specified multiple targets, but only the first takes effect for some metrics (e.g. mongodb_mongod_replset_member_config_version).
To Reproduce
Step 1
./mongodb_exporter --mongodb.uri=mongodb://10.1.1.2:27019,mongodb://10.1.1.2:27020 --collect-all --compatible-mode
We only get shard0, which is on port 27019.
Step 2
Let's put 27020 in front.
./mongodb_exporter --mongodb.uri=mongodb://10.1.1.2:27020,mongodb://10.1.1.2:27019 --collect-all --compatible-mode
We only get shard1, which is on port 27020.
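One likely explanation (an inference, not confirmed in this thread): the MongoDB connection-string format treats comma-separated hosts as a single seed list for one deployment, so mongodb://10.1.1.2:27019,mongodb://10.1.1.2:27020 is parsed as one connection rather than two independent targets. Querying each shard through the /scrape endpoint shown earlier in the thread avoids that:

```
# Sketch: one exporter process, one shard per request via ?target=
# (assumes the exporter is listening on localhost:9216, as in the examples above)
curl -s 'localhost:9216/scrape?target=mongodb://10.1.1.2:27019' | grep mongodb_mongod_replset_member_config_version
curl -s 'localhost:9216/scrape?target=mongodb://10.1.1.2:27020' | grep mongodb_mongod_replset_member_config_version
```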