Closed — Hygens closed this issue 4 years ago
I had the same issue, but I was using nifi-prometheus-nar-1.8.0.nar with NiFi 1.9.2. I moved to nifi-prometheus-nar-1.9.2.nar and that fixed the issue.
I got the same error.
I switched Pushgateway to v0.5.2 and the error disappeared. It seems nifi-prometheus-reporter doesn't support Pushgateway versions above 0.5.2.
After testing versions from 0.5.2 through 0.10.0, everything works fine until 0.10.0. The changelog (https://github.com/prometheus/pushgateway/blob/master/CHANGELOG.md) indicates the following:
- [CHANGE] Change of the storage format (necessary for the hash collision bugfix below). #293
- [CHANGE] Check pushed metrics immediately and reject them if inconsistent. Successful pushes now result in code 200 (not 202).
This matches the observation: Pushgateway v0.10.0 accepts the push from NiFi, but NiFi still logs an error, because the bundled Prometheus client apparently only treats 202 as success and rejects the new 200 response.
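The version sensitivity can be illustrated with a small sketch (my own illustration, not code from the reporter): a client that only treats 202 as success will raise exactly the `IOException` seen in this thread once Pushgateway starts answering 200.

```python
def is_push_success(status_code, accept_200=True):
    """Decide whether a Pushgateway push succeeded.

    Pushgateway < 0.10.0 answered 202 on success; >= 0.10.0 answers 200.
    A client written against the old behavior (accept_200=False) treats
    the new 200 response as a failure, which is the error observed here.
    """
    ok_codes = (200, 202) if accept_200 else (202,)
    return status_code in ok_codes

# Old-style client against a new Pushgateway: success is misreported.
print(is_push_success(200, accept_200=False))  # False -> "routing to failure"
print(is_push_success(200))                    # True with the relaxed check
```

This is why the push both "works" (metrics appear in Pushgateway) and "fails" (the reporter logs an error) at the same time.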
Hi previa,
Thank you for sharing that information. I have sent it to the technical team, and after validating the solution I will close this issue.
All the best, Hygens
I have a scenario where I get the following error, even though all the Pushgateway configuration matches the documentation:
```
2019-10-15 08:14:54,789 ERROR [Timer-Driven Process Thread-4] o.a.n.r.p.PrometheusReportingTask PrometheusReportingTask[id=cf0d9548-016d-1000-7798-424e9933586b] Failed pushing Nifi-metrics to Prometheus PushGateway due to java.io.IOException: Response code from http://localhost:9091/metrics/job/nifi_reporting_job/instance/user-Dell was 200; routing to failure: {}
java.io.IOException: Response code from http://localhost:9091/metrics/job/nifi_reporting_job/instance/user-Dell was 200
    at io.prometheus.client.exporter.PushGateway.doRequest(PushGateway.java:304)
    at io.prometheus.client.exporter.PushGateway.pushAdd(PushGateway.java:178)
    at org.apache.nifi.reporting.prometheus.PrometheusReportingTask.onTrigger(PrometheusReportingTask.java:170)
    at org.apache.nifi.controller.tasks.ReportingTaskWrapper.run(ReportingTaskWrapper.java:44)
    at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
The installation is as follows:
- NiFi 1.9.2 with nifi-prometheus-reporter.nar in the lib folder, installed locally (outside Docker/docker-compose), and the ReportingTask configured as below:
- Apache Livy installed locally, outside Docker/docker-compose.
- Pushgateway, Prometheus, and Grafana run as separate Docker containers with --network host set, on the same machine as the other software; in the real installation, though, NiFi and Livy run on one server and the Docker containers run in the cloud.
I have an ExecuteScript processor inside a ProcessGroup in NiFi, as below:
The ExecuteScript calls the Livy API from NiFi: I watch a local folder for JSON files and pass their parameters for the Spark jobs, via Python, to the Livy Batch API:
```python
import json
import pprint
import requests
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import InputStreamCallback

class PyReadStreamCallback(InputStreamCallback):
    def __init__(self):
        self.val = None

    # Reads the flow file content into self.val (the original post only
    # showed __init__; this is the standard ExecuteScript reading pattern).
    def process(self, inputStream):
        self.val = IOUtils.toString(inputStream, StandardCharsets.UTF_8)

flowFile = session.get()
if flowFile is not None:
    obj = PyReadStreamCallback()
    session.read(flowFile, obj)
    parsedJson = json.loads(obj.val)

    headers = {'Content-Type': 'application/json'}
    host = 'http://localhost:8998'
    batches_url = host + '/batches'

    # Submit the batch to the Livy Batch API
    r = requests.post(batches_url, data=obj.val, headers=headers)
    pprint.pprint(r.json())

    # Fetch the state of the batch that was just created
    statement_url = host + r.headers['location']
    r = requests.get(statement_url, headers=headers)
    pprint.pprint(r.json())

    with open('/data/personal.json', 'w') as json_file:
        json.dump(parsedJson, json_file)

    session.transfer(flowFile, REL_SUCCESS)
```
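For reference, here is a sketch of what one of the watched JSON files might contain. The field names (`file`, `args`, `conf`) follow Livy's POST /batches request body, but the values themselves are hypothetical:

```python
import json

# Hypothetical Livy batch request, as it would sit in the watched folder.
# "file", "args" and "conf" are standard fields of Livy's POST /batches body;
# the paths and values below are made up for illustration.
batch_request = {
    "file": "/jobs/etl_job.py",                 # hypothetical Spark job
    "args": ["--input", "/data/personal.json"], # hypothetical arguments
    "conf": {"spark.executor.memory": "2g"},
}

payload = json.dumps(batch_request)
print(payload)
```

The script above posts this document verbatim as the request body (`data=obj.val`), so whatever keys the JSON file carries are passed straight through to Livy.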
Testing with the docker-compose setup provided by nifi-prometheus-reporter, I get the same error.
What is the problem here, given that Pushgateway does show the scraped data yet the error still appears?
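One way to confirm that the pushes are actually landing despite the error is to scrape the Pushgateway yourself and filter for the job label from the log above (`nifi_reporting_job` on `localhost:9091`). A minimal sketch, assuming the standard Prometheus text exposition format; the metric name in the sample is invented:

```python
def lines_for_job(metrics_text, job):
    """Return the metric lines labelled with the given Pushgateway job."""
    needle = 'job="%s"' % job
    return [line for line in metrics_text.splitlines() if needle in line]

# Against a live Pushgateway (left commented out here):
# from urllib.request import urlopen
# body = urlopen("http://localhost:9091/metrics").read().decode("utf-8")
# print(lines_for_job(body, "nifi_reporting_job"))

# Self-contained demonstration with a fabricated scrape:
sample = '\n'.join([
    '# TYPE nifi_amount_bytes_read gauge',
    'nifi_amount_bytes_read{instance="user-Dell",job="nifi_reporting_job"} 123',
    'other_metric{job="something_else"} 1',
])
print(lines_for_job(sample, "nifi_reporting_job"))
```

If the filtered lines show up, the push succeeded and the ERROR in the NiFi log is only the reporter misclassifying the 200 response.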