divinity666 / ruby-grafana-reporter

Reporting Service for Grafana
MIT License

'Prometheus' (Grafana::PrometheusDatasource) failed with an internal error: undefined method `[]' for nil:NilClass #12

Closed - QuimLaz closed this issue 1 year ago

QuimLaz commented 2 years ago

Good morning, I've been trying to generate my own report based on the demo report created by:

ruby-grafana-reporter -w

Even the sample report does not show any of the query results. I end up with only empty spaces in the final document where the results of the query should be, and it happens with all kinds of queries. Here is a sample of a query that causes the undefined method error:

include::grafana_sql_table:1[sql="node_memory_MemTotal_bytes{job="ConnectorA_tech"} - node_memory_Active_bytes{job="ConnectorA_tech"}",filter_columns="time",dashboard="0E0FhC7Wz",from="now-1h",to="now"]

I've tried with simpler queries (the same one without subtracting values, for instance) but it leads to the same error. And passing the query as a string inside the block led to further errors.

In my environment I'm executing ruby-grafana-reporter on the host, while Grafana and Prometheus are running in Docker containers, although they are accessible through their usual ports. I'm guessing that is the issue and some further configuration is needed to access Prometheus correctly.

Could you point me in the right direction to solve the issue? Thanks for your time.

divinity666 commented 2 years ago

Thanks for trying out the reporter.

Basically, Prometheus should work fine if it can be used properly from Grafana.

Please try out the following:

1) Your query needs to escape the inner quotes:

include::grafana_sql_table:1[sql="node_memory_MemTotal_bytes{job=\"ConnectorA_tech\"} - node_memory_Active_bytes{job=\"ConnectorA_tech\"}",filter_columns="time",dashboard="0E0FhC7Wz",from="now-1h",to="now"]

Does that work already?

2) Try using grafana_panel_image on an existing Prometheus panel. Does that work?

3) Try using grafana_panel_query_table on an existing Prometheus panel. Does that work?

Which errors are shown? Try calling the reporter with the -d DEBUG option to get some more output.

Best regards

QuimLaz commented 2 years ago

Hey sorry for the delay.

I've been trying your recommendations and number one worked fine for the SQL tables. I also managed to get grafana_sql_value working with a workaround, since I'm using Grafana's variables in the queries and that was causing issues.

My main concern is that I can't get grafana_panel_image working. I've tried creating a new panel without using the variables and updating the query on the dashboard to add the '\' before the job names, but I always get the same error:

F, [2021-09-28T13:36:18.804879 #14565] FATAL -- : undefined method `[]' for nil:NilClass

The query for the image is being sent to Grafana's port, while I have the Grafana renderer exposed on another one, but it is set up as the Grafana documentation recommends. Could this be the issue here?

Thanks for your time

divinity666 commented 2 years ago

Good to hear that the SQL query is now running properly - I guess I should improve the debug messages so that hunting down missing escaped quotes becomes easier. Noted :-)

About `grafana_panel_image`: It is correct that the reporter sends the request to Grafana. Grafana then forwards this request to the renderer. Here are some leading questions:

1) Does your panel show the results properly in the dashboard? If not, fix that first and save the dashboard.

2) If you click on the headline of the panel in the dashboard and choose 'Share' in the context menu, then use the 'direct image' link. Does this rendering work? If not, fix that first (see also the sketch below).

3) If that does not help, please provide some debug information from Grafana and the image renderer for these requests.
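If 1) and 2) look fine, you can also take the reporter out of the equation and request the panel image from Grafana's /render endpoint yourself. Here is a minimal sketch, assuming a Grafana API key with viewer rights; the URL, dashboard UID and panel id are placeholders taken from this thread and need to be replaced with your own values:

```ruby
# Minimal sketch (not reporter code): fetch a panel image directly from
# Grafana's /render endpoint to check the grafana <-> image-renderer link.
require 'net/http'
require 'uri'

grafana_url = 'http://localhost:3000'            # placeholder: your Grafana URL
api_token   = ENV['GRAFANA_API_TOKEN']           # placeholder: API key with viewer rights

# dashboard UID and panelId are the example values from this thread
uri = URI("#{grafana_url}/render/d-solo/0E0FhC7Wz?panelId=1&width=1000&height=500&from=now-1h&to=now")

Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "Bearer #{api_token}"
  response = http.request(request)

  puts "HTTP #{response.code} #{response.message}"
  if response.is_a?(Net::HTTPSuccess)
    File.binwrite('panel.png', response.body)    # a PNG here means rendering works
  else
    puts response.body                           # renderer/config errors show up here
  end
end
```

If this already fails with an error page instead of a PNG, the problem is in the Grafana/image-renderer setup rather than in the reporter.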

Hope this helps. Best regards

divinity666 commented 2 years ago

Closing because of lacking feedback - feel free to reopen the case.

Nikoos commented 2 years ago

Hello @divinity666 @QuimLaz,

I think I hit the same issue. To give you some context: I set up a quick Prometheus datasource connected to my Grafana (8.3), and I am able to generate an image from my panel with the following option:

grafana_panel_image::4[dashboard="-tLf652nz"]

I can see in the debug log that ruby-grafana-reporter is reaching the given URL:

https://xxxxxxxxxxxx:3000/render/d-solo/-tLf652nz?panelId=4&fullscreen=true&theme=light&timeout=60&var-template=demo_report&from=1638937569000&to=1638959168000

However, when I try the other way, gathering the result of a query with the following code:

include::grafana_sql_table:3[sql="prometheus_http_requests_total"]

In the grafana-reporter debug output, the following error appears:

D, [2021-12-08T11:29:30.315735 #2235] DEBUG -- : Processing SqlTableIncludeProcessor (instance: default, datasource: 3, sql: prometheus_http_requests_total)
D, [2021-12-08T11:29:30.316072 #2235] DEBUG -- : Requesting https://xxxxxxxxxxxx:3000/api/datasources/proxy/3/api/v1/query_range?start=1638959364000&end=1638959364000&query=prometheus_http_requests_total with '' and timeout '60'
D, [2021-12-08T11:29:30.362367 #2235] DEBUG -- : Received response #<Net::HTTPBadRequest:0x00005574cedab2c0>
D, [2021-12-08T11:29:30.362408 #2235] DEBUG -- : HTTP response body: {"status":"error","errorType":"bad_data","error":"invalid parameter \"step\": cannot parse \"\" to a valid duration"}
E, [2021-12-08T11:29:30.362827 #2235] ERROR -- : GrafanaReporterError: The datasource request to 'Prometheus' (Grafana::PrometheusDatasource) failed with an internal error: undefined method `[]' for nil:NilClass

That request seems to return an error code:

https://xxxxxxxxxxxx:3000/api/datasources/proxy/3/api/v1/query_range?start=1638959364000&end=1638959364000&query=prometheus_http_requests_total

The Grafana webserver gave me the following error message:

{"status":"error","errorType":"bad_data","error":"invalid parameter \"step\": cannot parse \"\" to a valid duration"}

I tried to track down the error, and if I use the custom URL directly in my web browser (replacing query_range with query):

https://xxxxxxxxxxxx:3000/api/datasources/proxy/3/api/v1/query?start=1638958288000&end=1638958288000&query=prometheus_http_requests_total

The following output is displayed:

{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/-/ready","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"1"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/label/:name/values","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"9"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/labels","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"8"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/metadata","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"8"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/query","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"29"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/query_range","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"42"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/api/v1/series","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"3"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/graph","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"1"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/metrics","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"537"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"200","handler":"/static/*filepath","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"1"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"302","handler":"/","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"2"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"400","handler":"/api/v1/query_range","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"25"]},{"metric":{"__name__":"prometheus_http_requests_total","code":"503","handler":"/api/v1/query_range","instance":"localhost:9090","job":"prometheus"},"value":[1638959446.953,"11"]}]}}

Which is indeed the result of my basic query (I am testing ruby-grafana-reporter).

I will try to find out from the official documentation whether there was an update regarding query_range vs. query.
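For reference, Prometheus's /api/v1/query_range endpoint requires a step parameter in addition to start and end, while /api/v1/query is an instant query that does not. Below is a minimal sketch of both requests through the Grafana datasource proxy, with placeholders for the host, datasource id and API token (this is not the reporter's actual code):

```ruby
# Minimal sketch: instant query vs. range query against the Prometheus API
# behind the Grafana datasource proxy. Host, datasource id and token are placeholders.
require 'net/http'
require 'uri'
require 'cgi'

grafana_url   = 'https://localhost:3000'          # placeholder
datasource_id = 3                                  # placeholder
api_token     = ENV['GRAFANA_API_TOKEN']           # placeholder
query         = CGI.escape('prometheus_http_requests_total')
now           = Time.now.to_i                      # Prometheus expects seconds

# Instant query: only the query (and optionally a time) is needed - this is
# what worked when replacing query_range by query in the browser.
instant = "/api/v1/query?query=#{query}&time=#{now}"

# Range query: start, end AND step are mandatory; an empty step yields
# {"status":"error","errorType":"bad_data","error":"invalid parameter \"step\" ..."}
range = "/api/v1/query_range?query=#{query}&start=#{now - 3600}&end=#{now}&step=15"

[instant, range].each do |path|
  uri = URI("#{grafana_url}/api/datasources/proxy/#{datasource_id}#{path}")
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "Bearer #{api_token}"
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end
  puts "#{path} -> HTTP #{response.code}"
end
```

So presumably the reporter sends an empty step to query_range, Prometheus answers with the 400 above, and the reporter then fails while parsing the missing result, which would explain the nil error.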

Best regards,

Nikoos

divinity666 commented 2 years ago

This is great information, Nikoos! I am reopening this case.

Indeed there is something wrong with the Prometheus requests in the reporter. I'll investigate.

divinity666 commented 2 years ago

I just released a new software version. This case should be solved.

Please re-evaluate and close the issue if successful.