Improvement request: log queries that are rejected for breaching query limits.
When the limit on the number of data points is exceeded, you get an error message like this:
t=2019-05-15T08:56:20+0000 lvl=info msg="Request failed" logger=tsdb.opentsdb status="413 Request Entity Too Large" body="{\"error\":{\"code\":413,\"message\":\"Sorry, you have attempted to fetch more than our limit of 1000000 data points. Please try filtering using more tags or decrease your time range.\", ...
It would be more useful if, when query limits are exceeded, the offending query were logged alongside this error message so that we could track it down and fix or improve it. The whole company uses Grafana on OpenTSDB, so I can't guess which queries from which of our 1000+ dashboards these are; this logging is essential to improving the situation.
This should also apply to queries that exceed the maximum size, and, once timeouts start working (they don't at the time of writing, see #1635), to queries that exceed them.
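To illustrate the request, here is a minimal sketch in Go (the language of Grafana's OpenTSDB datasource) of what such a log line could contain. The helper name `rejectionLogLine`, the truncation limit, and the key/value layout are all hypothetical, not part of any existing Grafana API:

```go
package main

import "fmt"

// maxLoggedQueryLen caps how much of the query body is written to the log.
// Hypothetical value; a real implementation might make this configurable.
const maxLoggedQueryLen = 512

// rejectionLogLine builds a log message for a query rejected by OpenTSDB,
// including a (possibly truncated) copy of the query body so the failure
// can be traced back to the dashboard that issued it.
func rejectionLogLine(status, query string) string {
	if len(query) > maxLoggedQueryLen {
		query = query[:maxLoggedQueryLen] + "...(truncated)"
	}
	return fmt.Sprintf("msg=%q status=%q query=%q", "Request failed", status, query)
}

func main() {
	line := rejectionLogLine("413 Request Entity Too Large",
		`{"start":"1h-ago","queries":[{"metric":"sys.cpu.user","aggregator":"sum"}]}`)
	fmt.Println(line)
}
```

Even a truncated copy of the query JSON in the existing "Request failed" log line would be enough to grep the dashboards for the offending metric and tags.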