netscaler / netscaler-adc-metrics-exporter

Export metrics from Citrix ADC (NetScaler) to Prometheus

Exception in thread #3

Closed: siarhei-makarevich closed this issue 4 years ago

siarhei-makarevich commented 5 years ago

From time to time (at least once a week) I get an error that causes runaway thread spawning on the exporter host, and management CPU utilization on the NetScaler climbs to 100%.

Error log:

```
Exception in thread Thread-6927:
Traceback (most recent call last):
  File "/usr/lib64/python3.4/socketserver.py", line 617, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib64/python3.4/socketserver.py", line 344, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib64/python3.4/socketserver.py", line 673, in __init__
    self.handle()
  File "/usr/lib64/python3.4/http/server.py", line 401, in handle
    self.handle_one_request()
  File "/usr/lib64/python3.4/http/server.py", line 389, in handle_one_request
    method()
  File "/usr/lib/python3.4/site-packages/prometheus_client/exposition.py", line 153, in do_GET
    self.wfile.write(output)
  File "/usr/lib64/python3.4/socket.py", line 398, in write
    return self._sock.send(b)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.4/threading.py", line 911, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.4/threading.py", line 859, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.4/socketserver.py", line 620, in process_request_thread
    self.handle_error(request, client_address)
  File "/usr/lib64/python3.4/socketserver.py", line 360, in handle_error
    print('-'*40)
OSError: [Errno 5] Input/output error
```
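For context, `BrokenPipeError` (errno 32) means the client that requested the metrics closed its connection before the exporter finished writing the response, which is typically what a scraper does when its timeout expires. The follow-on `OSError: [Errno 5]` comes from socketserver's default `handle_error()`, which on Python 3.4 prints to stdout; if stdout is no longer writable (for example, after the terminal that launched the backgrounded process has gone away), that print itself fails. Below is a minimal sketch of how the broken pipe could be swallowed in a handler, assuming a prometheus_client version that exposes `MetricsHandler` in `exposition.py` (the module visible in the traceback); it is illustrative only, not the exporter's actual code:

```python
# Hypothetical sketch, not the exporter's actual code: serve prometheus_client
# metrics over a threaded HTTP server and ignore BrokenPipeError raised when
# the scraping client disconnects before the response is fully written.
from http.server import HTTPServer
from socketserver import ThreadingMixIn

from prometheus_client.exposition import MetricsHandler  # handler behind do_GET in the traceback


class QuietMetricsHandler(MetricsHandler):
    def handle(self):
        try:
            super().handle()
        except BrokenPipeError:
            # The scraper closed the connection (e.g. Prometheus hit its
            # scrape_timeout); there is nothing useful to send back, so drop
            # the request quietly instead of letting handle_error() run.
            pass


class ThreadedMetricsServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True  # stuck request threads should not outlive the server


if __name__ == "__main__":
    # 9280 matches the --port flag used below; the bind address is an assumption.
    ThreadedMetricsServer(("0.0.0.0", 9280), QuietMetricsHandler).serve_forever()
```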

Rakshith1342 commented 5 years ago

@siarhei-makarevich please provide more details if possible:

  1. The environment in which it was running:

    • Was the exporter running as a container or pod?
    • Exporter version? (present in /exporter/VERSION file)
    • Was Prometheus used with it? What were the scrape configs (especially scrape_timeout and scrape_interval) provided to Prometheus?
  2. Flags which were provided to the exporter.

  3. Full log file if possible.

siarhei-makarevich commented 5 years ago
  1. The environment in which it was running:

    • just a Python process on the host (not a container or pod)
    • Exporter version? (present in /exporter/VERSION file)

      -bash-4.1$ cat version/VERSION
      1.0.4

    • no scrape_timeout or scrape_interval was used
  2. Flags which were provided to the exporter.

    python3 ./netscaler-metrics-exporter.py --target-nsip 10.10.10.10 --username statsuser --password "secret" --secure yes --port 9280 --metrics-file ./metrics.json --log-file /var/log/netscaler-metrics-exporter.log 2>> /var/log/netscaler-metrics-exporter.log &

  3. Full log file if possible.

netscaler-metrics-exporter.log.managementCPU_100.zip - for Management CPU 100% load

netscaler-metrics-exporter.log.threadspawn.zip - for the multiple-thread-spawn case (almost the same at the beginning, but differs at the end of the log)

aroraharsh23 commented 5 years ago

@siarhei-makarevich Can you please confirm whether the same issue is seen on the latest version? I have been trying to reproduce it but have been unable to do so.
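If it helps with reproduction, one way to try to provoke the same condition locally is to open a scrape request and drop the connection before reading the response, which is roughly what a scraper does when its timeout fires. This is purely illustrative; the host, port, and path are assumptions based on the invocation above, and whether `EPIPE` actually fires depends on timing and response size:

```python
# Illustrative reproduction attempt: request the metrics page and close the
# socket without reading the body, so the exporter may hit EPIPE/BrokenPipeError
# while it is still writing the response.
import socket


def scrape_and_abort(host="127.0.0.1", port=9280):
    conn = socket.create_connection((host, port))
    conn.sendall(b"GET /metrics HTTP/1.1\r\n"
                 b"Host: exporter\r\n"
                 b"Connection: close\r\n\r\n")
    conn.close()  # hang up before the exporter finishes sending the metrics


if __name__ == "__main__":
    for _ in range(100):
        scrape_and_abort()
```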

aroraharsh23 commented 5 years ago

@siarhei-makarevich From the backtrace, I see that the exception was raised by the prometheus_client library, not by the exporter itself. I wanted to check how this relates to the NetScaler metrics exporter. Kindly elaborate so that we can help.