prometheus / blackbox_exporter

Blackbox prober exporter
https://prometheus.io
Apache License 2.0

TCP prober is always returning success no matter which port you use. #810

Open LuisMLGDev opened 3 years ago

LuisMLGDev commented 3 years ago

Hi guys, I've configured a TCP check like this:

    modules:
      tcp_connect:
        prober: tcp
        timeout: 5s

Basically, the issue is that I always get a success no matter which port I use or whether the service is really up or not:

    module=tcp_connect target=testing.miservice.link:28666 level=debug msg="Successfully dialed"
    module=tcp_connect target=testing.miservice.link:28555 level=debug msg="Successfully dialed"

Obviously, you need the proper port to get a successful connection. When I try with telnet, it connects or not depending on whether the port is the proper one.

The issue was tested with versions 0.18 and 0.19, and the Blackbox exporter is deployed in an EKS cluster. The same Blackbox instance also contains an HTTP_2xx configuration, and all of those endpoints (http prober) are working fine.

Scrape config:

    - job_name: 'blackbox_tcp'
      metrics_path: /probe
      params:
        module: [tcp_connect]
      static_configs:
        - targets:
          - testing.miservice.link:28555
          - testing.miservice.link:28666
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: monitoring-prometheus-blackbox-exporter:9115
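
One way to get the full probe log is to hit the exporter's /probe endpoint directly with debug=true. A rough sketch, reusing the exporter address from the relabel config above (plain curl against the same URL works just as well):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        // Same exporter address as in the relabel config above.
        q := url.Values{}
        q.Set("module", "tcp_connect")
        q.Set("target", "testing.miservice.link:28555")
        q.Set("debug", "true") // ask the exporter to return the full probe log

        resp, err := http.Get("http://monitoring-prometheus-blackbox-exporter:9115/probe?" + q.Encode())
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // metrics plus the probe's debug log
    }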

I've really spent a lot of time running tests, with no luck. Any suggestions or ideas?

Thanks in advance!

xbglowx commented 3 years ago

I ran into a similar problem, since we use Istio. I am still trying to figure out if there is a way to bypass Istio based on hostnames, so that I don't have to hardcode IPs in Istio's destination rules.

In my case, I switched to the ssh_banner TCP probe, which can be found in the example config.
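
For reference, that module looks roughly like this in the example config (paraphrased from memory, so check example.yml in the repo for the exact contents):

    ssh_banner:
      prober: tcp
      timeout: 5s
      tcp:
        query_response:
        - expect: "^SSH-2.0-"

Because it waits for the SSH banner instead of just completing the TCP handshake, a proxy that blindly accepts connections won't produce a false success.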

LuisMLGDev commented 3 years ago

Hi xbglowx! Thank you for the feedback. I'm using Istio too, so it makes sense that we share the issue. I'm going to look into that and try to find a workaround. ssh_banner won't work for me, as I don't have SSH access to those servers :( I will keep this post updated.

Thanks again!

b-onigam commented 3 years ago

I have the same problem: whether the port is listening or not, it always returns success.

roidelapluie commented 3 years ago

This would mean that Istio is taking over all TCP connections, and what you see is a success from connecting to the MITM Istio proxy. Is that correct?
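
For context: the tcp prober's basic connect check is essentially a plain TCP dial with a timeout, along the lines of the standalone sketch below (an illustration, not the exporter's actual code). Run from inside an Envoy-injected pod, such a dial can succeed even when nothing is listening on the target port, because the sidecar accepts the outbound connection before anything reaches the real destination.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Roughly what the tcp prober's connect check amounts to: a plain
        // dial with a timeout, against one of the ports from the issue.
        conn, err := net.DialTimeout("tcp", "testing.miservice.link:28666", 5*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        // From inside an Envoy-injected pod this line can be reached even for
        // a closed port, because the sidecar accepts the connection itself.
        fmt.Println("dial succeeded:", conn.RemoteAddr())
    }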

LuisMLGDev commented 3 years ago

Yes, once you add the Envoy sidecar, Istio takes over all TCP connectivity. I'm pretty sure there is a way to configure the istio-proxy to pass some particular connections through, but I didn't have time to look into that. For me, the workaround was to add a pod annotation so the Envoy sidecar doesn't get injected. It's not ideal, I know, but it works and for now it's enough.
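
For anyone hitting the same thing, the annotation I mean is the standard Istio injection annotation on the blackbox-exporter pod template (sketch below; the excludeOutboundPorts alternative is something I haven't verified here, but it should let you keep the sidecar and bypass it only for the probed ports):

    # on the blackbox-exporter pod template
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        # untested alternative: keep the sidecar but exclude the probed ports
        # traffic.sidecar.istio.io/excludeOutboundPorts: "28555,28666"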

xbglowx commented 3 years ago

This issue can probably be closed, since it is not a bug in blackbox-exporter. Although maybe there should be a note about using the blackbox TCP prober behind a proxy?

missthesky commented 2 years ago

Same problem, but I'm not using Istio. In my environment I use ipmasq as a DaemonSet. Any ideas?