nodiesorg / nodies_monitoring


Keep original port number in bcexporter & loki config #13

Closed · fortsoftware closed this 2 years ago

fortsoftware commented 2 years ago

Issue ticket number and link

NONE

Description

There is no need to override the port number in the blockchain exporter config file. I am testing with custom ports and realized this through trial and error. As soon as I reverted the port number to the default one, which is the port the process inside the container uses, it started working fine and showing the metrics in the dashboard.

Type of change

Please delete options that are not relevant.

tohoPF commented 2 years ago

We still want to allow port flexibility. So long as you update the server.exporter_endpoints.blockchain entry with the same hostip:port you're launching the client with, it should be functional.

Can you give us steps to replicate the behavior you're experiencing?

fortsoftware commented 2 years ago

> We still want to allow port flexibility. So long as you update the server.exporter_endpoints.blockchain entry with the same hostip:port you're launching the client with, it should be functional.
>
> Can you give us steps to replicate the behavior you're experiencing?

Sure! Let me describe my setup a bit, since there is a possibility that I misunderstood something along the way and could be in the wrong here.

I set up the server and clients using custom port numbers (no default ports). When I launched everything, node + container monitoring worked fine, but the chain monitoring dashboard was not working; it was just empty.

Then I proceeded to check the blockchain_exporter logs, and it seemed to be working fine given that it was logging output like this:

http://<ip:port> sent metrics for chain 0021
Status: synced
Current Height: 15899825
Latest Height: 15899825

Given my lack of knowledge in the area, I went into trial and error mode. I hypothesized that maybe the exporter_port in the config.yml of the blockchain_exporter was supposed to be the container port number (9877) rather than the host port number (which I had set to 19877). So I manually changed it from 19877 to 9877, restarted the container, and everything started to work: the chains dashboard began showing the graphs and data.

Note that my blockchain exporter is still listening on 19877 on the host, but it seems that the exporter_port in the config.yml of the blockchain exporter should be set to the container port number (9877) and not the host port number. At least that's how I got it working on my side.
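
To make the mapping concrete, here is a rough docker-compose sketch of what I mean (the service name and layout are illustrative, not copied from the repo):

services:
  blockchain_exporter:
    ports:
      - "19877:9877"  # host_port:container_port
      # The exporter process binds 9877 inside the container; with custom
      # ports, only the left-hand (host) side of the mapping changes.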

I can share more details and log data if necessary. I also tried a couple of times, with and without my change, just to confirm my understanding, and verified that I needed to apply this change in order to get it working on my side.

fortsoftware commented 2 years ago

Also, according to https://github.com/prometheus/client_python#http, it seems the container port is what should be specified there, given that it is the port the HTTP server listens on when it runs inside the container.
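
So, as I understand it, the exporter's config.yml should keep the container-side port, roughly like this (a sketch showing only the field I edited; other fields omitted):

exporter_port: 9877  # the port the client_python HTTP server binds inside the container
# not 19877: the host port exists only in the docker port mapping, not inside the container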

fortsoftware commented 2 years ago

@tohoPF I found a similar issue with the loki config. For example, I used 13100 as the custom host port for the loki process and I observed that loki-config.yml http_listen_port was set to 13100.

This makes the loki process unreachable since the docker container port mapping for loki is set to host-port:3100 (3100 is hardcoded as the container port), so http_listen_port should also be set to 3100 instead of the custom host port.
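
Concretely, the combination that works for me looks like this (a sketch of the two relevant fragments):

# docker-compose fragment (3100 is hardcoded as the container port):
ports:
  - "13100:3100"  # custom host port : fixed container port

# loki-config.yml fragment (stays on the container side):
server:
  http_listen_port: 3100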

tohoPF commented 2 years ago

If you're doing a manual setup, I highly recommend using our settings.yml and setup.py. The blockchain_exporter's port should be set to the same one that is public facing in docker compose, which by default is 9877. This is how Prometheus is able to access the exporter and scrape the metrics.
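
For example, the scrape job Prometheus ends up with looks roughly like this (a sketch; the job name and target host are illustrative):

scrape_configs:
  - job_name: blockchain
    static_configs:
      # hostip:port taken from server.exporter_endpoints.blockchain,
      # i.e. the public-facing port from docker compose (9877 by default)
      - targets: ["<client_host_ip>:9877"]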

As for Loki, http_listen_port is where loki is listening. Currently, the only thing that pushes to loki is Promtail for logs. Promtail uses a push model and is able to push logs to that port. This is why we use the public port and not the internal port.
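
That is, Promtail on the client pushes to Loki's public endpoint, roughly like this (a sketch; /loki/api/v1/push is Loki's standard push path, host and port are placeholders):

clients:
  - url: http://<server_host_ip>:3100/loki/api/v1/push  # public loki port on the server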

Can you retry with the latest staging branch using settings.yml and setup.py and tell me if you still face the same issues?

fortsoftware commented 2 years ago

> If you're doing a manual setup, I highly recommend using our settings.yml and setup.py. The blockchain_exporter's port should be set to the same one that is public facing in docker compose, which by default is 9877. This is how Prometheus is able to access the exporter and scrape the metrics.
>
> As for Loki, http_listen_port is where loki is listening. Currently, the only thing that pushes to loki is Promtail for logs. Promtail uses a push model and is able to push logs to that port. This is why we use the public port and not the internal port.
>
> Can you retry with the latest staging branch using settings.yml and setup.py and tell me if you still face the same issues?

I am using settings.yml and setup.py. I am not doing any custom configuration other than specifying custom ports. I run setup.py first and then start the containers on the client/server side with the docker compose command. The only way it works is if I leave the container port in the app-specific settings for each of the services.

I did try again and ran everything from scratch. To be honest, I have custom iptables rules for firewall purposes, so there is a possibility that I have something wrong there, but for me it only works if I specify the container port in the config files referenced in this PR.

Sorry to bug you here, but could you try custom ports and let me know if it works for you? Note that for blockchain_exporter you need to rebuild the image/container: if you previously ran with default ports and don't rebuild, it will stick with the default port and appear to work (this is what happened to me in the past). To be 100% sure, docker exec into the container and confirm that the config has the custom port specified.

For reference, here is what my settings.yml file looks like; I just add 10000 to every port number.

clients:
  node_exporter:
    port: 19100
  cadvisor:
    port: 18080
  promtail:
    port: 19080
    loki_endpoint: <redacted>
    loki_port: 13100
    log_root_path: /var/log/nginx
  blockchain_exporter:
    port: 19877
    alias_enabled: True
    alias_name: LOCALHOST

server:
  host_ip: <redacted>
  loki:
    port: 13100
  prometheus:
    port: 19090
    exporter_endpoints:
      cadvisor: ["<redacted>:18080"]
      blockchain: ["<redacted>:19877"]
      node: ["<redacted>:19100"]
  grafana:
    port: 13000
  minio:
    port: 19000
  promtail:
    port: 19080
  alerts:
    interval: 60s
    contactpoints:
      slack:
        enabled: False
        url: https://hooks.slack.com/services/your_slack_webhook_string
      discord:
        enabled: True
        url: <redacted>
      teams:
        enabled: False
        url: https://ms_teams_url
      email:
        enabled: False
        addresses: ["me@example.com", "you@example.com"]
      webhook:
        enabled: False
        url: https://endpoint_url
        httpMethod: POST # <string> options: POST, PUT
        username: my_username
        password: my_password
        authorization_scheme: my_bearer
        authorization_credentials: my_credentials
        maxAlerts: '10'

fortsoftware commented 2 years ago

Wait, now that I am thinking about this, let me try again in a different environment with a minimal iptables config to see how it goes for me.

nodiesBlade commented 2 years ago

Thanks @fortsoftware, we'll be replicating this in our environment today.

tohoPF commented 2 years ago

pushing changes for this fix rn

tohoPF commented 2 years ago

@fortsoftware Hey, I pushed some changes and tested. Let me know if the new commit on the staging branch is working better for you.

fortsoftware commented 2 years ago

> @fortsoftware Hey, I pushed some changes and tested. Let me know if the new commit on the staging branch is working better for you.

Hello @tohoPF! Sorry for the late reply here. I got very busy during the past couple of days. I am going to test again tomorrow and let you know how it goes. Thanks for the changes!

fortsoftware commented 2 years ago

@tohoPF it works fine now! Thanks! I will be closing this PR since it is not necessary anymore.

fortsoftware commented 2 years ago

Changes not necessary anymore.