cybertec-postgresql / pgwatch2

PostgreSQL metrics monitor/dashboard
BSD 3-Clause "New" or "Revised" License

instances defined in config.yaml do not load into ui tool at :8090 and prevent adding instances through the UI #695

Closed laurencefass closed 1 year ago

laurencefass commented 1 year ago

I am trying to load an instance using config.yaml. Not only does defining an instance via PW2_CONFIG not work, it also prevents me from manually entering an instance with the exact same details through the UI.

Summary:

These are my settings.

docker-compose.yml

    pgwatch2:
        image: cybertec/pgwatch2-postgres:1.10.0
        container_name: pgwatch2
        restart: unless-stopped
        volumes:
            - ./local_docker/pgwatch2/config.yaml:/etc/pgwatch2-configmap/config.yaml
        # environment: 
        #     - PW2_CONFIG=/etc/pgwatch2-configmap/config.yaml
        ports:
            - "3010:3000"
            - "8090:8080"

I'm running these pg queries on the database after bootstrapping. Monitoring works only if I exclude PW2_CONFIG and define my instance manually in the UI:

CREATE ROLE pgwatch2 WITH LOGIN PASSWORD 's********1';
ALTER ROLE pgwatch2 CONNECTION LIMIT 3;
GRANT pg_monitor TO pgwatch2;   -- v10+
GRANT CONNECT ON DATABASE events_service TO pgwatch2;
GRANT USAGE ON SCHEMA public TO pgwatch2; -- NB! pgwatch doesn't necessarily require using the public schema though!
GRANT EXECUTE ON FUNCTION pg_stat_file(text) to pgwatch2; -- needed by the wal_size metric
CREATE EXTENSION IF NOT EXISTS pg_stat_statements; -- needed by the pg_stat_statements metric
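
A quick way to sanity-check these grants (my own verification step, not from the pgwatch2 docs) is to connect as the monitoring role directly, using the host/port/dbname from the compose setup above:

    # log in as the monitoring role and confirm pg_stat_statements is readable
    psql "host=postgres port=5432 dbname=events_service user=pgwatch2" \
        -c "SELECT count(*) FROM pg_stat_statements;"

If this succeeds, the role, grants and extension are in place.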

config.yaml (based on this template; I've tried to disable as much as possible):

- unique_name: events_service_db_1 # an arbitrary name for the monitored DB
  dbtype: postgres # defaults to postgres if not specified
  host: postgres
  port: 5432 # defaults to 5432 if not specified
  dbname: events_service
  user: pgwatch2
  password: s********1
  sslmode: disable # supported options: disable, require, verify-ca, verify-full
  # libpq_conn_str: postgresql://user@localhost:5432/postgres  # overrides single connect params. no pwd encryption possible
  stmt_timeout: 10 # in seconds
  is_superuser: true # setting to true will try to auto-create all metric fetching "helpers"
  preset_metrics: full # from list of presets defined in "metrics/preset-configs.yaml"
  custom_metrics: full # if both preset and custom are specified, custom wins
  preset_metrics_standby: minimal # optional metrics configuration for standby / replica state, v1.8.1+
  custom_metrics_standby: minimal 
  dbname_include_pattern: # regex to filter databases to actually monitor for the "continuous" modes
  dbname_exclude_pattern:
  is_enabled: true
#   group: default # just for logical grouping of DB hosts or for "sharding", i.e. splitting the workload between many gatherer daemons
#   custom_tags: # option to add arbitrary tags (Influx / Postgres storage only) for every stored data row
#       aws_instance_id: i-0af01c0123456789a # for example to fetch data from some other source onto a same Grafana graph
#   sslrootcert: ''
#   sslcert: ''
#   sslkey: ''

pashagolub commented 1 year ago

Hello.

According to the manual:

one can also deploy the pgwatch2 gatherer daemons more easily in a de-centralized way, by specifying monitoring configuration via YAML files. In that case there is no need for a central Postgres “config DB”.

That means one cannot use a YAML config file and the Postgres config database simultaneously. Since the web UI is a frontend for the config database, there is no way to use it in YAML mode either.

The straightforward solution might be to update the YAML configuration manually and force pgwatch2 to reread it.
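
For example, with the compose service above (container_name: pgwatch2) and PW2_CONFIG left enabled, editing the bind-mounted file and restarting the container forces a re-read (a minimal sketch, not the only way to trigger it):

    # adjust the bind-mounted YAML on the host...
    vi ./local_docker/pgwatch2/config.yaml
    # ...then restart the container so the gatherer rereads the file on startup
    docker restart pgwatch2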