grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki

Promtail config for PostgreSQL logs #11098

Open; prensgold opened this issue 1 year ago

prensgold commented 1 year ago

I want to capture PostgreSQL logs with Promtail and display them with Loki in Grafana. I have seen many screenshots on the internet where warning, info, debug, and error logs are colored by Loki. However, I could not get them colored in my setup.

Although there is an error in the log below, it is not shown with an error label in the GUI.

Can you guide me? What do I need to fix or configure?


Here is my Promtail config YAML:

scrape_configs:

wilfriedroset commented 1 year ago

Depending on the PostgreSQL version you are using, you can log in JSON, which is simpler to process with Grafana Agent and removes the need to parse the log line with a regex. I believe this was introduced in PostgreSQL 15.
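
As a rough, untested sketch of what that could look like (my own example, not from this issue), assuming log_destination = 'jsonlog' and the jsonlog field names error_severity, user, dbname and message, the regex stage is replaced by a json stage:

pipeline_stages:
  - json:
      expressions:
        level: error_severity    # PostgreSQL severity, e.g. LOG, ERROR, FATAL
        user: user
        database: dbname
        message: message
  - labels:
      level:                     # expose the severity so Grafana can use it

The extracted level would still need the normalization described below before Grafana colors it as expected.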

Regarding the log volume colors, you need to make sure that the level field name is correctly defined. You also need to adjust the value of the level field to what is expected by Grafana, see: https://github.com/grafana/grafana/blob/main/packages/grafana-data/src/types/logs.ts#L15C2-L34
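
As a minimal, untested sketch of that second point (my own example; a fuller mapping appears later in this thread): rewrite the extracted level value into something Grafana recognizes and attach it as a label.

  - template:
      source: level
      template: '{{ regexReplaceAllLiteral "LOG|NOTICE" .Value "info" }}'   # PostgreSQL-specific severities -> a value Grafana knows
  - template:
      source: level
      template: '{{ ToLower .Value }}'                                      # ERROR -> error, WARNING -> warning, ...
  - labels:
      level: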

Side note: a couple of years ago I was able to achieve what you are trying to do with Promtail. See: https://github.com/wilfriedroset/remote-storage-wars/blob/master/playbook/group_vars/meta-role_patroni_server/promtail.yml

The code is outdated but you might be able to reuse some parts. I hope it helps.

umutoguz commented 1 year ago

Hi wilfriedroset, we changed the config-promtail.yml file to match our own PostgreSQL logs. How can we change the value of the level field to what Grafana expects?

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:

scrape_configs:

prensgold commented 1 year ago

Hi Wilfried,

I prepared the following scrape_configs definition based on the config you shared, and I can now see info and error levels colored correctly. Thank you very much.


In the PostgreSQL log I share below, I want to mark the two lines containing AUDIT: and duration: as warnings. However, PostgreSQL writes these lines with the LOG level, and in my scrape_configs I map LOG to info. How can I create a definition that marks these LOG: AUDIT: and LOG: duration: lines as warnings?

2023-11-02 20:18:52 +03 [2393279]: [20-1] db=express,user=soft_app,app=PostgreSQL JDBC Driver,client=192.192.12.12 192.192.12.12(23210) LOG: duration: 1629.515 ms execute : select * from test.mobile_device_info where user_id=$2 order by CREATED_DATE desc limit 1) ORDER BY LASTUPDATEDATE DESC NULLS LAST
2023-11-02 20:19:26 +03 [2381250]: [14-1] db=postgres,user=postgres,app=psql,client=[local] [local] LOG: AUDIT: SESSION,9,1,DDL,CREATE TABLE,TABLE,public.deneme,create table deneme ( id int);,
2023-11-02 20:19:27 +03 [2381250]: [15-1] db=postgres,user=postgres,app=psql,client=[local] [local] LOG: AUDIT: SESSION,10,1,DDL,DROP TABLE,TABLE,public.deneme,drop table deneme;,

scrape_configs:

wilfriedroset commented 1 year ago

I reckon you can repeat the template block several times in the pipeline_stages list. If I'm correct, you can therefore map all PostgreSQL log levels to the "standard" log levels Grafana expects when building the log volume panel.

This is an example from my old code, so you might need to adjust it to your liking:

      # need to format PostgreSQL default log level to have grafana color properly
      # DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC
      # Grafana: https://github.com/grafana/grafana/blob/main/packages/grafana-data/src/types/logs.ts#L9-L27
      - template:
          source: level
          template: '{{ regexReplaceAllLiteral "DEBUG.*" .level "debug" }}'
      - template:
          source: level
          template: '{{ regexReplaceAllLiteral "INFO|NOTICE|LOG" .level "info" }}'
      - template:
          source: level
          template: '{{ ToLower .level }}'
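
Regarding the AUDIT:/duration: lines specifically, here is an untested sketch of my own (it assumes a job="postgresql" label, as in the config further down): a match stage with a line filter can override the level for just those lines, placed after the stages that extract and normalize level.

      - match:
          selector: '{job="postgresql"} |~ "AUDIT:|duration:"'
          stages:
            - template:
                source: level
                template: 'warning'   # force these LOG lines to show up as warnings
            - labels:
                level:                # re-apply the label with the overridden value
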
kazeyoba commented 9 months ago

Well, it's working :)

server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
client:
  url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: postgresql
    static_configs:
      - labels:
          job: postgresql
          host: localhost
          __path__: /var/log/postgresql/*.log

    pipeline_stages:
      - match:
          selector: '{job="postgresql"}'
          stages:
            - multiline:
                firstline: '^\d{4}-\d{2}-\d{2}'
                max_wait_time: 3s
                # default is 128, we might need more depending on the size of the query
                max_lines: 256
            - regex:
                expression: '^(?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ [A-Z]+) \[[0-9]+\] (?P<user>[a-zA-Z0-9_]+)@(?P<database>[a-zA-Z0-9_]+) (?P<level>[A-Z]+):  (?P<message>.*)$'
            - template:
                source: level
                template: '{{ regexReplaceAllLiteral "DETAIL|DEBUG|INFO|NOTICE|LOG" .level "info" }}'
            - labels:
                level:
                user:
                database:
                message:
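
A side note on this config, not something raised in the thread: keeping message in the labels stage turns every distinct log line into its own label value and therefore its own Loki stream, which Loki generally discourages because of label cardinality. Since the message is already stored as the log line itself, a lower-cardinality variant of the final stage would be:

            - labels:
                level:
                user:
                database:
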
wilfriedroset commented 9 months ago

Depending on your PostgreSQL version, I would suggest logging in JSON; this might save you time down the road.

kazeyoba commented 9 months ago

We're currently in the study phase and haven't yet implemented a log aggregation solution.

alexitheodore commented 3 months ago

@kazeyoba for the YAML that you got working, can you provide the postgres log_line_prefix that you used? That should correspond to the regex expression line, right?
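
For what it's worth, a guess rather than a confirmed answer: that regex looks like it matches the Debian/Ubuntu packaging default of log_line_prefix = '%m [%p] %q%u@%d ', i.e. a millisecond timestamp with time zone abbreviation, the backend PID in brackets, and user@database before the severity.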