fluent / fluent-bit

Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
https://fluentbit.io
Apache License 2.0

Issue filtering with new GeoIP2 module #3109

Closed - proffalken closed 3 years ago

proffalken commented 3 years ago

Bug Report

Describe the bug: When the GeoIP2 filter is enabled, I cannot get Fluent Bit to augment my logs in the way I would expect.

To Reproduce

Configure Fluent-bit as follows:

[INPUT]
    Name                syslog
    Listen              0.0.0.0
    Port                1514
    Parser              syslog-rfc3164
    Mode                udp

[FILTER]
    Name parser
    Match *
    Key_Name message
    Parser iptables # Uses the parser from https://github.com/fluent/fluent-bit/pull/3108

[FILTER]
    Name geoip2
    Match *
    Database /usr/share/GEOIP2/GEOIP2city.mmdb
    Lookup_key source
    Record fb_city         source    %{city.names.en}
    Record fb_latitude     source    %{location.latitude}
    Record fb_longitude    source    %{location.longitude}
    Record fb_country      source    %{country.iso_code}
    Record fb_country_name source    %{country.names.en}
    Record fb_postal_code  source    %{postal.code}
    Record fb_region_code  source    %{subdivisions.0.iso_code}
    Record fb_region_name  source    %{subdivisions.0.names.en}

[OUTPUT]
    Name    forward
    Match   *
    Host    127.0.0.1
    Port    24224

Expected behavior

All log data should be augmented with fb_<field name> fields, with the values populated as appropriate.
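For illustration, an augmented record would carry the new keys alongside the original ones; a hypothetical example (values invented for 8.8.8.8):

{"source"=>"8.8.8.8", "fb_city"=>"Mountain View", "fb_country"=>"US", "fb_country_name"=>"United States", "fb_latitude"=>37.4056, "fb_longitude"=>-122.0775}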

Screenshots

Note: Fields circled in red are from Fluent Bit, fields circled in green are from Fluentd with the GeoIP filter. Both solutions point to the same GEOIP2city.mmdb file, and use the filtered source field as the IP address to look up against.

[Screenshot: fluent-bit-geoip2]

Your Environment

Additional context: Trying to get the same functionality from Fluent Bit that I currently get from Fluentd, so I can remove Fluentd from my logging stack!

frenchviking commented 3 years ago

Hello,

I'm having the same issue.

Fluent Bit v1.7 deployed as a DaemonSet on Kubernetes.

The MaxMind database is hosted on a persistent volume and mounted into the pods. The DB is managed and updated by MaxMind's geoipupdate tool, also deployed on the cluster (image maxmindinc/geoipupdate). Mounting the volume on a pod with a terminal, I can confirm that the DB files are available.

Here is the configmap storing the fluent config.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE input-traefikee.conf
    @INCLUDE filter-geoip.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE filter-traefikee.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Exclude_Path      /var/log/containers/kube-prod-controller*.log,/var/log/containers/kube-prod-proxy*.log,
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  input-traefikee.conf: |
    [INPUT]
        Name              tail
        Tag               traefikee.*
        Path              /var/log/containers/kube-prod-controller*.log,/var/log/containers/kube-prod-proxy*.log
        Parser            docker
        DB                /var/log/flb_traefik.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  1

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

  filter-traefikee.conf: |
    [FILTER]
        Name                kubernetes
        Match               traefikee.*
        Merge_Log           On
        Merge_Log_Key       traefik

  filter-geoip.conf: |
    [FILTER]
        Name                geoip2
        Match               traefikee.*
        Database            /geoip/GeoLite2-City.mmdb
        Lookup_key          traefik.ClientHost
        Record traefik.country ClientHost %{country.names.en}
        Record traefik.isocode ClientHost %{country.iso_code}
        Record traefik.latitude ClientHost %{location.latitude}
        Record traefik.longitude ClientHost %{location.longitude}

  output-elasticsearch.conf: |
        [OUTPUT]
            Name            es
            Match           *
            Host            eshost
            Port            19200
            HTTP_User       ${FLUENT_ELASTICSEARCH_USER}
            HTTP_Passwd     ${FLUENT_ELASTICSEARCH_PASSWORD}
            tls             on
            tls.verify      off
            Index           kube-
            Logstash_prefix kube
            Logstash_Format On
            Replace_Dots    On
            Retry_Limit     False

  parsers.conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On

    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name cri
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S

The daemonset manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 6
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.7
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        #ELASTIC CONFIG
        - name: FLUENT_ELASTICSEARCH_USER
          valueFrom: 
            secretKeyRef:
              name: fluentd-elastic-secret
              key: username
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: fluentd-elastic-secret
              key: password
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
        - name: geoip
          mountPath: /geoip
      terminationGracePeriodSeconds: 0
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      - name: geoip
        persistentVolumeClaim:
          claimName: geoip
          readOnly: true
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"

Volume manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: geoip
  labels:
    type: azure-file
  namespace: kube-logging
spec:
  storageClassName: azurefile
  capacity:
    storage: 1G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-storage-geoip-secret
    secretNamespace: kube-logging
    shareName: geoipdb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: geoip
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      name: geoip
  resources:
    requests:
      storage: 1G
  accessModes:
    - ReadWriteOnce
  storageClassName: azurefile
  volumeName: geoip

Not shown in the screenshot, but the traefik.ClientHost field used as Lookup_key in filter-geoip.conf contains the client IP (X-Forwarded-For/X-Real-Ip).

[Screenshot at 2021-02-25 16-30-36]

agup006 commented 3 years ago

Editing my last remark - I made a stupid mistake in the configuration where I didn't specify the lookup key in the Record field, and now this is working as expected for me. @frenchviking I'm wondering if you still need the traefik. prefix as part of each Record's lookup key.
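For reference, the Record option takes the lookup key as its second argument (Record <new_key> <lookup_key> <pattern>); a hypothetical before/after of that mistake:

Record country %{country.names.en}           <-- broken: no lookup key
Record country host %{country.names.en}      <-- fixed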

@proffalken are you seeing any errors in the log file? Potentially not being able to read the DB?

Configuration:

[SERVICE]
    flush        1
    log_level    info 
    parsers_file parsers.conf

[INPUT]
    name              tail
    path              /var/log/apache/*.log
    parser            apache
    tag               apache
    read_from_head    true

[FILTER]
    name geoip2
    match apache
    database /home/anugup/GeoLite2-City.mmdb
    lookup_key host
    Record country host %{country.names.en}
    Record isocode host %{country.iso_code}
    Record city    host %{city.names.en}
    Record latitude host %{location.latitude}
    Record longitude host %{location.longitude}
    Record postal_code host %{postal.code}
    Record region-code host %{subdivisions.0.iso_code}
    Record region-name host %{subdivisions.0.names.en}

[OUTPUT]
    name stdout
    format json
    match *

JSON record

{"date":1614192785.0,"host":"24.32.25.22","user":"-","method":"GET","path":"/apps/cart.jsp?appID=3790","code":"200","size":"4968","referer":"http://carey.info/","agent":"Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10_5_0) AppleWebKit/5351 (KHTML, like Gecko) Chrome/15.0.814.0 Safari/5351","country":"United States","isocode":"US","city":"Truckee","latitude":39.3385,"longitude":-120.1729,"postal_code":"96161","region-code":"CA","region-name":null}

proffalken commented 3 years ago

@agup006 The only issue in the logs is a warning, as follows:

Feb 26 07:16:38 hivemind fluent-bit[3514400]: [2021/02/26 07:16:38] [ warn] [filter:geoip2:geoip2.1] cannot get value: The lookup path does not match the data (key that doesn't exist, array index bigger than the array, expected array or map where none exists)

Sending the data to STDOUT suggests that this is due to the source field not being present, so that makes sense.

I'm wondering if it's to do with where I'm calling the parser - I notice that you're calling the "apache" parser in the input and then matching against that, whereas I'm parsing with the syslog parser in the input, then filtering with the iptables parser I created, and then filtering again with the GeoIP2 filter.

My assumption was that data would flow through the filters from top to bottom, but I'm now wondering if this assumption is correct?

My config is as follows:

[INPUT]
    Name    syslog
    Listen  0.0.0.0
    Port    1514
    Parser  syslog-rfc3164 # FIRST PARSER
    Mode    udp
    tag     unifi

[FILTER]
    Name parser # SECOND PARSER
    Match *
    Key_Name message
    Parser iptables
    Tag unifi

[FILTER]
    Name geoip2 # GEOIP LOOKUP
    Match unifi
    Database /usr/share/GEOIP2/GEOIP2city.mmdb
    Lookup_key source
    Record fb_city         source    %{city.names.en}
    # Additional "Record" statements

The output from the above is a blank fb_city field, so I'm wondering if it's trying to look up against the original message data and failing, rather than the post-filter data that has the correct fields?

frenchviking commented 3 years ago

@agup006 I tried the lookup_key with and without the "traefik" prefix, with no success. I also changed the processing order of my filters as follows:

@INCLUDE input-kubernetes.conf
@INCLUDE input-traefikee.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE filter-traefikee.conf
@INCLUDE filter-geoip.conf
@INCLUDE output-elasticsearch.conf

If the filters are parsed and the events processed in this order, all records should have the traefik prefix, so now I'm using:

    [FILTER]
        Name                geoip2
        Match               traefikee.*
        Database            /geoip/GeoLite2-City.mmdb
        Lookup_key          traefik.ClientHost
        Record country traefik.ClientHost %{country.names.en}
        Record isocode traefik.ClientHost %{country.iso_code}
        Record latitude traefik.ClientHost %{location.latitude}
        Record longitude traefik.ClientHost %{location.longitude}

Still empty fields and no warn/error output. Since it's working for you, I guess there is a misconfiguration on my side.

EDIT: I should specify that I need the traefik prefix because my original record contains a "log" field with the nested JSON containing the ClientHost field: @timestamp:Feb 26, 2021 @ 11:41:28.988 log:{"ClientHost":"xxx.xxx.xxx.xxx","ClientUsername":"-","Duration":5148601,"OriginContentSize":994,"OriginDuration":5013198,[...]10:41:28Z"} stream:stdout time:Feb 26, 2021 @ 11:41:28.988 traefik.ClientHost:82.65.118.252 traefik.ClientUsername:- traefik.Duration:5148601 traefik.OriginContentSize:994 traefik.OriginDuration:5013198 traefik.OriginStatus:200 traefik.RequestContentSize:0 [...]

frenchviking commented 3 years ago

Digging with log_level debug, I see a weird message while reading the log containing my traefik events: [input:tail:tail.1] cannot read info from: /var/log/containers/kube-prod-proxy*.log. Not sure what to make of it, since the traefik logs are processed.

While I can see that my filters 1 & 2 are loaded, I don't see any log line regarding the third filter, the geoip one.

[2021/02/26 11:01:20] [ info] [filter:kubernetes:kubernetes.0] API server connectivity OK
[2021/02/26 11:01:20] [ info] [filter:kubernetes:kubernetes.1] https=1 host=kubernetes.default.svc port=443
[2021/02/26 11:01:20] [ info] [filter:kubernetes:kubernetes.1] local POD info OK
[2021/02/26 11:01:20] [ info] [filter:kubernetes:kubernetes.1] testing connectivity with API server...
[2021/02/26 11:01:20] [ info] [filter:kubernetes:kubernetes.0] local POD info OK

Since the records are added, the filter must be loaded. I'm at a dead end, for now.

agup006 commented 3 years ago

Another thought @frenchviking is that it's nested JSON, and the lookup key might need to reflect the nested structure.

agup006 commented 3 years ago

My assumption was that data would flow through the filters from top to bottom, but I'm now wondering if this assumption is correct?

This assumption is correct - quick visual: link.calyptia.com/1j8

Could we try commenting out the GeoIP filter to see what the data looks like in stdout?

proffalken commented 3 years ago

My assumption was that data would flow through the filters from top to bottom, but I'm now wondering if this assumption is correct?

This assumption is correct - quick visual: link.calyptia.com/1j8

Could we try commenting out the GeoIP filter to see what the data looks like in stdout?

Thanks for this @agup006 , always nice to know that your assumptions are valid! :rofl:

The output without the geoip module is as follows:

[18] unifi: [1614587169.000000000, {"rule_chain"=>"WAN_LOCAL", "rule_name"=>"default", "accept_or_drop"=>"D", "in_interface"=>"pppoe0", "source"=>"45.155.205.155", "dest"=>"<REDACTED>", "pkt_len"=>"40", "pkt_tos"=>"0x00", "pkt_prec"=>"0x00", "pkt_ttl"=>"244", "pkt_id"=>"36920", "pkg_frg"=>" ", "protocol"=>"TCP", "source_port"=>"53737", "dest_port"=>"22247", "proto_window_size"=>"1024", "pkt_res"=>"0x00", "pkt_type"=>"SYN", "pkg_urgency"=>"0"}]

I can see the source field there, so what am I missing in the geoip config?

frenchviking commented 3 years ago

So I have commented out all the kubernetes input and filter, and the elasticsearch output, replacing them with a stdout config. This is what comes out (reformatted for readability):

[
    {
        "log": "{\"ClientHost\":\"xxx.xxx.xxx.xxx\",\"ClientUsername\":\"-\",\"Duration\":6093421,\"OriginContentSize\":1051,\"OriginDuration\":5973818,\"OriginStatus\":200,\"RequestContentSize\":0,\"RequestHost\":\"mydomain.com\",\"RequestMethod\":\"GET\",\"RequestPath\":\"/\",\"RequestPort\":\"-\",\"RequestProtocol\":\"HTTP/2.0\",\"RequestScheme\":\"https\",\"RouterName\":\"admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd\",\"ServiceAddr\":\"10.244.3.9:80\",\"ServiceName\":\"admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd\",\"entryPointName\":\"https\",\"level\":\"info\",\"msg\":\"\",\"time\":\"2021-03-01T08:20:53Z\"}\n",
        "stream": "stdout",
        "time": "2021-03-01T08:20:53.910408886Z",
        "traefik": {
            "ClientHost": "xxx.xxx.xxx.xxx",
            "ClientUsername": "-",
            "Duration": 6093421,
            "OriginContentSize": 1051,
            "OriginDuration": 5973818,
            "OriginStatus": 200,
            "RequestContentSize": 0,
            "RequestHost": "mydomain.com",
            "RequestMethod": "GET",
            "RequestPath": "/",
            "RequestPort": "-",
            "RequestProtocol": "HTTP/2.0",
            "RequestScheme": "https",
            "RouterName": "admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd",
            "ServiceAddr": "10.244.3.9:80",
            "ServiceName": "admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd",
            "entryPointName": "https",
            "level": "info",
            "msg": "",
            "time": "2021-03-01T08:20:53Z"
        }
    }
]

Everything looks good, doesn't it?

Here is the version with the geoip filter:

[
    {
        "log": "{\"ClientHost\":\"xxx.xxx.xxx.xxx\",\"ClientUsername\":\"-\",\"Duration\":1048021,\"OriginContentSize\":1052,\"OriginDuration\":887617,\"OriginStatus\":200,\"RequestContentSize\":0,\"RequestHost\":\"wmydomain.com\",\"RequestMethod\":\"GET\",\"RequestPath\":\"/\",\"RequestPort\":\"-\",\"RequestProtocol\":\"HTTP/2.0\",\"RequestScheme\":\"https\",\"RouterName\":\"admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd\",\"ServiceAddr\":\"10.244.4.10:80\",\"ServiceName\":\"admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd\",\"entryPointName\":\"https\",\"level\":\"info\",\"msg\":\"\",\"time\":\"2021-03-01T08:32:37Z\"}\n",
        "stream": "stdout",
        "time": "2021-03-01T08:32:37.769095306Z",
        "traefik": {
            "ClientHost": "xxx.xxx.xxx.xxx",
            "ClientUsername": "-",
            "Duration": 1048021,
            "OriginContentSize": 1052,
            "OriginDuration": 887617,
            "OriginStatus": 200,
            "RequestContentSize": 0,
            "RequestHost": "wmydomain.com",
            "RequestMethod": "GET",
            "RequestPath": "/",
            "RequestPort": "-",
            "RequestProtocol": "HTTP/2.0",
            "RequestScheme": "https",
            "RouterName": "admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd",
            "ServiceAddr": "10.244.4.10:80",
            "ServiceName": "admin-traefikwhoamiingressroutetls-7d7186ba388c551fb42b@kubernetescrd",
            "entryPointName": "https",
            "level": "info",
            "msg": "",
            "time": "2021-03-01T08:32:37Z"
        },
        "geoloc.city": null,
        "geoloc.country": null,
        "geoloc.isocode": null,
        "geoloc.latitude": null,
        "geoloc.longitude": null,
        "geoloc.postal_code": null,
        "geoloc.region-code": null,
        "geoloc.region-name": null
    }
]

agup006 commented 3 years ago

@frenchviking, thanks for providing that info - the main reason I think this is failing is that traefik.ClientHost won't access the nested field.

I'm unsure if the GeoIP2 filter supports the record accessor syntax, which would use the following for the lookup key: traefik[ClientHost] (https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/record-accessor). Are you able to try that out? Alternatively, you could add a nest filter to lift that value to the top level and run the GeoIP2 query on it: https://docs.fluentbit.io/manual/pipeline/filters/nest
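A minimal, untested sketch of the nest approach (assuming the merged fields actually sit under a nested traefik map, as your record suggests):

[FILTER]
    Name          nest
    Match         traefikee.*
    Operation     lift
    Nested_under  traefik
    Add_prefix    traefik_

[FILTER]
    Name        geoip2
    Match       traefikee.*
    Database    /geoip/GeoLite2-City.mmdb
    Lookup_key  traefik_ClientHost
    Record country traefik_ClientHost %{country.names.en}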

proffalken commented 3 years ago

ok, shotgun-debugging has led me to the following:

The "fault" is raised at https://github.com/fluent/fluent-bit/blob/master/plugins/filter_geoip2/geoip2.c#L259, however I think it's actually failing at https://github.com/fluent/fluent-bit/blob/master/plugins/filter_geoip2/geoip2.c#L233 because the entry appears to be blank when trying to print it to the log.

My C is very rusty, but I'm wondering if there's a way to add more debug statements both here and in the libmaxmind code to help us troubleshoot this?

For example, if I add the following code at lines 220 and 260 respectively, I get the output below:

...
220     flb_warn("Looking for IP: %s", ip);
...
260  if (status != MMDB_SUCCESS) {
261             flb_plg_warn(ctx->ins, "looking for entry: %s", entry);
262             flb_plg_warn(ctx->ins, "cannot get value: %s", MMDB_strerror(status));
263             msgpack_pack_nil(packer);
264             continue;
265         }

output

[2021/03/01 20:53:17] [ warn] Looking for IP: 8.8.8.8
[2021/03/01 20:53:17] [ warn] [filter:geoip2:geoip2.0] looking for entry: 
[2021/03/01 20:53:17] [ warn] [filter:geoip2:geoip2.0] cannot get value: The lookup path does not match the data (key that doesn't exist, array index bigger than the array, expected array or map where none exists)

Note that I have updated my config file to ship dummy data as follows (this now mirrors the example):

[SERVICE]
    flush        5
    daemon       Off
    log_level    trace
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020
    storage.metrics on
[INPUT]
    Name   dummy
    Dummy  {"remote_addr": "8.8.8.8"}
[FILTER]
    Name geoip2
    Match *
    Database geodb.mmdb
    Lookup_key remote_addr
    Record fbcity        remote_addr %{city.names.en}
    Record fblatitude    remote_addr %{location.latitude}
    Record fblongitude   remote_addr %{location.longitude}
    Record fbcountry     remote_addr %{country.iso_code}
    Record fbcountryname remote_addr %{country.names.en}
    Record fbpostalcode  remote_addr %{postal.code}
    Record fbregioncode  remote_addr %{subdivisions.0.iso_code}
    Record fbregionname  remote_addr %{subdivisions.0.names.en}
[OUTPUT]
    name                   stdout
    match                  *

And the visualiser (amazing tool btw!) shows the following (link.calyptia.com/wbv):

[Screenshot: 2021-03-01_21-05]

proffalken commented 3 years ago

OK, I've gone back to the beginning.

Fluent Bit v1.7.1 with the configuration from the docs works; I've no idea why it didn't before:

# Config File
[SERVICE]
    flush        5
    daemon       Off
    log_level    warn
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020
    storage.metrics on

[INPUT]
    Name dummy
    dummy {"source": "8.8.8.8"}

[FILTER]
    Name geoip2
    Match *
    Database /var/lib/GeoIP/GeoLite2-City.mmdb
    Lookup_key source
    Record country source %{country.names.en}
    Record isocode source %{country.iso_code}

[OUTPUT]
    name                   stdout
    match                  *

Output:

Fluent Bit v1.7.1
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[0] dummy.0: [1614672980.880655118, {"source"=>"8.8.8.8", "country"=>"United States", "isocode"=>"US"}]
[1] dummy.0: [1614672981.880636167, {"source"=>"8.8.8.8", "country"=>"United States", "isocode"=>"US"}]
[2] dummy.0: [1614672982.880637144, {"source"=>"8.8.8.8", "country"=>"United States", "isocode"=>"US"}]
[3] dummy.0: [1614672983.880641827, {"source"=>"8.8.8.8", "country"=>"United States", "isocode"=>"US"}]

As soon as I update it to the desired config with the iptables filter, it stops working for everything apart from my own public IP address:

# IP Tables Config
[SERVICE]
    flush        5
    daemon       Off
    log_level    warn
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020
    storage.metrics on

[INPUT]
    Name                syslog
    Listen              0.0.0.0
    Port                1514
    Parser              syslog-rfc3164
    Mode                udp
    tag                 unifi

[FILTER]
    Name parser
    Match *
    Key_Name message
    Parser iptables

[FILTER]
    Name geoip2
    Match *
    Database /var/lib/GeoIP/GeoLite2-City.mmdb
    Lookup_key source
    Record country source %{country.names.en}
    Record isocode source %{country.iso_code}

[OUTPUT]
    name                   stdout
    match                  *

output (grepping for the external interface):

[20] unifi: [1614673119.000000000, {"rule_chain"=>"WAN_LOCAL", "rule_name"=>"default", "accept_or_drop"=>"D", "in_interface"=>"pppoe0", "source"=>"51.148.x.x", "dest"=>"224.0.0.1", "pkt_len"=>"32", "pkt_tos"=>"0x00", "pkt_prec"=>"0xC0", "pkt_ttl"=>"1", "pkt_id"=>"12496", "pkg_frg"=>" ", "protocol"=>"2", "country"=>"United Kingdom", "isocode"=>"GB"}]
[2] unifi: [1614673131.000000000, {"rule_chain"=>"WAN_LOCAL", "rule_name"=>"default", "accept_or_drop"=>"D", "in_interface"=>"pppoe0", "source"=>"92.63.196.13", "dest"=>"51.148.x.x", "pkt_len"=>"40", "pkt_tos"=>"0x00", "pkt_prec"=>"0x00", "pkt_ttl"=>"247", "pkt_id"=>"63509", "pkg_frg"=>" ", "protocol"=>"TCP", "source_port"=>"41637", "dest_port"=>"3374", "proto_window_size"=>"1024", "pkt_res"=>"0x00", "pkt_type"=>"SYN", "pkg_urgency"=>"0", "country"=>nil, "isocode"=>nil}]

If I run the mmdblookup tool against the other source IP, it finds the data:

mmdblookup --file /var/lib/GeoIP/GeoLite2-City.mmdb --ip 92.63.196.13

  {
    "continent": 
      {
        "code": 
          "EU" <utf8_string>
        "geoname_id": 
          6255148 <uint32>
        "names": 
          {
            "de": 
              "Europa" <utf8_string>
            "en": 
              "Europe" <utf8_string>
            "es": 
              "Europa" <utf8_string>
            "fr": 
              "Europe" <utf8_string>
            "ja": 
              "ヨーロッパ" <utf8_string>
            "pt-BR": 
              "Europa" <utf8_string>
            "ru": 
              "Европа" <utf8_string>
            "zh-CN": 
              "欧洲" <utf8_string>
          }
      }
    "country": 
      {
        "geoname_id": 
          2017370 <uint32>
        "iso_code": 
          "RU" <utf8_string>
        "names": 
          {
            "de": 
              "Russland" <utf8_string>
            "en": 
              "Russia" <utf8_string>
            "es": 
              "Rusia" <utf8_string>
            "fr": 
              "Russie" <utf8_string>
            "ja": 
              "ロシア" <utf8_string>
            "pt-BR": 
              "Rússia" <utf8_string>
            "ru": 
              "Россия" <utf8_string>
            "zh-CN": 
              "俄罗斯联邦" <utf8_string>
          }
      }
    "location": 
      {
        "accuracy_radius": 
          1000 <uint16>
        "latitude": 
          55.738600 <double>
        "longitude": 
          37.606800 <double>
        "time_zone": 
          "Europe/Moscow" <utf8_string>
      }
    "registered_country": 
      {
        "geoname_id": 
          2017370 <uint32>
        "iso_code": 
          "RU" <utf8_string>
        "names": 
          {
            "de": 
              "Russland" <utf8_string>
            "en": 
              "Russia" <utf8_string>
            "es": 
              "Rusia" <utf8_string>
            "fr": 
              "Russie" <utf8_string>
            "ja": 
              "ロシア" <utf8_string>
            "pt-BR": 
              "Rússia" <utf8_string>
            "ru": 
              "Россия" <utf8_string>
            "zh-CN": 
              "俄罗斯联邦" <utf8_string>
          }
      }
  }

frenchviking commented 3 years ago

@frenchviking, thanks for providing that info - the main reason I think this is failing is that traefik.ClientHost won't access the nested field.

I'm unsure if the GeoIP2 filter supports the record accessor syntax, which would use the following for the lookup key: traefik[ClientHost] (https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/record-accessor). Are you able to try that out? Alternatively, you could add a nest filter to lift that value to the top level and run the GeoIP2 query on it: https://docs.fluentbit.io/manual/pipeline/filters/nest

So I tried the following for my geoip2 filter: one with the "log" value in case it was not replaced by the merge log key, and one with the merged "traefik" key.

Lookup_key $log['ClientHost']
Record geoloc.city $log['ClientHost'] %{city.names.en}

and

Lookup_key $traefik['ClientHost']
Record geoloc.city $traefik['ClientHost'] %{city.names.en}

Still no luck. And I'm not able to get the nest filter to lift the keys up with the following:

  filter-traefikee-lift.conf: |
    [FILTER]
        Name                nest
        Match               traefikee.*
        Operation           lift
        Nested_under        log

I tested it with only the tail INPUT and then the nest FILTER. Could it be because my original JSON has backslashes?

agup006 commented 3 years ago

@proffalken The only difference between the 1.7.1 config and the previous one looks to be the DB path. If we take one of the JSON outputs from the iptables parser and set that as the new dummy input, does that work as well?
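A minimal sketch of that test, reusing the source value from the record you pasted earlier (record trimmed; only the source field matters for the lookup):

[INPUT]
    Name   dummy
    Dummy  {"rule_chain": "WAN_LOCAL", "source": "92.63.196.13", "protocol": "TCP"}

The geoip2 filter and stdout output can stay exactly as in the dummy test above.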

agup006 commented 3 years ago

Could it be because my original json has backslashes ?

@frenchviking potentially - we could use a decoder to convert the escaped JSON into real JSON before sending it off to the GeoIP filter.

https://docs.fluentbit.io/manual/pipeline/parsers/decoders#getting-started
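A minimal, untested sketch of wiring the decoders into the docker parser already in your ConfigMap, assuming the escaped JSON sits in the log field:

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
    # Unescape the string in "log", then hand it to the next decoder as JSON
    Decode_Field_As escaped log do_next
    Decode_Field    json    log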

proffalken commented 3 years ago

@proffalken The only difference between the 1.7.1 config and the previous one looks to be the DB path. If we take one of the JSON outputs from the iptables parser and set that as the new dummy input, does that work as well?

@agup006 you may have stumbled on something here - I don't seem to get the output from the iptables parser as JSON; I get it as what looks like a hash:

[13] unifi: [1614757243.000000000, {"rule_chain"=>"WAN_LOCAL", "rule_name"=>"default", "accept_or_drop"=>"D", "in_interface"=>"pppoe0", "source"=>"89.248.174.3", "dest"=>"<redacted>", "pkt_len"=>"40", "pkt_tos"=>"0x00", "pkt_prec"=>"0x00", "pkt_ttl"=>"248", "pkt_id"=>"54321", "pkg_frg"=>" ", "protocol"=>"TCP", "source_port"=>"60305", "dest_port"=>"9002", "proto_window_size"=>"65535", "pkt_res"=>"0x00", "pkt_type"=>"SYN", "pkg_urgency"=>"0", "pri"=>"4", "time"=>"Mar  3 07:40:43", "host"=>"USG01", "ident"=>"kernel", "message"=>"[WAN_LOCAL-default-D]IN=pppoe0 OUT= MAC= SRC=89.248.174.3 DST=<redacted> LEN=40 TOS=0x00 PREC=0x00 TTL=248 ID=54321 PROTO=TCP SPT=60305 DPT=9002 WINDOW=65535 RES=0x00 SYN URGP=0 "}]

This is with the following config:

[SERVICE]
    flush        5
    daemon       Off
    log_level    warn
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020
    storage.metrics on

[INPUT]
    Name                syslog
    Listen              0.0.0.0
    Port                1514
    Parser              syslog-rfc3164
    Mode                udp
    tag                 unifi

[FILTER]
    Name parser
    Match *
    Key_Name message
    Parser iptables
    Preserve_Key True
    Reserve_Data True

[OUTPUT]
    Name stdout
    Match *

#[FILTER]
#    Name geoip2
#    Match *
#    Database /var/lib/GeoIP/GeoLite2-City.mmdb
#    Lookup_key source
#    Record fbcountry source %{country.names.en}

[OUTPUT]
    Name        forward
    Match         *
    Host          127.0.0.1
    Port           24224

I'm now wondering if I'm configuring the filter correctly, given that the key/value pairs are separated by => rather than :?
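The => rendering appears to be just how the stdout plugin prints msgpack by default; switching stdout to a JSON format, as agup006 did above, should show the real structure - a minimal sketch:

[OUTPUT]
    Name   stdout
    Match  *
    Format json_lines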

frenchviking commented 3 years ago

Could it be because my original JSON has backslashes?

@frenchviking potentially - we could use a decoder to convert the escaped JSON into real JSON before sending it off to the GeoIP filter.

https://docs.fluentbit.io/manual/pipeline/parsers/decoders#getting-started

I finally got it working with a dedicated parser FILTER, specifying "log" as the Key_Name.

  filter-traefikee.conf: |
    [FILTER]
        Name                parser
        Parser              traefik
        Match               traefikee.*
        Key_Name            log

Thank you for your help on this!

proffalken commented 3 years ago

@agup006 after our chat last night I've tried adding in an extra filter, and now it's working - I'd love to know why!

[SERVICE]
    flush        5
    daemon       Off
    log_level    warn
    parsers_file parsers.conf
    plugins_file plugins.conf
    http_server  Off
    http_listen  0.0.0.0
    http_port    2020
    storage.metrics on

[INPUT]
    Name                syslog
    Listen              0.0.0.0
    Port                1514
    Parser              syslog-rfc3164
    Mode                udp
    tag                 unifi

[FILTER]
    Name parser
    Parser iptables
    Match *
    Key_Name message

###### REWRITE THE `source` FIELD TO BE CALLED `remote_addr` #######
[FILTER]
    Name modify
    Match *
    Rename source remote_addr

###### COPY THE MMDB TO /etc/fluent-bit/ INSTEAD OF /var/lib/GeoIP2 (the MaxMind updater location) #####
[FILTER]
    Name geoip2
    Match *
    Database GeoLite2-City.mmdb
    Lookup_key remote_addr
    Record country remote_addr %{country.names.en}
    Record isocode remote_addr %{country.iso_code}
    Record latitude remote_addr %{location.latitude}
    Record longitude remote_addr %{location.longitude}

[OUTPUT]
    Name stdout
    Match *

This results in the following:

[2] unifi: [1614841296.000000000, {"rule_chain"=>"WAN_LOCAL", "rule_name"=>"default", "accept_or_drop"=>"D", "in_interface"=>"pppoe0", "remote_addr"=>"80.82.78.82", "dest"=>"<REDACTED>", "pkt_len"=>"40", "pkt_tos"=>"0x00", "pkt_prec"=>"0x00", "pkt_ttl"=>"248", "pkt_id"=>"42474", "pkg_frg"=>" ", "protocol"=>"TCP", "source_port"=>"45589", "dest_port"=>"13483", "proto_window_size"=>"1024", "pkt_res"=>"0x00", "pkt_type"=>"SYN", "pkg_urgency"=>"0", "country"=>"United Kingdom", "isocode"=>"GB", "latitude"=>51.496400, "longitude"=>-0.122400}]

I'll run this as a test config for the next few days and see what happens - thanks again for all your help!

github-actions[bot] commented 3 years ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 3 years ago

This issue was closed because it has been stalled for 5 days with no activity.

Owemeone commented 3 years ago

@agup006 could you please respond to the "why" question? The workaround works, but it's ugly to have to do this for all fields.