fluent / fluentd

Fluentd: Unified Logging Layer (project under CNCF)
https://www.fluentd.org
Apache License 2.0

fluentd forward log to k8s through ingress-controller #2736

Closed kiddingl closed 3 years ago

kiddingl commented 4 years ago

Check CONTRIBUTING guideline first and here is the list to help us investigate the problem.

Is your feature request related to a problem? Please describe.
I have a k8s cluster with fluentd and fluent-bit running on it. They can collect all the cluster logs and output them to Elasticsearch. But we also have some applications deployed outside the k8s cluster, and I want to collect those applications' logs by having an outside fluentd forward data to the fluentd inside the k8s cluster. This transmission should be secure (with TLS and auth).

I saw this document from the official website:

<match debug.**>
  @type forward
  transport tls
  tls_cert_path /path/to/fluentd.crt # Set the path to the certificate file.
  tls_verify_hostname true           # Set false to ignore cert hostname.
  <server>
    host 192.168.1.2
    port 24224
  </server>
</match>

My endpoint is a server name, not an IP, and I can't forward logs to the cluster fluentd from the outside fluentd.

Describe the solution you'd like

The host field should be set to the server name instead of an IP address.

I want to forward logs from the outside fluentd to the fluentd inside the cluster.

My idea: [image]

kiddingl commented 4 years ago

My English is poor.

cosmo0920 commented 4 years ago

You should expose the Fluentd service port running in k8s to outside the cluster.

An example case study for exposing a service: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

But note that the Fluentd team does not use the issue tracker as a support forum: https://github.com/fluent/fluentd/blob/master/CONTRIBUTING.md

Could you use the mailing list or the community Slack channel next time? Thanks.
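Applied to this setup, the linked tutorial's approach boils down to exposing the forward port through a Service rather than an Ingress. A minimal sketch; the `log` namespace and `app: fluentd` / `release: fluentd` labels are taken from the Service posted in this thread, everything else (the name, the LoadBalancer type) is an assumption:

apiVersion: v1
kind: Service
metadata:
  # Hypothetical name; exposes the in-cluster Fluentd forward port directly.
  name: fluentd-external
  namespace: log
spec:
  # Use NodePort instead if no cloud load balancer is available.
  type: LoadBalancer
  selector:
    app: fluentd
    release: fluentd
  ports:
  - name: forward
    port: 24224
    targetPort: fluentd
    protocol: TCP

The outside fluentd would then point its `<server>` host at the load balancer's external IP or DNS name on port 24224.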

kiddingl commented 4 years ago

I created an Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"fluentd-ingress","namespace":"log"},"spec":{"rules":[{"host":"fluentd.staging.xfreeapp.com","http":{"paths":[{"backend":{"serviceName":"fluentd","servicePort":24224},"path":"/"}]}}]}}
  creationTimestamp: "2019-12-13T07:55:04Z"
  generation: 1
  name: fluentd-ingress
  namespace: log
  resourceVersion: "5216638"
  selfLink: /apis/extensions/v1beta1/namespaces/log/ingresses/fluentd-ingress
  uid: dfa910b1-1d7d-11ea-969c-525400277932
spec:
  rules:
  - host: fluentd.staging.xfreeapp.com
    http:
      paths:
      - backend:
          serviceName: fluentd
          servicePort: 24224
        path: /
status:
  loadBalancer:
    ingress:
    - {}

My fluentd service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "24231"
    prometheus.io/scrape: "true"
  creationTimestamp: "2019-11-26T08:09:41Z"
  labels:
    app: fluentd
    chart: fluentd-0.1.5
    heritage: Tiller
    release: fluentd
  name: fluentd
  namespace: log
  resourceVersion: "1230062"
  selfLink: /api/v1/namespaces/log/services/fluentd
  uid: 18e2205c-1024-11ea-9ab4-525400516f5c
spec:
  clusterIP: 10.107.245.64
  ports:
  - name: syslog
    port: 514
    protocol: UDP
    targetPort: syslog
  - name: fluentd
    port: 24224
    protocol: TCP
    targetPort: fluentd
  - name: prometheus
    port: 24231
    protocol: TCP
    targetPort: prometheus
  selector:
    app: fluentd
    release: fluentd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

My fluentd proxy config (running in a Docker container):

<source>
  @type syslog
  port  5140
  bind  0.0.0.0
  tag syslog
</source>
<match syslog.**>
  @type rewrite_tag_filter
  <rule>
    key     ident
    pattern ^nginx_access_log$
    tag     ngx-access
  </rule>
  <rule>
    key     ident
    pattern ^nginx_error_log$
    tag     ngx-err
  </rule>
</match>

<match ngx-access>
  @type forward
  <server>
    host fluentd.staging.xfreeapp.com
    port 80
  #  host 172.16.46.109
  #  port 24224
  </server>
  <buffer>
    @type file
    path  /var/log/fluent/ngx-access
    chunk_limit_size  8MB
    flush_interval  5s
    flush_mode  interval
    flush_thread_count  2
    overflow_action  drop_oldest_chunk
    queue_limit_length  256
    retry_forever
    retry_max_interval  30
  </buffer>
</match>
<match ngx-err>
  @type forward
  <server>
    host fluentd.staging.xfreeapp.com
    port 80
   # host 172.16.46.109
   # port 24224
  </server>
  <buffer>
    @type file
    path  /var/log/fluent/ngx-err
    chunk_limit_size  8MB
    flush_interval  5s
    flush_mode  interval
    flush_thread_count  2
    overflow_action  drop_oldest_chunk
    queue_limit_length  256
    retry_forever
    retry_max_interval  30
  </buffer>
</match>

Part of my nginx config:

    log_format test_json escape=json '{"host":"$host",'
       '"remote_ip":"$remote_addr",'
       '"time":"$time_iso8601",'
       '"the_real_ip":"hello",'
       '"method":"$request_method",'
       '"uri":"$uri",'
       '"args":"$args",'
       '"server_protocol":"$server_protocol",'
       '"status":"$status",'
       '"body_bytes_sent":"$body_bytes_sent",'
       '"referer":"$http_referer",'
       '"user_agent":"$http_user_agent",'
       '"x_forwarded_for":"$http_x_forwarded_for",'
       '"request_time":"$request_time",'
       '"upstream_response_time":"$upstream_response_time",'
       '"proxy_upstream_name":"hello",'
       '"upstream_addr":"$upstream_addr"'
       '}';

    access_log  syslog:server=172.16.1.11:5140,tag=nginx_access_log test_json;
    error_log   syslog:server=172.16.1.11:5140,tag=nginx_error_log;

My fluentd docker logs:

2019-12-16 04:21:57 +0000 [info]: starting fluentd-1.2.4 pid=8 ruby="2.3.3"
2019-12-16 04:21:57 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby2.3", "-Eascii-8bit:ascii-8bit", "/usr/local/bin/fluentd", "--under-supervisor"]
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.11'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '2.11.11'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-geoip' version '1.3.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.0.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-mongo' version '1.2.1'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.0.1'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-redis_list_poller' version '1.1.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.1.1'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-sampling-filter' version '1.1.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-sql' version '1.0.0'
2019-12-16 04:21:58 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.1'
2019-12-16 04:21:58 +0000 [info]: gem 'fluentd' version '1.2.4'
2019-12-16 04:21:58 +0000 [info]: adding match pattern="fluent.**" type="null"
2019-12-16 04:21:58 +0000 [info]: adding match pattern="syslog.**" type="rewrite_tag_filter"
2019-12-16 04:21:58 +0000 [info]: #0 adding rewrite_tag_filter rule: ident [#<Fluent::PluginHelper::RecordAccessor::Accessor:0x007f2a258d4860 @keys="ident">, /^nginx_access_log$/, "", "ngx-access"]
2019-12-16 04:21:58 +0000 [info]: #0 adding rewrite_tag_filter rule: ident [#<Fluent::PluginHelper::RecordAccessor::Accessor:0x007f2a258cf720 @keys="ident">, /^nginx_error_log$/, "", "ngx-err"]
2019-12-16 04:21:58 +0000 [info]: adding match pattern="ngx-access" type="forward"
2019-12-16 04:21:58 +0000 [info]: #0 adding forwarding server 'fluentd.staging.xfreeapp.com:80' host="fluentd.staging.xfreeapp.com" port=80 weight=60 plugin_id="object:3f95145425e8"
2019-12-16 04:21:58 +0000 [info]: adding match pattern="ngx-err" type="forward"
2019-12-16 04:21:58 +0000 [info]: #0 adding forwarding server 'fluentd.staging.xfreeapp.com:80' host="fluentd.staging.xfreeapp.com" port=80 weight=60 plugin_id="object:3f95146e1f84"
2019-12-16 04:21:58 +0000 [info]: adding source type="syslog"
2019-12-16 04:21:58 +0000 [info]: #0 starting fluentd worker pid=13 ppid=8 worker=0
2019-12-16 04:21:58 +0000 [info]: #0 delayed_commit_timeout is overwritten by ack_response_timeout
2019-12-16 04:21:58 +0000 [info]: #0 delayed_commit_timeout is overwritten by ack_response_timeout
2019-12-16 04:21:58 +0000 [info]: #0 listening syslog socket on 0.0.0.0:5140 with udp
2019-12-16 04:21:58 +0000 [info]: #0 fluentd worker is now running worker=0

But I got nothing.

kiddingl commented 4 years ago

If I set the fluentd forward host to the pod IP, I can get data. If I set it to the ingress address, I get nothing.

kiddingl commented 4 years ago

@cosmo0920

cosmo0920 commented 4 years ago

If I set it to the ingress address, I get nothing.

Yeah, Kubernetes Ingress does not support raw TCP for now: https://github.com/kubernetes/kubernetes/issues/23291. Your Fluentd configuration wants to connect via TCP with the forward protocol, but Ingress routes external requests at the HTTP layer. That's why the Ingress cannot handle the external forward protocol used by the fluentd client.

toan-hf commented 4 years ago

You can use the NGINX ingress controller; it supports TCP & UDP: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
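Note that the linked guide works by patching the ingress-nginx controller's ConfigMap, not by creating an Ingress resource. A sketch of what it would look like for this thread's `log/fluentd` service, assuming the guide's default `tcp-services` ConfigMap name and `ingress-nginx` namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> <namespace>/<service>:<service port>
  "24224": "log/fluentd:24224"

The controller's own Service also has to publish port 24224 for traffic to reach it from outside.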

cosmo0920 commented 4 years ago

@toan-hf The NGINX ingress configuration in that guide says its kind is Service and its type is LoadBalancer, so it is effectively a Service, not an Ingress. @kiddingl wants to know how to set up Fluentd load balancing with the Ingress kind, not the Service kind.

ezkol commented 4 years ago

What about a "half-manual" approach with node ports, static IPs, and fluentd service discovery? https://docs.fluentd.org/service_discovery/srv
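On the client side, the SRV-based discovery from that page would look roughly like this (a sketch only; the `logging.example.com` zone and the `fluentd` SRV service name are made-up placeholders, and the DNS zone must publish matching SRV records for the node ports):

<match ngx-access>
  @type forward
  <service_discovery>
    @type srv
    # Resolves _fluentd._tcp.logging.example.com and forwards to the
    # hosts/ports returned, refreshing the server list periodically.
    service fluentd
    proto tcp
    hostname logging.example.com
  </service_discovery>
</match>

This sidesteps the Ingress limitation entirely: the forward protocol stays plain TCP and the SRV records carry the node IPs and ports.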

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.

github-actions[bot] commented 3 years ago

This issue was automatically closed because it remained stale for 30 days.