roffe / kube-gelf

CoreOS Kubernetes logging to Graylog
MIT License

Cron and Message Consumption #15

Open mhobotpplnet opened 6 years ago

mhobotpplnet commented 6 years ago

My messages get read, but about 5-10 minutes later it stops doing anything. If I restart the kube-gelf pods it starts processing again. Is that something the cron is supposed to handle? We are on Kubernetes 1.8, though.

By the way, I am getting this in the kube-gelf logs:

2018-03-09 15:39:52 +0000 [warn]: #0 failed to flush the buffer. retry_time=11 next_retry_seconds=2018-03-09 15:44:47 +0000 chunk="566fc78cf54353f411fdc9ef374c0fd9" error_class=Encoding::UndefinedConversionError error="\"\\xE2\" from ASCII-8BIT to UTF-8"
  2018-03-09 15:39:52 +0000 [warn]: #0 suppressed same stacktrace
roffe commented 6 years ago

No, this looks like an error while processing your logs. Do they by chance contain non-UTF-8 characters?
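
One quick way to check is to scan the files fluentd tails for lines that are not valid UTF-8. The path below is an assumption (the usual /var/log/containers/*.log symlinks on a node), not something taken from this repo's config:

# Run on a node. In a UTF-8 locale, `grep -axv '.*'` prints only lines that
# are not valid UTF-8; -l reduces the output to the offending file names,
# i.e. the containers producing the bad bytes.
LC_ALL=C.UTF-8 grep -laxv '.*' /var/log/containers/*.log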

mhobotpplnet commented 6 years ago

Maybe, I'll take a peek at it. Good question.

mhobotpplnet commented 6 years ago

Hey @roffe, going back to this: where would I be able to find those characters? This logs pretty much everything, so if they are in there, they would be coming from some container I have yet to track down.

ojaoferreira commented 5 years ago

I have the same problem. Any solution?

roffe commented 5 years ago

@johnjohnofficial No, I do not use this myself any longer, so any help from the users would be appreciated.

ojaoferreira commented 5 years ago

@roffe Thanks for answering.

@mhobotpplnet wrote: "So my messages get read and then about 5-10 minutes later it does nothing, if I restart kube-gelf it starts processing, is that part of the cron that it should be doing?"

roffe commented 5 years ago

The cron was added to work around the following bugs, not because processing stops:

in_tail prevents Docker from removing containers: https://github.com/fluent/fluentd/issues/1680

in_tail only removes untracked file positions during the startup phase, which means the pos_file keeps growing until a restart when you tail lots of files with a dynamic path setting: https://github.com/fluent/fluentd/issues/1126
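
In other words, the cron just restarts fluentd periodically so that handles to deleted container logs are released and the pos_file is compacted on startup. Roughly along these lines (illustrative only; the actual schedule and command baked into the image may differ):

# Illustrative crontab entry, not the one shipped in the image: kill the
# fluentd process so its supervisor (or the container runtime) restarts it,
# releasing in_tail's stale file handles and compacting the pos_file.
0 */6 * * * pkill -f fluentd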

AlexanderSBorisov commented 5 years ago

You need to install the fluent-plugin-record-modifier gem and place an additional filter into fluent.conf:

<filter **>
 @type record_modifier
 char_encoding utf-8
</filter>
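
If the gem is not already baked into the image, it can be installed with fluentd's gem wrapper (a sketch; the exact invocation depends on how the image installs fluentd):

# Assumes fluentd's bundled wrapper is on the PATH; with a plain Ruby
# install, `gem install fluent-plugin-record-modifier` works as well.
fluent-gem install fluent-plugin-record-modifier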
roffe commented 5 years ago

Thanks @AlexanderSBorisov, your changes have been merged.

jardiacaj commented 5 years ago

Hey, I'm seeing the following error message when the pods start:

[error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Unknown filter plugin 'record_modifier'. Run 'gem search -rd fluent-plugin' to find plugins"

It seems to me that this is related to the configuration change? I'm using the current master (0f77dd4), which uses the image roffe/kube-gelf:v1.2.1.

voxmaster commented 5 years ago

I tried roffe/kube-gelf:latest and it fixes the Unknown filter plugin 'record_modifier' error, but the next error appears:

/var/lib/gems/2.3.0/gems/fluentd-1.5.0/lib/fluent/config/basic_parser.rb:92:in `parse_error!': expected end of line at fluent.conf line 15,16 (Fluent::ConfigParseError)
 14:   <entry>
 15:     field_map {"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}
     ----------------^
 16:     fields_strip_underscores true

Update: quoting the value, field_map '{"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}', fixed the problem, but I'm not sure if this is right.
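
For context, this is roughly what the systemd source looks like with the value quoted; only the <entry> block comes from the error output above, the surrounding directives are assumptions:

<source>
  @type systemd
  # path/tag/pos_file settings omitted; they are not shown in the error above
  <entry>
    # quoting the hash literal is what avoids the parse error on fluentd 1.5
    field_map '{"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}'
    fields_strip_underscores true
  </entry>
</source>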

nagaland88 commented 5 years ago

I have the same issue with kube-gelf:latest. Quoting the field_map does fix the error, but I think nothing gets pushed to Graylog.

roffe commented 5 years ago

@jardiacaj try the :latest tag, it should have record_modifier in it

@voxmaster I have added the quoted string to the latest image now

@nagaland88 do you have the protocol env variable set in the config? (GELF_PROTOCOL)

nagaland88 commented 5 years ago

Yes:

GELF_HOST: graylog-svc
GELF_PORT: 12201
GELF_PROTOCOL: udp

but I still can't see any messages in Graylog. What is also strange is that if I try to view the logs of any of the kube-gelf pods, they are all empty.
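
One thing worth checking from inside one of the kube-gelf pods is whether a hand-written GELF message reaches Graylog over UDP at all (the pod name is a placeholder, and bash being available in the image is an assumption):

# Sends a minimal uncompressed GELF message to the configured host/port.
# If it shows up in Graylog, the input and network path are fine and the
# problem is on the fluentd side; if not, check the service and the input.
kubectl exec <kube-gelf-pod> -- bash -c \
  'echo "{\"version\":\"1.1\",\"host\":\"probe\",\"short_message\":\"gelf udp test\"}" > /dev/udp/graylog-svc/12201'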