Open ryanwitt opened 6 years ago
actually fixed in https://github.com/advantageous/systemd-cloud-watch/pull/16
Can confirm this issue occurs on Ubuntu 18.04 LTS with v0.2.1.
I tried reverting to v0.2.0 to avoid the issue, but unfortunately it's still there.
I am working around this issue by adding StandardOutput=null to the [Service] section of my unit file, /lib/systemd/system/journald-cloudwatch-logs.service.
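A minimal sketch of that workaround as a systemd drop-in override (assuming the unit name above), which avoids editing the packaged unit file directly:

```
# /etc/systemd/system/journald-cloudwatch-logs.service.d/override.conf
[Service]
# Discard the binary's stdout so the "batches sent" lines never
# reach the journal; stderr is still captured.
StandardOutput=null
```

After adding the drop-in, run `systemctl daemon-reload` and restart the service. Note this also discards any other stdout output from the binary, not just the noisy lines.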
I'm working around the issue by piping the output through grep -v. E.g.:
```diff
 # journald-cloudwatch-logs.service
-ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf
+ExecStart=/bin/sh -c "/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf | grep --line-buffered -v 'Systemd CloudWatch: batches sent'"
```
Unlike @donovan's approach, this still allows the binary to log other entries, which can be useful if it logs an error, for example.
The --line-buffered flag on grep is required because grep otherwise block-buffers its output when writing to a pipe, holding lines back for a while, so you won't see any logs. (This is different from when grep logs directly to stdout on a terminal.)
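To illustrate the inverted match used in the ExecStart workaround above (the sample input lines are hypothetical), grep -v drops only the noisy "batches sent" lines while letting everything else through:

```shell
# The inverted match removes only lines containing the noisy message;
# anything else the binary prints (e.g. errors) still gets through.
printf 'Systemd CloudWatch: batches sent 42\nerror: something failed\n' \
  | grep --line-buffered -v 'Systemd CloudWatch: batches sent'
# prints: error: something failed
```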
I've been testing this tool out for production use and came across what looks like an annoying bug in the latest released binary (v0.2.1). Several minutes into a run, the worker starts spewing logs at ~12KiB/s.

Fortunately, I don't think these correspond to CloudWatch API calls, since I don't see them in the network traffic while these messages are being printed. However, if the logger is set up inside a systemd unit, this could cause lots of feedback spam to CloudWatch Logs. 🌊
The batches sent number seems to me like it might reflect the number of journal entries since the logger started.

Environment: Ubuntu 16.04 LTS
Installation:
IAM Policy
Any ideas on what this could be before I start learning Go?