bjaglin opened this issue 10 years ago
This is the only solution that came to mind for me as well. @bjaglin was this your final thought on the matter or did you find an alternative?
Unless I'm mistaken, this container currently doesn't log anything.
I prefer to have each container log to stdout, then I can choose to send that on to a logging container if I want to. The (hackish) way I ended up doing it is here: https://gist.github.com/nicot/6c680c626156f842444f
Just start the container and mount /dev/log from the host into the container. With an haproxy config that logs to /dev/log it works fine except that systemd-journald doesn't associate it with the haproxy unit (if you're using systemd units to control it).
```console
docker run -p 80:80 -v /dev/log:/dev/log haproxy
```

haproxy.cfg:

```
global
    log /dev/log local2
    ...
```
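For reference, a fuller sketch of a haproxy.cfg using the bind-mounted `/dev/log` approach might look like the following; the `local2` facility comes from the comment above, but the `defaults` section and timeout values are assumptions for illustration:

```
global
    log /dev/log local2

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```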
Any chance we'll see this implemented?
Docker already has its own docker logs facility, and administrators expect things to show up there. Therefore, /dev/log seems like a poor replacement. Any chance proper logging to stdout could be added?
I tried a complicated workaround with rsyslogd in the container, but while it seems to work more or less (I'm getting regular log messages, at least some of them) I'm running into this message quite often:
```
[ALERT] 087/151909 (47) : sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #2 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #2 failed: Resource temporarily unavailable (errno=11)
```
The suggested solution here http://comments.gmane.org/gmane.comp.web.haproxy/4716 isn't really useful because it suggests changing a kernel option on the host which shouldn't be necessary to run a docker container properly.
Therefore it would be really, really helpful if proper stdout logging could be added to haproxy itself.
See also https://github.com/docker/docker/issues/13726 for minor enlightenment.
One option is to supply remote syslog endpoints per container (marking up the messages with source host as appropriate) - logstash with multiple syslog inputs may be useful. This might even be automated.
It seems reasonable to assume anyone using this container will need to supply their own haproxy.cfg file. Perhaps the best solution is to mark up the shipped config file with comments explaining that the end user should either rebuild this container with a syslog forwarding daemon or change the IP/port to a network endpoint such as the above suggestion.
Do I understand this correctly that we should again manually add some daemon for logging things in the container/image? Proper logging isn't some sort of "optional" feature, therefore I think documentation isn't the solution here and the container should really already have this integrated.
As a side note, just redirecting everything else from docker to the systemd logs as well seems weird, since while then everything would be in one place it's kind of an odd solution because just one single container isn't really configured to use the docker logging infrastructure properly. It seems like the wrong end to address this..
However, maybe it might be preferable/easiest to convince the haproxy folks to simply add stdout logging instead of attempting all those logging daemon workarounds..
+1 for stdout/stderr logging - so tools based on the Docker API like logspout or sematext-docker-agent could get the logs from the Docker API, and it works with syslog or any other log driver. The user could specify the log driver settings and forward logs to dedicated logging services.
@JonasT and @megastef you need to argue this with the haproxy authors. This container is merely a wrapper around what they ship. Their product logs to `syslog`, and this container builds with that addressed as `127.0.0.1`, hence it is effectively lost without intervention. Replacing `haproxy.cfg` needs to happen for each user anyway, so that's when the change to locally available `syslog` resources happens naturally.

The container could ship with a `syslog` daemon; this might be a useful improvement. No doubt some will argue this would be bloat too...
+1 for shipping with a syslog daemon, if that means that `docker logs` would work as expected.
In all honesty I know basically nothing about syslog, and the thought of managing my logs out-of-band with all my other docker services makes me a bit uncomfortable. I can imagine there are many who are in my same boat. Maybe there could be tags with and without an embedded syslog?
any update on this? how to do logging properly?
Run `rsyslog` on your host. Most distributions install this by default with a configuration providing `/dev/log` and recording to `/var/log/syslog`. Launch your containers (such as `haproxy`) bind-mounting `/dev/log` in, and allow your application (`haproxy`) to write to that. This adheres to decades-old Unix standards which are unlikely to change soon.
Once you move beyond managing a single host, `rsyslog` on each host can simply copy the logs it receives from the applications within those containers to a remote aggregation point (`logstash` is an example). We have logstash parsing the messages into Elasticsearch indices - works well.
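As a sketch of that aggregation step, a single forwarding rule in each host's rsyslog.conf is enough; the aggregator hostname below is a placeholder:

```
# Forward everything this host's rsyslog receives to a remote aggregator.
# A single @ means UDP; use @@ for TCP delivery.
*.* @logstash.example.com:514
```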
Remember many applications log limited messages to `STDOUT` and dramatic errors to `STDERR`, which are the only outputs captured by `docker`. These too can be sent to syslog, but are very different to those logs an application author wants to send to the system's standard logging facility (`syslog`).
Those wondering how to disambiguate multiple identical applications on the same host: what you really want is for that application to include a symbolic name (from an environment variable, for instance) in its messages. This has nothing to do with Docker. `haproxy`, I believe, can include its hostname in its messages, which should suffice.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#log-send-hostname also lets you specify the hostname to be sent.
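A minimal config fragment using that directive might look like the following (the name `edge-proxy-1` is a made-up example):

```
global
    log /dev/log local0
    log-send-hostname edge-proxy-1
```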
Note that the haproxy:alpine image already has a syslogd. I was able to get logging to stdout by using the following in docker-compose.yml:

```yaml
command: /bin/sh -c "/sbin/syslogd -O /dev/stdout && haproxy -f /usr/local/etc/haproxy/haproxy.cfg"
```

A similar command should work from a Dockerfile as well.
@dack neat! I wonder if this could be made the default for the haproxy image?
I'm currently using a simple syslog-ng sidecar container to log from haproxy. I wouldn't want there to be a default syslog option. Haproxy has intentionally chosen not to log through stdout and using a sidecar makes it easy to respect that.
> I wouldn't want there to be a default syslog option.
Why not? There could be a trivial ENV var setting added to turn it off, and just because you use a special setup doesn't mean the container shouldn't be in a default working state.
@JonasT isn't that simply shifting the problem? People installing haproxy in non-Docker environments simply configure it to talk to their existing syslog infrastructure. I wouldn't want to launch it in a Docker container then find out I need to override the supplied syslog to log to my existing syslog infrastructure.
Docker can direct container output (STDOUT/STDERR) to a given syslog instance already, I guess "they" expect you to be running containers in a hosting environment that already has adequate syslog services for your scenario.
If a container shipped with a syslog receiver for logging purposes, surely it should be opt-in at run-time, not opt-out.
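For anyone wanting that today, Docker's built-in syslog log driver can already redirect whatever a container writes to stdout/stderr; a sketch (the aggregator address is a placeholder):

```console
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  -p 80:80 haproxy
```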
@jmkgreen We don't need to change anything for a non-docker environment. Just change the Dockerfile to make haproxy log to stdout/stderr. Ideally haproxy would have a CLI or config file option that could be used for this, but it does not. That's why I've run syslogd in the container - purely to get stdout/stderr logging (which should be the default for any docker container).
@jmkgreen docker simply expects stdout/stderr logging per default and integrates this into the built-in docker logs functionality. That is how the docker universe works (edit: as far as I can tell! Maybe I have been doing it all wrong? But that's how I have encountered it for 99% of the containers I have seen), at least for now. I didn't make the rules.
As a result I don't think it should be opt-in, because an opt-out works just as nicely, and IMHO it is better to stick to the standard behavior of a docker container (which is "make the application log to stdout" - something that generally doesn't require syslogd at all; that's just an haproxy special case) than to something you consider superior but which nobody else follows in their default behavior, unless there is a very good reason. However, I bet a minimal syslog-ng doesn't eat many resources, and if you add an opt-out ENV var that prevents it from launching at all for people who don't want it, there is literally no performance impact for anyone who doesn't want it. So there is no good reason IMHO.
Therefore, I really think it would be preferable to adapt the default behavior to match everyone else's containers, and if someone wants high-performance logging directly to the host's syslog, just add ENV var options to make it happen and everyone can use what they want with the default behavior matching every docker user's expectations.
@JonasT This is the message from Willy I was alluding to earlier: https://www.mail-archive.com/haproxy@formilux.org/msg17436.html
If the author of the software we're wrapping in a container has explicitly chosen not to support logging to stdout, I don't think we should hack in syslog in the container to transform it to a stdout stream. I'd rather keep it in some kind of syslog all the way to my log aggregator unless we know that the way docker handles stdout is somehow faster than the 'normal' scenario.
This is exactly the sort of scenario that the sidecar docker design pattern is good for if you don't have access or don't want to use a system syslog.
2 years later: nope, no logging to stdout.
haproxy is already falling short by not having a native integration with KV backends for service discovery.
I don't really see what the argument is against my solution.
The current situation:
With my solution:
There is zero impact on anyone who wants to ditch the docker logging and use pure syslog instead. For everyone else, they get standard docker logging instead of nothing at all. Seems like a win/win to me.
@dack the performance impact? @ryansch explained this and linked to a further explanation by Willy, the haproxy author a couple of months ago.
Without benchmarks I don't really buy that argument. Furthermore, the users who would most benefit from stdout-by-default probably won't be operating at a scale where it would be an issue.
@PriceChild With the method I proposed, it's actually run through syslogd. So haproxy is not directly writing to stdout and would not have to wait for any stdout buffering (as everything is buffered by syslogd, not haproxy).
Here's the stdout sidecar I use when running haproxy locally: https://github.com/outstand/docker-syslog-ng-stdout
Retaining a way to run with a possibly faster non-stdout configuration isn't wrong. However, bundling syslog-ng in a way that can be disabled easily(!) is an absolute no-brainer; just slap a line into the README with the env var or something and done. Therefore, there is absolutely no good reason for breaking this by default and making everyone's life harder just because it might be a bit slower. Anyone who cares about that would need to launch an external syslog anyway, so it is technically impossible to offer a zero-conf version for those people - so a simple ENV switch is absolutely appropriate for that purpose.
Summed up, I really don't get why you just don't bundle syslog-ng with an ENV switch to make the container not launch & use it at runtime for the people who don't want it. I've suggested this in a previous comment, and all you've said is basically "but for some people that's not the desired solution", for which the ENV switch is the entire point. I still don't get what the actual problem is?
Unless the few megabytes to store syslog-ng or a few lines of script to handle the ENV var at launch is a huge problem, I still don't see a good reason not to add this feature... (we're talking about basic logs out of the box here, not some fancy addition that isn't really needed for operation)
EDIT: and if that's still too much work, just use what @dack proposed and add a README section documenting a one-liner to change the launch command back to external syslogd logging as before. It won't even be hard to do; everyone who wants the old behavior will easily be able to find and use it - while the container will suddenly work with proper functionality out of the box by default.
EDIT 2: just to spell this out more verbosely: I absolutely think a sidecar approach is nice for advanced features. The only reason I think it's a bad idea here is that logging via docker logs is an absolute core feature that should "just work". There is nothing wrong with having optional other ways to do logging faster/better/... that can be easily enabled, but IMHO not having it enabled in the expected way by default, purely for a performance improvement of unknown dimensions and politics (the original software vendor doesn't like that feature implemented that way), is a mistake - especially given the absolutely minuscule impact on Dockerfile complexity, docker image size etc. of providing this in a more reasonable way.
If the concern is that the users running their own syslog don't want an extra syslogd running in the haproxy container (doing nothing and using a tiny amount of RAM), then we could even use a wrapper script that would only launch syslogd if haproxy is configured to log to 127.0.0.1. A bit more complicated than my one-liner fix, but could be done fairly easily. Or as @JonasT said, just add an ENV var to disable it. Extra storage to bundle syslogd is a non-issue, it's already bundled in the container now.
In response to stdout being explicitly unsupported: here's how the official Apache httpd container logs to stdout: https://github.com/docker-library/httpd/blob/b13054c7de5c74bbaa6d595dbe38969e6d4f860c/2.4/Dockerfile#L72. It too is based on debian:jessie. It logs to the special file `/proc/self/fd/1`, with the error log going to `/proc/self/fd/2`.
After trying some of the workarounds described here I have put together a compact way to redirect syslog to stdout (and thus the container's log) with a syslog server written in Go: https://github.com/pgaertig/haproxy-docker . All parts are lightweight and statically linked, so porting the image to Alpine should be trivial if anyone needs that.
@pgaertig At the moment my Dockerfile for my haproxy service looks like the following:
```dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```
It'd be really nice if I could simply change that `FROM` to your image and have the ability to log to stdout.
@adamkdean just open a new issue there to discuss, as this is off-topic here
Hey @dack, I am using your solution for sending haproxy logs to stdout, but somehow I am not getting the request logs, just the startup ones :s. Have you managed to get request logs as well?
@wichon I'm not personally using request logs, but my guess would be that the verbosity level of either haproxy or syslogd is too low to allow them through. Check that your haproxy config is set to log those messages, and try adding something like -l 7 to the syslogd command.
@dack No luck :(, my bet is that syslogd is not listening on UDP port 514, which is the one haproxy uses to send log data to syslog.
@wichon Are you using the alpine-based haproxy container? If not, syslogd may require a totally different set of options/configuration. I have only tried my solution with Alpine. The busybox syslogd (as used in Alpine) listens on UDP 514 by default. You can see the CLI options here: https://busybox.net/downloads/BusyBox.html (search for syslogd on that page).
@bjaglin hello, can you tell me how you did it?
@JonasT I also encounter this kind of situation:

```
[ALERT] 087/151909 (47) : sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #2 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)
[ALERT] 087/151909 (47) : sendmsg logger #2 failed: Resource temporarily unavailable (errno=11)
```

How did you solve it? tks
Starting rsyslogd appears to allow the logging to work, but then the container doesn't respond to stop events and has to be killed. Is there a way to resolve that?
@dack With your solution, haproxy is not pid 1, and therefore wouldn't receive signals passed via docker. See https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers.
You can use https://github.com/Yelp/dumb-init or https://github.com/krallin/tini to forward signals in a docker container.
Edit: Or the `--init` switch to `docker run` will do it too.
FWIW for debug and development, I used the tips from @dack but had to make an adjustment (this may have been the issue @wichon saw).
Using a Dockerfile with

```dockerfile
FROM haproxy:1.7-alpine
```

and with this haproxy.cfg:

```
global
    log /dev/log local0
```

I started up `syslogd` and `haproxy` with the following script:

```shell
#!/bin/sh -ex
/sbin/syslogd -O /proc/1/fd/1  # <--- link to docker's stdout, not "your stdout"
haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db  # <--- stay in foreground
```
Again, I'm using this mostly for debug and devel. You'd want something to forward signals, etc for a production (a mini init process such as https://github.com/krallin/tini , HT @ryansch ).
Comments and thoughts welcome!
```console
docker run --init -p 1234:1234 --rm -w /etc/haproxy -it my-haproxy /usr/bin/start.sh
+ /sbin/syslogd -O /proc/1/fd/1
+ haproxy -f /usr/local/etc/haproxy/haproxy.cfg -db
Jun 30 19:55:00 5367269dac3e syslog.info syslogd started: BusyBox v1.25.1
Jun 30 19:55:00 5367269dac3e local0.notice haproxy[9]: Proxy http-in started.
```
> @dack the performance impact? @ryansch explained this and linked to a further explanation by Willy, the haproxy author a couple of months ago.
I'd love to see benchmarks. It seems to me that "we will not provide this convenient feature because performance" does need to be supported by numbers, otherwise the convenience is more helpful.
Also the nginx docker container (and thus nginx) logs to stdout/stderr just fine. It's insanely focused on performance and can be used as a reverse proxy for multiple back-ends. Though I do want to use haproxy for the health checks (a paid feature in nginx, and I am working with a non-profit).
Please make the official haproxy image to use docker's logging standards by default. It's a PITA not being able to tell what the hell is going on
I'd also need a debug mode :D wouldn't it be an okay idea to have an haproxy image that has full debug mode enabled ? So we don't taint the release versions, but still can debug easily ?
I am now using tini and a script like the one @client9 provided. This is a better workaround than my original one, as it handles signals properly. If you are using swarm services/stacks, then --init is not available as an option. However, you can just add tini via the Dockerfile like this:
```dockerfile
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini","/entrypoint.sh"]
```
Hello @dack, I'm a real newbie and I cannot figure out how to set up this log feature with the changes you provided. Can you provide (in a gist or even a repo) the complete files (the Dockerfile, the script, and if possible a basic haproxy config file)? I always get a 503 status even though connecting directly to the server using its IP works. I cannot figure out what the problem could be, so I wanted to check the logs. Thank you very much.
Still nothing? I was unable to find any info on how to configure the official haproxy image…
The instructions are explicit about using `127.0.0.1` for logging, but AFAIK the rsyslogd daemon is not started within the container (if it's even installed). Am I missing something?

I am now leaning towards `--link`ing this container to a rsyslogd one, to use it as the log target within the configuration. Anyone else had a similar approach? Thanks!