Hi, thanks for pointing this out; this will be an issue on v0.9, with HAProxy 1.9 and its stdout logging.
The only reason to capture stdout/err in the current code is to log any warnings after a reload, which would be redirected to fd1 anyway.
Has this been fixed? I don't see the logs updating when a connection tries to come in.
Hi, this is planned for v0.9, when haproxy will be updated to 1.9.
Hi, I finally took the time to properly analyse what's happening under the hood. The main problem here is that haproxy forks the process to the background and loses the FDs assigned to it. This can be easily reproduced by running a bare, simple config and starting with `haproxy -f cfg -D`. Only the messages logged before haproxy goes to the background are displayed.
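For reference, a minimal sketch that shows the effect, assuming haproxy 1.9+ and its new `log stdout` target (file name, port and timeouts are arbitrary):

```
# test.cfg -- a minimal sketch, not the controller's generated config
global
    log stdout format raw local0

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend f
    bind :8080
    default_backend b

backend b
    # no servers: requests answer 503, but still produce an access log line
```

Running `haproxy -f test.cfg` in the foreground keeps printing access logs, while `haproxy -f test.cfg -D` only shows what was emitted before the fork, since the daemonized process no longer owns the original stdout.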
I can see some routes to follow:
In short, this is not as simple as it sounds. New ideas are very welcome.
Hi, what about a sidecar container? An advantage is to separate ingress logs from haproxy request logs. Have a look at these instructions and please let me know what you think.
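A rough sketch of the idea (not the linked instructions themselves), assuming a generic syslog image that writes whatever it receives to its own stdout; the sidecar image name is a placeholder:

```yaml
spec:
  containers:
  - name: haproxy-ingress
    image: quay.io/jcmoraisjr/haproxy-ingress   # controller with embedded haproxy
  - name: access-logs
    image: example.com/syslog-to-stdout         # placeholder: any small syslog-to-stdout image
    ports:
    - name: syslog
      containerPort: 514
      protocol: UDP
```

haproxy then sends its access logs to the sidecar over the loopback address, while the controller keeps its own stdout for ingress logs.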
@jcmoraisjr, this is still an issue. Any progress?
When implementing your `udplog-d` utility, would you configure the `haproxy.tmpl` file like the following?
```
defaults
    option httplog

frontend default-frontend
    log 127.0.0.1:1514 local0

backend k8s-pods
    log 127.0.0.1:1514 local0
```
Hi, this won't be an issue starting on v0.12, which should have a snapshot tag in a week or so. Starting with v0.12, one can run haproxy itself in a sidecar container, which will fix the stdout logging issue.
When using a syslog-like sidecar container you can configure the loopback IP, just like you did, since both containers are on the same network. I'd also add `format raw` (doc) if using `udplog-d`, because it isn't a syslog server and doesn't understand the protocol.
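For example, the log lines from the snippet above would become something like this (same placeholder address and port):

```
    log 127.0.0.1:1514 format raw local0
```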
Thank you for the update. Yes, I learned that I needed to use `format raw` so that I did not get the syslog headers on every log line.
Hi @jcmoraisjr,
> Hi, this won't be an issue starting on v0.12, which should have a snapshot tag in a week or so. Starting with v0.12, one can run haproxy itself in a sidecar container, which will fix the stdout logging issue.
I am not sure I understand correctly. Could you explain a bit?
Are you saying that from v0.12 onwards we should be able to configure something like `log stdout local0` in the main haproxy container? I am currently trying out v0.13.4, but have not yet managed to build a configuration that prints the request logs to stdout. Any examples would be welcome.

Or are you suggesting running one haproxy as the main container and another one as a sidecar and sending logs there, basically removing the need for another syslog image?
Hi @lenhard, in the current versions of the controller you cannot mix the logs of the controller and the proxy into a single container, so you have at least two options that the helm chart helps you accomplish. The first one is to configure `controller.logs.enabled` as `true`, which will create a syslog sidecar container that haproxy can send logs to. The other one is to run haproxy itself as a sidecar container, instead of running it in the background alongside the controller; you can see how it works and how to configure it in the external haproxy example doc, in short by configuring `controller.haproxy.enabled` as `true`. With this option haproxy sends logs to stdout, so there is no need to use syslog at all.
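A minimal sketch of the two helm chart values described above (everything else left at its default):

```yaml
# option 1: create a syslog sidecar the embedded haproxy sends logs to
controller:
  logs:
    enabled: true
```

```yaml
# option 2: run haproxy itself as a sidecar ("external" haproxy),
# logging straight to its own stdout
controller:
  haproxy:
    enabled: true
```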
A quick update on this issue. Currently there are a few distinct ways to configure haproxy logging:
- `syslog-endpoint: 10.0.0.1:514`, changing to the endpoint of the log service;
- the helm chart's `controller.logs` option;
- stdout logging, configuring 1) `--master-worker=true` and 2) the global `syslog-endpoint: stdout` and `syslog-format: raw`.

Added to the logging documentation how to configure haproxy logs to stdout. It works since v0.14. Closing.
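A minimal sketch of the stdout variant, assuming v0.14+ with `--master-worker=true` on the controller (ConfigMap name and namespace are placeholders for whatever the controller actually reads):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress          # placeholder: the global ConfigMap the controller reads
  namespace: ingress-controller  # placeholder namespace
data:
  syslog-endpoint: "stdout"
  syslog-format: "raw"
```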
(The test case is only relevant to haproxy v1.9+ [future bug/enhancement], but the underlying cause may be impacting current versions as well.)
I've been trying out haproxy v1.9.8 in a custom docker build, and its support of the new logging methods to stdout and stderr. These do not work. When haproxy is spawned (reloaded?) by the controller, its file descriptors for stdout and stderr are set to `/dev/null`. This can be seen from an `lsof` in the running container of the first three file descriptors for the process (0=stdin, 1=stdout, 2=stderr):

This may affect current versions as well, since it would seem that as soon as a reload occurs, all output via stdout/err from the haproxy instance would be lost.
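To check the descriptors from inside the running container, something like this works (a sketch; `pidof` assumes a single haproxy process):

```sh
# show only fds 0-2 (stdin, stdout, stderr) of the running haproxy process
lsof -a -p "$(pidof haproxy)" -d 0,1,2
```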
It seems to me the correct value of stdout/stderr when the process is first spawned should either be the same as the controller's stdout/stderr, or else the file descriptors obtained from opening `/proc/1/fd/1` and `/proc/1/fd/2` respectively, as those are collected by Docker (they should all point to the same endpoint anyway). I did see the reload spawning [controller.go/OnUpdate()] and it may be as simple as setting `reloadCmd.Stdout`/`Stderr`, but that's outside my realm.

As an aside for others who may be looking for important debugging messages, you can use the existing syslog facilities, or you can use socat (already present in the container) to create a datagram socket that forwards to `/proc/1/fd/1` as a hackish workaround (e.g. via a manual exec shell or a modified init script; definitely not for production usage):

And the ConfigMap change:
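A rough sketch of that kind of workaround, with the UDP port, ConfigMap name and namespace as placeholder assumptions:

```sh
# inside the controller container: forward UDP syslog datagrams to PID 1's stdout
socat -u UDP-RECV:514 STDOUT > /proc/1/fd/1 &
```

```yaml
# point haproxy's syslog endpoint at the local socket (placeholder name/namespace)
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
data:
  syslog-endpoint: "127.0.0.1:514"
```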