Open · Tristan971 opened this issue 2 years ago
That is interesting. Thank you for the detailed description.
We (Yahoo) have properties that run with stderr logging, and no one has reported the logging stalling, so we should be able to get this working.
Since this happens so quickly, I suggest enabling the log debug tag, monitoring it for a few minutes until logging stops, and then seeing whether those debug logs provide any clues about what is going wrong:

```
CONFIG proxy.config.diags.debug.enabled INT 3
CONFIG proxy.config.diags.debug.tags STRING log
```

Note: use 3 for the debug.enabled setting; it is much more efficient than 1. After you collect the information you need, you'll likely want to disable debug logging so it doesn't impact production traffic.
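If a restart is inconvenient, the same records can also be toggled on a running instance with traffic_ctl; a minimal sketch (assuming traffic_ctl is on PATH and can reach the local manager):

```
# turn the "log" debug tag on for the running instance
traffic_ctl config set proxy.config.diags.debug.tags log
traffic_ctl config set proxy.config.diags.debug.enabled 3
traffic_ctl config reload

# ...collect the diags output, then turn debug logging back off
traffic_ctl config set proxy.config.diags.debug.enabled 0
traffic_ctl config reload
```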
Thanks for the prompt reply.
First, I set debug.enabled to 1 in part because this is a dev server so the perf impact is irrelevant to me, but mostly because with 3 I wasn't seeing debug logs being printed at all 😅
However, after a few tests I surprisingly wasn't always able to reproduce the issue. Looking at the git history (as I tried quite a few things yesterday), this is the piece of records.config that causes the issue:
```
CONFIG proxy.config.log.max_secs_per_buffer INT 1
CONFIG proxy.config.log.periodic_tasks_interval INT 1
```
When both lines are removed, logs work fine (at least for a while longer; it's possible the issue just takes longer to show up in that case; I've currently started without both and am waiting to find out).
Commenting out log.max_secs_per_buffer alone does not fix the issue, and it doesn't make much sense to comment out log.periodic_tasks_interval alone (at least as per the docs).
Either way, attached is the full diags.log with both lines present (I set diags to log to a file for the occasion, of course).
The last successful stderr output line was at 17:15:42, and the relevant (I suspect) bit is unfortunately not very informative:
```
[Jul 12 17:15:42.786] [LOG_FLUSH] DEBUG: <Log.cc:1421 (flush_thread_main)> (log) Successfully wrote some stuff to stderr
...
...
[Jul 12 17:15:46.833] [LOG_FLUSH] DEBUG: <LogFile.cc:275 (close_file)> (log-file) LogFile stderr is closed
[Jul 12 17:15:46.833] [LOG_FLUSH] DEBUG: <LogFile.cc:242 (open_file)> (log-file) writing header to LogFile stderr
[Jul 12 17:15:46.833] [LOG_FLUSH] DEBUG: <LogFile.cc:249 (open_file)> (log) exiting LogFile::open_file(), file=stderr presumably open
[Jul 12 17:15:46.833] [LOG_FLUSH] ERROR: Failed to write log to stderr: [tried 101, wrote 0, Bad file descriptor]
[Jul 12 17:15:47.633] [ET_NET 2] DEBUG: <traffic_server.cc:398 (periodic)> (log) in DiagsLogContinuation, checking on diags.log
```
That is about the first error information in there; however, I don't see anything else that would be particularly obvious.
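For context on that error string: "Bad file descriptor" is EBADF, i.e. the write went to a descriptor that is no longer valid. A quick shell illustration (not ATS-specific, just to show the failure mode):

```
# close stdout in a subshell, then try to write to it
( exec 1>&-; echo test )
# bash reports: echo: write error: Bad file descriptor
```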
Never mind the remark about removing the two lines fixing it; the issue does happen without them, just after approximately 7 minutes instead of 3.
I tried a few more things to run the container as "standardly" as possible, notably starting traffic_manager (PID 1) as root, to no avail.
However, something interesting: the traffic_server process seems to get restarted around the time it happens:
```
Jul 13 00:14:30 - - 1.2.3.4 - TCP_MISS 0ms - "GET http://$redacted/healthz HTTP/1.0" 200 3
Jul 13 00:14:35 - - 1.2.3.4 - TCP_MISS 0ms - "GET http://$redacted/healthz HTTP/1.0" 200 3
Jul 13 00:14:40 - - 1.2.3.4 - TCP_MISS 0ms - "GET http://$redacted/healthz HTTP/1.0" 200 3
Jul 13 00:14:45 Traffic Server 9.2.0+mangadex-105defc Jul 12 2022 23:49:12 runner-samzfqgz-project-37606524-concurrent-0ppjbl
Jul 13 00:14:45 traffic_server: using root directory '/usr'
Jul 13 00:14:45 [Jul 13 00:14:45.388] [LOG_FLUSH] ERROR: Failed to write log to stdout: [tried 101, wrote 0, Bad file descriptor]
Jul 13 00:15:49 [Jul 13 00:15:49.666] [LOG_FLUSH] ERROR: The following message was suppressed 12 times.
Jul 13 00:15:49 [Jul 13 00:15:49.666] [LOG_FLUSH] ERROR: Failed to write log to stdout: [tried 101, wrote 0, Bad file descriptor]
```
Here's the "extended" version of it, from the last Successfully wrote some stuff to stdout
before TS restart; not very different from before however
If someone else encounters this: for now I'm just resorting to the old-and-dirty trick of disabling log rotation and symlinking the log files to /dev/stdout or /dev/stderr, i.e. ln -s /dev/stdout /var/log/trafficserver/traffic.log and so on.
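Spelled out, that workaround looks roughly like this (a sketch; the paths and log names are the ones used in this setup, adjust as needed):

```
# records.config: disable rolling so ATS never tries to rename the "files"
CONFIG proxy.config.log.rolling_enabled INT 0
```

and, before starting ATS:

```
ln -sf /dev/stdout /var/log/trafficserver/traffic.log
ln -sf /dev/stderr /var/log/trafficserver/diags.log
ln -sf /dev/stderr /var/log/trafficserver/error.log
```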
@Tristan971 Is it possible for you to make a Docker container, so we can reproduce the issue? We haven't been able to reproduce the issue ourselves.
This issue has been automatically marked as stale because it has not had recent activity. Marking it stale to flag it for further consideration by the community.
While trying to run ATS in a Docker container, I thought I'd make use of #7937 as a nice way to handle logs.
So I built branch 9.2.x, specifically 15bea4dd946c8cb6fc2000a4b31cf4f2f261b29d and set the following:
I set the relevant logging options in records.config and in logging.yaml.
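For illustration, a minimal logging.yaml that sends an access log to stderr looks roughly like this (a sketch relying on the stdout/stderr filename targets added by #7937; the format string is just an example, not the exact one used here):

```
logging:
  formats:
    - name: mini
      format: '%<cqtq> %<chi> %<crc> %<ttms>ms "%<cqtx>" %<pssc> %<pscl>'
  logs:
    - filename: stderr
      format: mini
```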
Then I use traffic_manager without arguments as (non-root) PID 1 in the container.

At first things look alright, but after approximately 3 minutes the logs just stop being output. Moving error and diags back to a file, I can see:
Looking into the container, I also notice the following:
As one would expect, the first two commands cause "test" to be printed on the container logs, and the third does nothing.
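A sketch of that kind of check (the exact file-descriptor targets here are assumptions):

```
# write to PID 1's stdout and stderr from inside the container
echo test > /proc/1/fd/1    # "test" shows up in the container logs
echo test > /proc/1/fd/2    # "test" shows up in the container logs
# write to the traffic_server process's stderr descriptor
echo test > /proc/$(pgrep -x traffic_server)/fd/2    # nothing appears
```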
It seems like stderr gets disconnected entirely at some point? A traffic_ctl server restart does bring logs back for another 3 minutes, but then the issue happens again (note that the other ones work just fine).

At first I thought it would be some issue with ATS trying to "roll" the pipe or something akin to that, but adding

```
CONFIG proxy.config.log.rolling_enabled INT 0
```

to records.config doesn't make a difference, so I guess it's not that.

If it is relevant, this was compiled and run on a Debian Bullseye image, and the image in question is registry.gitlab.com/mangadex-pub/trafficserver:9.2.x-bullseye-dbcecd20.

Also, ATS otherwise keeps running just fine while this happens.
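For anyone who wants to poke at it, running that image should reproduce the setup described here; a sketch, assuming its entrypoint starts traffic_manager as described above and that 8080 is the proxy port:

```
docker pull registry.gitlab.com/mangadex-pub/trafficserver:9.2.x-bullseye-dbcecd20
docker run --rm -p 8080:8080 registry.gitlab.com/mangadex-pub/trafficserver:9.2.x-bullseye-dbcecd20
# watch the container logs; output stops after roughly 3 minutes
```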
I doubt this will be particularly relevant, but here are the logs up until they stop (with everything set to stderr):
Logs
```
2022-07-12T03:07:42 [E. Mgmt] log ==> [TrafficManager] using root directory '/usr'
2022-07-12T03:07:42 [Jul 12 02:07:42.435] traffic_manager STATUS: opened stderr
2022-07-12T03:07:42 [Jul 12 02:07:42.435] traffic_manager NOTE: updated diags config
2022-07-12T03:07:42 [Jul 12 02:07:42.437] traffic_manager NOTE: [LocalManager::listenForProxy] Listening on port: 8080 (ipv4)
2022-07-12T03:07:42 [Jul 12 02:07:42.437] traffic_manager NOTE: [TrafficManager] Setup complete
2022-07-12T03:07:43 [Jul 12 02:07:43.438] traffic_manager NOTE: [ProxyStateSet] Traffic Server Args: ' -M'
2022-07-12T03:07:43 [Jul 12 02:07:43.439] traffic_manager NOTE: [LocalManager::listenForProxy] Listening on port: 8080 (ipv4)
2022-07-12T03:07:43 [Jul 12 02:07:43.439] traffic_manager NOTE: [LocalManager::startProxy] Launching ts process
2022-07-12T03:07:43 [Jul 12 02:07:43.463] traffic_manager NOTE: [LocalManager::pollMgmtProcessServer] New process connecting fd '9'
2022-07-12T03:07:43 [Jul 12 02:07:43.463] traffic_manager NOTE: [Alarms::signalAlarm] Server Process born
2022-07-12T03:07:45 [Jul 12 02:07:45.472] traffic_server STATUS: opened stderr
2022-07-12T03:07:45 [Jul 12 02:07:45.472] traffic_server NOTE: updated diags config
2022-07-12T03:07:45 [Jul 12 02:07:45.483] traffic_server NOTE: storage.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.483] traffic_server NOTE: storage.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.492] traffic_server NOTE: ip_allow.yaml loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.493] traffic_server NOTE: ip_allow.yaml finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.494] traffic_server NOTE: parent.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.494] traffic_server NOTE: parent.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.498] traffic_server NOTE: /etc/trafficserver/logging.yaml loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.500] traffic_server NOTE: /etc/trafficserver/logging.yaml finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.504] traffic_server NOTE: logging initialized[3], logging_mode = 3
2022-07-12T03:07:45 [Jul 12 02:07:45.504] traffic_server NOTE: Initialized plugin_dynamic_reload_mode: 1
2022-07-12T03:07:45 [Jul 12 02:07:45.504] traffic_server NOTE: plugin.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.505] traffic_server NOTE: loading plugin '/usr/lib/trafficserver/modules/header_rewrite.so'
2022-07-12T03:07:45 [Jul 12 02:07:45.507] traffic_server NOTE: loading plugin '/usr/lib/trafficserver/modules/healthchecks.so'
2022-07-12T03:07:45 [Jul 12 02:07:45.510] traffic_server NOTE: plugin.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.515] traffic_server NOTE: /etc/trafficserver/sni.yaml loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.516] traffic_server NOTE: /etc/trafficserver/sni.yaml finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.516] traffic_server NOTE: ssl_multicert.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.517] traffic_server NOTE: /etc/trafficserver/ssl_multicert.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.518] traffic_server NOTE: volume.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.518] traffic_server NOTE: volume.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.534] traffic_server NOTE: remap.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.534] traffic_server NOTE: strategies.yaml loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.534] traffic_server NOTE: No NextHop strategy configs were loaded.
2022-07-12T03:07:45 [Jul 12 02:07:45.535] traffic_server NOTE: strategies.yaml finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.537] traffic_server NOTE: remap.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.544] [TS_MAIN] NOTE: traffic server running
2022-07-12T03:07:45 [Jul 12 02:07:45.544] [TS_MAIN] NOTE: Traffic Server is running unprivileged, not switching to user 'nobody'
2022-07-12T03:07:45 [Jul 12 02:07:45.767] [ET_NET 3] NOTE: recovery clearing offsets of Vol /var/cache/trafficserver/cache.db 32768:13107196 : [153985024, 162373632] sync_serial 33 next 34
2022-07-12T03:07:45 [Jul 12 02:07:45.928] [ET_NET 2] NOTE: hosting.config loading ...
2022-07-12T03:07:45 [Jul 12 02:07:45.933] [ET_NET 2] NOTE: hosting.config finished loading
2022-07-12T03:07:45 [Jul 12 02:07:45.951] [ET_NET 2] NOTE: cache enabled
2022-07-12T03:07:45 [Jul 12 02:07:45.951] [ET_NET 2] NOTE: Traffic Server is fully initialized.
2022-07-12T03:07:52 - - redacted - TCP_MISS 1ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:07:57 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:02 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:07 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:12 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:17 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:22 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:27 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:32 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:37 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:42 - - redacted - TCP_MISS 1ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:47 - - redacted - TCP_MISS 1ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:51 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:08:56 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:01 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:06 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:11 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:16 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:21 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:27 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:30 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:35 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:40 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:45 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:50 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:09:55 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:10:00 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:10:05 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:10:10 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:10:15 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
2022-07-12T03:10:20 - - redacted - TCP_MISS 0ms - "GET http://redacted/healthz HTTP/1.0" 200 3
[here it just abruptly stops]
```

Hopefully this helps track down the issue.