ludoch opened this issue 8 years ago
The request log is set up in jetty9 by gae.xml, and in jetty9-compat by a similar gae.xml.
This configuration rolls the logs daily and keeps them for 2 days.
Note that the pure Jetty request log does not support size-based log rotation, so we could still fill up the disk. However, we have the ability to plug in something like logback, which I believe does have this capability. @joakime can you confirm that and perhaps provide a PR to change the configuration? @ludoch please say if you want size-based rotation.
I think 2 days is too long (there is another agent watching the /var/log/appengine_logs area, and it can catch up within 5 or 10 minutes of a rotation). Logback: can it be configured by both time and size, i.e. rotate every 30 minutes or whenever the size exceeds x MB? That would be ideal.
@joakime also it would be good to filter out the routine health check requests
With logback, you can do this.
See http://logback.qos.ch/manual/appenders.html#RollingFileAppender
There are two ways to do this:
Using separate rolling & triggering policies:
What you'll set up is a rollingPolicy based on time, and a triggeringPolicy based on file size. Something like this:
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>gae.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>gae.%d{yyyy-MM-dd}.log</fileNamePattern>
      <!-- keep 30 days' worth of history, capped at 3GB total size -->
      <maxHistory>30</maxHistory>
      <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
      <maxFileSize>30MB</maxFileSize>
    </triggeringPolicy>
    <encoder>
      <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE" />
  </root>
</configuration>
Using time & size based rolling policy only:
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>gae.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <!-- rollover daily -->
      <fileNamePattern>gae-%d{yyyy-MM-dd}.%i.txt</fileNamePattern>
      <!-- each file should be at most 30MB; keep 60 days' worth of history, but at most 3GB -->
      <maxFileSize>30MB</maxFileSize>
      <maxHistory>60</maxHistory>
      <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE" />
  </root>
</configuration>
I've personally never setup a log to roll as aggressively as every 30 minutes, so I'll need to dig some more to figure that aspect out. (Hourly is probably already supported by the built-in policies)
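For reference, logback infers the rollover period from the most specific %d token in the fileNamePattern, so an hourly (or even per-minute) roll falls out of the pattern itself. A sketch (the file names here are illustrative, not the runtime's actual config):

```xml
<!-- Hypothetical sketch: hourly rollover inferred from the %d{...} pattern -->
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
  <!-- %d{yyyy-MM-dd_HH} makes logback roll at the top of every hour -->
  <fileNamePattern>gae-%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
  <maxFileSize>30MB</maxFileSize>
  <!-- with hourly rolling, maxHistory counts hours, not days -->
  <maxHistory>48</maxHistory>
  <totalSizeCap>3GB</totalSizeCap>
</rollingPolicy>
```

A 30-minute period would need a custom policy, since %d granularity bottoms out at the smallest time unit present in the pattern.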
@joakime also it would be good to filter out the routine health check requests
+1. However, I don't think we should completely filter them out. Instead, we should either filter out only the ones that return 200 or, better yet, change the log level of the ones that return 200 to debug and, if possible, change the failed ones to warning.
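To make that proposal concrete, here is a minimal sketch (a hypothetical helper, not code from this repo) of the level-selection logic; the /_ah/health path is an assumption about the routine health check URL:

```java
import java.util.logging.Level;

public class HealthCheckLogPolicy {
    // Assumed path of the routine GAE health check probe
    static final String HEALTH_PATH = "/_ah/health";

    /** Decide the log level for a request, per the proposal above. */
    public static Level levelFor(String path, int status) {
        if (!HEALTH_PATH.equals(path)) {
            return Level.INFO;       // ordinary requests keep normal logging
        }
        return status == 200
                ? Level.FINE         // routine 200 health checks -> debug
                : Level.WARNING;     // failed health checks stay visible
    }
}
```

The same predicate could then back a log filter on whichever appender carries the request log.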
@joakime could you also please document how to make logging work when running locally from jetty9-compat-base: it doesn't seem to notice a logging.properties file in the webapp's WEB-INF dir.
Wait, the webapp's WEB-INF dir?
Do we want to have the webapp's logging punch a hole through the WebApp isolation layers? (ie: WebAppClassloader)
I would strongly discourage this behavior, as there are existing webapps that people deploy that do not like this (Hudson, Jenkins, and Nexus are some obvious examples).
This issue has been about server-side logging, and that's the way I've been progressing; throwing forced webapp logging into the mix should be a separate issue, with a separate PR (to make an eventual revert easier).
@joakime The GAE deployment unit is the war file, so any configuration uploaded for deployment is in the war file. So it's not necessarily punching a hole in the classloader; rather, it is just like how WEB-INF/appengine-web.xml is used to configure the container. It is documented here.
So as this is established practice, I do not believe we can change it. It doesn't need a hole punched in the classloader: the VmRuntimeWebAppContext actively seeks the logging.properties file from the webapp and uses it to amend the configuration of the container's logger. So it really is container configuration that just happens to be deployed via the webapp.
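For illustration, a webapp-supplied WEB-INF/logging.properties that amends the container's java.util.logging configuration might look roughly like this (the category names are examples, not a prescribed set):

```properties
# Default level for all loggers
.level=INFO

# Example per-category overrides (illustrative names)
com.example.app.level=FINE
org.eclipse.jetty.level=WARNING
```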
The issue Jan is having is that it does not appear to work when a webapp is run locally in jetty-9-compat-base. So when you review the logging, can you also investigate why it doesn't work locally (especially as that will be useful for you to test any changes that you make).
Logging events produced by the webapp won't use the server logging layer.
Having the server dive into the webapp, grab some configuration, and configure the server is no big deal, that works.
But expecting the logging events at the webapp to flow out of the webapp and into the server logging requires hole punching (java.util.logging works because it's already hole punched).
Without punching a hole, the logging from commons-logging, log4j, slf4j, jetty-log (from the use of jetty-client) would not flow out of the webapp, they would have their own logging layer setup at the webapp side.
Another approach, without hole punching, is to force the logging at the webapp side to use the server-side components (a feature that the jetty-webapp-logging deployment module provides). But this has proven VERY problematic on some webapps, as some webapps expect to be able to manage their own logging (which this technique forbids).
See Issue #289 for proposed change to the JSON format to suit the multiple logger world we live in now.
The fact that we don't generate the hidden classes list has bitten us in this case. We needed to have added org.slf4j. and org.apache.log4j. to the server classes.
But we also need a suite of test webapps using common frameworks that we can deploy against to ensure that we don't break stuff like this.
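For what it's worth, the server-class additions described above could be expressed in a Jetty context XML fragment along these lines (a sketch; whether this repo applies them via XML or via code is an assumption here):

```xml
<!-- Hypothetical context fragment: hide the server's logging impls from the webapp -->
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Call name="addServerClass"><Arg>org.slf4j.</Arg></Call>
  <Call name="addServerClass"><Arg>org.apache.log4j.</Arg></Call>
</Configure>
```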
obsoleted by https://github.com/GoogleCloudPlatform/jetty-runtime/issues/4, unless @ludoch wants a solution in master of this repository? If not please close.
No need for master in this repo.
Reopening, because we need the fix in master as well due to a customer issue.
Can we replace the current config with something that rolls request logs more aggressively? Either keep only several hours worth of logs, or restrict by size?
<!-- =========================================================== -->
<!-- Setup a request log                                         -->
<!-- =========================================================== -->
<Ref refid="Handlers">
  <Call name="addHandler">
    <Arg>
      <New id="RequestLog" class="org.eclipse.jetty.server.handler.RequestLogHandler">
        <Set name="requestLog">
          <New id="RequestLogImpl" class="org.eclipse.jetty.server.NCSARequestLog">
            <Arg><SystemProperty name="jetty.logs" default="/var/log/app_engine"/>/request.yyyy_mm_dd.log</Arg>
            <Set name="retainDays">2</Set>
            <Set name="append">true</Set>
            <Set name="extended">true</Set>
            <Set name="LogTimeZone">GMT</Set>
            <Set name="logLatency">true</Set>
            <Set name="preferProxiedForAddress">true</Set>
          </New>
        </Set>
      </New>
    </Arg>
  </Call>
</Ref>
@meltsufin this is only for the request logs, and it can only roll on a daily basis; the only real size tweak here is the number of retained days.
The proper path forward would be to wire up the logback or JUL setup that does provide a facility for this. The request logs would then go to either of these via the slf4j request log implementation.
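For reference, the slf4j wiring mentioned here could look roughly like this in the Jetty 9 XML, swapping the NCSA implementation for the slf4j-backed one so that logback controls rotation (a sketch, not tested against this repo's config):

```xml
<!-- Hypothetical: route the request log through slf4j so logback handles rotation -->
<Set name="requestLog">
  <New id="RequestLogImpl" class="org.eclipse.jetty.server.Slf4jRequestLog">
    <Set name="extended">true</Set>
    <Set name="logLatency">true</Set>
    <Set name="preferProxiedForAddress">true</Set>
  </New>
</Set>
```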
It sounds like there is no minimally invasive change that we can do right now, except for changing retainDays to 1.
"No minimally invasive change": that is correct; resolving this would require a small handful of changes to logging.
[edit] Optionally, disable the request log for this client pending the changes that resolve this issue?
As per https://github.com/GoogleCloudPlatform/jetty-runtime/issues/60, I'm not sure we need a request log file at all. The gcloud console produces a request log separately that will eventually be re-integrated with the application log stream.
As @jmcc0nn3ll says, the default Jetty request log only supports 24-hour rollover. To do size-based rollover, we would need to configure our logback integration, which has wider logging implications.
Thus I think we should just remove the log for now, and it can be added back if and when the requirements process we are separately engaged in identifies a need.
+1 for removing the Jetty request log altogether for now.
We do not want to fill the VM disk space with logs. The environment that provides the local logs to the console supports log rotation; we just need to verify that we use it.