Open norbjd opened 5 years ago
There is no documentation about this internal GCP incompatibility, and there is even a third-party tutorial implying the contrary. There should at least be a note in the docs that you can't use gcplogs with GCE, since assuming the contrary is perfectly reasonable.
Hey,
the agent sets the logging driver to json-file, but these logs should be then picked up by the stackdriver-logging agent (running as a different container) and should be available in Stackdriver: https://cloud.google.com/compute/docs/containers/deploying-containers#viewing_container_logs
Does this not work for you?
@pderkowski Ah, I was trying to set up structured logging, but I guess I should have configured the agent for structured logging instead. I came across that page a few times during troubleshooting, but I couldn't even tell it was related to Compute Engine. I don't think the Compute Engine docs mention the agent; it's just referred to as "Stackdriver Logging".
So, basically, if you're logging from a container on GCE, you are using the logging agent.
Thanks for your answers.
I was confused because in the GCE instance I've got (created with Terraform), there was no stackdriver-logging-agent container started.
I've noticed that when creating a GCE instance with a container from the Console, a metadata entry google-logging-enabled = "true" is added. I suppose that the presence of this metadata entry affects Konlet's behaviour (whether or not it starts the stackdriver-logging-agent). Am I right?
Anyway, in my case, I was missing this metadata key/value when creating the instance with Terraform (because I didn't know it existed). So basically, I just had to define the metadata of my instance as follows:
```
metadata {
  gce-container-declaration = <<EOF
spec:
  containers:
    - name: my-container
      image: 'my-image'
EOF
  google-logging-enabled = "true"
}
```
The stackdriver-logging-agent container in the instance now redirects logs from my-container's stdout to Stackdriver.
@norbjd You are right, this metadata key is necessary. Please note though that creating instances with containers through the instance API is not an officially supported mode and can change any time without notice.
Hello.
Another problem caused by hard-coding json-file as the log driver with its default options is that, by default, it creates a single uncompressed and unlimited log file that fills up the VM disk until the VM fails.
This also happens when using gcloud beta compute instances create-with-container.
Thanks.
Hi @pderkowski ,
After some research, I agree with @yuvaldrori's opinion. The current logging driver settings create one uncompressed and unlimited log file, even while the stackdriver-logging-agent container is running. I don't think this is acceptable for production usage.
So maybe adding max-size and max-file settings to the json-file logging driver config would be a valid option, since all container logs are redirected to Stackdriver anyway?
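For context, max-size and max-file are standard options of Docker's json-file logging driver. As a daemon-wide sketch (values illustrative; note that because Konlet sets the driver explicitly per container, this daemon default alone would not take effect — the options would need to be passed by Konlet when it creates the container), the corresponding /etc/docker/daemon.json would look like:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With these options, Docker rotates the container's log file at 10 MB and keeps at most 3 files per container instead of one unbounded file.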
Having the log-driver be configurable (and preferably respecting what is in /etc/docker/daemon.json) would help in use cases where the user doesn't want to use stackdriver for application logs for whatever reason.
Forcing the user to use the json-file logger, and thus Stackdriver, makes forwarding the logs to other centralized logging services a completely unnecessary pain (set up a pub/sub channel, create instances that receive and parse log messages, perform the forwarding, add monitoring of those instances, set up error handling, etc. All of this could be made superfluous by letting Docker handle the logging.)
Hello,
When Konlet starts a container, the logging driver is explicitly set to json-file in the code: https://github.com/GoogleCloudPlatform/konlet/blob/fcc4fb619405c7ec26e8026ab1196a3f043bbf0e/gce-containers-startup/runtime/runtime.go#L311
It would be convenient to be able to change the driver (for example to gcplogs, if logs have to be redirected to Google Cloud Logging (Stackdriver)). Would it be possible to get the logging configuration from /etc/docker/daemon.json, or at least to be able to choose the logging driver?
Thank you very much.
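To make the request concrete, here is a minimal Go sketch of what a configurable logging driver could look like. The LogConfig struct is defined locally for illustration, mirroring the shape of the Docker API's container.LogConfig that Konlet populates; the buildLogConfig helper and its parameters are hypothetical, not part of Konlet:

```go
package main

import "fmt"

// LogConfig mirrors the shape of the Docker API's container.LogConfig
// (Type is the driver name, Config holds driver options).
// Defined locally here for illustration only.
type LogConfig struct {
	Type   string
	Config map[string]string
}

// buildLogConfig is a hypothetical helper: instead of hard-coding
// "json-file" with no options, it accepts a driver name and options
// (which could come from instance metadata or /etc/docker/daemon.json),
// falling back to today's behaviour when nothing is configured.
func buildLogConfig(driver string, opts map[string]string) LogConfig {
	if driver == "" {
		driver = "json-file" // current hard-coded default
	}
	return LogConfig{Type: driver, Config: opts}
}

func main() {
	// Example: keep json-file but add the rotation options
	// discussed in this thread (values illustrative).
	rotated := buildLogConfig("json-file", map[string]string{
		"max-size": "10m",
		"max-file": "3",
	})
	fmt.Println(rotated.Type, rotated.Config["max-size"])

	// Example: switch to gcplogs to send logs straight to Stackdriver.
	gcp := buildLogConfig("gcplogs", nil)
	fmt.Println(gcp.Type)
}
```

This would let users who don't want Stackdriver pick another driver, while leaving the default behaviour unchanged for everyone else.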