Open ringerc opened 10 months ago
As far as I can tell the only way to set this is through the global klog.SetLogger. This suffers from the same issues as the gRPC logger did; see #4971.
I forgot I'd filed this, and raised a more detailed issue at https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/30928 which identifies the specific components affected, lists the relevant log messages, etc.
@mx-psi Explanation makes sense.
The great majority will be using the collector as a standalone binary though. Is it not feasible to have initialization that's done only when running standalone, and to document that users of the collector as a library may need to do corresponding initialization in their own applications?
I guess this is something we can consider, maybe at the command level. I don't feel like the solution we used in #4971 is entirely satisfactory, but this is probably something we can tackle.
I am going to label this with area:service for lack of a better label, but I am not convinced that this should be initialized at the service level.
@mx-psi Logging adapters could be provided as helper packages by the collector, each one as a lazy one-shot initialization helper. Each exporter/receiver/processor that needs a given logging integration would call the appropriate logging adapter's initialization func. So the first such that happens to have a dependency that requires klog will automatically init the klog adapter, etc.
So someone using the collector as a library can turn it off, feature flags could be used to suppress the adapters' initialization. Or a package level func that collector-as-a-library integrators can call before the main collector entrypoint, making the package init function a no-op.
If those helpers are split into separate packages like logging-adapter-zap-klog, then they'll only get built into the collector when one or more of the built exporters/receivers/processors requires them, so the collector won't pull in BobsNotInventedHereLoggingFramework in minimal builds.
It's hard to imagine a situation where the end user would need to turn off such logging integrations, so I don't see why they should be exposed in configuration at all, service level or otherwise. If the collector is built standalone it's always going to want the logging adapters; if it's built as a library, whether they're wanted is a decision for the integrator, not the end user. But if configuration is really needed, adding it per-logging-adapter under service.telemetry may make sense.
It's a bit ugly to make the collector as a whole care about specific logging-library dependencies that are only actually used by individual exporters/receivers/processors, but there's not much of a way around it when a logging library does everything in global process state and people then use it from libraries. Ah, the Java flashbacks.
Describe the bug
Components of the otel collector that invoke the golang Kubernetes API client do not configure a log sink adapter from klog to the otel collector log sink. This causes logs from the Go kube client library to ignore the otel collector's configured log level and format.
Steps to reproduce
Deploy an otel collector on k8s (kind should do), with a k8sattributes processor and a corresponding service logging config.
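Illustrative stand-ins for those configs (assumed, not the reporter's exact config) might look like:

```yaml
processors:
  k8sattributes:

service:
  telemetry:
    logs:
      level: info
      encoding: json
```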
Delete any RoleBinding or ClusterRoleBinding for its ServiceAccount to ensure that k8s API calls fail when processing metrics.
What did you expect to see?
json-format log messages from go-client
What did you see instead?
default klog-format log messages like
What version did you use?
0.87.0 (contrib)
What config did you use?
As above. I'll edit to add a self-contained repro config soon if feasible, but the problem is pretty clear even without one: klog is never configured to use a sink/adapter.
Environment
kube 1.27 (Azure AKS)
Additional context
None really