Policy server allows the following aspects of its logging behaviour to be configured:

- Log level
- Log format
Both aspects can be configured either via a CLI flag or via a dedicated environment variable.
Currently, kubewarden-controller reads the configuration parameters of policy-server from a dedicated ConfigMap called policy-server. This ConfigMap is created by our helm chart.
The kubewarden-controller should be extended to read the logging-specific parameters from the policy-server ConfigMap, and then use them to create the right Deployment object for policy-server.
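As a sketch of what this could look like, here is a hypothetical policy-server ConfigMap carrying the logging parameters (the key names shown here are assumptions, not a final schema — the real keys are defined by the helm chart):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: policy-server
  namespace: kubewarden
data:
  # hypothetical keys for the logging parameters
  log-level: info
  log-fmt: json
```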
The next sections cover the logging parameters that can be set.
Log level
The log level to be used can be specified either via the --log-level flag or via the KUBEWARDEN_LOG_LEVEL environment variable.
Log format
The log format to be used can be specified either via the --log-fmt flag or via the KUBEWARDEN_LOG_FMT environment variable.
Depending on the log format, other options might have to be specified.
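For illustration, the controller could translate these parameters into the two environment variables on the policy-server container. A minimal sketch of the resulting Deployment fragment (the image reference and values are examples, not the actual ones generated by the controller):

```yaml
# fragment of the policy-server Deployment generated by the controller
spec:
  template:
    spec:
      containers:
        - name: policy-server
          image: ghcr.io/kubewarden/policy-server:latest  # example reference
          env:
            - name: KUBEWARDEN_LOG_LEVEL
              value: info
            - name: KUBEWARDEN_LOG_FMT
              value: json
```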
Jaeger
When the jaeger format is used, logs are sent to a Jaeger collector.
Setting KUBEWARDEN_LOG_FMT=jaeger is not enough; our controller also has to support the following scenarios.
Tuning via environment variables
By default, policy-server will attempt to send the trace events to a Jaeger collector listening on localhost. This will not work with our deployment.
The user can configure some Jaeger options via these dedicated environment variables (note well: there are no CLI flags):

| Variable | Description | Default |
|---|---|---|
| OTEL_EXPORTER_JAEGER_TIMEOUT | Maximum time the Jaeger exporter will wait for each batch export | 10s |
| OTEL_EXPORTER_JAEGER_USER | Username to be used for HTTP basic authentication | - |
| OTEL_EXPORTER_JAEGER_PASSWORD | Password to be used for HTTP basic authentication | - |
Our controller should always forward these environment variables into the policy-server deployment.
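As an illustration, the forwarded variables could appear in the policy-server Deployment as shown below. The Secret name and keys are assumptions made for this sketch; sourcing the credentials from a Secret rather than plain values is a suggestion, not something the source mandates:

```yaml
# fragment of the policy-server Deployment: Jaeger exporter tuning
env:
  - name: OTEL_EXPORTER_JAEGER_TIMEOUT
    value: "10s"
  - name: OTEL_EXPORTER_JAEGER_USER
    valueFrom:
      secretKeyRef:
        name: jaeger-basic-auth   # hypothetical Secret
        key: username
  - name: OTEL_EXPORTER_JAEGER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: jaeger-basic-auth   # hypothetical Secret
        key: password
```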
Jaeger operator
The environment variables method shown above works, but it is not what people use in production (for example, there would be no way to send traces over an encrypted channel).
Jaeger provides a dedicated operator that takes care of setting up the whole Jaeger infrastructure (not relevant to us) plus injecting a Jaeger Agent sidecar into the workloads that demand it (relevant to us).
Quoting the official documentation:
The operator can inject Jaeger Agent sidecars in Deployment workloads, provided that the deployment or its namespace has the annotation sidecar.jaegertracing.io/inject with a suitable value. The values can be either "true" (as string), or the Jaeger instance name, as returned by kubectl get jaegers. When "true" is used, there should be exactly one Jaeger instance for the same namespace as the deployment, otherwise, the operator can’t figure out automatically which Jaeger instance to use. A specific Jaeger instance name on a deployment has a higher precedence than true applied on its namespace.
Our users should be able to specify the value of the sidecar.jaegertracing.io/inject annotation. The controller will then ensure the policy-server deployment has the right annotation added.
We have to come up with a configuration key inside of our ConfigMap to represent the sidecar.jaegertracing.io/inject value. When this key is not present the annotation will not be created.
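For example, assuming a hypothetical ConfigMap key named sidecar-jaeger-inject (the final key name is still to be decided), the controller would map it to the Deployment annotation like this:

```yaml
# in the policy-server ConfigMap (key name is hypothetical)
data:
  sidecar-jaeger-inject: "true"
---
# resulting annotation on the policy-server Deployment
metadata:
  annotations:
    sidecar.jaegertracing.io/inject: "true"
```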
Open Telemetry Collector
The Open Telemetry project provides a collector
component that can be used to receive, process and export telemetry data
in a vendor agnostic way.
Logs are sent to this type of collector when the otlp format is chosen.
Currently policy-server has the following limitations:

- Traces can be sent to the collector only via gRPC; the HTTP transport layer is not supported.
- The Open Telemetry Collector must be listening on localhost. When deployed on Kubernetes, policy-server must have the Open Telemetry Collector running as a sidecar.
- Policy server doesn't expose any configuration settings for Open Telemetry (e.g. endpoint URL, encryption, authentication, ...). All of the tuning has to be done on the collector process that runs as a sidecar.
The Open Telemetry project provides a Kubernetes Operator; we expect our users to rely on it and leverage its sidecar injection feature.
The Open Telemetry operator works in a different way compared to the Jaeger operator: it requires a resource of type OpenTelemetryCollector to be defined inside the namespace where the Deployment runs.
Creating this OpenTelemetryCollector resource via our controller would be too complicated (many parameters, and they could change over time). Because of that, we have to adopt this approach:
- The kubewarden helm chart will perform the creation of the OpenTelemetryCollector resource
- The name of the resource will always be policy-server
- The controller will add the following annotation to the policy server Deployment: sidecar.opentelemetry.io/inject: "policy-server"
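A minimal sketch of what the helm chart could create, assuming a sidecar-mode collector with an OTLP gRPC receiver (the pipeline configuration shown here is an assumption; the actual one is up to the chart and may change):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: policy-server
  namespace: kubewarden
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      logging: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
```

With this resource in place, the controller only has to add the sidecar.opentelemetry.io/inject: "policy-server" annotation to the policy-server Deployment, and the operator takes care of injecting the collector sidecar.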