csmarvz opened this issue 2 years ago
Hello @csmarvz!
AKS will reconcile all of the addon / system deployments. This means that any `kubectl edit`'s on these resources will be reset once that kicks in (usually within 5 minutes). Fun history here is that we actually used to reconcile the configmap too, but users wanted to add in custom `nonMasqueradeCIDRs`, so V2 was made to handle multiple configmaps (one that AKS reconciles, one that can be customized freely).
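For context, the freely-customizable configmap mentioned above carries the agent's file-based settings. A minimal sketch, following the format of the repo's `config-custom.yaml` example (the CIDR values here are placeholders, not recommendations):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: azure-ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
    masqLinkLocal: true
```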
For deployment container arguments, this means that AKS controls them. For example, on the kube-proxy deployment they also force `--v=3`. For ip-masq-agent-v2, this is also the case, but with `--v=2`.
May I ask what the reason for wanting to lower the log verbosity is? Would be great to know the use-case here. Thanks!
Hello @mattstam !
Thanks for the explanation :) The use-case is only to reduce the quantity of logs (we use datadog and pay for the logs)
Cheers
Thanks for sharing the use-case, I will try to provide some method to allow manual overriding of log verbosity level via the configmap.
Any comments about this issue?
> Hello @mattstam !
> Thanks for the explanation :) The use-case is only to reduce the quantity of logs (we use datadog and pay for the logs)
> Cheers
We have the same use-case, what did you do about this?
Same issue here.
Following this example: https://github.com/Azure/ip-masq-agent-v2/blob/master/examples/config-custom.yaml
I tried this ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: azure-ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    --v=1
```
Resulting in an error "cannot convert string to go value".
And then I tried this CM:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: azure-ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    v: 1
```
Which is simply ignored.
How are we supposed to define the log level within the ConfigMap?
I have the exact same issue of Datadog and a very verbose log. However, aside from the cost of the many, many logs, we also suffer from all the logs being logged as errors (which they clearly are not). This pollutes our "is everything healthy" dashboard and is even more annoying than the sheer volume.
So if any option exist to change the log level, I'm very interested to hear about it.
Oh, and somewhat unrelated to the above: I doubt the ConfigMap will solve anything, as that is mapped into a config file in `/etc/config`, which is not where the `--v` argument goes, sadly :-/
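To illustrate the point above: the file mounted at `/etc/config` is parsed as the agent's YAML configuration, which only understands file-level settings such as `nonMasqueradeCIDRs`, `masqLinkLocal`, and `resyncInterval` (per the upstream ip-masq-agent config format), not command-line flags. A sketch of a config the agent would actually accept (values are placeholders):

```yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
masqLinkLocal: false
resyncInterval: 60s
```

Putting `--v=1` in this file cannot work, because `--v` is a process argument on the DaemonSet, not a key in this config schema.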
> Hello @mattstam ! Thanks for the explanation :) The use-case is only to reduce the quantity of logs (we use datadog and pay for the logs) Cheers
>
> We have the same use-case, what did you do about this?
My solution was to exclude them entirely. The trick is to add the environment variable `DD_CONTAINER_EXCLUDE_LOGS` to Datadog's DaemonSet:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: datadog
  namespace: default
  labels: {}
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: datadog
  template:
    metadata:
      labels:
        app: datadog
      name: datadog
      annotations: {}
    spec:
      securityContext:
        runAsUser: 0
      hostPID: true
      containers:
        - name: agent
          image: "gcr.io/datadoghq/agent:7.45.0"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources: {}
          ports:
            - containerPort: 8125
              name: dogstatsdport
              protocol: UDP
          env:
            - name: DD_CONTAINER_EXCLUDE_LOGS
              value: image:mcr.microsoft.com/aks/ip-masq-agent-v2
          ....
```
You can ignore logs from other images too, just separate them with a space or a new line, like:

```yaml
- name: DD_CONTAINER_EXCLUDE_LOGS
  value: image:mcr.microsoft.com/aks/ip-masq-agent-v2 image:mcr.microsoft.com/oss/kubernetes/coredns
    image:mcr.microsoft.com/oss/kubernetes/metrics-scraper image:mcr.microsoft.com/oss/kubernetes/kube-proxy
```
Hope this helps someone.
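If the agent is installed via the official Datadog Helm chart rather than a hand-written DaemonSet, the same exclusion can usually be expressed in the chart values instead of a raw env var. A sketch, assuming the chart's `datadog.containerExcludeLogs` value (check the key name against your chart version):

```yaml
# values.yaml fragment for the Datadog Helm chart (sketch)
datadog:
  containerExcludeLogs: "image:mcr.microsoft.com/aks/ip-masq-agent-v2"
```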
Hello, I'm running an AKS cluster (with Kubernetes v1.23.5) and I'd like to limit the logs generated by ip-masq-agent-v2 to errors. I've seen that I could use the `--v` flag and set it to 1 in the daemonset, for example, but somehow the default value 2 is always restored.
That's what I'm doing:
1. `kubectl edit daemonset azure-ip-masq-agent -n kube-system`
2. replace `- --v=2` with `- --v=1`
3. save and exit

Pods are recreated and it looks good, but a few minutes later, the logs are coming again. If I redo `kubectl edit daemonset azure-ip-masq-agent -n kube-system`, I can see that `--v` is set to 2 again. Do you have any idea of what I can do there? Thanks