Open JAORMX opened 10 months ago
@JAORMX can you please advise how you check the value? Here is how our node collector extracts the info
Thanks for the link @chen-keinan! So, in this case, the `--authorization-mode` flag is not set at all in the kubelet's command line. Instead, they rely on setting it in the kubelet config, which is in `/etc/kubernetes/kubelet/kubelet-config.json`. In that file you'll find the relevant `authorization` key, with a `mode` setting.
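For reference, the fragment of a kubelet config file that carries that setting typically looks like the following (values here are illustrative; your cluster's file may differ):

```json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authorization": {
    "mode": "Webhook"
  }
}
```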
Thanks for your input, I suspected it as well. I'll update the checks and release a new k8s-node-collector.
@chen-keinan feel free to ping me once you have a review up. Thanks for checking this out.
@JAORMX please let me know if v0.18.0-rc solves this issue.
Will do, after the holidays. I'm back to work on Jan 2
@chen-keinan it did not work. I still get that issue after the v0.18.0 upgrade. There's also other funky errors that are not applicable, such as the report complaining about API Server config permissions. We don't even have that config as we don't run the API server (it's a managed k8s).
Thanks, strange; I tested it on managed k8s as well. I'll have another look.
Yes, I'm using the helm chart.
@chen-keinan I reverified and I had misread the report. The permissions are not reported as an error. However, the original issue still is being reported:
```json
{
  "id": "4.2.2",
  "name": "Ensure that the --authorization-mode argument is not set to AlwaysAllow",
  "severity": "CRITICAL",
  "totalFail": 2
}
```
And it shouldn't. It should run a re-scan once a day, right? I did wait a full day after the upgrade.
It's cron-based; you can configure it: https://github.com/aquasecurity/trivy-operator/blob/main/deploy/helm/values.yaml#L517
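In the Helm values, the schedule is a standard cron expression; a sketch of the kind of setting involved (key name and default are assumptions, verify against the linked values.yaml for your chart version):

```yaml
# Hypothetical excerpt from the trivy-operator Helm values;
# check the linked values.yaml for the exact key in your version.
compliance:
  # Cron schedule for re-running compliance checks (standard cron syntax).
  cron: "0 */6 * * *"
```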
@JAORMX could you please see if you can catch the output of the node collector:

```shell
kubectl logs -n trivy-system node-collector-<id>
```

before the pod is deleted, and let me know what value you get for `kubeletAuthorizationModeArgumentSet`, for example:

```json
"kubeletAuthorizationModeArgumentSet": {
  "values": [
    "Webhook"
  ]
}
```
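One way to check the reported value once you've grabbed the logs is to pull the field out with `sed`; a minimal sketch (the sample JSON below mirrors the example above, and the placeholder pod name would need to be replaced with the real one):

```shell
# Normally you would capture the node-collector output, e.g.:
#   out=$(kubectl logs -n trivy-system node-collector-<id>)
# Here we use a sample string shaped like the example in this thread.
out='{"kubeletAuthorizationModeArgumentSet":{"values":["Webhook"]}}'

# Extract the first value reported for kubeletAuthorizationModeArgumentSet.
echo "$out" | sed -n 's/.*"kubeletAuthorizationModeArgumentSet":{"values":\["\([^"]*\)".*/\1/p'
# prints: Webhook
```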
@chen-keinan I don't see a node collector pod. Does it get cleaned up?
Yes, but you need to catch it fast; after the job completes it gets cleaned up.
What steps did you take and what happened:
While manually auditing our nodes, it turns out that we don't set `AlwaysAllow` and actually have a compliant setting.

What did you expect to happen:

Expected that result not to fail.
Environment:
- trivy-operator version (`trivy-operator version`): 0.17.1
- Kubernetes version (`kubectl version`):