SAP-archive / karydia

Kubernetes Security Walnut

Invalid buffer of Karydia configuration #111

Closed CodeClinch closed 5 years ago

CodeClinch commented 5 years ago

Description

After a change to the Karydia configuration, the buffers are not updated immediately. If a pod is deployed during this window (the config is updated, but the buffers are not), the pod might receive an old default configuration.
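The window described above can be illustrated with a minimal Go sketch, assuming the webhook reads its defaults from an in-memory cache that is refreshed asynchronously (e.g. by an informer). All type and field names here are illustrative, not karydia's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// KarydiaConfig is a stand-in for karydia's default configuration
// resource (the field name is illustrative, not the real CRD schema).
type KarydiaConfig struct {
	AutomountServiceAccountToken string
}

// configCache mimics an in-memory buffer of the config that a webhook
// reads on every admission request. Because the cache is refreshed
// asynchronously, reads between the API-server update and the cache
// refresh still return the old value.
type configCache struct {
	mu  sync.RWMutex
	cfg KarydiaConfig
}

func (c *configCache) Get() KarydiaConfig {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.cfg
}

func (c *configCache) Set(cfg KarydiaConfig) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cfg = cfg
}

func main() {
	cache := &configCache{cfg: KarydiaConfig{AutomountServiceAccountToken: "change-default"}}

	// The admin has updated KarydiaConfig in the API server, but the
	// cache has not been refreshed yet: an admission request in this
	// window still sees the old default.
	fmt.Println("before refresh:", cache.Get().AutomountServiceAccountToken)

	// The update event eventually arrives and refreshes the cache;
	// from now on requests see the new default.
	cache.Set(KarydiaConfig{AutomountServiceAccountToken: "forbid"})
	fmt.Println("after refresh:", cache.Get().AutomountServiceAccountToken)
}
```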

User Story

As a developer, I want the new values to take effect immediately, so that the cluster stays consistent.

Implementation idea

ionysos commented 5 years ago

We (@marwinski, @dacappo, @ionysos) discussed this and decided to keep the current approach, in which an old default configuration can be served for at most a few seconds in the worst case, i.e. between the change to KarydiaConfig and the update of the actual karydia config variable at runtime. This eventual consistency is standard Kubernetes (K8s) behavior, and we have two different roles in mind.

The first is the cluster admin, cluster security expert and/or cluster operator, who sets up, configures, updates and/or deletes clusters and karydia, and only infrequently updates the karydia default configuration (KarydiaConfig) to meet new requirements. The second is the (application) developer and/or (application) operator, who (re-)deploys, updates, (re-)starts and/or deletes namespaces, service accounts, pods and/or other resources for the application.

Consequently, a scenario such as an automatic deployment / provisioning from a new K8s cluster to an up-and-running application, where some components rely on one KarydiaConfig and other components rely on another, is NOT a valid use case: such scenarios should instead be expressed via the annotations karydia supports on those specific components.
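The annotation-based override mentioned above can be sketched as follows, assuming a component pins its own setting via an annotation and the webhook falls back to the cluster-wide default otherwise. The annotation key and helper function are hypothetical stand-ins, not karydia's actual API:

```go
package main

import "fmt"

// annotationKey is an illustrative per-resource override key in the
// style of karydia's supported annotations (not necessarily the
// real key name).
const annotationKey = "karydia.example/seccompProfile"

// effectiveProfile returns the value pinned by the resource's
// annotation if present, otherwise the cluster-wide default from
// KarydiaConfig. Components that must not follow the current default
// configuration pin their own setting this way.
func effectiveProfile(annotations map[string]string, clusterDefault string) string {
	if v, ok := annotations[annotationKey]; ok {
		return v
	}
	return clusterDefault
}

func main() {
	pinned := map[string]string{annotationKey: "runtime/default"}
	unpinned := map[string]string{}

	fmt.Println(effectiveProfile(pinned, "unconfined"))   // annotation wins
	fmt.Println(effectiveProfile(unpinned, "unconfined")) // default applies
}
```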

However, for traceability we should annotate every K8s component that karydia touched / modified with the configuration that was applied (the KarydiaConfig at execution time). This will be implemented with #124 and #121.
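Such a traceability annotation could look roughly like this, assuming each mutated resource records the name of the KarydiaConfig that was in effect. The key name and helper are hypothetical; #124 and #121 track the real implementation:

```go
package main

import "fmt"

// traceabilityKey is an illustrative annotation key recording which
// KarydiaConfig was applied when karydia mutated the resource.
const traceabilityKey = "karydia.example/appliedConfig"

// annotateApplied stamps the resource's annotations with the name of
// the config in effect at execution time, so the applied defaults can
// be traced later.
func annotateApplied(annotations map[string]string, configName string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[traceabilityKey] = configName
	return annotations
}

func main() {
	podAnnotations := annotateApplied(nil, "karydia-config-v2")
	fmt.Println(podAnnotations[traceabilityKey])
}
```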