Snyk issues remediation:
There are still some packages whose fixes have been pushed to the main branch but are not published yet.
`karmor install --save`
This is failing if no Kubernetes cluster is configured.
We should not mandate a K8s cluster if we only need to save.
The registry flag is not working:
`./karmor install -r "ttl.sh" --save`
Expected: config images and the operator image are prepended with the registry info.
Actual: no change in the config.
`./karmor install --tag=v1.2.1 --save`
Not working as expected.
Expected: config images and the operator image have their tags changed.
Actual: no change in the config.
I couldn't help but notice that even if I am just using `--save`, it takes a lot of time to generate things. Is this a drawback of using the helm client?
I tried printing the manifests without running the client and it returns a nil pointer, so the install/upgrade client has to dry-run in order to get the manifests. Taking a look at the other points :eyes:
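For context, here is a minimal sketch (not the PR's actual code) of how the helm Go SDK can render manifests entirely client-side: `DryRun` plus `ClientOnly` makes `Run` template the chart without contacting a cluster. The chart path and release name below are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
)

func main() {
	// A zero-value Configuration is enough in client-only mode: helm swaps
	// in an in-memory release store and a fake kube client internally.
	cfg := new(action.Configuration)

	install := action.NewInstall(cfg)
	install.DryRun = true                      // render only, create nothing
	install.ClientOnly = true                  // never touch a kubeconfig/cluster
	install.ReleaseName = "kubearmor-operator" // placeholder
	install.Namespace = "kubearmor"

	chart, err := loader.Load("./kubearmor-operator") // placeholder chart path
	if err != nil {
		log.Fatal(err)
	}

	rel, err := install.Run(chart, map[string]interface{}{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rel.Manifest) // the rendered YAML, ready to save to disk
}
```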
Have we verified this behaviour with a legacy karmor install? Please include screenshots
```
rootxrishabh@rootxrishabh:~/kubearmor-client$ ./karmor install --legacy=true
Environment : k3s
CRD kubearmorpolicies.security.kubearmor.com
CRD kubearmorpolicies.security.kubearmor.com already exists
CRD kubearmorhostpolicies.security.kubearmor.com
CRD kubearmorhostpolicies.security.kubearmor.com already exists
Service Account
Service Account
Cluster Role
Cluster Role already exists
Cluster Role Bindings
Cluster Role Bindings already exists
KubeArmor Relay Roles
KubeArmor Relay Service
KubeArmor Relay Deployment
KubeArmor DaemonSet - Init kubearmor/kubearmor-init:stable, Container kubearmor/kubearmor:stable -gRPC=32767
KubeArmor DaemonSet already exists
KubeArmor Controller TLS certificates
KubeArmor Controller Service Account
KubeArmor Controller Roles
KubeArmor Controller Deployment
KubeArmor Controller Metrics Service
KubeArmor Controller Webhook Service
Done Installing KubeArmor
KubeArmor Controller Mutation Admission Registration
KubeArmor Controller Mutation Admission Registration already exists
KubeArmor ConfigMap Creation
Checking if KubeArmor pods are running...
Done Checking , ALL Services are running!
Execution Time : 13.858858021s
Verifying KubeArmor functionality (this may take upto a minute)...
Your Cluster is Armored Up!
rootxrishabh@rootxrishabh:~/kubearmor-client$ ./karmor uninstall
KubeArmor is either not installed, or the specified namespace is incorrect.
Please ensure you have installed KubeArmor, and check that you are specifying the correct namespace.
Attempting legacy uninstallation.
Mutation Admission Registration
KubeArmor Services
Service: kubearmor removed
Service: kubearmor-controller-metrics-service removed
Service Accounts
ServiceAccount kubearmor removed
ServiceAccount kubearmor-relay removed
ServiceAccount kubearmor-controller removed
Cluster Roles
ClusterRole kubearmor-snitch removed
ClusterRole kubearmor-clusterrole removed
ClusterRole kubearmor-relay-clusterrole removed
ClusterRole kubearmor-controller-proxy-role removed
ClusterRole kubearmor-controller-clusterrole removed
Cluster Role Bindings
ClusterRoleBinding kubearmor-clusterrolebinding removed
ClusterRoleBinding kubearmor-relay-clusterrolebinding removed
ClusterRoleBinding kubearmor-controller-clusterrolebinding removed
ClusterRoleBinding kubearmor-controller-proxy-rolebinding removed
ClusterRoleBinding kubearmor-snitch-binding removed
Roles
Role kubearmor-controller-leader-election-role removed
RoleBindings
RoleBinding kubearmor-controller-leader-election-rolebinding removed
KubeArmor Controller TLS certificates
KubeArmor Controller TLS certificate kubearmor-controller-webhook-server-cert removed
KubeArmor ConfigMap
ConfigMap kubearmor-config removed
KubeArmor DaemonSet
KubeArmor DaemonSet kubearmor removed
KubeArmor Deployments
KubeArmor Deployment kubearmor-relay removed
KubeArmor Deployment kubearmor-controller removed
Checking if KubeArmor pods are stopped...
Done Checking; all services are stopped!
Termination Time: 9.787902842s
```
Yes
Installation Process in general is smooooth. Great Work.
But this does not look cohesive.
* The helm logs and emoji logs are not streamlined. We should suppress the helm logs and manually log based on the output, maybe. Not sure what's best here.
* We are spamming `Deployment is not ready` messages.
* We need to print something while the KubeArmor DaemonSet is deploying, because it takes a while, including messaging around the expected time, and inform the user that snitch is running.
I think we should handle logs ourselves and suppress the helm logs. We could manually output logs for deployments etc. I will add a deployment message for snitch and include expected-time logging.
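One way to get there, as a sketch under the assumption we keep helm's Go SDK: initialise the action configuration with a no-op debug logger so helm's own output is dropped, then print our status lines around each step ourselves.

```go
import (
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

// newQuietHelmConfig initialises helm's action configuration with a no-op
// debug logger, so helm's internal chatter never reaches the terminal and
// karmor can emit its own emoji-style status lines instead.
// The function name is illustrative, not from this PR.
func newQuietHelmConfig(namespace string) (*action.Configuration, error) {
	settings := cli.New()
	cfg := new(action.Configuration)
	noop := func(format string, v ...interface{}) {}
	if err := cfg.Init(settings.RESTClientGetter(), namespace, "secret", noop); err != nil {
		return nil, err
	}
	return cfg, nil
}
```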
`karmor uninstall`
This is not working as expected. I did the standard helm install, but it is saying it cannot detect it. It fell back to legacy uninstallation, but my helm release still exists.
Yes, I was thinking of discussing this earlier. If we remove the default namespace in uninstall and don't specify a namespace (for example, "kubearmor"), then our current helm implementation doesn't automatically detect the installed namespace. I guess implementing auto-detection of the namespace is required?
Yup. We can list the releases and go through them, I guess.
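Something along these lines could work, assuming we search by release name; a sketch with an illustrative helper name:

```go
import (
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

// findReleaseNamespace scans helm releases in every namespace and returns
// where the named release (e.g. KubeArmor's) is installed.
func findReleaseNamespace(releaseName string) (string, bool, error) {
	settings := cli.New()
	cfg := new(action.Configuration)
	// An empty namespace plus AllNamespaces lists releases cluster-wide.
	noop := func(format string, v ...interface{}) {}
	if err := cfg.Init(settings.RESTClientGetter(), "", "secret", noop); err != nil {
		return "", false, err
	}
	list := action.NewList(cfg)
	list.AllNamespaces = true
	releases, err := list.Run()
	if err != nil {
		return "", false, err
	}
	for _, r := range releases {
		if r.Name == releaseName {
			return r.Namespace, true, nil
		}
	}
	return "", false, nil
}
```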
I don't think the first log looks nice. Is it something we need to keep?
It is not necessary; it just denotes the initiation of the helm client. Should we suppress logs for the save flag?
I think it will automatically be handled when we have the entire logging figured out for helm.
> `./karmor install --tag=v1.2.1 --save`
> Not working as expected.
> Expected: config images and the operator image have their tags changed.
> Actual: no change in the config.
We currently don't have `--save` watching the tag and registry flags. I will do it :+1:
That's concerning. Ideally there should not be two different approaches: we finalise the config, then decide whether to save it or install it.
Got it. I will make sure we are handling all flags first.
> I couldn't help but notice that even if I am just using `--save`, it takes a lot of time to generate things. Is this a drawback of using the helm client?
Update: I fixed `--save` failing in clusterless environments; the process now runs entirely client-side, executes without helm logs, and is quick.
Here's a demo of the functionality -
We should be updating the operator image as well when we use the tag/registry flags.
The operator image is not changing with any flags. We might need to implement a new flag to update the operator image. I believe the helm chart accepts updates to the operator image as well.
We can now change imagePullPolicy, tag, image, and registry.
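For reviewers, a sketch of the idea: all flags get folded into one values map that both the install and `--save` paths consume, so the rendered config is identical either way. The value key paths below are assumptions for illustration, not the chart's verified schema:

```go
// buildValues maps karmor CLI flags onto helm chart values. NOTE: the key
// paths here are illustrative; the real kubearmor-operator chart may nest
// its image settings differently.
func buildValues(registry, tag, pullPolicy string) map[string]interface{} {
	image := map[string]interface{}{}
	if registry != "" {
		image["registry"] = registry
	}
	if tag != "" {
		image["tag"] = tag
	}
	if pullPolicy != "" {
		image["pullPolicy"] = pullPolicy
	}
	return map[string]interface{}{
		"kubearmorOperator": map[string]interface{}{"image": image},
	}
}

// Both code paths then share it:
//   rel, err := install.Run(chart, buildValues(registry, tag, pullPolicy))
//   // --save writes rel.Manifest to disk; install applies it to the cluster
```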
Also, the uninstall command stays stuck right now because of https://github.com/kubearmor/KubeArmor/issues/1638
@DelusionalOptimist can we address the review changes first and then take a look at the controller issue?
Yeah yeah it's cool. Just mentioned it here as an FYI for reviewers. We can look into it later. This PR is priority.
@rootxrishabh the defaultposture flags are not working for me.
I used block as the capabilities posture flag, but it is still in audit mode.
@PrimalPimmy can you please check the kubearmor-config ConfigMap? Currently, posture and visibility settings are displayed incorrectly by `karmor probe` for an operator installation. Ref
Still not working
The file posture is not going to audit mode when set?
I see that the kubearmorconfig is not being recreated. You need to run `karmor uninstall --force` to remove the CR and then reinstall with the posture settings.
Snyk issues remediation:
* github.com/Microsoft/hcsshim@v0.11.4
* github.com/google/certificate-transparency-go@v1.1.7
* github.com/urfave/negroni, introduced through github.com/sigstore/timestamp-authority@v1.2.0 › github.com/urfave/negroni@v1.0.0
* helm.sh/helm/v3/cmd/helm (fix pushed to the master branch but not yet published)
> I see that the kubearmorconfig is not being recreated. You need to run `karmor uninstall --force` to remove the CR and then reinstall with the posture settings.

Is there any way to get the configmap recreated? IMO that's a better way. Maybe remove the configmap after uninstalling.
The configmap kubearmor-config does get deleted, but the operator CR kubearmorconfig-default does not, so for every new installation after this the CR does not get updated, and thus we don't see the posture settings supplied through the CLI. In order to remove the CR (kubearmorconfig-default), we must run `karmor uninstall --force` and do a fresh installation.
Ref
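For the record, the forced path essentially boils down to deleting that CR; here is a sketch with client-go's dynamic client. The group/version used here is an assumption; verify against the installed CRD.

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// deleteOperatorCR removes the default KubeArmorConfig CR so the next
// install starts from a clean slate. GVR values are assumed, not verified.
func deleteOperatorCR(ctx context.Context, client dynamic.Interface, ns string) error {
	gvr := schema.GroupVersionResource{
		Group:    "operator.kubearmor.com", // assumption
		Version:  "v1",                     // assumption
		Resource: "kubearmorconfigs",
	}
	return client.Resource(gvr).Namespace(ns).
		Delete(ctx, "kubearmorconfig-default", metav1.DeleteOptions{})
}
```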
Note that it outputs `KubeArmorConfig created` on a fresh installation.
If I do a `kubectl apply -f` with the updated config, it does update the existing config if there are changes. I believe the same behaviour should translate here. We should switch to doing something like:

```go
err := Resource.Create()
if err != nil && strings.Contains(err.Error(), "already exists") {
	return Resource.Update()
}
return err
```
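If we go that way, client-go's typed error helper is sturdier than matching the error string; here is a sketch of the create-then-update idiom with a dynamic client (function and variable names are illustrative):

```go
import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// createOrUpdate mimics `kubectl apply`: create the object, and if it
// already exists, fetch the live copy and push an update over it.
func createOrUpdate(ctx context.Context, c dynamic.ResourceInterface, obj *unstructured.Unstructured) error {
	_, err := c.Create(ctx, obj, metav1.CreateOptions{})
	if !apierrors.IsAlreadyExists(err) {
		return err // nil on success, or a genuine failure
	}
	live, err := c.Get(ctx, obj.GetName(), metav1.GetOptions{})
	if err != nil {
		return err
	}
	// An update must carry the live object's resourceVersion.
	obj.SetResourceVersion(live.GetResourceVersion())
	_, err = c.Update(ctx, obj, metav1.UpdateOptions{})
	return err
}
```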
I remember us discussing the idea of `karmor update`. I think we discussed that we would update all operator CR options separately using the update command; it can be handled in a separate issue. Wdyt?
This is a different issue: people won't realise that the CR is not updated. The update command was meant to patch things, as well as patch the helm release if needed.
We cannot ask people to run uninstall with `--force` just so that they get a fresh installation.
It's a two-line change here, so I believe it's worth handling in this PR. Wdyt?
Agreed. A `--force` uninstallation shouldn't be an intermediary for updating the CR. I will implement the CR update here.
Purpose: This PR shifts `karmor install` and `karmor uninstall` to use the KubeArmor operator via helm.
Does this PR introduce a breaking change? Yes