Open showpopulous opened 1 year ago
I think this comes up time and again because, when clusters are upgraded, the object resources are probably rewritten rather than reapplied. So last-applied-configuration does not reflect the current object metadata; as such, it might be a good idea to at least have a flag that ignores it and just grabs the actual current manifest.
Many of the resources have updated manifests (Argo syncs them) and show up like this in my cluster:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"cross-products-currency-service"},"name":"cross-products-currency-service-hpa"},"spec":{"maxReplicas":10,"metrics":[{"resource":{"name":"memory","target":{"averageUtilization":70,"type":"Utilization"}},"type":"Resource"},{"resource":{"name":"cpu","target":{"averageUtilization":70,"type":"Utilization"}},"type":"Resource"}],"minReplicas":2,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"cross-products-currency-service-deployment"}}}
  creationTimestamp: "2023-03-21T08:54:30Z"
  labels:
    app.kubernetes.io/instance: cross-products-currency-service
  name: cross-products-currency-service-hpa
  namespace: example
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: memory
      target:
        averageUtilization: 70
        type: Utilization
    type: Resource
  - resource:
      name: cpu
      target:
        averageUtilization: 70
        type: Utilization
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cross-products-currency-service-deployment
status: {}
As you can see, the last-applied-configuration is the old version, but the object is updated. Yet the report returns:
➜ ct-argo-k8s-dev-manifests git:(main) ~/workstation/workspace-5/silver-surfer_0.1.2_darwin_amd64/kubedd --target-kubernetes-version 1.27 | grep cross-products-currency-service-hpa
cross-products cross-products-currency-service-hpa HorizontalPodAutoscaler autoscaling/v2beta2 autoscaling/v2 can be migrated with just apiVersion change
Why can't we check for the actual apiVersion in the cluster? kubent also has the same issue.
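For illustration, this is roughly the discrepancy I mean, using the HPA from the manifest above (just a sketch; the jq step is only there to pull the apiVersion out of the annotation):

# The API server returns the object converted to its preferred served version:
kubectl get horizontalpodautoscalers cross-products-currency-service-hpa -n example \
  -o jsonpath='{.apiVersion}'
# -> autoscaling/v2

# The last-applied-configuration annotation still carries the originally applied version,
# which appears to be what the report is based on:
kubectl get horizontalpodautoscalers cross-products-currency-service-hpa -n example \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | jq -r '.apiVersion'
# -> autoscaling/v2beta2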
By default, Kubernetes returns the manifest in the latest version. The actual version that was applied via kubectl apply is available inside last-applied-configuration. Therefore, the right way to check whether migration is required is to check the version inside last-applied-configuration, not the returned manifest.
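For context, the API server converts stored objects to whichever served version a client requests, so the returned manifest alone cannot tell you which version was originally applied. A rough sketch, using the resource from the example above (whether the second call works depends on which versions the cluster still serves):

# kubectl defaults to the cluster's preferred served version:
kubectl get horizontalpodautoscalers.v2.autoscaling cross-products-currency-service-hpa \
  -n example -o jsonpath='{.apiVersion}'
# -> autoscaling/v2

# On a cluster that still serves v2beta2, the very same stored object can be read back as:
kubectl get horizontalpodautoscalers.v2beta2.autoscaling cross-products-currency-service-hpa \
  -n example -o jsonpath='{.apiVersion}'
# -> autoscaling/v2beta2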
Thanks for the comment @pghildiyal.
But what do you suggest for the scenario I have mentioned above? Why is my last-applied-configuration deviating from the actual configuration? Only my actual configuration has the updated apiVersion, and this manifest won't change for years until we need to tweak something. So last-applied-configuration continues to have stale data.
What are your suggestions for solving this scenario and using your tool to do a valid analysis of the deprecated APIs? As of now, running kubedd on my cluster gives me false information about deprecation.
Also, if adding a switch to scan the actual apiVersion instead of last-applied-configuration is a possibility, please advise.
I pulled the latest release version of silver-surfer as of the creation of this issue, installed and ran it according to the documentation called out in the README, and saw that the generated report flagged one of our DaemonSet resources on some clusters as using a deprecated version.
This same issue was happening in the same environment for k8s 1.24 on a different resource referenced here. I have looked in the namespace for the node-exporter in the cluster and inspected its entire -o yaml output; there is not a single occurrence of v1beta2 in it. We are unsure of where silver-surfer is finding the API it is flagging.
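For reproducibility, this is approximately the check I did (the DaemonSet name and namespace here are placeholders for our node-exporter install):

kubectl get daemonset node-exporter -n <namespace> -o yaml | grep -c v1beta2
# -> 0: v1beta2 appears nowhere in the returned manifest,
#    including inside the last-applied-configuration annotation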