likakuli opened 2 years ago
Do you have any ideas on how to fix this?
RB and Work are both internal resources that users should not access directly. But some users may be using them directly, and for those users a user-facing change is inevitable.
I think we should change the RB and Work name specification and guarantee that users will not need to redeploy their original resources (such as Deployments and PropagationPolicies), and that resources already dispatched to member clusters will keep working normally when the RB and Work names are upgraded.
I think we can provide a new sub-command in kubectl-karmada that adds labels to the metadata of old rb and work objects, so that karmada-controller-manager can use them when reconciling rb and work to determine whether the old rb, the old work, and the work's manifest should be deleted.
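A minimal Go sketch of the idea, purely for illustration: the label key and the cleanup condition below are assumptions, not an agreed-upon API.

```go
package main

import "fmt"

// Hypothetical label key the proposed kubectl-karmada sub-command would
// stamp onto existing (old-format) rb/work objects. The key and value
// are assumptions for illustration only.
const nameFormatLabel = "karmada.io/name-format"

// shouldCleanUpOld sketches the controller-side check: during reconcile,
// only objects explicitly marked as old-format, and whose new-format
// replacement already exists, are candidates for deletion.
func shouldCleanUpOld(labels map[string]string, newFormatExists bool) bool {
	_, marked := labels[nameFormatLabel]
	return marked && newFormatExists
}

func main() {
	marked := map[string]string{nameFormatLabel: "v1"}
	fmt.Println(shouldCleanUpOld(marked, true)) // true: safe to delete the old rb/work
	fmt.Println(shouldCleanUpOld(nil, true))    // false: unmarked objects are kept
}
```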
> I think we should change the RB and Work name specification and guarantee that users will not need to redeploy their original resources (such as Deployments and PropagationPolicies), and that resources already dispatched to member clusters will keep working normally when the RB and Work names are upgraded.
+1
> I think we can provide a new sub-command in kubectl-karmada that adds labels to the metadata of old rb and work objects, so that karmada-controller-manager can use them when reconciling rb and work to determine whether the old rb, the old work, and the work's manifest should be deleted.
I didn't get it. Could you give an example?
I can work on this and #2090 if nobody is working on them.
/cc @RainbowMango
Thanks! But it seems we haven't settled on the final solution yet.
/assign @dddddai

@dddddai is leading the effort for this now. We hope to solve this issue in release-1.5.
@likakuli I know you haven't been feeling well recently; best wishes. I'm investigating this issue and will continue to reply with my findings.
> For rb, its name is generated from the original resource object's name and kind, so if there are two resources with the same name and kind but different group/version information in the same namespace, the rb names will be the same too.
This case probably can never happen. We can't apply the same resource (same name, kind, and namespace) with a different group version, because at the etcd layer only the preferred version is stored.
For instance:
```console
-bash-5.0# kubectl apply -f hpav2beta2.yaml
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
horizontalpodautoscaler.autoscaling/php-apache created
-bash-5.0#
-bash-5.0# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/80%   1         10        0          13s
-bash-5.0# kubectl apply -f hpav2.yaml
horizontalpodautoscaler.autoscaling/php-apache configured
-bash-5.0# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/80%   1         10        0          30s
```
The above testing was run against Karmada v1.4.1, which uses kube-apiserver@v1.25.2.
The YAML files:

```console
-bash-5.0# cat hpav2.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
-bash-5.0# cat hpav2beta2.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
```
The two resources you mentioned above are both in the same group, autoscaling. If you create a CRD with a different group but the same kind and version, it will be applied successfully, for example the Kubernetes StatefulSet and the OpenKruise StatefulSet.
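To make the collision concrete, here is a simplified Go sketch of the naming scheme described in this thread (not Karmada's exact code): the rb name is derived from kind and name only, so group information is dropped and the two StatefulSets collide.

```go
package main

import (
	"fmt"
	"strings"
)

// generateBindingName is a simplified sketch of the rb naming scheme
// described above: kind and name only, no group/version component.
func generateBindingName(kind, name string) string {
	return strings.ToLower(kind) + "-" + name
}

func main() {
	// apps/v1 StatefulSet vs. apps.kruise.io/v1beta1 StatefulSet:
	// same kind and name, different groups, identical binding name.
	k8s := generateBindingName("StatefulSet", "web")
	kruise := generateBindingName("StatefulSet", "web")
	fmt.Println(k8s, kruise, k8s == kruise) // statefulset-web statefulset-web true
}
```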
Oh, yeah, you are right! Thanks for reminding me.
So glad to hear from you! How are you doing now?
no fever, wet cough 😄
That's great, you made it! I'm still waiting in fear :) I feel it getting closer and closer.
I'm trying to figure out which PRs/issues should be included in the coming v1.7 release, which is planned for the end of this month. I guess we don't have enough time for this feature, so I'm moving this to v1.8.
Note that we are working on this issue, and we are going to do some preparatory work in v1.7 and continue in v1.8. See #4000 for more details.
**What happened**:

Name collisions happen for both rb and work.

For rb, its name is generated from the original resource object's name and kind, so if there are two resources with the same name and kind but different group/version information in the same namespace, the rb names will be the same too.

For work, its name is generated from the original resource object's namespace and name plus the rb's hash. The namespace, name, and hash can all be the same for different original resource objects, so the work names can collide as well.

Every place that creates or updates an rb or work goes through the CreateOrUpdate function, which overwrites the existing object when a name collision happens. It's dangerous.
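The danger of the CreateOrUpdate pattern can be sketched with a minimal in-memory stand-in (not Karmada's actual client code): objects are keyed by name only, so a second object that generates the same name silently replaces the first instead of failing.

```go
package main

import "fmt"

// store is a toy stand-in for the API server's object storage,
// keyed by object name only.
type store map[string]string

// createOrUpdate mimics the last-writer-wins behavior described above:
// there is no collision check, so an existing object is overwritten.
func createOrUpdate(s store, name, manifest string) {
	s[name] = manifest
}

func main() {
	s := store{}
	// Two different original resources that collapse to the same work
	// name (the name "web-default-7f4b9" is a made-up example).
	createOrUpdate(s, "web-default-7f4b9", "manifest for apps/v1 Deployment web")
	createOrUpdate(s, "web-default-7f4b9", "manifest for apps.kruise.io StatefulSet web")
	// Only one object survives; the first manifest is silently lost.
	fmt.Println(len(s), s["web-default-7f4b9"])
}
```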
**What you expected to happen**:

rb and work names are unique for each resource.

**How to reproduce it (as minimally and precisely as possible)**:

**Anything else we need to know?**:

**Environment**:

- Karmada version (`kubectl-karmada version` or `karmadactl version`):