jiayi-1994 opened this issue 1 week ago
Thanks for your feedback. Could you provide some spiderpool-controller and spiderpool-agent logs, as well as the kubectl describe po output showing the specific error?
The issue has been reproduced, thank you for reporting it.
Misconfigured annotations can also cause the pool to run out, for example setting both v1.multus-cni.io/default-network: default/macvlan-ens3 and k8s.v1.cni.cncf.io/networks: macvlan-ens3.
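For reference, a minimal sketch of the pod metadata this comment seems to describe (the attachment name macvlan-ens3 is taken from the annotations above):

```yaml
metadata:
  annotations:
    # The same macvlan attachment is set both as the default network and as an
    # extra network, which is the misconfiguration this comment warns about.
    v1.multus-cni.io/default-network: default/macvlan-ens3
    k8s.v1.cni.cncf.io/networks: macvlan-ens3
```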
Did you specify an annotation such as ipam.spidernet.io/ippools?
Can you show what it looks like?
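For context, with multiple NICs that annotation usually looks something like the sketch below; the interface names and the pool names sp1/sp2 are placeholders borrowed from the bug description, not the reporter's actual configuration:

```yaml
metadata:
  annotations:
    # Second interface attached via Multus (attachment name is illustrative)
    k8s.v1.cni.cncf.io/networks: macvlan-ens3
    # Bind each interface to its own SpiderIPPool
    ipam.spidernet.io/ippools: |-
      [
        {"interface": "eth0", "ipv4": ["sp1"]},
        {"interface": "net1", "ipv4": ["sp2"]}
      ]
```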
Spiderpool Version
v0.9.7
Main CNI
macvlan
Bug description
With multiple NICs, if NIC 1 successfully gets an IP from SpiderIPPool sp1 but NIC 2 fails to get one (for example because sp2 has run out of IPs), pod creation fails and sp1's IPs leak until that pool runs out as well.
When the Multus CNI ADD call fails, the CNI DEL rollback runs, but the Spiderpool cleanup is skipped, so the IP already allocated from sp1 is never released and the entire SpiderIPPool is eventually exhausted.
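To make the precondition concrete, here is a rough sketch of two pools where this could happen (the pool names follow the description above; the apiVersion, subnets, and IP ranges are assumptions for illustration only):

```yaml
# sp1 has plenty of addresses; sp2 is tiny, so the second NIC's allocation
# fails first and, per this report, leaks the IP already taken from sp1.
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: sp1
spec:
  subnet: 172.18.0.0/16
  ips:
    - 172.18.1.10-172.18.1.50
---
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: sp2
spec:
  subnet: 172.19.0.0/16
  ips:
    - 172.19.1.10-172.19.1.11
```

Repeatedly creating and deleting pods that request both pools (e.g. via ipam.spidernet.io/ippools as shown earlier) should then show sp1's allocated count climbing even though the pods never start, matching the leak described above.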
What did you expect to happen?
How can this be fixed? Or should the SpiderSubnet feature be used instead?
How to reproduce it (as minimally and precisely as possible)
No response
Additional Context
No response