[BUG] Scale down of deployment left pod references undeleted #483
Comments
/cc @mlguerrero12
@mlguerrero12 any update regarding this issue?
No, I haven't worked on this. Will do soon.
Any update on this issue?
I have done some investigation. The issue is happening because leader election fails when we call cleanupFunc (which is IPManagement) at whereabouts/pkg/controlloop/pod.go line 237 (commit c4d2f71).
It fails with this error: `leaderelection.go:336] error initially creating leader election record: the server does not allow this method on the requested resource`
cc @maiqueb — since this was committed by you, some help would be appreciated.
/assign @adilGhaffarDev
Describe the bug
We are seeing a situation in our test cluster where pod references are left undeleted during a scale-down operation of a deployment. The count of stale references increases if we do multiple scale-down/up operations. After scaling the deployment down from 200 replicas to 1, most of the pods stay in the `Terminating` state and take a long time to be deleted from the cluster. After the deletion, some of the pod references are still visible, and their number may grow after further scale up/down cycles. We ran queries at two points during the process: while scaling down from 200 to 1, when most of the pods were in the `Terminating` state, and again after the scale down completed and all replicas were deleted.
We can see one extra pod reference in each of the 2.2.2.0/24, 3.3.3.0/24, and 4.4.4.0/24 ranges. This number will grow if we do multiple scale up/down operations on the deployment.
Expected behavior
All podReferences of deleted pods should be removed from the list; only the podReferences of running pods should remain visible. For example, the expected output in this case, where only one pod is running after the scale down, should be:
To Reproduce
Steps to reproduce the behavior:
1. Run `make kind` (1 control plane and 2 workers).
2. Run `kubectl get ippools.whereabouts.cni.cncf.io -n kube-system 9.9.9.0-24 -o yaml | grep -c podref`; extra podReferences are visible which should have been removed after the deletion of the pods.
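The `grep -c` check above can be tried against a local file first. The IPPool YAML below is a fabricated sample for illustration (the field layout is approximated from whereabouts' IPPool resource; names and podrefs are made up); against a live cluster, pipe the `kubectl get ippools` command from the steps above into the same grep:

```shell
# Write a sample IPPool dump with two allocations, then count podref
# entries. A healthy pool should report exactly one podref per running pod.
cat <<'EOF' > /tmp/ippool-sample.yaml
apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: IPPool
metadata:
  name: 9.9.9.0-24
  namespace: kube-system
spec:
  range: 9.9.9.0/24
  allocations:
    "5":
      podref: default/sample-deployment-abc
    "6":
      podref: default/sample-deployment-def
EOF
grep -c podref /tmp/ippool-sample.yaml   # prints 2
```

If the count exceeds the number of running pods using that range, the pool is carrying stale references.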
- Kubernetes version (use `kubectl version`): 1.30
- OS (output of `uname -a`): N/A

Additional info / context
Add any other information / context about the problem here.