mirror of https://github.com/k3s-io/k3s
The test used to scale the StatefulSet down to 0, wait for ListPods to return 0 matching Pods, and then scale the StatefulSet back up. This was prone to a race in which the StatefulSet was told to scale back up before it had observed its own deletion of the last Pod, as evidenced by logs showing the creation of Pod ss-1 prior to the creation of the replacement Pod ss-0. We now wait for the controller to observe all deletions before scaling it back up. This should fix flakes of the form:

```
Too many pods scheduled, expected 1 got 2
```
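A minimal sketch of that wait-before-scale-up pattern, assuming the standard client-go API. The helper names (`scaleStatefulSet`, `waitForControllerToObserveScaleDown`, `restartStatefulSet`) and the exact readiness condition (ObservedGeneration caught up and `Status.Replicas == 0`) are illustrative assumptions, not the framework's actual helpers or the precise check used in the fix.

```go
// Hypothetical sketch of the scale-down / wait / scale-up sequence
// described in the commit message above. Names are illustrative.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet sets .spec.replicas through a simple get/update.
func scaleStatefulSet(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.Replicas = &replicas
	_, err = c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}

// waitForControllerToObserveScaleDown blocks until the StatefulSet
// controller reports that it has processed the scale-to-zero and that
// none of its Pods remain, so a later scale-up cannot race with the
// controller's stale view of the old Pods.
func waitForControllerToObserveScaleDown(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// ObservedGeneration catching up means the controller has seen
		// the scale-to-zero spec; Status.Replicas == 0 means it has
		// also observed the deletion of every Pod it created.
		return ss.Status.ObservedGeneration >= ss.Generation && ss.Status.Replicas == 0, nil
	})
}

// restartStatefulSet runs the scale-down / wait / scale-up sequence.
func restartStatefulSet(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	if err := scaleStatefulSet(ctx, c, ns, name, 0); err != nil {
		return fmt.Errorf("scale down: %w", err)
	}
	if err := waitForControllerToObserveScaleDown(ctx, c, ns, name); err != nil {
		return fmt.Errorf("wait for controller to observe scale down: %w", err)
	}
	return scaleStatefulSet(ctx, c, ns, name, replicas)
}
```

Waiting on the controller's own status (its count of Pods it has created) rather than on a raw ListPods call is what closes the race: the controller cannot report zero replicas until it has itself observed the deletion of the last Pod it owns, so the subsequent scale-up starts from a consistent view.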
BUILD
OWNERS
cronjob.go
daemon_restart.go
daemon_set.go
deployment.go
disruption.go
framework.go
job.go
rc.go
replica_set.go
statefulset.go
types.go