mirror of https://github.com/k3s-io/k3s
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fixes the races around devicemanager Allocate() and endpoint deletion.

There is a race in predicateAdmitHandler Admit(): getNodeAnyWayFunc() could return a Node with non-zero device plugin allocatable for a non-existent endpoint. That race can happen when a device plugin fails, but is more likely across a kubelet restart, because with the current registration model there is a time gap between the kubelet restarting and the device plugin re-registering. During this window, even though the devicemanager may have removed the resource during the initial GetCapacity() call, the kubelet may overwrite the device plugin resource capacity/allocatable with the old value when a node update later arrives from the API server. This could cause a pod to be started without the proper device runtime config set.

To solve this, introduce endpointStopGracePeriod. When a device plugin fails, do not immediately remove its endpoint; instead, set stopTime on the endpoint. During kubelet restart, create endpoints with stopTime set for every checkpointed registered resource. An endpoint is considered to be in its stop grace period if its stopTime is set. This lets the devicemanager track which resources it should handle during the time gap. When an endpoint's stop grace period expires, remove the endpoint and its resource; this allows the resource to be exported through other channels (e.g., by directly updating node status through the API server) if there is such a use case. endpointStopGracePeriod is currently set to 5 minutes.

Because an endpoint is no longer removed immediately upon disconnection, mark all of its devices unhealthy so that the allocatable change is signaled to the scheduler and no more pods requesting the resource are scheduled onto the node. While a device plugin endpoint is in its stop grace period, pods requesting the corresponding resource fail the admission handler.

Tested: ran the GPUDevicePlugin e2e_node test 100 times; all passed.

**Which issue(s) this PR fixes**: Fixes https://github.com/kubernetes/kubernetes/issues/60176

**Release note**:

```release-note
Fixes the races around devicemanager Allocate() and endpoint deletion.
```
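The following is a minimal Go sketch of the stop-grace-period idea described above, not the actual kubelet devicemanager code: the `endpoint` and `device` types and the `stop`, `stopGracePeriodExpired`, and `admitPod` helpers are illustrative names chosen here, and only the 5-minute grace period comes from the change itself.

```go
// Sketch of the endpoint stop-grace-period behaviour described in the PR text.
package main

import (
	"fmt"
	"time"
)

// endpointStopGracePeriod mirrors the 5-minute window mentioned in the change.
const endpointStopGracePeriod = 5 * time.Minute

type device struct {
	ID      string
	Healthy bool
}

// endpoint tracks a device plugin connection. A zero stopTime means the plugin
// is still connected; a non-zero stopTime means it has stopped and is in its
// grace period until stopTime + endpointStopGracePeriod.
type endpoint struct {
	resourceName string
	devices      map[string]*device
	stopTime     time.Time
}

// stop records the disconnect time and marks all devices unhealthy so the
// scheduler sees the allocatable change instead of stale capacity.
func (e *endpoint) stop() {
	e.stopTime = time.Now()
	for _, d := range e.devices {
		d.Healthy = false
	}
}

// stopGracePeriodExpired reports whether the endpoint (and its resource)
// should now be removed entirely.
func (e *endpoint) stopGracePeriodExpired() bool {
	return !e.stopTime.IsZero() && time.Since(e.stopTime) > endpointStopGracePeriod
}

// admitPod rejects pods that request a resource whose endpoint is stopped but
// still within its grace period, matching the admission behaviour described above.
func admitPod(endpoints map[string]*endpoint, requested string) error {
	ep, ok := endpoints[requested]
	if !ok {
		return fmt.Errorf("unknown resource %q", requested)
	}
	if !ep.stopTime.IsZero() {
		return fmt.Errorf("resource %q is in endpoint stop grace period", requested)
	}
	return nil
}

func main() {
	eps := map[string]*endpoint{
		"example.com/gpu": {
			resourceName: "example.com/gpu",
			devices:      map[string]*device{"gpu-0": {ID: "gpu-0", Healthy: true}},
		},
	}

	// Device plugin disconnects: keep the endpoint, record stopTime, and mark
	// devices unhealthy instead of deleting the resource outright.
	eps["example.com/gpu"].stop()

	if err := admitPod(eps, "example.com/gpu"); err != nil {
		fmt.Println("admission rejected:", err)
	}
	fmt.Println("grace period expired:", eps["example.com/gpu"].stopGracePeriodExpired())
}
```

Keeping the stopped endpoint around with unhealthy devices, rather than deleting it immediately, is what lets the devicemanager distinguish a temporarily unavailable resource from one that was never registered during the restart gap.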
builder
conformance
environment
jenkins
perftype
remote
runner
services
system
BUILD
OWNERS
README.md
apparmor_test.go
benchmark_util.go
container.go
container_log_rotation_test.go
container_manager_test.go
cpu_manager_test.go
critical_pod_test.go
density_test.go
device_plugin.go
doc.go
docker_test.go
docker_util.go
dockershim_checkpoint_test.go
dynamic_kubelet_config_test.go
e2e_node_suite_test.go
eviction_test.go
framework.go
garbage_collector_test.go
gke_environment_test.go
gpu_device_plugin.go
gpus.go
gubernator.sh
hugepages_test.go
image_id_test.go
image_list.go
kubelet_test.go
lifecycle_hook_test.go
log_path_test.go
mirror_pod_test.go
node_container_manager_test.go
node_problem_detector_linux.go
pods_container_manager_test.go
resource_collector.go
resource_usage_test.go
restart_test.go
runtime_conformance_test.go
security_context_test.go
simple_mount.go
summary_test.go
util.go
volume_manager_test.go
README.md
See e2e-node-tests