mirror of https://github.com/k3s-io/k3s
Automatic merge from submit-queue (batch tested with PRs 60900, 62215, 62196). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

[Flaky test fix] Use memory.force_empty before and after eviction tests

**What this PR does / why we need it** (copied from https://github.com/kubernetes/kubernetes/pull/60720):

MemoryAllocatableEviction tests have been somewhat flaky: https://k8s-testgrid.appspot.com/sig-node-kubelet#kubelet-serial-gce-e2e&include-filter-by-regex=MemoryAllocatable

The failure on the flakes is ["Pod ran to completion"](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/3785#k8sio-memoryallocatableeviction-slow-serial-disruptive-when-we-run-containers-that-should-cause-memorypressure-should-eventually-evict-all-of-the-correct-pods). Looking at [an example log](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/3785/artifacts/tmp-node-e2e-6070a774-cos-stable-63-10032-71-0/kubelet.log) (search for memory-hog-pod), we can see that this pod fails admission because the allocatable memory threshold has already been crossed:

`eviction manager: thresholds - ignoring grace period: threshold [signal=allocatableMemory.available, quantity=250Mi] observed 242404Ki`

https://github.com/kubernetes/kubernetes/pull/60720 wasn't effective. To clean up after each eviction test and prepare for the next one, use memory.force_empty to make the kernel reclaim memory in the allocatable cgroup before and after eviction tests.

**Special notes for your reviewer**: I tested to make sure this doesn't break the Cgroup Manager tests. It should work on both cgroupfs- and systemd-based systems, although I have only tested it on cgroupfs.

**Release note**:
```release-note
NONE
```

/assign @yujuhong @Random-Liu
/sig node
/priority important-soon
/kind bug

It's getting a little late in the release cycle, so we can probably wait until after code freeze is lifted for this.
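As a rough illustration of the mechanism described above (not the actual helper added in this PR), forcing reclaim on a cgroup v1 memory controller amounts to writing to its `memory.force_empty` file. The function name, cgroup root, and `kubepods` cgroup name below are assumptions for a cgroupfs-mounted system:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// forceEmptyAllocatableCgroup asks the kernel to reclaim as much memory as
// possible from the given memory cgroup, e.g. before/after an eviction test.
// Hypothetical sketch: paths and names are illustrative, not taken from
// eviction_test.go.
func forceEmptyAllocatableCgroup(cgroupRoot, allocatableCgroup string) error {
	// On cgroupfs, the memory controller is mounted under <root>/memory; the
	// allocatable cgroup for pods is commonly named "kubepods" (assumption).
	path := filepath.Join(cgroupRoot, "memory", allocatableCgroup, "memory.force_empty")
	// Writing a value (conventionally "0") to memory.force_empty triggers reclaim.
	if err := os.WriteFile(path, []byte("0"), 0644); err != nil {
		return fmt.Errorf("failed to force empty %s: %v", path, err)
	}
	return nil
}

func main() {
	if err := forceEmptyAllocatableCgroup("/sys/fs/cgroup", "kubepods"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

On systemd-managed nodes the cgroup path differs (slice naming), which is why the PR notes it should work on both cgroupfs- and systemd-based systems.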
builder
conformance
environment
jenkins
perftype
remote
runner
services
system
BUILD
OWNERS
README.md
apparmor_test.go
benchmark_util.go
container.go
container_log_rotation_test.go
container_manager_test.go
cpu_manager_test.go
critical_pod_test.go
density_test.go
device_plugin.go
doc.go
docker_test.go
docker_util.go
dockershim_checkpoint_test.go
dynamic_kubelet_config_test.go
e2e_node_suite_test.go
eviction_test.go
framework.go
garbage_collector_test.go
gke_environment_test.go
gpu_device_plugin.go
gubernator.sh
hugepages_test.go
image_id_test.go
image_list.go
kubelet_test.go
lifecycle_hook_test.go
log_path_test.go
mirror_pod_test.go
node_container_manager_test.go
node_problem_detector_linux.go
pods_container_manager_test.go
resource_collector.go
resource_usage_test.go
restart_test.go
runtime_conformance_test.go
security_context_test.go
simple_mount.go
summary_test.go
util.go
volume_manager_test.go
README.md
See e2e-node-tests