mirror of https://github.com/k3s-io/k3s
Automatic merge from submit-queue

**Add memory available to summary stats provider**

To support out-of-resource killing when a node is low on memory, we want to let operators specify eviction thresholds based on available memory rather than memory usage, which is easier to reason about on heterogeneous nodes. For example, a valid eviction threshold would be:

* If `node.memory.available` < 200Mi for 30s, then evict pod(s).

For the node, `memory.availableBytes` is always known because `memory.limit_in_bytes` is always known for the root cgroup. For individual containers in pods, we only populate `availableBytes` if the container was launched with a memory limit specified. When no memory limit is specified, cgroupfs sets `memory.limit_in_bytes` to a value of 1 << 63, so we check for a similarly large value to detect unbounded limits and skip setting `memory.availableBytes`.

FYI @vishh @timstclair - as discussed on Slack.

/cc @kubernetes/sig-node @kubernetes/rh-cluster-infra
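A minimal Go sketch of the unbounded-limit check described above. The `availableBytes` helper and the `maxMemoryLimit` sentinel are illustrative names, not the actual summary-stats code; the sentinel value is only one assumption of how "a similar max value" near 1 << 63 could be detected. In the real provider the limit and working-set figures come from the cgroup stats, so this only shows the guard and the subtraction.

```go
package main

import "fmt"

// cgroupfs reports a near-max value (on the order of 1 << 63) for
// memory.limit_in_bytes when no limit was configured; anything at or
// above this illustrative sentinel is treated as "unbounded".
const maxMemoryLimit uint64 = 1 << 62

// availableBytes returns (available, ok); ok is false when the cgroup
// has no effective memory limit, in which case the caller should leave
// memory.availableBytes unset.
func availableBytes(limitInBytes, workingSetBytes uint64) (uint64, bool) {
	if limitInBytes >= maxMemoryLimit {
		return 0, false // unbounded limit: skip availableBytes
	}
	if workingSetBytes >= limitInBytes {
		return 0, true // already at or over the limit
	}
	return limitInBytes - workingSetBytes, true
}

func main() {
	// Container launched with a 200Mi limit and a 150Mi working set.
	if avail, ok := availableBytes(200<<20, 150<<20); ok {
		fmt.Printf("memory.availableBytes = %d\n", avail)
	}

	// Container launched without a limit: cgroupfs reports a near-max limit.
	if _, ok := availableBytes(1<<63-4096, 150<<20); !ok {
		fmt.Println("no memory limit; availableBytes omitted")
	}
}
```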
| Name |
|---|
| api/v1alpha1/stats |
| cadvisor |
| client |
| cm |
| config |
| container |
| custommetrics |
| dockertools |
| envvars |
| leaky |
| lifecycle |
| metrics |
| network |
| pleg |
| pod |
| prober |
| qos |
| rkt |
| server |
| status |
| types |
| util |
| OWNERS |
| container_bridge.go |
| disk_manager.go |
| disk_manager_test.go |
| doc.go |
| flannel_helper.go |
| image_manager.go |
| image_manager_test.go |
| kubelet.go |
| kubelet_test.go |
| networks.go |
| oom_watcher.go |
| oom_watcher_test.go |
| pod_workers.go |
| pod_workers_test.go |
| reason_cache.go |
| reason_cache_test.go |
| root_context_linux.go |
| root_context_unsupported.go |
| runonce.go |
| runonce_test.go |
| runtime.go |
| util.go |
| volume_manager.go |
| volumes.go |