Automatic merge from submit-queue
Introduce node memory pressure condition to scheduler
Following the work done by @derekwaynecarr in https://github.com/kubernetes/kubernetes/pull/21274, this introduces a memory pressure predicate for the scheduler.
Missing:
* write unit tests
* test the implementation
At the moment this is a heads-up for further discussion of how the new node memory pressure condition should be handled in the generic scheduler.
**Additional info**
* Based on [1], only best-effort pods are subject to filtering.
* Based on [2], a pod is best-effort "iff requests & limits are not specified for any resource across all containers".
[1] 542668cc79/docs/proposals/kubelet-eviction.md (scheduler)
[2] https://github.com/kubernetes/kubernetes/pull/14943
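As a rough illustration of how those two rules could combine, here is a minimal sketch of the predicate: a pod is rejected only if it is best-effort and the node is reporting memory pressure. The types and names below are simplified stand-ins, not the actual scheduler API.

```go
// Simplified stand-in types; the real scheduler works against the Kubernetes
// API objects and the node's MemoryPressure condition.
package predicates

// Container only carries the fields relevant to the best-effort check.
type Container struct {
	Requests map[string]string // resource name -> quantity, e.g. "memory" -> "64Mi"
	Limits   map[string]string
}

type Pod struct {
	Containers []Container
}

type Node struct {
	// MemoryPressure is true when the node's MemoryPressure condition is True.
	MemoryPressure bool
}

// isBestEffort is true iff requests & limits are not specified for any
// resource across all containers (per [2] above).
func isBestEffort(pod *Pod) bool {
	for _, c := range pod.Containers {
		if len(c.Requests) != 0 || len(c.Limits) != 0 {
			return false
		}
	}
	return true
}

// CheckNodeMemoryPressure admits any non-best-effort pod, and admits a
// best-effort pod only when the node is not under memory pressure (per [1]).
func CheckNodeMemoryPressure(pod *Pod, node *Node) bool {
	if !isBestEffort(pod) {
		return true
	}
	return !node.MemoryPressure
}
```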
For certain volume types (e.g. AWS EBS or GCE PD), only a limited
number of such volumes can be attached to a given node. This commit
introduces a predicate which allows cluster admins to cap
the number of volumes of a particular type that can be attached to a
given node.
The volume type is configurable by passing a pair of filter functions,
and the maximum number of such volumes is configurable to allow node
admins to reserve a certain number of volumes for system use.
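A minimal sketch of that shape is below. The types and names are simplified stand-ins; in particular, the pair of filter functions described above is collapsed into a single filter here for brevity.

```go
package predicates

// Volume is a stand-in for a pod volume source.
type Volume struct {
	EBSVolumeID string // non-empty when backed by an AWS EBS volume
	GCEPDName   string // non-empty when backed by a GCE PD
}

// VolumeFilter reports whether a volume counts against the cap and returns a
// unique ID so the same volume referenced twice is only counted once.
type VolumeFilter func(v Volume) (id string, matches bool)

func EBSFilter(v Volume) (string, bool)   { return v.EBSVolumeID, v.EBSVolumeID != "" }
func GCEPDFilter(v Volume) (string, bool) { return v.GCEPDName, v.GCEPDName != "" }

// MaxVolumeCountPredicate builds a predicate that rejects a pod if scheduling
// it onto the node would exceed maxVolumes matching volumes.
func MaxVolumeCountPredicate(filter VolumeFilter, maxVolumes int) func(podVolumes, nodeVolumes []Volume) bool {
	return func(podVolumes, nodeVolumes []Volume) bool {
		attached := map[string]bool{}
		for _, v := range nodeVolumes {
			if id, ok := filter(v); ok {
				attached[id] = true
			}
		}
		for _, v := range podVolumes {
			if id, ok := filter(v); ok {
				attached[id] = true
			}
		}
		return len(attached) <= maxVolumes
	}
}
```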
By default, the predicate is exposed as MaxEBSVolumeCount and
MaxGCEPDVolumeCount (for AWS ElasticBlockStore and GCE PersistentDisk
volumes, respectively), each of which can be configured using the
`KUBE_MAX_PD_VOLS` environment variable.
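A small sketch of how that environment variable might be resolved into the cap, falling back to a per-type default when it is unset or invalid (the default passed in is the caller's choice; no particular value is implied here):

```go
package predicates

import (
	"os"
	"strconv"
)

// maxVolumesFromEnv returns the cap from KUBE_MAX_PD_VOLS if it is set to a
// positive integer, otherwise defaultCount (e.g. a per-volume-type default).
func maxVolumesFromEnv(defaultCount int) int {
	if raw := os.Getenv("KUBE_MAX_PD_VOLS"); raw != "" {
		if n, err := strconv.Atoi(raw); err == nil && n > 0 {
			return n
		}
	}
	return defaultCount
}
```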
Fixes #7835
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and
to support decoupled operation (e.g. bare metal), we tag the volume with
our placement labels. On AWS, this is done automatically by an
admission controller when a PersistentVolume backed by an EBS volume is
created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
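Putting those pieces together, a minimal sketch of the zone check: the admission controller is assumed to have copied the EBS volume's zone onto the PersistentVolume's labels, so the predicate only has to compare that label with the node's. The label key, types, and names below are illustrative stand-ins rather than the exact implementation.

```go
// Simplified stand-ins; the real predicate inspects the pod's
// PersistentVolumeClaims, resolves them to PersistentVolumes, and compares
// labels on the resolved volumes with labels on the candidate node.
package predicates

// zoneLabel is the conventional failure-domain zone label key.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// Labeled is a stand-in for any labelled API object (node or PersistentVolume).
type Labeled struct {
	Labels map[string]string
}

// NoVolumeZoneConflict is true when every zone-labelled volume used by the
// pod lies in the same zone as the candidate node. Volumes without a zone
// label (e.g. not yet tagged) do not constrain placement.
func NoVolumeZoneConflict(node Labeled, podVolumes []Labeled) bool {
	nodeZone, nodeHasZone := node.Labels[zoneLabel]
	for _, pv := range podVolumes {
		pvZone, ok := pv.Labels[zoneLabel]
		if !ok {
			continue
		}
		if !nodeHasZone || nodeZone != pvZone {
			return false
		}
	}
	return true
}
```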
The docs don't go into much detail about how the scheduler actually
works, which is probably a good thing both for readability and because
it makes it easier to tweak the zone-spreading approach in the future.
However, we should include some information that we do spread across
zones if zone information is present on the nodes.
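For reference, the zone-spreading behaviour can be sketched as a simple scoring rule: among nodes that pass the predicates, prefer nodes in zones that currently run fewer pods of the same service or controller. The names and scoring formula below are illustrative only, not the scheduler's actual priority function.

```go
package priorities

// zoneLabel is the conventional failure-domain zone label key.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// ZoneSpreadScore scores a candidate node in [0, maxScore]: nodes in zones
// that already run fewer matching pods (pods of the same service/controller)
// score higher, so replicas spread across zones. Nodes without zone
// information get no zone bonus.
func ZoneSpreadScore(nodeLabels map[string]string, matchingPodsPerZone map[string]int, maxScore int) int {
	zone, ok := nodeLabels[zoneLabel]
	if !ok {
		return 0
	}
	maxCount := 0
	for _, count := range matchingPodsPerZone {
		if count > maxCount {
			maxCount = count
		}
	}
	if maxCount == 0 {
		// No matching pods anywhere yet: every zone is equally good.
		return maxScore
	}
	// The busiest zone scores 0; an empty zone scores maxScore.
	return maxScore * (maxCount - matchingPodsPerZone[zone]) / maxCount
}
```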