Merge pull request #28150 from xiangpengzhao/fix-md-issue-link

Automatic merge from submit-queue

Add links to issues referenced in some design documents

Add links to the issues referenced in `nodeaffinity.md` and `podaffinity.md`, so that readers can jump to each issue directly. Also fix the broken formatting in `podaffinity.md` that occurs when `#` appears at the beginning of a line.
k8s-merge-robot 2016-07-01 23:33:35 -07:00 committed by GitHub
commit b8aaf8d933
3 changed files with 41 additions and 32 deletions

View File

@@ -211,7 +211,7 @@ Users should not start using `NodeAffinity` until the full implementation has
been in Kubelet and the master for enough binary versions that we feel
comfortable that we will not need to roll back either Kubelet or master to a
version that does not support them. Longer-term we will use a programatic
-approach to enforcing this (#4855).
+approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
## Implementation plan
@@ -234,7 +234,7 @@ longer satisfies `RequiredDuringSchedulingRequiredDuringExecution` (see [this co
We assume Kubelet publishes labels describing the node's membership in all of
the relevant scheduling domains (e.g. node name, rack name, availability zone
-name, etc.). See #9044.
+name, etc.). See [#9044](https://github.com/kubernetes/kubernetes/issues/9044).
## Extensibility
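
The hunk above assumes each node carries Kubelet-published labels for its scheduling domains (node name, rack, zone, and so on). As a present-day illustration only, not part of this change, here is a minimal pod spec using the `NodeAffinity` API that eventually shipped, keyed on the current well-known zone label (which post-dates this proposal); the pod name and the zone value `us-central1-a` are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod                        # hypothetical example name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone   # zone label published for each node
            operator: In
            values:
            - us-central1-a                    # placeholder zone value
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```
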
@@ -268,10 +268,11 @@ Are there any other fields we should convert from `map[string]string` to
## Related issues
-The review for this proposal is in #18261.
+The review for this proposal is in [#18261](https://github.com/kubernetes/kubernetes/issues/18261).
-The main related issue is #341. Issue #367 is also related. Those issues
-reference other related issues.
+The main related issue is [#341](https://github.com/kubernetes/kubernetes/issues/341).
+Issue [#367](https://github.com/kubernetes/kubernetes/issues/367) is also related.
+Those issues reference other related issues.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@@ -430,8 +430,8 @@ foreach node A of {N}
In this section we discuss three issues with RequiredDuringScheduling
anti-affinity: Denial of Service (DoS), co-existing with daemons, and
-determining which pod(s) to kill. See issue #18265 for additional discussion of
-these topics.
+determining which pod(s) to kill. See issue [#18265](https://github.com/kubernetes/kubernetes/issues/18265)
+for additional discussion of these topics.
### Denial of Service
@@ -501,8 +501,9 @@ A cluster administrator may wish to allow pods that express anti-affinity
against all pods, to nonetheless co-exist with system daemon pods, such as those
run by DaemonSet. In principle, we would like the specification for
RequiredDuringScheduling inter-pod anti-affinity to allow "toleration" of one or
-more other pods (see #18263 for a more detailed explanation of the toleration
-concept). There are at least two ways to accomplish this:
+more other pods (see [#18263](https://github.com/kubernetes/kubernetes/issues/18263)
+for a more detailed explanation of the toleration concept).
+There are at least two ways to accomplish this:
* Scheduler special-cases the namespace(s) where daemons live, in the
sense that it ignores pods in those namespaces when it is
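
For concreteness, here is a rough sketch of the case discussed in the hunk above, a pod expressing RequiredDuringScheduling anti-affinity against all pods, written with the inter-pod anti-affinity API that later shipped rather than the exact syntax this proposal had in mind; the pod name is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exclusive-pod                        # hypothetical example name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector: {}                    # empty selector matches every pod
        topologyKey: kubernetes.io/hostname  # i.e. refuse to share a node
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

In the shipped API, only pods in the same namespace are considered by default, which is one flavor of the namespace special-casing described in the first bullet above.
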
@@ -562,12 +563,12 @@ that trigger killing of P? More generally, how long should the system wait
before declaring that P's affinity is violated? (Of course affinity is expressed
in terms of label selectors, not for a specific pod, but the scenario is easier
to describe using a concrete pod.) This is closely related to the concept of
-forgiveness (see issue #1574). In theory we could make this time duration be
-configurable by the user on a per-pod basis, but for the first version of this
-feature we will make it a configurable property of whichever component does the
-killing and that applies across all pods using the feature. Making it
-configurable by the user would require a nontrivial change to the API syntax
-(since the field would only apply to
+forgiveness (see issue [#1574](https://github.com/kubernetes/kubernetes/issues/1574)).
+In theory we could make this time duration be configurable by the user on a per-pod
+basis, but for the first version of this feature we will make it a configurable
+property of whichever component does the killing and that applies across all pods
+using the feature. Making it configurable by the user would require a nontrivial
+change to the API syntax (since the field would only apply to
RequiredDuringSchedulingRequiredDuringExecution affinity).
## Implementation plan
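
As background for the forgiveness discussion above: the per-pod "how long to wait before evicting" idea from #1574 later surfaced in the Kubernetes API as `tolerationSeconds` on `NoExecute` tolerations. The sketch below shows that analogous, shipped mechanism for orientation only; it is not the affinity field this proposal describes, and the pod name is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: forgiving-pod            # hypothetical example name
spec:
  tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300       # tolerate the condition for 5 minutes before eviction
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```
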
@@ -602,7 +603,7 @@ Do so in a way that addresses the "determining which pod(s) to kill" issue.
We assume Kubelet publishes labels describing the node's membership in all of
the relevant scheduling domains (e.g. node name, rack name, availability zone
-name, etc.). See #9044.
+name, etc.). See [#9044](https://github.com/kubernetes/kubernetes/issues/9044).
## Backward compatibility
@@ -612,7 +613,7 @@ Users should not start using `Affinity` until the full implementation has been
in Kubelet and the master for enough binary versions that we feel comfortable
that we will not need to roll back either Kubelet or master to a version that
does not support them. Longer-term we will use a programmatic approach to
-enforcing this (#4855).
+enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
## Extensibility
@@ -673,23 +674,27 @@ pod to name the data rather than the node.
## Related issues
-The review for this proposal is in #18265.
+The review for this proposal is in [#18265](https://github.com/kubernetes/kubernetes/issues/18265).
The topic of affinity/anti-affinity has generated a lot of discussion. The main
-issue is #367 but #14484/#14485, #9560, #11369, #14543, #11707, #3945, #341,
-# 1965, and #2906 all have additional discussion and use cases.
+issue is [#367](https://github.com/kubernetes/kubernetes/issues/367)
+but [#14484](https://github.com/kubernetes/kubernetes/issues/14484)/[#14485](https://github.com/kubernetes/kubernetes/issues/14485),
+[#9560](https://github.com/kubernetes/kubernetes/issues/9560), [#11369](https://github.com/kubernetes/kubernetes/issues/11369),
+[#14543](https://github.com/kubernetes/kubernetes/issues/14543), [#11707](https://github.com/kubernetes/kubernetes/issues/11707),
+[#3945](https://github.com/kubernetes/kubernetes/issues/3945), [#341](https://github.com/kubernetes/kubernetes/issues/341),
+[#1965](https://github.com/kubernetes/kubernetes/issues/1965), and [#2906](https://github.com/kubernetes/kubernetes/issues/2906)
+all have additional discussion and use cases.
As the examples in this document have demonstrated, topological affinity is very
useful in clusters that are spread across availability zones, e.g. to co-locate
pods of a service in the same zone to avoid a wide-area network hop, or to
-spread pods across zones for failure tolerance. #17059, #13056, #13063, and
-# 4235 are relevant.
+spread pods across zones for failure tolerance. [#17059](https://github.com/kubernetes/kubernetes/issues/17059),
+[#13056](https://github.com/kubernetes/kubernetes/issues/13056), [#13063](https://github.com/kubernetes/kubernetes/issues/13063),
+and [#4235](https://github.com/kubernetes/kubernetes/issues/4235) are relevant.
-Issue #15675 describes connection affinity, which is vaguely related.
+Issue [#15675](https://github.com/kubernetes/kubernetes/issues/15675) describes connection affinity, which is vaguely related.
-This proposal is to satisfy #14816.
+This proposal is to satisfy [#14816](https://github.com/kubernetes/kubernetes/issues/14816).
## Related work
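
To make the zone-spreading use case in the hunk above concrete, here is a sketch using the inter-pod anti-affinity API that ultimately shipped: each replica carrying the (assumed) label `app: web` refuses to share a zone with another such replica. The pod name, label, and image are placeholders; co-locating within a zone instead would use `podAffinity` with the same topology key.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-replica              # hypothetical example name
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: topology.kubernetes.io/zone   # at most one matching pod per zone
  containers:
  - name: web
    image: registry.k8s.io/pause:3.9
```
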

View File

@@ -45,7 +45,8 @@ nodes with a particular piece of hardware could be reserved for pods that
require that hardware, or a node could be marked as unschedulable when it is
being drained before shutdown, or a node could trigger evictions when it
experiences hardware or software problems or abnormal node configurations; see
-issues #17190 and #3885 for more discussion.
+issues [#17190](https://github.com/kubernetes/kubernetes/issues/17190) and
+[#3885](https://github.com/kubernetes/kubernetes/issues/3885) for more discussion.
## Taints, tolerations, and dedicated nodes
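
The dedicated-nodes use case above can be sketched with the taints/tolerations API as it eventually shipped: an administrator taints a node, for example `kubectl taint nodes node1 dedicated=group-a:NoSchedule` (node name and group value are placeholders), and only pods that tolerate the taint may be scheduled there. Note that a toleration only permits scheduling onto the tainted node; keeping the dedicated pods off other nodes would additionally use node affinity or a node selector.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-workload       # hypothetical example name
spec:
  tolerations:
  - key: dedicated               # matches the taint applied by the administrator
    operator: Equal
    value: group-a
    effect: NoSchedule
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```
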
@@ -274,7 +275,8 @@ taints and tolerations. Obviously this makes it impossible to securely enforce
rules like dedicated nodes. We need some mechanism that prevents regular users
from mutating the `Taints` field of `NodeSpec` (probably we want to prevent them
from mutating any fields of `NodeSpec`) and from mutating the `Tolerations`
-field of their pods. #17549 is relevant.
+field of their pods. [#17549](https://github.com/kubernetes/kubernetes/issues/17549)
+is relevant.
Another security vulnerability arises if nodes are added to the cluster before
receiving their taint. Thus we need to ensure that a new node does not become
@@ -303,14 +305,15 @@ Users should not start using taints and tolerations until the full
implementation has been in Kubelet and the master for enough binary versions
that we feel comfortable that we will not need to roll back either Kubelet or
master to a version that does not support them. Longer-term we will use a
-progamatic approach to enforcing this (#4855).
+progamatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
## Related issues
-This proposal is based on the discussion in #17190. There are a number of other
-related issues, all of which are linked to from #17190.
+This proposal is based on the discussion in [#17190](https://github.com/kubernetes/kubernetes/issues/17190).
+There are a number of other related issues, all of which are linked to from
+[#17190](https://github.com/kubernetes/kubernetes/issues/17190).
-The relationship between taints and node drains is discussed in #1574.
+The relationship between taints and node drains is discussed in [#1574](https://github.com/kubernetes/kubernetes/issues/1574).
The concepts of taints and tolerations were originally developed as part of the
Omega project at Google.