Automatic merge from submit-queue
Add gnufied as reviewer for pkg/volume
I have helped review and contributed code to this
area already.
cc @saad-ali @jsafrane @childsb
Automatic merge from submit-queue
fc: Drop multipath.conf snippet
**What this PR does / why we need it**:
Removes the multipath.conf snippet. The code does not make use of it, nor does it ensure that it is getting used, and it should in any case be handled elsewhere.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
A minimalistic multipath.conf got written, but it was useless, as
it is unclear whether multipathd is running, and no config reload
was triggered either.
This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.
Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
Out-of-tree external provisioners have the same purpose as in-tree provisioners. As external provisioners work with PV and PVC data structures, it is advantageous to import certain Kubernetes packages instead of copy-pasting the Kubernetes code.
That's why the constants are made public, so that they can be used in an external provisioner.
Each major interface is now in its own file. Any package-private
functions that are referenced only by a particular module were also
moved to the corresponding file. All common helper functions were moved
to gce_util.go.
This change is a pure movement of code; no semantic changes were made.
Automatic merge from submit-queue
Fix adding disks to more than one scsi adapter. Fixes #42399
**What this PR does / why we need it**: Allows a single node to use more than 16 disks.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42399
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix adding disks to more than one scsi adapter.
```
Auto-generated via:
git grep -l [Ss]uccesfully | xargs sed -ri 's/([sS])uccesfully/\1uccessfully/g'
I noticed this when running kube-scheduler with --v4 and it is annoying.
Then manually reverted the changes to the vendored bits.
Automatic merge from submit-queue (batch tested with PRs 43533, 43539)
kuberuntime: don't override the pod IP for pods using host network
This fixes the issue of not passing pod IP via downward API for host network pods.
Automatic merge from submit-queue (batch tested with PRs 43465, 43529, 43474, 43521)
kubelet/cni: hook network plugin Status() up to CNI network discovery
Ensure that the plugin returns NotReady status until there is a
CNI network available which can be used to set up pods.
Fixes: https://github.com/kubernetes/kubernetes/issues/43014
I think the only reason it wasn't done like this in the first place was that the dynamic "reread /etc/cni/net.d every 10s forever" was added long after the Status() hook was. What do you think?
@freehan @caseydavenport @luxas @jbeda
Automatic merge from submit-queue
Disable readyReplicas validation for Deployments
Because there is no readyReplicas field in 1.5, when we upgrade to 1.6 and the
controller tries to update the Deployment, the update will be denied by
validation because the pre-existing availableReplicas field is greater
than readyReplicas (normally readyReplicas should always be greater than
or equal to availableReplicas).
Fixes https://github.com/kubernetes/kubernetes/issues/43392
@kubernetes/sig-apps-bugs
Because there is no readyReplicas field in 1.5, when we upgrade to 1.6 and the
controller tries to update the Deployment, the update will be denied by
validation because the pre-existing availableReplicas field is greater
than readyReplicas (normally readyReplicas should always be greater than
or equal to availableReplicas).
Automatic merge from submit-queue
Add validation for affinities and taints/tolerations annotations.
It fixes annotation validation issues for pod/node affinity and taint/toleration annotations in the 1.5-to-1.6 upgrade tests, as discussed in issue https://github.com/kubernetes/kubernetes/issues/43228 .
@davidopp @derekwaynecarr @kubernetes/sig-scheduling-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 43481, 43419, 42741, 43480)
controller: work around milliseconds skew in AddAfter
AddAfter does not requeue precisely after the provided time and may
skew by some milliseconds. This really matters because controllers
don't relist often, so a check missed because of a millisecond difference
essentially drops the key. For example, in [1] the test requeues a
Deployment for a progress check after 10s [2], but the Deployment is synced
9ms earlier, ending up with the controller not recognizing the Deployment as
failed and thus dropping it from the queue w/o any error. The drop is fixed by
forcing the controller to resync the Deployment, but we are then going to
resync only after the full duration.
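A minimal Go sketch of this kind of workaround, assuming the client-go `workqueue.DelayingInterface`; the helper name and the one-second padding are illustrative, not the exact upstream change:
```go
package deployment

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// enqueueAfter requeues key for a later progress check. AddAfter may fire a
// few milliseconds before the requested delay, so a small padding is added;
// otherwise the controller could observe the Deployment just before its
// deadline and silently drop the key.
func enqueueAfter(queue workqueue.DelayingInterface, key string, remaining time.Duration) {
	const skewPadding = time.Second // illustrative value
	queue.AddAfter(key, remaining+skewPadding)
}
```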
@deads2k if you don't like this I am going to handle this on a case by case basis
[1] https://github.com/kubernetes/kubernetes/issues/39785#issuecomment-279959133
[2] c48b2cab0f/test/e2e/deployment.go (L1122)
Automatic merge from submit-queue
Rate limit HPA controller to sync period
Since the HPA controller pulls information from an external source that
makes no guarantees about consistency, it's possible for the HPA
to get into an infinite update loop -- if the metrics change with
every query, the HPA controller will run its normal reconciliation,
post a status update, see that status update itself, fetch new metrics,
and if those metrics are different, post another status update, and
repeat. This can lead to continuously updating a single HPA.
By rate-limiting each HPA to once per sync interval, we prevent this
from happening.
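A minimal Go sketch of such rate limiting, implementing the client-go `workqueue.RateLimiter` interface; `NewFixedIntervalRateLimiter` is an illustrative name, not necessarily the helper used upstream:
```go
package podautoscaler

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// fixedIntervalRateLimiter makes the queue wait a full sync interval before
// an item is handed out again, so a single HPA cannot be reconciled more
// often than once per interval, even if its own status updates feed events
// back into the controller.
type fixedIntervalRateLimiter struct {
	interval time.Duration
}

func NewFixedIntervalRateLimiter(interval time.Duration) workqueue.RateLimiter {
	return &fixedIntervalRateLimiter{interval: interval}
}

func (r *fixedIntervalRateLimiter) When(item interface{}) time.Duration { return r.interval }
func (r *fixedIntervalRateLimiter) Forget(item interface{})             {}
func (r *fixedIntervalRateLimiter) NumRequeues(item interface{}) int    { return 0 }

// The controller's work queue would then be built with something like:
//   queue := workqueue.NewRateLimitingQueue(NewFixedIntervalRateLimiter(syncPeriod))
```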
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Ensure slices are serialized as zero-length, not null
Fixes https://github.com/kubernetes/kubernetes/issues/43203 (null serialization of slices), preventing NPE errors in clients that store and expect to receive non-null JSON values in these fields.
Ensures that when we are converting to an external slice field that will be serialized even if empty (its `json` tag does not include `omitempty`), we populate it with `[]`, not `nil`; see the sketch after the list below.
Other places I considered putting this logic instead:
* When unmarshaling
  * Would have to be done for both protobuf and ugorji
  * Would still have to be done here (or on marshal) to handle cases where we construct objects to return
* When marshaling
  * Would have to switch to use custom json marshaler (currently we use stdlib)
* When defaulting
  * Defaulting isn't run on some fields, notably, pod template in rc/deployment spec
  * Would still have to be done here (or on marshal) to handle cases where we construct objects to return
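A minimal Go sketch of the conversion-time behavior described above; the function name is illustrative, not the generated conversion code:
```go
package conversion

// convertStringSlice converts an internal slice to its external counterpart.
// When the external field has no `omitempty` json tag, a nil input is turned
// into an empty slice so it serializes as [] rather than null.
func convertStringSlice(in []string) []string {
	if in == nil {
		return []string{}
	}
	out := make([]string, len(in))
	copy(out, in)
	return out
}
```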
```release-note
API fields that previously serialized null arrays as `null` and empty arrays as `[]` no longer distinguish between those values and always output `[]` when serializing to JSON.
```
Automatic merge from submit-queue
Install a REJECT rule for nodeport with no backend
Rather than actually accepting the connection, REJECT. This will avoid
CLOSE_WAIT.
Fixes #43212
@justinsb @felipejfc @spiddy
Automatic merge from submit-queue
Use uid in config.go instead of pod full name.
For https://github.com/kubernetes/kubernetes/issues/43397.
In config.go, use pod uid in pod cache.
Previously, if we updated a static pod, even though a new UID was generated in file.go, config.go would only reference the pod by its full name and never update the pod UID in its internal cache. This caused:
1) If we changed the container spec, kubelet would restart the corresponding container because the container hash changed.
2) If we changed the pod spec, kubelet would do nothing.
With this fix, kubelet always restarts the pod whenever the pod spec (including the container spec) is changed.
@yujuhong @bowei @dchen1107
/cc @kubernetes/sig-node-bugs
Automatic merge from submit-queue
Use Semantic.DeepEqual to compare DaemonSet template on updates
Switch to `Semantic.DeepEqual` when comparing templates on DaemonSet updates, since we can't distinguish between `null` and `[]` in protobuf. This avoids unnecessary DaemonSet pods restarts.
I didn't touch `reflect.DeepEqual` used in controller because it's close to release date, and the DeepEqual in the controller doesn't cause serious issues (except for maybe causing more enqueues than needed).
Fixes #43218
@liggitt @kargakis @lukaszo @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42452, 43399)
Modify getInstanceByName to avoid calling getInstancesByNames
This PR modifies getInstanceByName to loop through all management zones
directly instead of calling getInstancesByNames. Currently
getInstancesByNames uses a node name prefix as a filter to list the
instances. If the prefix does not match, it will return all instances,
which is very wasteful since getInstanceByName only queries one instance
with a specific name.
Partially fix issue #42445
Automatic merge from submit-queue (batch tested with PRs 43398, 43368)
CRI: add support for dns cluster first policy
**What this PR does / why we need it**:
PR #29378 introduces ClusterFirstWithHostNet policy but only dockertools was updated to support the feature.
This PR updates kuberuntime to support it for all runtimes.
**Which issue this PR fixes**
fixes #43352
**Special notes for your reviewer**:
Candidate for v1.6.
**Release note**:
```release-note
NONE
```
cc @thockin @luxas @vefimova @Random-Liu
Automatic merge from submit-queue
Deflake TestSyncDeploymentDeletionRace
**What this PR does / why we need it**:
The cache was sometimes catching up while we were testing the case
where the cache is not yet caught up.
Before this fix, I could reproduce the failure with the following
command. After the fix, it passes.
```
go test -count 100000 -run TestSyncDeploymentDeletionRace
```
I checked the other controllers, and they all were already not starting informers for the deletion race test. I also checked that the deletion race tests for other controllers all pass with `-count 100000`.
**Which issue this PR fixes**:
Fixes #43390
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42659, 43370)
dockershim: process protocol correctly for port mapping
**What this PR does / why we need it**:
dockershim: process protocol correctly for port mapping.
**Which issue this PR fixes**
Fixes #43365.
**Special notes for your reviewer**:
Should be included in v1.6.
**Release note**:
```release-note
NONE
```
cc/ @Random-Liu @justinsb @kubernetes/sig-node-pr-reviews
The cache was sometimes catching up while we were testing the case
where the cache is not yet caught up.
Before this fix, I could reproduce the failure with the following
command. After the fix, it passes.
```
go test -count 100000 -run TestSyncDeploymentDeletionRace
```
lsblk reports FSTYPE of devices with partition tables as empty string "",
which is indistinguishable from empty devices. We must look for dependent
devices (i.e. partitions) to see whether the device is really empty, and
report an error otherwise.
I checked that LVM, LUKS and MD RAID have their own FSTYPE in lsblk output,
so it should be only a partition table that has empty FSTYPE.
The main point of this patch is to run lsblk without "-n", i.e. print all
dependent devices and check if they're there.
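A rough Go sketch of the check described above, using plain `os/exec`; the exact lsblk flags and parsing used by kubelet may differ:
```go
package mount

import (
	"fmt"
	"os/exec"
	"strings"
)

// deviceLooksEmpty lists the device together with its dependent devices
// (partitions, LVM, LUKS, MD members). An empty FSTYPE on the device alone is
// not enough, because a device holding only a partition table also reports an
// empty FSTYPE; any dependent row means the device is in use.
func deviceLooksEmpty(device string) (bool, error) {
	out, err := exec.Command("lsblk", "-o", "NAME,FSTYPE", device).CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("lsblk %s failed: %v", device, err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return false, fmt.Errorf("unexpected lsblk output: %q", string(out))
	}
	rows := lines[1:] // drop the header line
	if len(rows) > 1 {
		return false, nil // dependent devices (e.g. partitions) exist
	}
	// A single row with no FSTYPE column means no recognizable filesystem.
	return len(strings.Fields(rows[0])) < 2, nil
}
```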
PR #29378 introduces ClusterFirstWithHostNet, but docker doesn't support
setting DNS options together with host network. This commit rewrites
resolv.conf the same way dockertools does.
PR #29378 introduces ClusterFirstWithHostNet policy but only dockertools
was updated to support the feature. This PR updates kuberuntime to
support it for all runtimes.
Also fixes #43352.
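A minimal Go sketch of the policy decision, using the core v1 API types (`k8s.io/api/core/v1` in current releases); `useClusterDNS` is an illustrative helper, not the exact kuberuntime function:
```go
package kuberuntime

import v1 "k8s.io/api/core/v1"

// useClusterDNS reports whether the pod should get the cluster DNS servers.
// A host-network pod only gets them when it explicitly opts in with
// ClusterFirstWithHostNet; with plain ClusterFirst it falls back to the
// host's resolv.conf, which is also what dockertools did.
func useClusterDNS(pod *v1.Pod) bool {
	switch pod.Spec.DNSPolicy {
	case v1.DNSClusterFirstWithHostNet:
		return true
	case v1.DNSClusterFirst:
		return !pod.Spec.HostNetwork
	default:
		return false
	}
}
```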
Automatic merge from submit-queue
GC: Fix re-adoption race when orphaning dependents.
**What this PR does / why we need it**:
The GC expects that once it sees a controller with a non-nil
DeletionTimestamp, that controller will not attempt any adoption.
There was a known race condition that could cause a controller to
re-adopt something orphaned by the GC, because the controller is using a
cached value of its own spec from before DeletionTimestamp was set.
This fixes that race by doing an uncached quorum read of the controller
spec just before the first adoption attempt. It's important that this
read occurs after listing potential orphans. Note that this uncached
read is skipped if no adoptions are attempted (i.e. at steady state).
**Which issue this PR fixes**:
Fixes #42639
**Special notes for your reviewer**:
**Release note**:
```release-note
```
cc @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42828, 43116)
Apply taint tolerations for NoExecute for all static pods.
Fixed https://github.com/kubernetes/kubernetes/issues/42753
**Release note**:
```
Apply taint tolerations for NoExecute for all static pods.
```
cc/ @davidopp
The GC expects that once it sees a controller with a non-nil
DeletionTimestamp, that controller will not attempt any adoption.
There was a known race condition that could cause a controller to
re-adopt something orphaned by the GC, because the controller is using a
cached value of its own spec from before DeletionTimestamp was set.
This fixes that race by doing an uncached quorum read of the controller
spec just before the first adoption attempt. It's important that this
read occurs after listing potential orphans. Note that this uncached
read is skipped if no adoptions are attempted (i.e. at steady state).
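A minimal Go sketch of that guard; `canAdopt` and its `fresh` callback are illustrative, assuming only the apimachinery `metav1.Object` interface:
```go
package controller

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// canAdopt performs the uncached read just before the first adoption attempt
// (and only then): fetch the controller fresh from the API server and refuse
// to adopt if it is already being deleted. The fresh read must happen after
// the potential orphans were listed.
func canAdopt(fresh func() (metav1.Object, error)) error {
	obj, err := fresh() // e.g. a direct GET against the API server, bypassing the informer cache
	if err != nil {
		return err
	}
	if obj.GetDeletionTimestamp() != nil {
		return fmt.Errorf("%v/%v has just been deleted at %v, refusing to adopt",
			obj.GetNamespace(), obj.GetName(), obj.GetDeletionTimestamp())
	}
	return nil
}
```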
Automatic merge from submit-queue (batch tested with PRs 43313, 43257, 43271, 43307)
Fix AWS untagged instances
To revert to 1.5 behaviour we need to consider untagged
instances if no clusterID has been specified or found.
Fixes https://github.com/kubernetes/kubernetes/issues/43063
cc @justinsb
Automatic merge from submit-queue (batch tested with PRs 43313, 43257, 43271, 43307)
Remove 'all namespaces' meaning of empty list in PodAffinityTerm
Removes the distinction between `null` and `[]` for the PodAffinityTerm#namespaces field (option 4 discussed in https://github.com/kubernetes/kubernetes/issues/43203#issuecomment-287237992), since we can't distinguish between them in protobuf (and it's a less than ideal API)
Leaves the door open to reintroducing "all namespaces" function via a dedicated field or a dedicated token in the list of namespaces
Wanted to get a PR open and tests green in case we went with this option.
Not sure what doc/release-note is needed if the "all namespaces" function is not present in 1.6
Automatic merge from submit-queue
kubectl: Use v1.5-compatible ownership logic when listing dependents.
**What this PR does / why we need it**:
This restores compatibility between kubectl 1.6 and clusters running Kubernetes 1.5.x. It introduces transitional ownership logic in which the client considers ControllerRef when it exists, but does not require it to exist.
If we were to ignore ControllerRef altogether (pre-1.6 client behavior), we would introduce a new failure mode in v1.6 because controllers that used to get stuck due to selector overlap will now make progress. For example, that means when reaping ReplicaSets of an overlapping Deployment, we would risk deleting ReplicaSets belonging to a different Deployment that we aren't about to delete.
This transitional logic avoids such surprises in 1.6 clusters, and does no worse than kubectl 1.5 did in 1.5 clusters. To prevent this when kubectl 1.5 is used against 1.6 clusters, we can cherry-pick this change.
**Which issue this PR fixes**:
Fixes#43159
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Fix revision when SetDeploymentRevision
When some oldRSs are deleted or cleared (e.g. revisionHistoryLimit is set to 0), the revision computed by SetDeploymentRevision is incorrect.
Automatic merge from submit-queue
protobuf generation modifies types.go, which needs to be copied out
This was broken when we moved to the build container, but no one
noticed. Made it so that we get a test error if a field in a registered type has a json tag with no protobuf tag.
Fixes #35486
Automatic merge from submit-queue
construction of GC should not fail for restmapper error caused by tpr
Fix https://github.com/kubernetes/kubernetes/issues/43147.
The issue is that the GC will fail its initialization due to a RESTMapper error caused by a TPR. This PR lets the GC log the error instead of failing.
Automatic merge from submit-queue
Fix polarity of a test in NodePort allocation
The result of this was that an update to a Service would release the
NodePort temporarily (the repair loop would fix it in a minute). During
that window, another Service could get allocated that Port.
Fixes #43233
Automatic merge from submit-queue (batch tested with PRs 43254, 43255, 43184, 42509)
Update comment on the default DeletionPropagationPolicy
This is fixing the API doc, so apply the 1.6 milestone.
I'll run update-all.sh before merge.
The result of this was that an update to a Service would release the
NodePort temporarily (the repair loop would fix it in a minute). During
that window, another Service could get allocated that Port.
In particular, we should not assume ControllerRefs are necessarily set.
However, we can still use ControllerRefs that do exist to avoid
interfering with controllers that do use it.
This effectively reverts the client-side changes in
cec3899b96.
We have to maintain the old behavior on the client side to support
version skew when talking to old servers that set the annotation.
However, the new server-side behavior is still to NOT set the
annotation.
Automatic merge from submit-queue
recycle pod can't get the event since channel closed
What this PR does / why we need it:
We create a hostPath-type PV with the "Recycle" persistentVolumeReclaimPolicy and bind a PVC to it, but after deleting the PVC, the PV cannot return to the Available status. This started happening after we upgraded etcd to 3.0. The reason is:
If the channel used to get the pod messages and events is closed abnormally (for example, the event channel may be closed because of a "required revision has been compacted" error), the function internalRecycleVolumeByWatchingPodUntilCompletion will get stuck in a loop, the recycle pod will not be deleted, and the PV cannot return to the Available status.
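A rough Go sketch of the fix described above, using the apimachinery watch and core v1 types; the function name and the exact phase handling are illustrative:
```go
package recycle

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// waitForRecyclePod consumes recycler pod events and, crucially, returns an
// error when the event channel is closed (e.g. after a "required revision has
// been compacted" watch failure) instead of spinning forever.
func waitForRecyclePod(events <-chan watch.Event) error {
	for {
		event, ok := <-events
		if !ok {
			return fmt.Errorf("recycler pod event channel was closed unexpectedly")
		}
		switch event.Type {
		case watch.Deleted:
			return fmt.Errorf("recycler pod was deleted")
		case watch.Error:
			return fmt.Errorf("error watching recycler pod: %+v", event.Object)
		default:
			pod, isPod := event.Object.(*v1.Pod)
			if !isPod {
				continue
			}
			if pod.Status.Phase == v1.PodSucceeded {
				return nil
			}
			if pod.Status.Phase == v1.PodFailed {
				return fmt.Errorf("recycler pod failed: %s", pod.Status.Message)
			}
		}
	}
}
```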
Special notes for your reviewer:
None
Release note:
Automatic merge from submit-queue
Prevent protobuf storage with etcd2
Prevents accidentally storing protobuf content in etcd2 when upgrading to 1.6
c.f. https://github.com/kubernetes/kubernetes/issues/42976#issuecomment-286537139
```release-note
if kube-apiserver is started with `--storage-backend=etcd2`, the media type `application/json` is used.
```
Automatic merge from submit-queue (batch tested with PRs 40964, 42967, 43091, 43115)
Improve code coverage for pkg/kubelet/status/generate.go
**What this PR does / why we need it**:
Improve code coverage for pkg/kubelet/status/generate.go from #39559
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Fix Deployment upgrade test.
**What this PR does / why we need it**:
When the upgrade test operates on Deployments in a pre-1.6 cluster (i.e. during the Setup phase), it needs to use the v1.5 deployment/util logic. In particular, the v1.5 logic does not filter children to only those with a matching ControllerRef.
**Which issue this PR fixes**:
Fixes#42738
**Special notes for your reviewer**:
**Release note**:
```release-note
```
cc @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 43106, 43110)
Wait for garbagecollector to be synced in test
Fix #42952
Without the `cache.WaitForCacheSync` in the test, it's possible for the GC to get a merged event of RC's creation and its update (update to deletionTimestamp != 0), before GC gets the creation event of the pod, so it's possible the GC will handle the foreground deletion of the RC before it adds the Pod to the dependency graph, thus the race.
With the `cache.WaitForCacheSync` in the test, because GC runs a single thread to process graph changes, it's guaranteed the Pod will be added to the dependency graph before GC handles the foreground deletion of the RC.
Note that this pull fixes the race in the test. The race described in the first point of #26120 still exists.
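A minimal Go sketch of the synchronization added to the test, assuming client-go's `cache.WaitForCacheSync`; the helper name is illustrative:
```go
package garbagecollector

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// waitForGCCaches blocks until every listed informer reports HasSynced (or
// the stop channel closes), so the Pod is guaranteed to be in the GC's
// dependency graph before the test triggers the foreground deletion.
func waitForGCCaches(stopCh <-chan struct{}, synced ...cache.InformerSynced) error {
	if !cache.WaitForCacheSync(stopCh, synced...) {
		return fmt.Errorf("timed out waiting for caches to sync")
	}
	return nil
}
```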
Automatic merge from submit-queue (batch tested with PRs 42747, 43030)
dockershim: remove corrupted sandbox checkpoints
This is a workaround to ensure that kubelet doesn't block forever when
the checkpoint is corrupted.
This is a workaround for #43021
Automatic merge from submit-queue (batch tested with PRs 42747, 43030)
kube-proxy/iptables: optimize endpoint map creation by excluding invalid endpoints earlier
We don't need to do as much work as we were doing, if we exclude invalid endpoints earlier in the endpoints processing.
Fixes: https://github.com/kubernetes/kubernetes/issues/42210
@freehan @liggitt @thockin if you could review this with a fine-toothed comb... I can't immediately think of why invalid endpoints would be useful for the HealthChecker, and this PR prevents the HC from seeing these endpoints.
Automatic merge from submit-queue
dockershim: call sync() after writing the checkpoint
This ensures the checkpoint files are persisted.
This fixes #43021
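A minimal Go sketch of the change, using only the standard library; `writeCheckpoint` is an illustrative name:
```go
package checkpoint

import "os"

// writeCheckpoint writes the checkpoint file and fsyncs it before reporting
// success, so a hard reboot cannot leave a truncated or empty checkpoint on
// disk.
func writeCheckpoint(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return err
	}
	return f.Sync() // flush contents to stable storage
}
```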
Automatic merge from submit-queue (batch tested with PRs 42854, 43105, 43090)
Update ScaleIO volume plugin default readOnly value
This commit updates the code to set the readOnly attribute to false.
**What this PR does / why we need it**:
This PR is a minor fix that updates the default value of `readOnly` attribute to `false`.
**Release note**:
```release-note
NONE
```
When the upgrade test operates on Deployments in a pre-1.6 cluster
(i.e. during the Setup phase), it needs to use the v1.5 deployment/util
logic. In particular, the v1.5 logic does not filter children to only
those with a matching ControllerRef.
Automatic merge from submit-queue (batch tested with PRs 42775, 42991, 42968, 43029)
Remove 'beta' from default storage class annotation
I forgot to update the default storage class annotation in my storage.k8s.io/v1beta1 -> v1 PRs. Let's fix it before 1.6 is released.
I consider it a bugfix; in #40088 I already updated the release notes to include the non-beta annotation `storageclass.kubernetes.io/is-default-class`
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
@msau42, please help with merging.
Automatic merge from submit-queue (batch tested with PRs 42775, 42991, 42968, 43029)
`kubectl taint` update for `NoExecute`.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Fixes #42774
**Release note**:
```release-note
```
Automatic merge from submit-queue
Add DaemonSet templateGeneration validation and tests, and fix a bunch of validation test errors
For DaemonSet update:
1. Validate that templateGeneration is increased when and only when template is changed
1. Validate that templateGeneration is never decreased
1. Added validation tests for templateGeneration
1. Fix a bunch of errors in validate tests
- fake tests: almost all validation test error cases failed on "missing resource version", "name changes", "missing update strategy", "selector/template labels mismatch", not on the real validation we wanted to test
- some error cases should be success cases
@kargakis @lukaszo @kubernetes/sig-apps-bugs
*I've verified locally that all DaemonSet e2e tests pass with this change.*
This commit updates the code to set the default value of the readOnly attribute to false.
It also updates the example docs to add the full list of supported plugin attributes and documentation.
Automatic merge from submit-queue (batch tested with PRs 42942, 42935)
pkg/util/flock: Fix the flock so it actually locks.
With this PR, the second call to `Acquire()` will block until the lock is released (i.e. the first process exits).
Also removed the in-memory mutex from the previous code; since we don't need `Release()` here, there is no need to save and protect the local fd.
Fix #42929.
Automatic merge from submit-queue (batch tested with PRs 42942, 42935)
[Bug] Handle container restarts and avoid using runtime pod cache while allocating GPUs
Fixes #42412
**Background**
Support for multiple GPUs is an experimental feature in v1.6.
Container restarts were handled incorrectly, which resulted in stranding of GPUs.
Kubelet was incorrectly using the runtime cache to track running pods, which can result in race conditions (as it did in other parts of kubelet). This can result in the same GPU being assigned to multiple pods.
**What does this PR do**
This PR tracks assignment of GPUs to containers and returns pre-allocated GPUs instead of (incorrectly) allocating new GPUs.
GPU manager is updated to consume a list of active pods derived from apiserver cache instead of runtime cache.
Node e2e has been extended to validate this failure scenario.
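A rough Go sketch of the bookkeeping described above; all names (`gpuManager`, `assigned`, `allocate`) are hypothetical, not the actual kubelet GPU manager API:
```go
package gpu

import (
	"fmt"
	"sync"
)

type gpuManager struct {
	mu       sync.Mutex
	assigned map[string][]string // "podUID/container" -> device paths
	free     []string            // devices not assigned to any active pod
}

func newGPUManager(devices []string) *gpuManager {
	return &gpuManager{assigned: map[string][]string{}, free: devices}
}

// allocate returns the devices already recorded for the container (e.g. after
// a container restart) instead of handing out new ones; only a container with
// no prior assignment takes devices from the free list.
func (m *gpuManager) allocate(podUID, container string, count int) ([]string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	key := podUID + "/" + container
	if devices, ok := m.assigned[key]; ok {
		return devices, nil
	}
	if len(m.free) < count {
		return nil, fmt.Errorf("requested %d GPUs, only %d available", count, len(m.free))
	}
	devices := m.free[:count]
	m.free = m.free[count:]
	m.assigned[key] = devices
	return devices, nil
}
```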
**Risk**
Minimal/None since support for GPUs is an experimental feature that is turned off by default. The code is also isolated to GPU manager in kubelet.
**Workarounds**
In the absence of this PR, users can mitigate the original issue by setting `RestartPolicyNever` in their pods.
There is no workaround for the race condition caused by using the runtime cache though.
Hence it is worth including this fix in v1.6.0.
cc @jianzhangbjz @seelam @kubernetes/sig-node-pr-reviews
Replaces #42560
Currently, TCPSocketAction always uses Pod's IP in connection. But when a
pod uses the host network, sometimes firewall rules may prevent kubelet
from connecting through the Pod's IP. This PR introduces the 'Host' field
for TCPSocketAction, and if it is set to non-empty string, the probe will
be performed on the configured host rather than the Pod's IP. This gives
users an opportunity to explicitly specify 'localhost' as the target for
the above situations.
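A minimal Go sketch of the new field in use, with core v1 API types (`k8s.io/api/core/v1` in current releases) and apimachinery's `intstr`; it builds only the action, which would then be wired into a probe's handler:
```go
package probes

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// localhostTCPAction probes localhost explicitly instead of the pod IP, which
// is useful for host-network pods whose pod IP is blocked by firewall rules.
func localhostTCPAction(port int) *v1.TCPSocketAction {
	return &v1.TCPSocketAction{
		Host: "localhost", // optional field; empty means "use the pod IP"
		Port: intstr.FromInt(port),
	}
}
```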
With this PR, the second call to `Acquire()` will block until the lock is released (i.e. the first process exits).
Also removed the in-memory mutex from the previous code; since we don't need `Release()` here, there is no need to save and protect the local fd.
Fix #42929.
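A minimal Go sketch of the blocking acquire, using `golang.org/x/sys/unix`; this mirrors the described behavior rather than being the exact upstream implementation:
```go
package flock

import "golang.org/x/sys/unix"

// Acquire takes an exclusive flock on the file and blocks until it is
// available. The descriptor is intentionally kept open and never released
// in-process; the lock goes away only when the process exits.
func Acquire(path string) error {
	fd, err := unix.Open(path, unix.O_CREAT|unix.O_RDWR|unix.O_CLOEXEC, 0600)
	if err != nil {
		return err
	}
	// LOCK_EX without LOCK_NB blocks a second caller until the holder exits.
	return unix.Flock(fd, unix.LOCK_EX)
}
```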
Automatic merge from submit-queue
Fixed incorrect result of getMinTolerationTime.
For the following case, `getMinTolerationTime` should return one, but it returned -1:
1. For tolerations[0], TolerationSeconds is nil, so minTolerationTime is not set.
2. For tolerations[1], its TolerationSeconds (1) is bigger than `minTolerationTime`, so minTolerationTime stays -1, which means infinite.
```
+ {
+ tolerations: []v1.Toleration{
+ {
+ TolerationSeconds: nil,
+ },
+ {
+ TolerationSeconds: &one,
+ },
+ },
+ },
```
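A Go sketch of the corrected logic, assuming the core v1 `Toleration` type; it mirrors the fix described above rather than the exact upstream implementation:
```go
package taints

import (
	"math"
	"time"

	v1 "k8s.io/api/core/v1"
)

// getMinTolerationTime returns the smallest finite toleration time, -1 when
// every toleration is unbounded (nil TolerationSeconds), and 0 when any
// toleration demands immediate eviction. A nil TolerationSeconds no longer
// poisons the minimum.
func getMinTolerationTime(tolerations []v1.Toleration) time.Duration {
	minSeconds := int64(math.MaxInt64)
	for i := range tolerations {
		if tolerations[i].TolerationSeconds == nil {
			continue // tolerated forever; does not bound the minimum
		}
		seconds := *tolerations[i].TolerationSeconds
		if seconds <= 0 {
			return 0
		}
		if seconds < minSeconds {
			minSeconds = seconds
		}
	}
	if minSeconds == int64(math.MaxInt64) {
		return -1 // -1 means "never evict because of elapsed time"
	}
	return time.Duration(minSeconds) * time.Second
}
```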
Automatic merge from submit-queue
Fix taint based pod eviction for clusters where controller manager is not running with allocate-node-cidrs set
Fixes https://github.com/kubernetes/kubernetes/issues/42733
In my cluster, I have not set allocate-node-cidrs, and it is causing taint-based pod eviction to fail.
@gmarek @kubernetes/sig-scheduling-bugs @davidopp @derekwaynecarr
Automatic merge from submit-queue (batch tested with PRs 41794, 42349, 42755, 42901, 42933)
Fix DefaultTolerationSeconds admission plugin
DefaultTolerationSeconds is not working as expected. It is supposed to add default tolerations (for the unreachable and notReady conditions), but no pod was getting these tolerations, and the API server was throwing this error:
```
Mar 08 13:43:57 fedora25 hyperkube[32070]: E0308 13:43:57.769212 32070 admission.go:71] expected pod but got Pod
Mar 08 13:43:57 fedora25 hyperkube[32070]: E0308 13:43:57.789055 32070 admission.go:71] expected pod but got Pod
Mar 08 13:44:02 fedora25 hyperkube[32070]: E0308 13:44:02.006784 32070 admission.go:71] expected pod but got Pod
Mar 08 13:45:39 fedora25 hyperkube[32070]: E0308 13:45:39.754669 32070 admission.go:71] expected pod but got Pod
Mar 08 14:48:16 fedora25 hyperkube[32070]: E0308 14:48:16.673181 32070 admission.go:71] expected pod but got Pod
```
The reason for this error is that the input to admission plugins is internal API objects, not versioned objects, so expecting a versioned object is incorrect. Due to this, no pod got the desired tolerations and it always showed:
```
Tolerations: <none>
```
After this fix, the correct tolerations are being assigned to pods as follows:
```
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
```
@davidopp @kevin-wangzefeng @kubernetes/sig-scheduling-pr-reviews @kubernetes/sig-scheduling-bugs @derekwaynecarr
Fixes https://github.com/kubernetes/kubernetes/issues/42716
Automatic merge from submit-queue
Invalid environment var names are reported and pod starts
When processing EnvFrom items, all invalid keys are collected and
reported as a single event.
The Pod is allowed to start.
fixes #42583
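A minimal Go sketch of this behavior, with a caller-supplied validity check standing in for the real env var name validation; names are illustrative:
```go
package envutil

import "sort"

// filterEnvFromKeys keeps the valid keys as environment variables and
// collects the invalid ones so they can be reported in a single event while
// the pod is still allowed to start.
func filterEnvFromKeys(data map[string]string, isValid func(string) bool) (envs map[string]string, invalid []string) {
	envs = map[string]string{}
	for k, v := range data {
		if !isValid(k) {
			invalid = append(invalid, k)
			continue
		}
		envs[k] = v
	}
	sort.Strings(invalid) // deterministic ordering for the event message
	return envs, invalid
}
```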
Automatic merge from submit-queue
[Federation] Kubefed Init should use the right RBAC API version clientset
**What this PR does / why we need it**:
Implements the need as described in https://github.com/kubernetes/kubernetes/issues/41263
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/41263
**Special notes for your reviewer**:
@madhusudancs @shashidharatd @marun
cc @kubernetes/sig-federation-bugs
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 38805, 42362, 42862)
Let GC print specific message for RESTMapping failure
Makes the error messages reported in https://github.com/kubernetes/kubernetes/issues/39816 more specific, and only prints the message once.
I'll also update the garbage collector's doc to clearly state that we don't support TPR yet.
We'll wait for the watchable discovery feature (@sttts are you going to work on that?) to land in 1.7, and then enable the garbage collector to handle TPR.
cc @hongchaodeng @MikaelCluseau @djMax
Automatic merge from submit-queue (batch tested with PRs 38805, 42362, 42862)
Fix deployment generator after introducing deployments in apps/v1beta1
This PR does two things:
1. Switches all generators to produce versioned objects, to bypass the problem of having an object in multiple versions, which otherwise results in not having a stable generator (i.e. one producing exactly the same object).
2. Introduces new generator for `apps/v1beta1` deployments.
@kargakis @janetkuo ptal
@kubernetes/sig-apps-pr-reviews @kubernetes/sig-cli-pr-reviews ptal
This is a followup to https://github.com/kubernetes/kubernetes/pull/39683, so I'm adding 1.6 milestone.
```release-note
Introduce new generator for apps/v1beta1 deployments
```
Automatic merge from submit-queue (batch tested with PRs 42608, 42444)
Return nil when deleting non-exist GCE PD
When the GCE cloud provider tries to delete a disk and the disk cannot be found
in any of the zones, the function should return a nil error. This modified behavior is also consistent with AWS.
Automatic merge from submit-queue (batch tested with PRs 42877, 42853)
Remove unused functions and make logs slightly better
Zero-risk cleanup: removing functions that are not used anymore, and adding a few more logs to help debug problems.
cc @aveshagarwal
Automatic merge from submit-queue
Use Prometheus instrumentation conventions
The `System` and `Subsystem` parameters are subject to removal.
(x-ref: https://github.com/prometheus/client_golang/issues/240)
All metrics should use base units, which is seconds in the duration
case.
Counters should always end in `_total` and metrics should avoid
referring to potential label dimensions. Those should rather be
mentioned in the documentation string.
@kubernetes/sig-instrumentation
Reference docs:
https://prometheus.io/docs/practices/instrumentation/
https://prometheus.io/docs/practices/naming/
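A Go sketch of metric declarations that follow these conventions, using the Prometheus client library; the metric types and label sets are illustrative, only the names come from the release note below:
```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// Base unit is seconds, and the unit is part of the metric name.
	requestLatency = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "rest_client_request_latency_seconds",
			Help: "Request latency in seconds, broken down by verb and URL.",
		},
		[]string{"verb", "url"},
	)
	// Counters end in _total; label dimensions live in Help, not in the name.
	requestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "rest_client_requests_total",
			Help: "Number of HTTP requests, partitioned by status code, method, and host.",
		},
		[]string{"code", "method", "host"},
	)
)

func init() {
	prometheus.MustRegister(requestLatency, requestsTotal)
}
```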
**Release note**:
```
Breaking change: Renamed REST client Prometheus metrics to follow the instrumentation conventions ("request_latency_microseconds" -> "rest_client_request_latency_seconds", "request_status_codes" -> "rest_client_requests_total"). Please update your alerting pipeline if you rely on them.
```
Automatic merge from submit-queue
Add new DaemonSetStatus to kubectl printer and describer
@kargakis @lukaszo @kubernetes/sig-apps-pr-reviews @kubernetes/sig-cli-pr-reviews
```release-note
Add new DaemonSet status fields to kubectl printer and describer.
```
The `System` and `Subsystem` parameters are subject to removal.
(x-ref: https://github.com/prometheus/client_golang/issues/240)
All metrics should use base units, which is seconds in the duration
case.
Counters should always end in `_total` and metrics should avoid
referring to potential label dimensions. Those should rather be
mentioned in the documentation string.
Automatic merge from submit-queue (batch tested with PRs 42811, 42859)
Validation PVs for mount options
We are going to move the validation into its own package, and we will be calling validation for individual volume types as needed.
Fixes https://github.com/kubernetes/kubernetes/issues/42573
Automatic merge from submit-queue (batch tested with PRs 42024, 42780, 42808, 42640)
kubectl: respect DaemonSet strategy parameters for rollout status
It handles "after-merge" comments from #41116
cc @kargakis @janetkuo
I will add one more e2e test later. I need to handle some in-company stuff.
Automatic merge from submit-queue (batch tested with PRs 42024, 42780, 42808, 42640)
Node controller test flake 39975 with delay for try function
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39975
/cc @ncdc @gmarek @liggitt
Automatic merge from submit-queue
Remove VCenterPort from vsphere cloud provider.
**What this PR does / why we need it**:
Addresses a bug in the vSphere cloud provider when a port number other than 443 is specified in the config file.
The URL used for communicating with govmomi should not include the port number;
a port number other than 443 will result in a 404 error.
VCenterPort stays in the VSphereConfig structure for backward compatibility.
**Which issue this PR fixes** : fixes https://github.com/kubernetes/kubernetes-anywhere/issues/338
Automatic merge from submit-queue (batch tested with PRs 42734, 42745, 42758, 42814, 42694)
Dropped docker 1.9.x support. Changed the minimumDockerAPIVersion to
1.22
cc/ @Random-Liu @yujuhong
We talked about dropping docker 1.9.x support for a while. I just realized that we haven't really done it yet.
```release-note
Dropped support for docker 1.9.x and below.
```
Since the HPA controller pulls information from an external source that
makes no guarantees about consistency, it's possible for the HPA
to get into an infinite update loop -- if the metrics change with
every query, the HPA controller will run its normal reconciliation,
post a status update, see that status update itself, fetch new metrics,
and if those metrics are different, post another status update, and
repeat. This can lead to continuously updating a single HPA.
By rate-limiting each HPA to once per sync interval, we prevent this
from happening.
Automatic merge from submit-queue (batch tested with PRs 42786, 42553)
Updated auto generated protobuf codes.
Generated by `./hack/update-generated-protobuf-dockerized.sh` on a Mac.
Automatic merge from submit-queue
dockershim: Fix the race condition in ListPodSandbox
In ListPodSandbox(), we
1. List all sandbox docker containers
2. List all sandbox checkpoints. If the checkpoint does not have a
corresponding container in (1), we return partial result based on
the checkpoint.
The problem is that new PodSandboxes can be created between step (1) and
(2). In those cases, we will see the checkpoints, but not the sandbox
containers. This leads to strange behavior because the partial result
from the checkpoint does not include some critical information. For
example, the creation timestamp would be zero, and that would cause kubelet's
garbage collector to immediately remove the sandbox.
This change fixes that by getting the list of checkpoints before listing
all the containers (since in RunPodSandbox we create them in the reverse
order).
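A minimal Go sketch of the reordering; the two listing callbacks are illustrative stand-ins for the dockershim checkpoint store and the Docker client:
```go
package dockershim

// listPodSandbox fetches checkpoints BEFORE listing sandbox containers.
// Because RunPodSandbox creates them in the reverse order, a checkpoint seen
// in step 1 whose container is missing in step 2 is a genuinely partial
// sandbox, not one created between the two list calls.
func listPodSandbox(listCheckpoints func() []string, listContainers func() map[string]bool) (full, partial []string) {
	checkpoints := listCheckpoints() // step 1: checkpoints
	containers := listContainers()   // step 2: sandbox containers
	for id := range containers {
		full = append(full, id)
	}
	for _, id := range checkpoints {
		if !containers[id] {
			partial = append(partial, id) // partial result built from the checkpoint only
		}
	}
	return full, partial
}
```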
Automatic merge from submit-queue (batch tested with PRs 42211, 38691, 42737, 42757, 42754)
Only create the symlink when container log path exists
When using the `syslog` logging driver instead of `json-file`, there will not be container log files such as `<containerID-json.log>`. We should not create the symlink in this case.
In ListPodSandbox(), we
1. List all sandbox docker containers
2. List all sandbox checkpoints. If the checkpoint does not have a
corresponding container in (1), we return partial result based on
the checkpoint.
The problem is that new PodSandboxes can be created between step (1) and
(2). In those cases, we will see the checkpoints, but not the sandbox
containers. This leads to strange behavior because the partial result
from the checkpoint does not include some critical information. For
example, the creation timestamp would be zero, and that would cause kubelet's
garbage collector to immediately remove the sandbox.
This change fixes that by getting the list of checkpoints before listing
all the containers (since in RunPodSandbox we create them in the reverse
order).
1. Validate that templateGeneration is increased when and only when template is changed
2. Validate that templateGeneration is never decreased
3. Added validation tests for templateGeneration
4. Fix a bunch of errors in validation tests; for example, all validation test error cases failed
on lack of resource version or on name changes, not on the real validation we wanted to test
Automatic merge from submit-queue
Use namespace from context
Fixes#42653
Updates rbac_test.go to submit objects without namespaces set, which matches how actual objects are submitted to the API.