Automatic merge from submit-queue
Adding SCSI controller type filter for vSphere disk attach
Hot plug of disks to a SCSI controller of type lsilogic doesn't work as expected. When a device is detached from the controller, it fails to remove the device from the /dev path, which causes subsequent attaches to the node to fail. With SCSI controller types lsilogic-sas or paravirtual this works well. This patch filters the existing controllers for these types; if it doesn't find one, it creates a new controller for the disk attach.
This PR depends on https://github.com/kubernetes/kubernetes/pull/26658 (1st commit). Also targeting this for 1.3.
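For readers unfamiliar with the attach path, here is a minimal sketch of the filtering described above; the type names and helper are illustrative, not the actual vSphere cloud provider code.
```go
package main

import "fmt"

// Controller types that handle hot plug/unplug of disks correctly.
var supportedSCSIControllerTypes = map[string]bool{
	"lsilogic-sas": true, // LsiLogic SAS
	"pvscsi":       true, // paravirtual
}

// pickSCSIController returns an existing controller of a supported type, or
// "" to signal that a new controller must be created for the attach.
func pickSCSIController(existing []string) string {
	for _, c := range existing {
		if supportedSCSIControllerTypes[c] {
			return c
		}
	}
	return ""
}

func main() {
	fmt.Println(pickSCSIController([]string{"lsilogic"}))                 // "" -> create a new controller
	fmt.Println(pickSCSIController([]string{"lsilogic", "lsilogic-sas"})) // reuse the lsilogic-sas controller
}
```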
Automatic merge from submit-queue
rkt: Refactor grace termination period.
Add `TimeoutStopSec` service option to support grace termination.
Found that we can improve grace-period termination by adding a systemd service option.
cc @kubernetes/sig-rktnetes
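A minimal sketch of what the option amounts to, using coreos/go-systemd's unit package; the surrounding service options and values are placeholders, not the rkt runtime's actual unit file.
```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/coreos/go-systemd/unit"
)

func main() {
	// e.g. taken from pod.Spec.TerminationGracePeriodSeconds
	gracePeriodSeconds := int64(30)

	opts := []*unit.UnitOption{
		unit.NewUnitOption("Service", "ExecStart", "/usr/bin/rkt run ..."), // placeholder
		// Give systemd the pod's grace period before it escalates to SIGKILL.
		unit.NewUnitOption("Service", "TimeoutStopSec", fmt.Sprintf("%ds", gracePeriodSeconds)),
	}

	b, _ := ioutil.ReadAll(unit.Serialize(opts))
	fmt.Print(string(b))
}
```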
Automatic merge from submit-queue
let dynamic client handle non-registered ListOptions
And register v1.ListOptions in the policy group.
Fixes #27622
@lavalamp @smarterclayton @krousey
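Illustration only: this is roughly what registering ListOptions for a group version looks like with a runtime.Scheme, written against today's apimachinery packages rather than the 1.3-era files this PR actually touches.
```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	scheme := runtime.NewScheme()
	policyGV := schema.GroupVersion{Group: "policy", Version: "v1beta1"}

	// Once ListOptions is a known type for the group, clients can encode it
	// directly instead of relying on a fallback for non-registered options.
	scheme.AddKnownTypes(policyGV, &metav1.ListOptions{})

	fmt.Println(scheme.Recognizes(policyGV.WithKind("ListOptions"))) // true
}
```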
Automatic merge from submit-queue
Make kubelet CNI network plugin runtime-agnostic
cni.go still has a couple of docker-isms in it, so let's remove those and make the plugin runtime-agnostic. This also fixes some docker-isms in kubenet that snuck in with the HostPort changes.
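Conceptually, the seam looks something like the hypothetical interface below: the CNI plugin only needs a way to resolve a container's network namespace path, which any runtime can provide (the real interface in the tree has more methods).
```go
// Package network sketches the runtime-agnostic dependency; this is not the
// exact interface from pkg/kubelet/network.
package network

// RuntimeHost is an illustrative stand-in for what cni.go asks of the kubelet.
type RuntimeHost interface {
	// GetNetNS returns the network namespace path (e.g. /proc/<pid>/ns/net)
	// for the given container, independent of the container runtime.
	GetNetNS(containerID string) (string, error)
}
```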
Automatic merge from submit-queue
Removing name field from Member for compatibility with OpenStack Liberty
In OpenStack Mitaka, the name field for members was added as an optional field but does not exist in Liberty. Therefore the current implementation for lbaas v2 will not work in Liberty.
Automatic merge from submit-queue
Prevent detach before node status update
This PR prevents the attach/detach controller from starting a detach operation before updating the node status (to remove the volume from the list of attached volumes).
Fixes https://github.com/kubernetes/kubernetes/issues/27836
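Since the ordering is the whole fix, a compressed sketch may help; the interfaces and method names below are simplified stand-ins for the real attach/detach controller types.
```go
// Package attachdetach sketches the ordering this PR enforces.
package attachdetach

import "fmt"

type nodeStatusUpdater interface {
	// Removes the volume from node.status.volumesAttached.
	RemoveVolumeFromNodeStatus(node, volume string) error
}

type attacherDetacher interface {
	DetachVolume(node, volume string) error
}

// detachAfterStatusUpdate only starts the detach once the node status no
// longer reports the volume as attached; if the update fails, the detach is
// postponed to a later sync.
func detachAfterStatusUpdate(u nodeStatusUpdater, d attacherDetacher, node, volume string) error {
	if err := u.RemoveVolumeFromNodeStatus(node, volume); err != nil {
		return fmt.Errorf("node status not updated, postponing detach of %q from %q: %v", volume, node, err)
	}
	return d.DetachVolume(node, volume)
}
```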
Use the generic runtime method to get the netns path. Also
move reading the container IP address into cni (based off kubenet)
instead of having it in the Docker manager code. Both old and new
methods use nsenter and /sbin/ip and should be functionally
equivalent.
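For reference, the lookup boils down to something like the sketch below; the exact flags and output parsing differ in the real code.
```go
package main

import (
	"fmt"
	"os/exec"
)

// podIPOutput runs `ip -4 addr show` for an interface inside the container's
// network namespace and returns the raw output for parsing.
func podIPOutput(netnsPath, iface string) (string, error) {
	out, err := exec.Command("nsenter", "--net="+netnsPath, "-F", "--",
		"/sbin/ip", "-4", "addr", "show", "dev", iface).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := podIPOutput("/proc/12345/ns/net", "eth0")
	fmt.Println(out, err)
}
```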
Automatic merge from submit-queue
Rephrase 'pv not found in cache' warnings.
When kubelet starts a pod that refers to a non-existent PV, PVC or Node, it should clearly show that the requested object does not exist.
The previous `PersistentVolumeClaim 'default/ceph-claim-wm' is not in cache` looks like a random kubelet hiccup, while `PersistentVolumeClaim 'default/ceph-claim-wm' not found` suggests that the object may not exist at all and it might be a user error.
Fixes #27523
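A toy illustration of the difference, with a hypothetical lookup:
```go
package main

import "fmt"

// claimError returns the reworded error: "not found" tells the user the
// object may simply not exist, instead of sounding like a cache hiccup.
func claimError(namespace, name string, inCache bool) error {
	if inCache {
		return nil
	}
	return fmt.Errorf("PersistentVolumeClaim %q not found", namespace+"/"+name)
}

func main() {
	fmt.Println(claimError("default", "ceph-claim-wm", false))
}
```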
Automatic merge from submit-queue
AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.
We hash the volume name so we don't bias to the first few zones.
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset. In that case we hash
the base name.
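A sketch of that heuristic (hash of the base name plus the trailing ordinal as an offset); the helper name and hash choice are illustrative, not the provider's exact code.
```go
package main

import (
	"fmt"
	"hash/fnv"
	"regexp"
	"strconv"
)

var petSetSuffix = regexp.MustCompile(`^(.*)-(\d+)$`)

// chooseZone spreads volumes across zones. For PetSet-style names like
// "data-web-3", the hash is taken over "data-web" and 3 is added as an
// offset, so consecutive members land in consecutive zones.
func chooseZone(volumeName string, zones []string) string {
	base, offset := volumeName, uint32(0)
	if m := petSetSuffix.FindStringSubmatch(volumeName); m != nil {
		base = m[1]
		n, _ := strconv.ParseUint(m[2], 10, 32)
		offset = uint32(n)
	}
	h := fnv.New32a()
	h.Write([]byte(base))
	return zones[(h.Sum32()+offset)%uint32(len(zones))]
}

func main() {
	zones := []string{"us-central1-a", "us-central1-b", "us-central1-c"}
	for _, v := range []string{"data-web-0", "data-web-1", "data-web-2"} {
		fmt.Println(v, "->", chooseZone(v, zones))
	}
}
```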
Automatic merge from submit-queue
GCE provider: Create TargetPool with 200 instances, then update with rest
Tested with 2000 nodes; this actually meets the GCE API specifications (which is nutty). The previous PR (#25178) was based on a mistaken understanding of a poorly documented set of limitations, and even poorer testing, for which I am embarrassed.
Also includes the revert of #25178 (review commits separately).
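The batching itself is straightforward; here is a sketch with stand-in function names rather than the provider's actual helpers.
```go
package main

import "fmt"

const maxInstancesPerCall = 200

// createTargetPool creates the pool with the first 200 instances, then adds
// the remainder in chunks that also respect the per-call limit.
func createTargetPool(name string, instances []string,
	create func(name string, insts []string) error,
	addInstances func(name string, insts []string) error) error {

	first := instances
	if len(first) > maxInstancesPerCall {
		first = instances[:maxInstancesPerCall]
	}
	if err := create(name, first); err != nil {
		return err
	}
	for i := maxInstancesPerCall; i < len(instances); i += maxInstancesPerCall {
		end := i + maxInstancesPerCall
		if end > len(instances) {
			end = len(instances)
		}
		if err := addInstances(name, instances[i:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	insts := make([]string, 2000)
	for i := range insts {
		insts[i] = fmt.Sprintf("node-%d", i)
	}
	_ = createTargetPool("lb-pool", insts,
		func(n string, b []string) error { fmt.Println("create", n, len(b)); return nil },
		func(n string, b []string) error { fmt.Println("add", len(b)); return nil })
}
```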
Automatic merge from submit-queue
Add AWS volume plugin attach tests.
@kubernetes/sig-storage
This is a test; it does not really matter whether it catches the 1.3 train or the next one.
Automatic merge from submit-queue
Rename **/manager.go for better logging
Rename `pkg/kubelet/*/manager.go` to `pkg/kubelet/*/*_manager.go`.
**Justification:** Our current logging library, [glog](https://github.com/golang/glog), logs the filename where the log was generated, but not the full path. Ex:
```
I0608 00:28:25.116905 2847 manager.go:1024] Started watching for new ooms in manager
```
We have too many files named `manager.go`, which makes it difficult to identify log messages originating from them:
```console
$ find . -name "manager.go"
./pkg/kubelet/status/manager.go
./pkg/kubelet/dockertools/manager.go
./pkg/kubelet/eviction/manager.go
./pkg/kubelet/pod/manager.go
./pkg/kubelet/prober/manager.go
./vendor/github.com/vmware/govmomi/session/manager.go
./vendor/github.com/google/cadvisor/manager/manager.go
./vendor/github.com/coreos/go-oidc/key/manager.go
```
/cc @kubernetes/sig-node This change will probably invoke rebase hell, but now seems like a reasonable time for it (with less churn leading up to release).
Automatic merge from submit-queue
implement desiredWorld populator to sync up with informer
Fixes #26994
This change implements the desiredStateOfWorld populator to sync up with
the pod informer. It periodically checks each pod in the
desiredStateOfWorld and verifies whether it is still in the pod informer
cache. If it is not, it is removed from the desiredStateOfWorld.
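A sketch of that periodic check; the interfaces are simplified stand-ins for the real desiredStateOfWorld and the pod informer cache.
```go
// Package populator sketches the pruning loop described above.
package populator

type desiredStateOfWorld interface {
	GetPodNames() []string
	DeletePod(name string)
}

type podInformerCache interface {
	Exists(name string) bool
}

// prunePods drops any pod from the desired state that the informer cache no
// longer knows about; the real populator runs this on a periodic loop.
func prunePods(dsw desiredStateOfWorld, informer podInformerCache) {
	for _, pod := range dsw.GetPodNames() {
		if !informer.Exists(pod) {
			dsw.DeletePod(pod)
		}
	}
}
```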
Lots of comments describing the heuristics, how it fits together and the
limitations.
In particular, we can't guarantee correct volume placement if the set of
zones is changing between allocating volumes.
Automatic merge from submit-queue
Deployment controller's cleanupUnhealthyReplicas should respect minReadySeconds
```release-note
Fixed an issue where a Deployment could be scaled down further than allowed by maxUnavailable when minReadySeconds is set.
```
Fixes #26834
Detected by a flake in the deployment rollover e2e test (the only test that specifies `minReadySeconds`).
cc @kubernetes/deployment @pwittrock
cc @mqliang who first added `cleanupUnhealthyReplicas` in deployment controller
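The availability check at the heart of the fix boils down to the sketch below: a pod only counts as healthy once it has been Ready for at least minReadySeconds, so cleanup cannot treat "ready but not yet available" pods as safe to delete. Function and parameter names are illustrative.
```go
package main

import (
	"fmt"
	"time"
)

// isAvailable reports whether a pod has been Ready long enough to count as
// available for the purposes of maxUnavailable accounting.
func isAvailable(ready bool, readySince time.Time, minReadySeconds int32, now time.Time) bool {
	if !ready {
		return false
	}
	return now.Sub(readySince) >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	now := time.Now()
	// Ready for only 2s with minReadySeconds=10: not yet available, so it
	// must still be counted against maxUnavailable during cleanup.
	fmt.Println(isAvailable(true, now.Add(-2*time.Second), 10, now))
}
```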
Automatic merge from submit-queue
Reapply ScheduledJob tests (2ab885a53a)
Re-applied the ScheduledJob tests (#25737), which were reverted due to an integration test error in #27184.
The problem was in `TestBatchGroupBackwardCompatibility`, which tests backwards compatibility for storing jobs (`extensions/v1beta1` vs `batch/v1`); this is not needed for `batch/v2alpha1`, so I've added a skip to the aforementioned test for that group. See `test/integration/master_test.go` for the actual fix.
@caesarxuchao @mikedanese ptal
@piosz @jszczepkowski @erictune fyi
Automatic merge from submit-queue
GCE provider: Limit Filter calls to regexps rather than insane blobs
Filters can't exceed 4k, and GET requests against the GCE API are also limited, so these break down in different ways at different cluster sizes. Fix it by introducing an advisory `node-instance-prefix` configuration in the GCE provider that can hint the `EnsureLoadBalancer`/`UpdateLoadBalancer` code (and the firewall creation/update code). If it's not there, or wrong (a registered hostname violates it), just ignore it and grab the whole project.
Fixes #27731
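A sketch of the prefix-based filter: build a small regexp filter from the advisory node-instance-prefix, and fall back to listing the whole project when a registered hostname does not match it. Names and the exact filter string are illustrative.
```go
package main

import (
	"fmt"
	"strings"
)

// instanceFilter returns a GCE list filter such as `name eq "kubernetes-minion-.*"`,
// or "" (no filter, i.e. the whole project) if the prefix is unset or violated.
func instanceFilter(prefix string, knownHostnames []string) string {
	if prefix == "" {
		return ""
	}
	for _, h := range knownHostnames {
		if !strings.HasPrefix(h, prefix) {
			// The prefix is only advisory; if it's wrong, ignore it.
			return ""
		}
	}
	return fmt.Sprintf(`name eq "%s.*"`, prefix)
}

func main() {
	fmt.Println(instanceFilter("kubernetes-minion-", []string{"kubernetes-minion-a1b2"}))
	fmt.Println(instanceFilter("kubernetes-minion-", []string{"other-node-1"}))
}
```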