Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix all the typos across the project
**What this PR does / why we need it**:
There are lots of typos across the project. Fixing them through many small PRs is time-consuming and inefficient, and we should avoid that.
This PR fixes all the typos currently in the project. And with #59463, new typos can be caught before a PR gets merged.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
/sig testing
/area test-infra
/sig release
/cc @ixdy
/assign @fejta
**Release note**:
```release-note
None
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding kubemci e2e test for ingress spec conformance
**What this PR does / why we need it**:
Adding an e2e test case for kubemci to verify that it conforms to the ingress spec.
Not all tests will pass right now, but adding it will enable us to track the latest status.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bury KubeletConfiguration.ConfigTrialDuration for now
Based on discussion in https://github.com/kubernetes/kubernetes/pull/53833/files#r166669046, this PR chooses not to expose a knob for the trial duration yet. It is unclear exactly what shape this functionality should take in the API.
```release-note
The alpha KubeletConfiguration.ConfigTrialDuration field is no longer available.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
vSphere test infrastructure improvement and new node-unregister test
**What this PR does / why we need it**:
- Created conf file parsing logic for the vSphere tests
- Created a NodeMapper to generate the node-to-vSphere map
- Updated bootstrap to parse the conf file, generate the node-to-vSphere map, and set it in TestContext
- Moved bootstrap.go and context.go up into the vsphere package to avoid cyclic package dependencies
- Added a node register/unregister test that consumes the new test infra (sketched below)
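Roughly, the new node mapping can be pictured like this (an illustrative sketch only; the type and method names below are assumptions, not the actual test code):
```go
package vsphere

import "sync"

// VSphereInstance describes one vCenter endpoint taken from the parsed conf file.
type VSphereInstance struct {
	Hostname string
	Username string
	Password string
}

// NodeMapper records which vSphere instance backs each Kubernetes node VM.
// Bootstrap populates it once and stores it in the TestContext.
type NodeMapper struct {
	mu       sync.RWMutex
	nodeToVC map[string]*VSphereInstance
}

// SetNodeInstance records the vSphere instance that backs the given node.
func (m *NodeMapper) SetNodeInstance(nodeName string, vc *VSphereInstance) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.nodeToVC == nil {
		m.nodeToVC = map[string]*VSphereInstance{}
	}
	m.nodeToVC[nodeName] = vc
}

// GetInstance returns the vSphere instance for a node, if known.
func (m *NodeMapper) GetInstance(nodeName string) (*VSphereInstance, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	vc, ok := m.nodeToVC[nodeName]
	return vc, ok
}
```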
**Which issue(s) this PR fixes**:
Fixes https://github.com/vmware/kubernetes/issues/437
Fixes https://github.com/vmware/kubernetes/issues/379
**Special notes for your reviewer**:
- Successfully ran vSphere e2e tests to ensure that the bootstrapping happens only once. More tests are in progress.
- Successfully ran 'Node Unregister':
```
bash-3.2$ go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sUnregister'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build743103230/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
Will run 1 of 724 specs
Feb 5 22:20:09.890: INFO: >>> kubeConfig: /Users/pshahzeb/kube176.json
Feb 5 22:20:09.903: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Feb 5 22:20:10.036: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 5 22:20:10.182: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 5 22:20:10.182: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Feb 5 22:20:10.203: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Feb 5 22:20:10.203: INFO: Dumping network health container logs from all nodes...
Feb 5 22:20:10.236: INFO: e2e test version: v1.6.0-alpha.0.22494+e66916e052163a-dirty
Feb 5 22:20:10.261: INFO: kube-apiserver version: v1.9.2
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive]
node unregister
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_node_delete.go:53
[BeforeEach] [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive]
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Feb 5 22:20:10.268: INFO: >>> kubeConfig: /Users/pshahzeb/kube176.json
STEP: Building a namespace api object
Feb 5 22:20:11.043: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive]
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_node_delete.go:41
Feb 5 22:20:11.063: INFO: Initializing vc server 10.160.240.176
Feb 5 22:20:11.063: INFO: ConfigFile &{{administrator@vsphere.local Admin!23 443 true k8s-dc 0} map[10.160.240.176:0xc420babe30] {VM Network} {pvscsi} {10.160.240.176 k8s-dc kubernetes vsanDatastore k8s-cluster}}
vSphere instances map[10.160.240.176:0xc420b08830]
[It] node unregister
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_node_delete.go:53
STEP: Get total Ready nodes
Feb 5 22:20:11.566: INFO: vmx file path is [vsanDatastore] 2e98735a-cdb9-c3f3-63d8-020010188a6a/kubernetes-node1.vmx
STEP: Unregister a node VM
Feb 5 22:20:11.686: INFO: Powering off node VM kubernetes-node1
Feb 5 22:20:14.148: INFO: Unregistering node VM kubernetes-node1
STEP: Verifying the ready node counts
STEP: Register back the node VM
Feb 5 22:20:49.490: INFO: Registering node VM kubernetes-node1
Feb 5 22:20:51.785: INFO: Powering on node VM kubernetes-node1
STEP: Verifying the ready node counts
Feb 5 22:21:40.600: INFO: Condition Ready of node kubernetes-node1 is false instead of true. Reason: KubeletNotReady, message: container runtime is down
Feb 5 22:21:45.625: INFO: Condition Ready of node kubernetes-node1 is false instead of true. Reason: KubeletNotReady, message: container runtime is down
STEP: Sanity check for volume lifecycle
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Feb 5 22:21:50.718: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ztj7g to have phase Bound
Feb 5 22:22:15.053: INFO: PersistentVolumeClaim pvc-ztj7g found and phase=Bound (24.334875493s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Feb 5 22:22:25.976: INFO: Running '/Users/pshahzeb/k8s/kubernetes/_output/bin/kubectl --server=https://10.160.241.49 --kubeconfig=/Users/pshahzeb/kube176.json exec pvc-tester-q7q2w --namespace=e2e-tests-node-unregister-csdrc -- /bin/touch /mnt/volume1/emptyFile.txt'
Feb 5 22:22:26.740: INFO: stderr: ""
Feb 5 22:22:26.740: INFO: stdout: ""
STEP: Deleting pod
Feb 5 22:22:26.740: INFO: Deleting pod "pvc-tester-q7q2w" in namespace "e2e-tests-node-unregister-csdrc"
Feb 5 22:22:26.799: INFO: Wait up to 5m0s for pod "pvc-tester-q7q2w" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Feb 5 22:23:16.966: INFO: Volume "[vsanDatastore] f0c55f5a-7349-1aad-2464-02001067f24e/kubernetes-dynamic-pvc-04775fe5-0b06-11e8-9872-005056809c8d.vmdk" has successfully detached from "kubernetes-node1"
Feb 5 22:23:16.966: INFO: Deleting PersistentVolumeClaim "pvc-ztj7g"
[AfterEach] [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive]
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb 5 22:23:17.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-unregister-csdrc" for this suite.
Feb 5 22:23:23.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 22:23:24.421: INFO: namespace: e2e-tests-node-unregister-csdrc, resource: bindings, ignored listing per whitelist
Feb 5 22:23:24.795: INFO: namespace e2e-tests-node-unregister-csdrc deletion completed in 7.715803086s
• [SLOW TEST:194.521 seconds]
[sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive]
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
node unregister
/Users/pshahzeb/k8s/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere/vsphere_volume_node_delete.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSFeb 5 22:23:24.797: INFO: Running AfterSuite actions on all node
Feb 5 22:23:24.798: INFO: Running AfterSuite actions on node 1
Ran 1 of 724 Specs in 194.905 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 723 Skipped PASS
Ginkgo ran 1 suite in 3m15.529747133s
Test Suite Passed
2018/02/05 22:23:24 util.go:174: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sUnregister' finished in 3m16.095671615s
2018/02/05 22:23:24 e2e.go:81: Done
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59424, 59672, 59313, 59661). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Disable symbol resolution by pprof in profile-gatherer
Because otherwise it fails while trying to symbolize, due to the lack of a kube-apiserver binary locally (as noted by @wojtek-t) within the job pod:
```
Local symbolization failed for kube-apiserver: open /usr/local/bin/kube-apiserver: no such file or directory
```
Disabling symbolization still seems to produce a graph with all named references, so it appears fine to skip it. The [documentation](https://github.com/google/pprof/blob/master/doc/pprof.md#symbolization) says:
```
pprof can add symbol information to a profile that was collected only with address information. This is useful for profiles for compiled languages, where it may not be easy or even possible for the profile source to include function names or source coordinates.
```
So my feeling is that for Go, the function names etc. are already included in the profile source.
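For reference, the gatherer shells out to `go tool pprof`; disabling symbolization amounts to adding `-symbolize=none`, roughly like this (a sketch, not the exact gatherer code; the output format and paths are assumptions):
```go
package gatherers

import (
	"log"
	"os/exec"
)

// savePprofGraph renders a collected profile to SVG without attempting local
// symbolization, so pprof never looks for a kube-apiserver binary on disk.
func savePprofGraph(profilePath, outputSVG string) error {
	cmd := exec.Command("go", "tool", "pprof",
		"-symbolize=none", // skip symbol resolution; Go profiles already carry function names
		"-svg",
		"-output", outputSVG,
		profilePath)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("pprof failed: %v\n%s", err, out)
	}
	return err
}
```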
/cc @wojtek-t @kubernetes/sig-scalability-misc
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59424, 59672, 59313, 59661). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[e2e gce-ingress] Scale test to measure ingress create/update latency
**What this PR does / why we need it**:
Adding a basic scale test. Test procedure:
- Create O(1) ingresses, measure creation latency for each ingress.
- Create and update one more ingress, do similar measurement on create & update latency.
- Repeat first two steps with O(10) ingresses.
- Repeat first two steps with O(100) ingresses.
A couple of side notes:
- Each ingress references a separate service.
- All services share the same set of backend pods.
- All ingresses share one TLS secret.
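The per-ingress measurement loop is roughly the following (an illustrative sketch; the create/wait callbacks stand in for whatever the real framework helpers are):
```go
package scale

import "time"

// measureCreateLatencies creates numIngresses ingresses one at a time and
// records how long each takes to become ready. create and waitReady are
// placeholders for the real test framework calls.
func measureCreateLatencies(numIngresses int, create func(i int) string, waitReady func(name string)) []time.Duration {
	latencies := make([]time.Duration, 0, numIngresses)
	for i := 0; i < numIngresses; i++ {
		start := time.Now()
		name := create(i) // each ingress references its own service
		waitReady(name)   // creation latency = time until the ingress is serving
		latencies = append(latencies, time.Since(start))
	}
	return latencies
}
```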
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #NONE
**Special notes for your reviewer**:
/assign @rramkumar1 @nicksardo @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59466, 58912, 59605, 59548). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Rename and restructure local PV tests
**What this PR does / why we need it**:
Reorganizes the local PV tests to have a more consistent structure.
@kubernetes/sig-storage-pr-reviews
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable golinting for scheduler packages.
**What this PR does / why we need it**:
Enable golinting for scheduler packages
**Which issue(s) this PR fixes**:
Fixes #58234
**Special notes for your reviewer**:
- `pkg/scheduler/api` and `pkg/scheduler/api/v1` are not removed from `hack/.golint_failures`, because those packages contain Go files auto-generated by `deepcopy-gen`, which have golint errors and should not be edited manually.
- Please help refine the comments if there are errors or inaccurate descriptions. Thanks!
**Release note**:
```release-note
Enable golint for `pkg/scheduler` and fix the golint errors in it.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Redesign and implement volume reconstruction work
This PR is the first part of the redesign of the volume reconstruction work. The detailed design is described in https://github.com/kubernetes/community/pull/1601.
The changes include:
1. Remove the dependency on the volume spec stored in the actual state for the volume cleanup process (UnmountVolume and UnmountDevice). The AttachedVolume struct is modified to add DeviceMountPath so that the volume unmount operation can use this information instead of reconstructing it from the volume spec.
2. Modify the reconciler's volume reconstruction process (syncState). In the current workflow, when kubelet restarts, syncState() is called only once before the reconciler starts its loop.
   a. If the volume plugin supports reconstruction, the reconstructed volume spec information is used to update the actual state, as before.
   b. If the volume plugin cannot support reconstruction, the scanned mount path information is used to clean up the mounts.
In this PR, all the plugins still report that they support reconstruction (except glusterfs), so reconstruction for some plugins will still have issues. The next PR will modify those plugins that cannot support reconstruction well.
This PR addresses issue #52683
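The core data-structure change from item 1 can be sketched as follows (fields trimmed and names simplified; only the new DeviceMountPath field is the point):
```go
package cache

// AttachedVolume is a trimmed, illustrative version of the actual-state-of-world
// entry. Recording DeviceMountPath here lets UnmountDevice clean up the global
// mount after a kubelet restart without reconstructing the path from the volume spec.
type AttachedVolume struct {
	VolumeName      string // unique name of the volume
	PluginName      string // volume plugin that manages this volume
	DevicePath      string // path at which the volume is attached on the node
	DeviceMountPath string // global mount path; previously derived from the volume spec
}
```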
Automatic merge from submit-queue (batch tested with PRs 59580, 58854). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Prefer apps/v1 storage for daemonsets, deployments, replicasets, statefulsets
The workload API objects went GA in 1.9. This means we can safely begin persisting them in etcd in apps/v1 format in 1.10.
xref #43214
```release-note
DaemonSet, Deployment, ReplicaSet, and StatefulSet objects are now persisted in etcd in apps/v1 format
```
Automatic merge from submit-queue (batch tested with PRs 59190, 59360). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding benchmarks to envelope encryption integration tests
**What this PR does / why we need it**:
Adding benchmarks for envelope encryption integration tests.
This allows us to estimate how envelope encryption may impact the performance of the kube-apiserver.
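The benchmarks have roughly this shape (illustrative only; the closure stands in for the real client call against a test apiserver configured with an envelope/KMS provider):
```go
package integration

import "testing"

// benchmarkSecretWrites measures the cost of writing secrets when encryption
// at rest (here: an envelope/KMS provider) is enabled on the apiserver.
func benchmarkSecretWrites(b *testing.B, createSecret func(i int) error) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := createSecret(i); err != nil {
			b.Fatal(err)
		}
	}
}
```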
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 57824, 58806, 59410, 59280). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
2nd try at using a vanity GCR name
The 2nd commit here contains the changes relative to the reverted PR. Please focus review attention on that.
This is the 2nd attempt. The previous try (#57573) was reverted while we
figured out the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest. To publish
an image, push k8s-staging.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of google-owned projects (working on it), then we just
do a one-time sync, and change the google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
xref https://github.com/kubernetes/release/issues/281
TL;DR:
* The new `staging-k8s.gcr.io` is where we push images. It is literally an alias to `gcr.io/google_containers` (the existing repo) and is hosted in the US.
* The contents of `staging-k8s.gcr.io` are automatically synced to `{asia,eu,us}-k8s.gcr.io`.
* The new `k8s.gcr.io` will be a read-only alias to whichever regional repo is closest to you.
* In the future, images will be promoted from `staging` to regional "prod" more explicitly and auditably.
```release-note
Use "k8s.gcr.io" for pulling container images rather than "gcr.io/google_containers". Images are already synced, so this should not impact anyone materially.
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
verify no extra RS was created when re-creating a deployment
**What this PR does / why we need it**:
This PR improves the existing `testDeploymentsControllerRef` e2e test to verify that no extra RS is created when re-creating a deployment that adopts a previously orphaned RS. It also verifies that the collision avoidance mechanism works as expected.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59213
**Release note**:
```release-note
NONE
```
/sig apps
Automatic merge from submit-queue (batch tested with PRs 59010, 59212, 59281, 59014, 59297). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Replace nominateNodeName annotation with PodStatus.NominatedNodeName
**What this PR does / why we need it**:
Replaces the nominateNodeName annotation with PodStatus.NominatedNodeName in the scheduler's logic. We don't expect any logic/behavior changes.
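In other words, instead of reading an annotation, the scheduler now uses the typed status field, roughly like this (a sketch; the real helpers live in the scheduler's internal packages):
```go
package scheduler

import v1 "k8s.io/api/core/v1"

// nominatedNodeName returns the node the pod was nominated to run on after
// preemption, taken from the typed status field rather than an annotation.
func nominatedNodeName(pod *v1.Pod) string {
	return pod.Status.NominatedNodeName
}
```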
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
ref #57471
/sig scheduling
cc: @k82cn @aveshagarwal @resouer
Automatic merge from submit-queue (batch tested with PRs 59010, 59212, 59281, 59014, 59297). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix deployment's collision avoidance mechanism
**What this PR does / why we need it**:
This PR modifies the deployment's collision avoidance mechanism to take a change to a ReplicaSet's `.Labels` field into account. This ensures that the mechanism can be triggered when the user updates only the `.Labels` field of the ReplicaSet without modifying its PodTemplateSpec.
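A simplified picture of the comparison after this change (not the actual controller code, which compares hash-stripped copies; this only illustrates that `.Labels` now participates):
```go
package util

import (
	appsv1 "k8s.io/api/apps/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// rsMatchesDeployment is an illustrative check: an existing ReplicaSet is only
// considered "the same" as the deployment's desired one if both the pod template
// and the ReplicaSet's labels match; otherwise collision avoidance kicks in.
func rsMatchesDeployment(rs *appsv1.ReplicaSet, d *appsv1.Deployment) bool {
	return apiequality.Semantic.DeepEqual(rs.Spec.Template, d.Spec.Template) &&
		apiequality.Semantic.DeepEqual(rs.Labels, d.Spec.Template.Labels)
}
```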
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #59213
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59010, 59212, 59281, 59014, 59297). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix flaky AdmissionWebhook e2e-crd tests
**What this PR does / why we need it**: Several of the tests ("It" blocks) in the e2e suite reuse the CRD, but they each try to set up and tear down the CRD independently. Since these tests can run in parallel, this causes intermittent failures.
Changes the test to set up one shared CRD and reuse it.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58855
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Patch ingress upgrade test to ignore checking certain GCP resources
**What this PR does / why we need it**:
In certain situations, GCP resources after an upgrade or downgrade will differ because of semantic changes in the glbc. The test therefore needs to account for this difference and make sure it does not fail because of it.
```release-note
None
```
cc @MrHohn
/assign @bowei
Automatic merge from submit-queue (batch tested with PRs 59276, 51042, 58973, 59377, 59472). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix local PV node affinity tests and only run once
**What this PR does / why we need it**:
* Don't look for specific scheduling error messages for the NodeAffinity tests. Unit/integration will cover that.
* Move PV NodeAffinity tests outside the local volume loop. Mounts are not involved so don't need to be tested per volume type.
* Move mount failure tests outside the local volume loop.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59369
**Release note**:
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 59276, 51042, 58973, 59377, 59472). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update Container Runtime Interface to use enumerated namespace modes
**What this PR does / why we need it**: This updates the CRI as described in the [Shared PID Namespace](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-pid-namespace.md#container-runtime-interface-changes) proposal. This change to the alpha API is not backwards compatible: implementations of the CRI will need to update to the new API version.
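The generated Go constants for the new enumeration look roughly like this (a sketch based on the v1alpha2 API; consult the actual runtime API package for the authoritative definition):
```go
package v1alpha2

// NamespaceMode enumerates how a namespace (PID, network, IPC, ...) is set up
// for a container, replacing the previous per-namespace booleans.
type NamespaceMode int32

const (
	// POD: the namespace is shared by all containers in the pod.
	NamespaceMode_POD NamespaceMode = 0
	// CONTAINER: the container gets its own namespace.
	NamespaceMode_CONTAINER NamespaceMode = 1
	// NODE: the container uses the host (node) namespace.
	NamespaceMode_NODE NamespaceMode = 2
)
```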
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
WIP #1615
**Special notes for your reviewer**:
/assign @yujuhong
**Release note**:
```release-note
[action-required] The Container Runtime Interface (CRI) version has increased from v1alpha1 to v1alpha2. Runtimes implementing the CRI will need to update to the new version, which configures container namespaces using an enumeration rather than booleans.
```
Automatic merge from submit-queue (batch tested with PRs 59276, 51042, 58973, 59377, 59472). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Allow passing request-timeout from NewRequest all the way down
**What this PR does / why we need it**:
Currently, if you pass `--request-timeout`, it is not passed all the way down to the actual request object. There is a separate field on the `Request` object that allows setting that timeout, but it is not populated from that flag.
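For illustration, with client-go the intent corresponds roughly to making sure the flag's value ends up on the config/request timeout (a sketch, not the actual change):
```go
package example

import (
	"time"

	"k8s.io/client-go/rest"
)

// newConfigWithTimeout copies a rest.Config and sets the client-side timeout,
// so that requests built from it honor the value given via --request-timeout.
func newConfigWithTimeout(base *rest.Config, requestTimeout time.Duration) *rest.Config {
	cfg := rest.CopyConfig(base)
	cfg.Timeout = requestTimeout
	return cfg
}
```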
@smarterclayton @deads2k ptal, this is coming from https://github.com/openshift/origin/pull/13701
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Do not recycle volumes that are used by pods
**What this PR does / why we need it**:
The recycler should wait until all pods that use a volume are finished.
Consider this scenario:
1. User creates a PVC that's bound to a NFS PV.
2. User creates a pod that uses the PVC
3. User deletes the PVC.
Now the PV gets `Released` (the PVC no longer exists) and recycled, however the PV is still mounted in a running pod. PVC protection won't help us, because it puts finalizers on the PVC, which is under the user's control and can be removed by the user.
This PR checks that there is no pod that uses a PV before it recycles it.
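The check is conceptually this (a simplified sketch; the real implementation lists pods through an informer inside the PV controller):
```go
package persistentvolume

import v1 "k8s.io/api/core/v1"

// isVolumeUsedByPods reports whether any pod still references the claim that
// the released PV is bound to; recycling must wait until this returns false.
func isVolumeUsedByPods(pv *v1.PersistentVolume, pods []*v1.Pod) bool {
	if pv.Spec.ClaimRef == nil {
		return false
	}
	for _, pod := range pods {
		if pod.Namespace != pv.Spec.ClaimRef.Namespace {
			continue
		}
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil &&
				vol.PersistentVolumeClaim.ClaimName == pv.Spec.ClaimRef.Name {
				return true
			}
		}
	}
	return false
}
```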
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
remove a todo which is out of date
**What this PR does / why we need it**:
fix todo: move container.go to e2e/framework
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
This also incorporates the version string into the package name so
that incompatible versions will fail to connect.
Arbitrary choices:
- The proto3 package name is runtime.v1alpha2. The proto compiler
normally translates this to a go package of "runtime_v1alpha2", but
I renamed it to "v1alpha2" for consistency with existing packages.
- kubelet/apis/cri is used as "internalapi". I left it alone and put the
public "runtimeapi" in kubelet/apis/cri/runtime.