Automatic merge from submit-queue
vSphere cloud provider: re-use session for vCenter logins
This change allows for the re-use of a vCenter client session. Addresses #34491
Automatic merge from submit-queue
Node status updater should call SetNodeStatusUpdateNeeded if it fails to
update status
When the volume controller tries to update the node status and the update
fails, it should call SetNodeStatusUpdateNeeded so that the volume list can
be updated on the next attempt.
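Roughly what that retry marking looks like, as a sketch with hypothetical type and helper names (only SetNodeStatusUpdateNeeded comes from the change itself):
```go
package sketch

// nodeStatusWriter stands in for whatever actually persists the node status.
type nodeStatusWriter func(nodeName string) error

// statusUpdater tracks which nodes still need their volume list pushed.
type statusUpdater struct {
	needsUpdate map[string]bool
}

// SetNodeStatusUpdateNeeded marks a node so the next sync retries the update.
func (s *statusUpdater) SetNodeStatusUpdateNeeded(nodeName string) {
	s.needsUpdate[nodeName] = true
}

// UpdateNodeStatus re-queues the node whenever the write fails, instead of
// silently dropping the update.
func (s *statusUpdater) UpdateNodeStatus(nodeName string, write nodeStatusWriter) error {
	if err := write(nodeName); err != nil {
		s.SetNodeStatusUpdateNeeded(nodeName)
		return err
	}
	delete(s.needsUpdate, nodeName)
	return nil
}
```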
Automatic merge from submit-queue
Remove versioned LabelSelectors
We have LabelSelectors defined in `unversioned`, `batch/v1`, `batch/v2alpha1`, and `extensions/v1beta1`. Their definitions are all the same. I kept the definition in `unversioned` and removed the others. It only makes sense to define a versioned LabelSelector if its definition differs from the unversioned one.
Automatic merge from submit-queue
Fix device information struct in container
Nothing currently uses the `Devices` field in `RunContainerOptions`. When I tried to use it, I found the structure could be improved, because the device information for a container looks like this:
```json
"Devices": [
{
"PathOnHost": "/dev/nvidiactl",
"PathInContainer": "/dev/nvidiactl",
"CgroupPermissions": "mrw"
},
{
"PathOnHost": "/dev/nvidia-uvm",
"PathInContainer": "/dev/nvidia-uvm",
"CgroupPermissions": "mrw"
},
{
"PathOnHost": "/dev/nvidia0",
"PathInContainer": "/dev/nvidia0",
"CgroupPermissions": "mrw"
}
],
```
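For illustration, a hedged sketch of the per-device struct this JSON suggests, with field names taken from the example above (the actual type in the PR may differ):
```go
package sketch

// DeviceInfo groups the per-device fields shown in the JSON above instead of
// spreading them across parallel slices.
type DeviceInfo struct {
	PathOnHost        string
	PathInContainer   string
	CgroupPermissions string
}

// RunContainerOptions would then carry one entry per device.
type RunContainerOptions struct {
	Devices []DeviceInfo
	// other options omitted
}
```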
Automatic merge from submit-queue
always clean gce resources in service e2e
@bprashanth the previous PR was closed when I squashed my commits.
Here is the new change set; please help review it again.
1) Only the following two It() blocks create load balancers. I added a string slice to persist the LB names so that they can be cleaned up in AfterEach(), and the slice is reset after cleanup:
```
"should be able to change the type and ports of a service [Slow]"
"should be able to create services of type LoadBalancer and externalTraffic=localOnly"
```
2) Directly call the GCE API to delete the resources and ignore any errors returned (see the cleanup sketch below).
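A hedged sketch of that cleanup, assuming Ginkgo and a hypothetical cleanupGCEResources helper standing in for the direct GCE API calls:
```go
package sketch

import . "github.com/onsi/ginkgo"

// createdLBNames persists the load-balancer names created by the two It()
// blocks listed above so they can be cleaned up afterwards.
var createdLBNames []string

// cleanupGCEResources is a stand-in for deleting the GCE forwarding rule,
// target pool, firewall rules, etc. for the named load balancer.
func cleanupGCEResources(name string) error { return nil }

var _ = Describe("Services", func() {
	AfterEach(func() {
		for _, name := range createdLBNames {
			// Call the GCE API directly and ignore any error: the resources
			// may already be gone if the test cleaned up successfully.
			_ = cleanupGCEResources(name)
		}
		createdLBNames = nil // reset after cleanup
	})
})
```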
Automatic merge from submit-queue
Add GroupVersion tags to OpenAPI spec and remove all specs except main one
Tags are used as a grouping mechanism in OpenAPI. Previously we generated one spec per GroupVersion to provide this grouping, but with the tags added in this PR those per-GroupVersion files have no use. We can always add them back if a use case arises.
**Release note**:
```release-note
Deprecate OpenAPI spec for GroupVersion endpoints in favor of single spec /swagger.json
```
Reference: #13414
Automatic merge from submit-queue
CRI: Instrumented cri service
For https://github.com/kubernetes/kubernetes/issues/29478.
This PR adds an instrumented CRI service. Because the instrumented wrapper is added inside kuberuntime, it works for both the gRPC and non-gRPC integrations.
This will be useful for comparing the latency of the gRPC and non-gRPC integrations, although there shouldn't be much difference.
@yujuhong @feiskyer
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Refactor PortForward server methods into the portforward package
Refactor the PortForward code into its own package so it can be reused in the CRI streaming library without pulling in lots of extra dependencies.
This is a straightforward move. Nothing is changed other than a few references to the package.
Automatic merge from submit-queue
Fix volume states out of sync problem after kubelet restarts
When the kubelet restarts, all information about volumes is gone from its
actual/desired states. When the node status is then updated with mounted
volumes, the volume list might be empty even though volumes are still
mounted, which in turn causes the master to detach those volumes because
they are not in the mounted-volumes list. This fix makes sure the
mounted-volumes list is only updated after the reconciler starts its
state-sync process. That process scans the existing volume directories and
reconstructs the actual state if it is missing.
This PR also fixes a problem with orphaned pod directories. If a pod
directory has been unmounted but not yet deleted (e.g., the kubelet was
restarted in between), the cleanup routine now deletes the directory so the
pod directory can be cleaned up (this is safe since it is no longer
mounted).
The third issue this PR fixes is that, when reconstructing a volume in the
actual state, the mounter must not be nil since it is required to create
container.VolumeMap; a nil mounter could cause a nil-pointer exception in
the kubelet.
Detailed design proposal: #33203
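A simplified sketch of the gating described above; the type and field names are hypothetical stand-ins for the kubelet's volume manager:
```go
package sketch

// volumeManager is a stand-in for the kubelet component that tracks volumes.
type volumeManager struct {
	reconcilerSynced bool     // set once the reconciler has rebuilt actual state
	mountedVolumes   []string // volumes currently known to be mounted
}

// VolumesInUse returns nothing until the reconciler has reconstructed state
// after a restart, so an empty list is never reported while volumes may still
// be mounted (which would otherwise cause the master to detach them).
func (vm *volumeManager) VolumesInUse() []string {
	if !vm.reconcilerSynced {
		return nil
	}
	return vm.mountedVolumes
}
```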
Automatic merge from submit-queue
CRI: Add dockershim grpc server.
This PR adds an in-process gRPC server for dockershim.
Flag changes:
1. `container-runtime` will not be automatically set to remote when `container-runtime-endpoint` is set. @feiskyer
2. Set the kubelet flags `--experimental-runtime-integration-type=remote --container-runtime-endpoint=UNIX_SOCKET_FILE_PATH` to enable the in-process dockershim gRPC server.
3. Set the node e2e test flags `--runtime-integration-type=remote --container-runtime-endpoint=UNIX_SOCKET_FILE_PATH` to run the node e2e tests against the in-process dockershim gRPC server.
I've run the node e2e tests against the remote CRI integration; tests that don't rely on the streaming and log functions pass.
This unblocks the following work:
1) CRI conformance test.
2) Performance comparison between in-process integration and in-process grpc integration.
@yujuhong @feiskyer
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Fixed mutation warning in Attach/Detach controller
Objects from a shared informer must not be changed; they are shared among
all controllers.
```release-note
Fixed mutation warning in Attach/Detach controller
```
This fixes a CacheMutationDetector panic with this output:
```
CACHE *api.Node[5] ALTERED!
{"metadata":{"name":"ip-172-18-8-71.ec2.internal","selfLink":"/api/v1/nodes/ip-172-18-8-71.ec2.internal","uid":"73d07d16-976e-11e6-8225-0e2f14b56070","resourceVersion":"136","creationTimestamp":"2016-10-21T09:12:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/instance-type":"t2.medium","beta.kubernetes.io/os":"linux","failure-domain.beta.kubernetes.io/region":"us-east-1","failure-domain.beta.kubernetes.io/zone":"us-east-1d","kubernetes.io/hostname":"ip-172-18-8-71.ec2.internal"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"i-9cb6180f","providerID":"aws:///us-east-1d/i-9cb6180f"},"status":{"capacity":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"allocatable":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"2","memory":"4045568Ki","pods":"110"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"InodePressure","status":"False","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:12Z","reason":"KubeletHasNoInodePressure","message":"kubelet has no inode pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2016-10-21T09:12:52Z","lastTransitionTime":"2016-10-21T09:12:22Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"InternalIP","address":"172.18.8.71"},{"type":"LegacyHostIP","address":"172.18.8.71"},{"type":"ExternalIP","address":"54.85.104.236"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"78a79498db8e4fdc9ac24b5e436a982c","systemUUID":"EC2BB406-5467-4ABE-B54D-D9993C45714F","bootID":"2553d6b8-1ddb-4ef0-902a-d09a807b89ba","kernelVersion":"4.6.7-300.fc24.x86_64","osImage":"Fedora 24 (Cloud Edition)","containerRuntimeVersion":"docker://1.10.3","kubeletVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","kubeProxyVersion":"v1.5.0-alpha.1.726+5aac5eddb809e4","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-release:latest"],"sizeBytes":714569002},{"names":["openshift/origin-haproxy-router-base:latest"],"sizeBytes":294417608},{"names":["openshift/origin-base:latest"],"sizeBytes":275310761},{"names":["docker.io/centos@sha256:2ae0d2c881c7123870114fb9cc7afabd1e31f9888dac8286884f6cf59373ed9b","docker.io/centos:centos7"],"sizeBytes":196744353},{"names":["gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff","gcr.io/google_containers/busybox:1.24"],"sizeBytes":1113554},{"names":["gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516","gcr.io/google_containers/pause-amd64:3.0"],"sizeBytes":746888}],"volumesInUse":["kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352"]
A: ,"volumesAttached":[{"name":"kubernetes.io/aws-ebs/aws://us-east-1d/vol-f4bd0352","devicePath":"/dev/xvdba"}]}}
B: }}
```
@saad-ali @jingxu97
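A minimal sketch of the fix pattern, written against present-day client-go/apimachinery names purely for illustration:
```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateNodeVolumes writes volumesAttached on a copy of the informer object;
// the cached object is shared by every controller and must never be mutated.
func updateNodeVolumes(ctx context.Context, c kubernetes.Interface, cached *v1.Node, attached []v1.AttachedVolume) error {
	nodeCopy := cached.DeepCopy()
	nodeCopy.Status.VolumesAttached = attached
	_, err := c.CoreV1().Nodes().UpdateStatus(ctx, nodeCopy, metav1.UpdateOptions{})
	return err
}
```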
Automatic merge from submit-queue
Do not log stack trace for the error http.StatusBadRequest (400).
**What this PR does / why we need it**:
This PR fixes an issue where a stack trace was logged by the kubelet when an http.StatusBadRequest (400) response occurred.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Make "Unschedulable" reason a constant in api
String "Unschedulable" is used in couple places in K8S:
* scheduler
* federation replicaset and deployment controllers
* cluster autoscaler
* rescheduler
This PR makes the string part of the API so that it is not changed.
cc: @davidopp @fgrzadkowski @wojtek-t
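What promoting the string to an API constant looks like, roughly (exact name and placement are illustrative):
```go
package api

// PodReasonUnschedulable is the condition reason the scheduler sets when it
// cannot schedule a pod. Sharing one constant keeps the scheduler, federation
// controllers, cluster autoscaler, and rescheduler in agreement.
const PodReasonUnschedulable = "Unschedulable"
```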
Automatic merge from submit-queue
Use the rawTerminal setting from the container itself
**What this PR does / why we need it**:
Checks whether the container is set for rawTerminal connection and uses the appropriate connection.
Prevents the output `Error from server: Unrecognized input header` when doing `kubectl run`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
helps with case 1 in #28695, resolves #30159
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
e2e node plumbing and bundling for GCI mounter
**Note:** The code in this PR only bundles the mounter and modifies `--mounter-path` if it can find `cluster/gce/gci/mounter` in the K8s source dir when building the test bundle.
This bundles the mounter script for GCI with the node e2e tests and allows the `--mounter-path` to be passed to the Kubelet via the node test framework. The node test runner will detect when we are running on a remote GCI node and add the appropriate `--mounter-path` to the `testArgs`.
It also includes a simple node test that mounts a tmpfs volume. This will exercise the Kubelet's mounter code path.
**ITEM OF NOTE:** To get the k8s root dir (in order to copy the mount script into the tarball), I changed `getK8sRootDir` -> `GetK8sRootDir` in `test/e2e_node/build/build.go`. Based on the comment above that function (and the fact that it was private to begin with), I'm not sure this is the best way to do things:
```
// TODO: Dedup / merge this with comparable utilities in e2e/util.go
```
On the other hand, the `e2e/util.go` file mentioned in that comment doesn't exist anymore. This should be resolved before this PR is merged.
Automatic merge from submit-queue
CRI: Refactor kuberuntime unit test
Based on https://github.com/kubernetes/kubernetes/pull/34858
This PR:
1) Refactors the fake runtime service and some kuberuntime unit tests.
2) Adds better garbage collection unit tests.
3) Fixes the init container unit test, which wasn't testing things correctly; some other unit tests may also need to be fixed.
4) Adds a pod log directory garbage collection unit test.
@feiskyer @yujuhong
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Refactor some functions in kube-dns to reduce surface area
- Moves federation query path out to its own method
- Creates dns/util and moves some trivial methods to that package
This is just code movement.
Automatic merge from submit-queue
Kubelet getting node from apiserver cache before update.
This is blocked on #35218 (however it's ready for review).
It seems to visibly reduce the apiserver metrics (and I didn't observe a higher number of conflicts even in a 2000-node kubemark).
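A sketch of the read pattern, using current client-go signatures for illustration: passing ResourceVersion "0" allows the apiserver to answer from its watch cache instead of hitting etcd.
```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getNodeFromCache reads the node with ResourceVersion "0", accepting a
// possibly slightly stale object from the apiserver cache before the kubelet
// computes and sends its status update.
func getNodeFromCache(ctx context.Context, c kubernetes.Interface, name string) (*v1.Node, error) {
	return c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{ResourceVersion: "0"})
}
```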
Automatic merge from submit-queue
Create restclient interface
Refactors code to allow replacing *restclient.RESTClient with any RESTClient implementation that implements the restclient.RESTClientInterface interface.
Automatic merge from submit-queue
CRI: Handle container/sandbox restarts for pod with RestartPolicy == …
If the sandbox and all containers in a pod are dead and the restart policy
is "Never", the kubelet should not try to recreate them.
Automatic merge from submit-queue
Deny service ClusterIP update from `None`
**What this PR does / why we need it**: A headless service should not be transformed into a service with a ClusterIP; therefore, updating this field is disallowed when it is set to `None`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #33029
**Release note**:
```release-note
Changing service ClusterIP from `None` is not allowed anymore.
```
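A hedged sketch of the validation rule, using core v1 types for illustration:
```go
package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// validateClusterIPUpdate rejects turning a headless service (ClusterIP
// "None") into a service with a cluster IP on update.
func validateClusterIPUpdate(oldSvc, newSvc *v1.Service) error {
	if oldSvc.Spec.ClusterIP == v1.ClusterIPNone && newSvc.Spec.ClusterIP != v1.ClusterIPNone {
		return fmt.Errorf("spec.clusterIP: field is immutable once set to %q", v1.ClusterIPNone)
	}
	return nil
}
```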
Automatic merge from submit-queue
add generic shared informer backed by existing informer
Adds the ability to get an informer and lister that returns `[]runtime.Object` methods with the "normal" filtering capabilities based on a `GroupResource`. Right now, it only works on known types (and re-uses those caches for efficiency by having a different skin on the `Index`). It should be extended in the future.
@derekwaynecarr I think this gives you the types you were looking for to avoid the ugly array copies.
Automatic merge from submit-queue
Support resourceVersion in GetToList - unify interface of List and Ge…
This pretty much unifies the interfaces of the List() and GetToList() methods of the storage interface.
I'm going to use it in a subsequent PR to improve performance of the whole cluster.
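Roughly the unified shape of the two methods; the parameter names and predicate type below are illustrative rather than the exact storage interface of the time:
```go
package sketch

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
)

// Predicate filters objects; it stands in for the storage package's selection type.
type Predicate func(obj runtime.Object) bool

// Storage shows List and GetToList taking the same resourceVersion parameter,
// so both can be served from the watch cache when a version is supplied.
type Storage interface {
	List(ctx context.Context, key, resourceVersion string, p Predicate, listObj runtime.Object) error
	GetToList(ctx context.Context, key, resourceVersion string, p Predicate, listObj runtime.Object) error
}
```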
Automatic merge from submit-queue
Return an empty network namespace path for exited infra containers
If the infra container has already terminated, `docker inspect` will report
pid 0. The path constructed using the pid to check the network namespace of
the process will be invalid. This commit changes docker to report an empty
path to stop kubenet from erroring out whenever TearDown is called on an
exited infra container.
This is not a fix for all the plugins, as some plugins may require the actual
network namespace to tear down properly.
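The core of the change, sketched with a hypothetical helper name:
```go
package sketch

import "fmt"

// networkNamespacePath returns "" for an exited infra container, for which
// docker inspect reports pid 0; a path built from pid 0 would be invalid and
// make kubenet's TearDown error out.
func networkNamespacePath(pid int) string {
	if pid == 0 {
		return ""
	}
	return fmt.Sprintf("/proc/%d/ns/net", pid)
}
```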
Automatic merge from submit-queue
Remove last probe time from replica sets
While experimenting with Deployment conditions I found that if we use lastProbeTime the way it is supposed to be used, we hot-loop between updates (see https://github.com/kubernetes/kubernetes/pull/19343#issuecomment-255096778 for more info)
cc: @smarterclayton @soltysh
Automatic merge from submit-queue
Add an informer for StorageClass
Add an informer for `StorageClass` for later consumption in quota.
/cc @eparis @deads2k @erinboyd
Automatic merge from submit-queue
Optimize label selector
The number of values for a given label is generally pretty small (in the vast majority of cases it is exactly one value).
Computing selectors currently accounts for up to 50% of CPU usage in both the apiserver and the scheduler.
Changing the structure in which those values are stored from a map to a slice improves the performance of the typical selector-computation use case.
Early results:
- scheduler throughput is ~15% higher
- apiserver CPU usage is also lower (also ~10-15%, it seems)
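A sketch of the data-layout change with illustrative field names: since a requirement almost always carries exactly one value, a small slice beats a map both in allocations and in lookup cost.
```go
package sketch

// Requirement holds one label constraint; values used to be a string set (map).
type Requirement struct {
	key      string
	operator string
	values   []string // usually length 1, so a linear scan is cheap
}

// hasValue replaces the former map lookup with a scan over the slice.
func (r *Requirement) hasValue(v string) bool {
	for _, val := range r.values {
		if val == v {
			return true
		}
	}
	return false
}
```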
In order to be able to use the new mounter library, this PR adds the
mounterPath flag to the kubelet, which passes it down to the mount
interface. If the flag is empty, mount uses the default mount path.
Automatic merge from submit-queue
move proxytransport config out of the genericapiserver
Proxy transport is not generic. This moves it to the master config where it is used.
Automatic merge from submit-queue
rkt: Convert image name to be a valid ACIdentifier
**Release note**:
```release-note
Fix a bug under the rkt runtime whereby image-registries with ports would not be fetched from
```
This fixes a bug whereby an image reference that included a port was not
recognized after being downloaded, and so could not be run.
This is the quick-and-simple fix. In the longer term, we'll want to refactor image logic a bit more to handle the many special cases that the current code does not, mostly related to library images on dockerhub.
/cc @yifan-gu @kubernetes/sig-rktnetes
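A rough, hypothetical sketch of the kind of conversion involved (the real code goes through the appc types package); it only illustrates why a registry port breaks a naive name mapping:
```go
package sketch

import "strings"

// toACIdentifierish lowercases a docker image reference such as
// "myregistry:5000/app:v1" and replaces characters that are not valid in an
// appc ACIdentifier, so the downloaded image can be found again by this name.
func toACIdentifierish(image string) string {
	s := strings.ToLower(image)
	s = strings.Replace(s, ":", "-", -1)
	return s
}
```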
Automatic merge from submit-queue
Loadbalanced client src ip preservation enters beta
Sounds like we're going to try out the proposal (https://github.com/kubernetes/kubernetes/issues/30819#issuecomment-249877334) for annotations -> fields on just one feature in 1.5 (scheduler). Or do we want to just convert to fields right now?
Automatic merge from submit-queue
Add NodePort value in kubectl output
This PR enhances the output of `kubectl get svc` to additionally show the value of the NodePort.
This PR is a response for this issue: https://github.com/kubernetes/kubernetes/issues/34100
Automatic merge from submit-queue
wait until the pods are deleted completely
Drain the pods on a node safely by polling until all pods have been deleted.
```release-note
kubectl drain now waits until pods have been deleted from the Node before exiting
```
Fixes: #34782
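A sketch of the waiting loop, assuming the apimachinery wait package and a hypothetical getRemainingPods callback:
```go
package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForDelete polls until none of the drained pods are returned by the API
// server anymore, so drain only exits once the node is really empty.
func waitForDelete(getRemainingPods func() (int, error), interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		remaining, err := getRemainingPods()
		if err != nil {
			return false, err
		}
		return remaining == 0, nil
	})
}
```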
Automatic merge from submit-queue
Add comment for why EndpointsStore doesn't have DeleteFunc
I was confused about why EndpointsStore does not need to have the DeleteFunc. I think it would be helpful to have a comment here.
@bprashanth @bowei
Automatic merge from submit-queue
requests.storage is a standard resource name
The value `requests.storage` is a valid standard resource name but was omitted from the standard list.
Automatic merge from submit-queue
PVC informer lister supports listing
This will be used in follow-on PRs for quota evaluation backed by informers for pvcs.
/cc @deads2k @eparis @kubernetes/sig-storage
Automatic merge from submit-queue
cloudprovider/gce: canonicalize instance name when returning instance array
'names' is an array of FQDNs. 'instances' is a map indexed by canonicalized
name. Clearly these two won't always match, so when building the final
instance array to return, make sure to look up map entries by their canonicalized
name.
In the below example, "ocp-master-5pob" is clearly found as a GCE instance
but when building the final instance array it cannot be matched as the code
is looking for "ocp-master-5pob.c.ose-refarch.internal" instead. The node
is then deleted from the cluster as it cannot be found by the cloud provider.
```
gce.go:2519] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): initial node prefix ocp-
gce.go:2530] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): looking for instances map[ocp-master-5pob:<nil>]
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-c' (remaining 1)
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-b' (remaining 1)
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2576] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): found instance 'ocp-master-5pob' remaining 0
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-d' (remaining 0)
gce.go:2588] Failed to retrieve instance: "ocp-master-5pob.c.ose-refarch.internal"
gce.go:2624] ### getInstanceByName(ocp-master-5pob.c.ose-refarch.internal): got []: instance not found
gce.go:2626] getInstanceByName/multiple-zones: failed to get instance ocp-master-5pob.c.ose-refarch.internal; err: instance not found
nodecontroller.go:587] Deleting node (no longer present in cloud provider): ocp-master-5pob.c.ose-refarch.internal
nodecontroller.go:664] Recording Deleting Node ocp-master-5pob.c.ose-refarch.internal because it's not present according to cloud provider event message for node ocp-master-5pob.c.ose-refarch.internal
```
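The essence of the fix, sketched: derive the short instance name from the FQDN before using it as a map key, and do the same when assembling the returned instance array.
```go
package sketch

import "strings"

// canonicalizeInstanceName turns "ocp-master-5pob.c.ose-refarch.internal"
// into "ocp-master-5pob", matching how the instances map is keyed.
func canonicalizeInstanceName(name string) string {
	if ix := strings.Index(name, "."); ix != -1 {
		name = name[:ix]
	}
	return name
}
```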
Automatic merge from submit-queue
Add import restriction to clientset
This prevents clientset from importing more packages from pkg/client/unversioned, because we want to deprecate pkg/client/unversioned.
cc @ingvagabund
Automatic merge from submit-queue
Add validation that detects repeated keys in the labels and annotations maps
Fixes #2965 (a nearly 2-year-old feature request!)
@kubernetes/kubectl
@eparis
Automatic merge from submit-queue
Avoid computing keys multiple times in Cacher
This should significantly reduce both CPU usage and the number of allocations in the Cacher.
This is a proper follow-up to #30998
@deads2k @liggitt