Automatic merge from submit-queue
Remove DynamicVolumeProvisioning from feature gate
**What this PR does / why we need it**:
Remove `DynamicVolumeProvisioning` from feature gate.
**Which issue this PR fixes**: fixes #51120
**Special notes for your reviewer**:
N/A
**Release note**:
No
Automatic merge from submit-queue
Provide a way to omit Event stages in audit policy
This provides a way to omit some stages for each audit policy rule.
For example:
```
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles"]
  omitStages:
  - "RequestReceived"
```
With the config above, the RequestReceived stage will not be emitted to the audit backends.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49727, 51792)
Introducing metrics-server
ref https://github.com/kubernetes/features/issues/271
There is still some work blocked on problems with repo synchronization:
- migrate to `v1beta1` introduced in #51653
- bump deps to HEAD
Will do it in follow-up PRs once the issue is resolved.
```release-note
Introduced Metrics Server
```
Automatic merge from submit-queue (batch tested with PRs 49727, 51792)
Implement Controller for growing persistent volumes
This PR implements the API and control-plane changes necessary for controller-side resize.
xref : https://github.com/kubernetes/community/pull/657
Also xref https://github.com/kubernetes/features/issues/284
```release-note
Add alpha support for allowing users to grow persistent volumes. Currently we only support volume types that require only control-plane resize (such as glusterfs) and do not need a separate file system resize.
```
Updates https://github.com/kubernetes/kubernetes/issues/48561
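For illustration, a minimal sketch of the intended alpha flow, assuming the `allowVolumeExpansion` StorageClass field this work introduces (object names are hypothetical):
```
# Opt a StorageClass in to resizing (hypothetical names):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-resizable
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
---
# Growing a volume is then just raising the PVC's storage request
# (e.g. from 1Gi to 2Gi) and letting the controller resize it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: glusterfs-resizable
  resources:
    requests:
      storage: 2Gi
```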
Automatic merge from submit-queue
Fixes grace period in delete
**What this PR does / why we need it**: Fixes `kubectl delete` ignoring `--grace-period`.
**Which issue this PR fixes**: fixes https://github.com/openshift/origin/issues/15060, found in OpenShift.
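For example (pod name hypothetical), the flag should now be honored rather than silently ignored:
```
# Give the pod 30 seconds to terminate gracefully:
kubectl delete pod mypod --grace-period=30

# Immediate deletion still requires --force alongside --grace-period=0:
kubectl delete pod mypod --grace-period=0 --force
```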
**Release note**:
```release-note
NONE
```
Introduce feature gate for expanding PVs
Add a field to StorageClass
Add new conditions and feature-gate PVC updates
Add tests for size update via feature gate
Register the resize admission plugin
Fix golint failures
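A sketch of how a cluster would opt in to the gated feature, assuming the `ExpandPersistentVolumes` gate and the `PersistentVolumeClaimResize` admission plugin registered above:
```
# Enable the alpha gate on the relevant components and append the
# resize admission plugin to the existing admission chain:
kube-apiserver --feature-gates=ExpandPersistentVolumes=true \
  --admission-control=...,PersistentVolumeClaimResize
kube-controller-manager --feature-gates=ExpandPersistentVolumes=true
```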
Automatic merge from submit-queue (batch tested with PRs 51845, 51868, 51864)
Update sys spec to support docker 1.11-1.13 and overlay2.
Fixes https://github.com/kubernetes/kubernetes/issues/32536.
Update docker spec to:
1) Support overlay2;
2) Support docker version 1.11-1.13.
@dchen1107 @yguo0905 @luxas
/cc @kubernetes/sig-node-pr-reviews
```release-note
Kubernetes 1.8 supports Docker versions 1.11.x, 1.12.x, and 1.13.x, as well as the overlay2 storage driver.
```
Automatic merge from submit-queue
Enable batch/v1beta1.CronJobs by default
This PR re-applies the CronJobs → beta promotion (https://github.com/kubernetes/kubernetes/pull/51720) with the fix from @shyamjvs.
Fixes #51692
@apelisse @dchen1107 @smarterclayton ptal
@janetkuo @erictune fyi
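With batch/v1beta1 enabled by default, a minimal manifest like the following (name and schedule hypothetical) needs no extra runtime config:
```
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello              # hypothetical
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure
```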
Automatic merge from submit-queue
Remove deprecated init-container in annotations
fixes #50655, fixes #51816, closes #41004
Builds on #50654 and drops the initContainer annotations on conversion to prevent bypassing API server validation/security and targeting version-skewed kubelets that still honor the annotations
```release-note
The deprecated alpha and beta initContainer annotations are no longer supported. Init containers must be specified using the initContainers field in the pod spec.
```
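For reference, a minimal sketch of the supported form (names hypothetical); the typed field replaces the dropped annotations:
```
apiVersion: v1
kind: Pod
metadata:
  name: init-demo          # hypothetical
spec:
  # Previously expressible via the now-dropped
  # pod.beta.kubernetes.io/init-containers annotation:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do sleep 2; done"]
  containers:
  - name: app
    image: nginx
```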
Automatic merge from submit-queue
Workloads deprecation 1.8
**What this PR does / why we need it**: This PR deprecates the Deployment, ReplicaSet, and DaemonSet kinds in the extensions/v1beta1 group version and the StatefulSet, Deployment, and ControllerRevision kinds in the apps/v1beta1 group version. The Deployment, ReplicaSet, DaemonSet, StatefulSet, and ControllerRevision kinds in the apps/v1beta2 group version are now the current version.
xref kubernetes/features#353
```release-note
The Deployment, DaemonSet, and ReplicaSet kinds in the extensions/v1beta1 group version are now deprecated, as are the Deployment, StatefulSet, and ControllerRevision kinds in apps/v1beta1. As they will not be removed until after a GA version becomes available, you may continue to use these kinds in existing code. However, all new code should be developed against the apps/v1beta2 group version.
```
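For example, new code would target the current group version (manifest details hypothetical):
```
apiVersion: apps/v1beta2   # rather than extensions/v1beta1 or apps/v1beta1
kind: Deployment
metadata:
  name: nginx              # hypothetical
spec:
  replicas: 2
  selector:                # required in apps/v1beta2
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
```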
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)
Clear values for disabled alpha fields
Fixes #51831
Before persisting new or updated resources, alpha fields that are disabled by feature gate must be removed from the incoming objects.
This adds a helper for clearing these values for pod specs and calls it from the strategies of all in-tree resources containing pod specs.
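A sketch of the pattern, assuming hypothetical names rather than the exact helper added here:
```
package pod

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/features"
)

// DropDisabledAlphaFields clears alpha-gated fields on a pod spec so that
// objects persisted while a gate is off cannot smuggle the field in.
// (Hypothetical sketch; the real helper covers every gated field.)
func DropDisabledAlphaFields(podSpec *api.PodSpec) {
	if !utilfeature.DefaultFeatureGate.Enabled(features.PodPriority) {
		podSpec.Priority = nil
		podSpec.PriorityClassName = ""
	}
}
```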
Addresses https://github.com/kubernetes/community/pull/869
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
**What this PR does / why we need it**:
In order to improve the UX when the kubelet is unhealthy or stopped, kubeadm now polls the kubelet's API after 40 and 60 seconds, and then performs an exponential backoff for a total of 155 seconds.
If the kubelet endpoint is not returning `ok` by then, kubeadm gives up and exits.
This will mitigate at least 60% of our "[apiclient] Created API client, waiting for control plane to come up" issues in the kubeadm issue tracker 🎉, as kubeadm now informs the user what's wrong and no longer deadlocks like before.
Demo:
```
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.502199 seconds
[markmaster] Will mark node thegopher as master by adding a label and a taint
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5776d5.91e7ed14f9e274df
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 5776d5.91e7ed14f9e274df 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:6f301ce8c3f5f6558090b2c3599d26d6fc94ffa3c3565ffac952f4f0c7a9b2a9
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
lucas@THEGOPHER:~/luxas/kubernetes$ sudo systemctl stop kubelet
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
- gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
- gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```
In this demo, I'm first starting kubeadm normally and everything works as usual.
In the second case, I'm explicitly stopping the kubelet so it doesn't run, and skipping preflight checks, so that kubeadm doesn't even try to exec `systemctl start kubelet` as it usually does.
That obviously results in a non-working system, but now kubeadm tells the user what the problem is instead of waiting forever.
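The waiting logic is roughly the following (a hedged sketch with hypothetical names, not the exact kubeadm code):
```
package kubeletcheck // hypothetical

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForKubeletHealthz polls the kubelet's healthz endpoint until it
// returns 200 OK or the timeout expires, printing a hint on each failure.
func waitForKubeletHealthz(url string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy.")
			return false, nil // not fatal; keep retrying until the timeout
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}
```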
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/377
**Special notes for your reviewer**:
**Release note**:
```release-note
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @pipejakob
cc @justinsb @kris-nova @lukemarsden as well as you wanted this feature :)
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)
Remove duplicate fake and unused openapi
**What this PR does / why we need it**:
Follow-up on PR #50404
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
rsync IPVS proxier to the HEAD of iptables
**What this PR does / why we need it**:
There was a significant performance improvement made to iptables. Since the IPVS proxier makes use of iptables in some use cases, I think we should resync the IPVS proxier with the current HEAD of iptables.
**Which issue this PR fixes** :
xref #51679
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)
fix kube-proxy panic because of nil sessionAffinityConfig
**What this PR does / why we need it**:
fix kube-proxy panic because of nil sessionAffinityConfig
**Which issue this PR fixes**: closes #51499
**Special notes for your reviewer**:
I apologize that this bug was introduced by #49850 :(
@thockin @smarterclayton @gnufied
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)
Fix providerID update validation
**What this PR does / why we need it**:
Cloud controller manager supports updating providerID in #50730, but the node update was blocked by a
validation rule.
This proposes to fix the validation rule by allowing spec.providerID to be altered if it is not already set.
Please check #51596 for details.
**Which issue this PR fixes**
fixes #51596
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)
Fix local storage code to follow Go style
**What this PR does / why we need it**:
Fix local storage code to follow Go style
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)
Update godep-licenses script to work on darwin
**What this PR does / why we need it**:
When reviewing #51766, noticed that the godep licenses script has slightly different output on Darwin (which uses the BSD md5sum) and Linux (which uses the GNU md5sum).
This adds an awk trim to ensure that the output is the same on both platforms.
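Roughly the idea (a sketch; the script's exact pipeline differs):
```
# GNU md5sum appends "  -" when hashing stdin, while the BSD tool
# formats its output differently; keeping only the first field makes
# the two agree:
md5sum < LICENSE | awk '{print $1}'
```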
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Add RBAC, healthchecks, autoscalers and update Calico to v2.5.1
**What this PR does / why we need it**:
- Updates Calico to `v2.5`
  - Calico/node to `v2.5.1`
  - Calico CNI to `v1.10.0`
  - Typha to `v0.4.1`
- Enable health check endpoints
  - Add Readiness probe for calico-node and Typha
  - Add Liveness probe for calico-node and Typha
- Add RBAC manifest
  - With Calico ClusterRole, ServiceAccount, and ClusterRoleBinding
- Add Calico CRDs in the Calico manifest (only works for k8s v1.7+)
- Add vertical autoscaler for calico-node and Typha
- Add horizontal autoscaler for Typha
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Job failure policy controller support
**What this PR does / why we need it**:
Starts implementing support for the "Backoff policy and failed pod limit" in the `JobController`, as defined in https://github.com/kubernetes/community/pull/583.
This PR depends on a previous PR #48075 that updates the K8s API types.
TODO:
* [X] Implement `JobSpec.BackoffLimit` support (sketched below)
* [x] Rebase when #48075 has been merged.
* [X] Implement end2end tests
implements https://github.com/kubernetes/community/pull/583
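A minimal sketch of the resulting knob (manifest details hypothetical; the field comes from the API change in #48075):
```
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job          # hypothetical
spec:
  backoffLimit: 4          # give up and mark the Job failed after 4 retries
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "exit 1"]
      restartPolicy: Never
```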
**Special notes for your reviewer**:
**Release note**:
```release-note
Add backoff policy and failed pod limit for a job
```
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)
Allow custom client verbs to be generated using client-gen
This change allows defining custom verbs for resources using the following new tag:
```
// +genclient:method=Foo,verb=create,subresource=foo,input=Bar,output=k8s.io/pkg/api.Blah
```
This will generate a client method `Foo(bar *Bar) (*api.Blah, error)` (the exact signature depends on the particular verb type).
With this change we can add `UpdateScale()` and `GetScale()` to all scalable resources. Note that the intention of this PR is not to fix `Scale()`; it is merely used as an example of the new capability.
Additionally, this will allow us to get rid of `// +genclient:noStatus` and stop guessing whether the "updateStatus" subresource exists based on the presence of a `.Status` field.
Basically, you will have to add the following to all types you want to generate `UpdateStatus()` for:
```
// +genclient:method=UpdateStatus,verb=update,subresource=status
```
This allows further extension of the client without writing an expansion (which proved to be a pain to maintain and copy) and also allows customizing the native CRUD methods if needed (input/output types).
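Putting the tags together on a hypothetical type (mirroring the tag syntax above; `Widget` and its packages are illustrative, not from this PR):
```
package example

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +genclient:method=GetScale,verb=get,subresource=scale,output=k8s.io/kubernetes/pkg/apis/extensions.Scale
// +genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/extensions.Scale,output=k8s.io/kubernetes/pkg/apis/extensions.Scale
// +genclient:method=UpdateStatus,verb=update,subresource=status

// Widget is a hypothetical scalable resource; client-gen would emit
// GetScale(), UpdateScale(), and UpdateStatus() methods from the tags above.
type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}

type WidgetSpec struct{ Replicas int32 }
type WidgetStatus struct{ ReadyReplicas int32 }
```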
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)
Flexvolume dynamic plugin discovery: Prober unit tests and basic e2e test.
**What this PR does / why we need it**: Tests for changes introduced in PR #50031 .
As part of the prober unit test, I mocked filesystem, filesystem watch, and Flexvolume plugin initialization.
Moved the filesystem event goroutine to watcher implementation.
**Which issue this PR fixes**: fixes #51147
**Special notes for your reviewer**:
First commit contains added functionality of the mock filesystem.
Second commit is the refactor for moving mock filesystem into a common util directory.
Third commit is the unit and e2e tests.
**Release note**:
```release-note
NONE
```
/release-note-none
/sig storage
/assign @saad-ali @liggitt
/cc @mtaufen @chakri-nelluri @wongma7
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)
Limit events accepted by API Server
**What this PR does / why we need it**:
This PR adds the ability to limit events processed by an API server. Limits can be set globally on a server, per-namespace, per-user, and per-source+object. This is needed to prevent badly-configured or misbehaving players from making a cluster unstable.
Please see https://github.com/kubernetes/community/pull/945.
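A hedged sketch of what the admission plugin's configuration might look like (limit types per the design doc; exact schema per the alpha plugin's documentation):
```
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server      # one shared bucket for all events hitting this server
  qps: 500
  burst: 1000
- type: Namespace   # a separate bucket per namespace
  qps: 50
  burst: 100
```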
**Release Note:**
```release-note
Adds a new alpha EventRateLimit admission control that is used to limit the number of event queries that are accepted by the API Server.
```
Automatic merge from submit-queue
kubeadm: Add omitempty tags to nullable values and use metav1.Duration
**What this PR does / why we need it**:
From @sttts review of https://github.com/kubernetes/kubernetes/pull/49959; we found some shortcomings of the current JSON tags in the kubeadm config file.
**Special notes for your reviewer**:
Note that https://github.com/kubernetes/kubernetes/pull/49959 will not be merged for v1.8, but this at least improves the state in v1.8 without really changing anything.
**Release note**:
```release-note
NONE
```
@kubernetes/sig-api-machinery-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)
make url parsing in apiserver configurable
We have known cases where the attributes for a request are assigned differently. The kubelet is one example. This makes the value an interface, not a struct, and provides a hook for (non-default) users to override it.
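Roughly the shape of the hook (a sketch with fields elided, not the exact diff):
```
package request // hypothetical placement

import "net/http"

// RequestInfo holds the parsed attributes of an incoming request
// (verb, resource, namespace, ...); fields elided in this sketch.
type RequestInfo struct{ /* ... */ }

// The parsing behavior becomes an interface rather than a fixed struct,
// so non-default embedders such as the kubelet can supply their own.
type RequestInfoResolver interface {
	NewRequestInfo(req *http.Request) (*RequestInfo, error)
}
```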
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)
oidc auth: make the OIDC claims prefix configurable
Add the following flags to control the prefixing of usernames and
groups authenticated using OpenID Connect tokens:
- `--oidc-username-prefix`
- `--oidc-groups-prefix`
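For example (issuer and client ID hypothetical):
```
kube-apiserver \
  --oidc-issuer-url=https://accounts.google.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-username-prefix="google:" \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix="google:"
# Passing "-" as the username prefix disables prefixing entirely.
```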
```release-note
The OpenID Connect authenticator can now use a custom prefix, or omit the default prefix, for username and groups claims through the --oidc-username-prefix and --oidc-groups-prefix flags. For example, the authenticator can map a user with the username "jane" to "google:jane" by supplying the "google:" username prefix.
```
Closes https://github.com/kubernetes/kubernetes/issues/50408
Ref https://github.com/kubernetes/kubernetes/issues/31380
cc @grillz @kubernetes/sig-auth-pr-reviews @thomastaylor312 @gtaylor
Automatic merge from submit-queue
Fixes kubernetes/kubernetes#29271: accept prefixed namespaces
**What this PR does / why we need it**: `kubectl get namespaces -o name` outputs the names of all namespaces, prefixed with `namespaces/`. This changeset allows these namespace names to be passed directly back into `kubectl` via the `-n` flag without reprocessing them to remove `namespaces/`.
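For example:
```
# "-o name" prints prefixed names like "namespaces/kube-system"...
kubectl get namespaces -o name
# ...which can now be passed straight back via -n:
kubectl get pods -n "$(kubectl get namespace kube-system -o name)"
```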
**Which issue this PR fixes**: fixes #29271
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Implement the `kubeadm upgrade` command
**What this PR does / why we need it**:
Implements the kubeadm upgrades proposal: https://docs.google.com/document/d/1PRrC2tvB-p7sotIA5rnHy5WAOGdJJOIXPPv23hUFGrY/edit#
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/14
**Special notes for your reviewer**:
I'm going to split out changes not directly related to the upgrade procedure into separate dependency PRs as we go.
**Now ready for review. Please look at `cmd/kubeadm/app/phases/upgrade` first and give feedback on that. The rest kind of follows.**
**Release note**:
```release-note
Implemented `kubeadm upgrade plan` for checking whether you can upgrade your cluster to a newer version
Implemented `kubeadm upgrade apply` for upgrading your cluster from one version to another
```
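The two subcommands in action (target version hypothetical):
```
# Check whether the cluster can be upgraded, and to which versions:
kubeadm upgrade plan

# Upgrade the cluster to a specific version:
kubeadm upgrade apply v1.8.0
```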
cc @fabriziopandini @kubernetes/sig-cluster-lifecycle-pr-reviews @craigtracey @mattmoyer