Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add gnufied as AWS approver.
@gnufied has been maintaining the storage part of the AWS cloud provider for a long while and deserves to be an approver.
```release-note
NONE
```
/sig aws
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix typo in resource_allocation.go
**What this PR does / why we need it**:
fix a typo in the resource_allocation.go file
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
N/A
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix the error prone account creation method of blob disk
**What this PR does / why we need it**:
use the new account generation method for blob disk to fix its error-prone account creation method
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59738
**Special notes for your reviewer**:
**Release note**:
```
fix the error-prone account creation method of Azure blob disk
```
/assign @karataliu
/sig azure
Automatic merge from submit-queue (batch tested with PRs 59298, 59773, 59772). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
bazel: update digests for debian-iptables-amd64 and busybox
**What this PR does / why we need it**: I've pushed updated (rebased) versions of the `debian-base-ARCH:0.3` and `debian-iptables-ARCH:v10` images. Since bazel uses the sha256 digest instead of the tag, we need to update those accordingly.
I also bumped the busybox digest, which hadn't been updated since last summer. This updates it from v1.26.2 to v1.28.0. Note that the non-bazel build process uses `busybox:latest`, and so has already been using busybox v1.28.0.
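For reference, one quick way to see the digest that replaces a tag (illustrative only; the actual digests in this PR come from the freshly pushed images):
```bash
# Pulling by tag prints the content digest that bazel pins instead of the tag.
docker pull busybox:1.28.0
# Look for the "Digest: sha256:..." line in the output above.
```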
**Special notes for your reviewer**:
We will update the hyperkube-base image in #57648.
**Release note**:
```release-note
NONE
```
/assign @tallclair
/cc @rphillips @rvkubiak
Automatic merge from submit-queue (batch tested with PRs 59298, 59773, 59772). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bump GLBC to 0.9.8-alpha.2 and change back to --verbose
**What this PR does / why we need it**:
Bumps the GLBC version to 0.9.8-alpha.2, which is logically equivalent to 0.9.8-alpha.1 except that verbose mode sets v=3 instead of v=4.
**Special notes for your reviewer**:
/cc @rramkumar1
/assign @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 59298, 59773, 59772). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add etcd 3.x minor version rollback support to migrate-if-needed.sh
Provide automatic etcd 3.x minor version downgrade when using the gcr.io/google_containers/etcd docker images to operate etcd.
Uses `etcdctl snapshot save` and `etcdctl snapshot restore` to safely downgrade etcd from 3.2->3.1 or 3.1->3.0. This is safe because the data storage file formats used by etcd have not changed between these versions.
Intended as a stop-gap until we can introduce more comprehensive downgrade support in etcd. The main limitation of this approach is that it is not able to perform zero downtime downgrades for HA clusters. For HA clusters, all members must be stopped and downgraded before the cluster may be restarted at the downgraded version.
Example usage:
- Initially the [etcd.manifest](58547ebd72/cluster/gce/manifests/etcd.manifest (L43)) is set to gcr.io/google_containers/etcd:3.0.17, TARGET_VERSION=3.0.17
- An upgrade to 3.1.11 is initiated.
- etcd.manifest is updated to gcr.io/google_containers/etcd:3.1.11, TARGET_VERSION=3.1.11
- etcd restarts and establishes 3.1 as its "cluster version"
- For whatever reason, a downgrade is initiated
- etcd.manifest is updated to gcr.io/google_containers/etcd:3.1.11, TARGET_VERSION=3.0.17
- migrate-if-needed.sh detects that the current version (3.1.11) is newer than the target version, so it:
  - creates a snapshot using etcd & etcdctl 3.1.11
  - backs up the data dir
  - restores the snapshot using etcdctl 3.0.17 to create a replacement data dir
  - starts etcd 3.0.17
Note that while this will roll back to an earlier etcd version, the newer etcd gcr.io image version must continue to be used throughout the downgrade. Only TARGET_VERSION is downgraded.
Test coverage was lacking for `migrate-if-needed.sh`, so this adds some container-level testing to the `Makefile` for migrating and rolling back. This surfaced a couple of bugs that are fixed by this PR as well.
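For illustration only, the rollback roughly follows this sequence (paths and invocation details are assumptions, not the script's literal commands):
```bash
export ETCDCTL_API=3
# 1. Take a snapshot with the current (newer) etcdctl, e.g. 3.1.11.
etcdctl snapshot save /var/etcd/rollback.db
# 2. Back up the existing data dir.
mv /var/etcd/data /var/etcd/data.bak
# 3. Restore the snapshot with the target (older) etcdctl, e.g. 3.0.17,
#    into a fresh data dir.
etcdctl snapshot restore /var/etcd/rollback.db --data-dir /var/etcd/data
# 4. Restart etcd at the target version against the restored data dir.
```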
cc @mml @lavalamp @wenjiaswe
```release-note
Add automatic etcd 3.2->3.1 and 3.1->3.0 minor version rollback support to gcr.io/google_containers/etcd images. For HA clusters, all members must be stopped before performing a rollback.
```
Automatic merge from submit-queue (batch tested with PRs 59479, 59246). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add tests for schedulercache
**What this PR does / why we need it**:
Add tests for `node_info.go` under the `schedulercache` package.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Avoid race condition when updating equivalence cache
**What this PR does / why we need it**:
Lock the equivalence cache when updating it after each predicate run, to avoid a race condition.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58507
**Special notes for your reviewer**:
None
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 59767, 56454, 59237, 59730, 55479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kubeadm: add configuration option to not taint master
**What this PR does / why we need it**:
Although tainting the master is normally a good and proper thing to do, in some situations (Docker for Mac in our case, but I suppose minikube and such as well) having a single-host configuration is desirable.
In linuxkit we have a [workaround](443e47c408/projects/kubernetes/kubernetes/kubeadm-init.sh (L19...L22)) to remove the taint after initialisation. With the change here we could simply populate `/etc/kubeadm/kubeadm.yaml` with `noTaintMaster: true` instead and have it never be tainted in the first place.
I have only added this to the config file and not to the CLI since AIUI the latter is somewhat deprecated.
The code also arranges to _remove_ an existing taint if it is unwanted. I'm unsure if this behaviour is correct or desirable; I think a reasonable argument could be made for leaving an existing taint in place too.
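For illustration, usage could look roughly like this (the `noTaintMaster` field comes from this PR; the surrounding `MasterConfiguration` fields are assumptions about the config format of this era):
```bash
cat <<EOF > /etc/kubeadm/kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
EOF
kubeadm init --config /etc/kubeadm/kubeadm.yaml
```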
Signed-off-by: Ian Campbell <ijc@docker.com>
**Release note**:
Since the requirement for this option is rather niche and not best practice in the majority of cases, I'm not sure it warrants mentioning in the release notes. If it does, then perhaps:
```release-note
`kubeadm init` can now omit the tainting of the master node if configured to do so in `kubeadm.yaml`.
```
Automatic merge from submit-queue (batch tested with PRs 59767, 56454, 59237, 59730, 55479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Requesting new credentials when node names change
**What this PR does / why we need it**:
Updates the kubernetes-worker charm to request a new token when the node name changes due to a cloud-provider change to kubelet-extra-args.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/491
**Special notes for your reviewer**:
**Release note**:
```release-note
Updated kubernetes-worker to request new security tokens when the AWS cloud provider changes the registered node name.
```
Automatic merge from submit-queue (batch tested with PRs 59767, 56454, 59237, 59730, 55479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Change critical pods’ template to use priority
**What this PR does / why we need it**:
Change critical pods’ template to use priority
Thanks.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #57471
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 59767, 56454, 59237, 59730, 55479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Block Volume: Refactor volumehandler in operationexecutor
**What this PR does / why we need it**:
Based on discussion with @saad-ali at #51494, we need to refactor the volume handler in operationexecutor
for the Block Volume feature. We don't need to add the volume handler as a separate object.
```
VolumeHandler does not need to be a separate object that is constructed inline like this.
You can create a new operation, e.g. UnmountOperation to which you pass the spec,
and it can return either a UnmountVolume or UnmapVolume.
```
**Which issue(s) this PR fixes** : no related issue.
**Special notes for your reviewer**:
@saad-ali @msau42
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kubelet: check for illegal phase transition
I have been unable to root cause the transition from the `Failed` phase to `Succeeded`, and I have been unable to recreate it either. However, our CI in Origin, where we have controllers that look for these transitions and rely on the phase transition rules being respected, clearly indicates that the illegal transition does occur.
This PR will prevent the illegal phase transition from propagating into the kubelet caches or the API server.
Fixes https://github.com/kubernetes/kubernetes/issues/58711
xref https://github.com/openshift/origin/issues/17595
@dashpole @yujuhong @derekwaynecarr @smarterclayton @tnozicka
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove controller-manager --service-sync-period flag
**What this PR does / why we need it**:
This PR removes the controller-manager --service-sync-period flag, which is not used anywhere in the code and is causing confusion.
**Which issue(s) this PR fixes**
https://github.com/kubernetes/kubernetes/issues/58776
**Special notes for your reviewer**:
@deads2k this removes the flag as per the discussion on #58776.
Two commits:
1. one for the code change
2. one for the auto-generated code
**Release note**:
```release-note
The controller-manager --service-sync-period flag is removed (it was never used in the code).
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove validation failure of Pod priority when the feature is disabled
**What this PR does / why we need it**:
I learned that fields specified in the API should be silently ignored when the feature is disabled. This makes sense, as downgrading a cluster would fail otherwise. This PR removes the validation logic that ensures Pod priority is not set when the priority feature is disabled.
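As an illustrative sketch (the class name is hypothetical), a pod that sets a priority class is now accepted even with the PodPriority feature gate disabled; the field simply has no effect until the feature is enabled:
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: priority-demo
spec:
  priorityClassName: high-priority   # hypothetical PriorityClass name; ignored while the feature is off
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.0
EOF
```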
**Special notes for your reviewer**:
**Release note**:
```release-note
Pod priority can be specified in PodSpec even when the feature is disabled, but it will be effective only when the feature is enabled.
```
/sig scheduling
ref: #57471
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Skip TestRoutes when there are no vm(s)
**What this PR does / why we need it**:
TestRoutes assumes that there is at least one VM in the OpenStack cloud it is connecting to, so let's limit this test to run only when we are running in a VM or one was created already outside of the test harness.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
Please see https://github.com/dims/openstack-cloud-controller-manager/issues/73 for some more context
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Feature Gate - Kubeadm Audit Logging
Fixes kubernetes/kubeadm#623
Signed-off-by: Chuck Ha <ha.chuck@gmail.com>
**What this PR does / why we need it**:
This PR enables [Auditing](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) behind a feature gate. A user can supply their own audit policy via a configuration option, as well as a place for the audit logs to live. If no policy is supplied, a default policy is provided. The default policy logs everything at the Metadata level; it is the example provided in the documentation.
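Hypothetical usage, assuming the gate is surfaced as `Auditing` through kubeadm's feature-gates mechanism (the gate name is not spelled out above):
```bash
# With no custom policy supplied, the default Metadata-level policy is used.
kubeadm init --feature-gates=Auditing=true
```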
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes kubernetes/kubeadm#623
**Special notes for your reviewer**:
**Release note**:
```release-note
kubeadm: Enable auditing behind a feature gate.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
correct the ConstructVolumeSpec func path value
**What this PR does / why we need it**:
the path value is incorrect
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
fix comments
change azureDiskSharedAccountNamePrefix var
rename sharedDiskAccountNamePrefix
use default vhd container name as "vhds"
use one common func: SearchStorageAccount
fix comments
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Map correct vmset name for internal load balancers
**What this PR does / why we need it**:
When creating an internal load balancer, e.g.
```sh
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: ingress-nginx
EOF
```
Then wait a while, and no target backends are present for the internal load balancer, even after 15 minutes.
![](https://user-images.githubusercontent.com/10303889/36070528-54aa9848-0f22-11e8-834b-7401a2fa7611.png)
Refer to https://github.com/Azure/acs-engine/issues/2151#issuecomment-364726846.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #59746
**Special notes for your reviewer**:
Should be cherry-picked to v1.9, v1.8, and v1.7 (which requires resolving conflicts manually).
**Release note**:
```release-note
Map correct vmset name for internal load balancers
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix --watch on multiple requests
**Release note**:
```release-note
NONE
```
`kubectl get <resource> --watch` only supports watching a single resource kind at a time.
This check fails if more than one resource `Info` is returned.
When dealing with large quantities of a single resource kind, or an amount that exceeds the value of `--chunk-size`, more than one request is made to the server, causing a resource `Info` to be created for each request and ultimately causing the above check to fail even though we are dealing with the same type of resource.
This patch modifies that check to take into account the GVKs of all infos returned, and only fail if at least one differs.
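For example (a reproduction sketch; the resource counts are illustrative):
```bash
# With more pods than the chunk size, the list comes back in several chunks,
# producing one Info per request even though every Info is the same kind.
kubectl get pods --chunk-size=10 --watch
# Before this patch the command errored out as if multiple resource kinds had
# been requested; with the patch the watch proceeds as long as all GVKs match.
```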
cc @deads2k
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add generic cache for Azure VMSS
**What this PR does / why we need it**:
This PR adds a generic cache for VMSS and removes the old list-based cache.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Continuation of #58770.
**Special notes for your reviewer**:
Depends on #59520.
**Release note**:
```release-note
Add generic cache for Azure VMSS
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Subtract local ephemeral storage resource from NodeInfo when removing pod
**What this PR does / why we need it**:
When we are removing pods, we need to subtract the local ephemeral storage resource from NodeInfo.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/kind bug
/sig storage
/sig scheduling
/assign @jingxu97 @bsalamat
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kubeadm: imagePullPolicy option in init config
**What this PR does / why we need it**:
This PR adds an `imagePullPolicy` option to the `kubeadm init` configuration file.
The new `imagePullPolicy` option is forwarded to the generated kubelet static pods for etcd, kube-apiserver, kube-controller-manager and kube-scheduler. This option allows precise image pull policy specification for master nodes and thus tighter control over images. It is useful in CI environments and in environments where the user has total control over master VM templates (thus, the master VM templates can be preloaded with the required Docker images for the control-plane services).
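A minimal sketch of how this might look in the init configuration (the surrounding fields are assumptions; only `imagePullPolicy` is introduced by this PR):
```bash
cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
imagePullPolicy: IfNotPresent   # forwarded to the etcd and control-plane static pods
EOF
kubeadm init --config kubeadm.yaml
```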
**Special notes for your reviewer**:
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/area kubeadm
/assign @luxas
**Release note**:
```release-note
kubeadm: New "imagePullPolicy" option in the init configuration file, which gets forwarded to kubelet static pods to control the pull policy for etcd and control-plane images.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix the create azure file pvc failure if there is no storage account in current resource group
**What this PR does / why we need it**:
When creating an Azure file PVC, there will be an error if there is no storage account in the current resource group.
With this PR, a storage account will be created if there is no storage account in the current resource group.
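For context, a typical Azure file PVC that exercises this code path might look like the following (the StorageClass name `azurefile` is an assumption, not part of this PR):
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile   # assumed azure-file StorageClass
  resources:
    requests:
      storage: 5Gi
EOF
```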
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56556
**Special notes for your reviewer**:
1. Rephrase the code logic of the `CreateFileShare` func:
```
if accountName is empty, then
    find a storage account that matches accountType
    if no storage account is found, then
        create a new account
else
    only use the user-specified storage account
create a file share in the resulting storage account
```
2. Use the `getStorageAccountName` func to generate a unique storage account name from a UUID; a storage account for Azure file would look like `f0b2b0bd40c010112e897fa`. In the next PR, I will use this function to create the storage account for Azure disk, which would look like `d8f3ad8ad92000f1e1e88bd`.
**Release note**:
```
fix the create Azure file PVC failure if there is no storage account in the current resource group
```
/sig azure
/assign @rootfs
use new storage account name generation method
use uuid to generate account name
change azure file account prefix
use uniqueID to generate a storage account name
fix comments
fix comments
fix comments
fix a storage account matching bug
only use UUID in getStorageAccountName func
use shorter storage account prefix for azure file
fix comments
fix comments
fix comments
fix rebase build error
rewrite CreateFileShare code logic
fix gofmt issue
fix test error
fix comments
fix a location matching bug
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix all the typos across the project
**What this PR does / why we need it**:
There are lots of typos across the project. We should avoid small PRs that fix those annoying typos one by one, which is time-consuming and inefficient.
This PR fixes all the current typos across the project. And with #59463, new typos can be avoided when a PR gets merged.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
/sig testing
/area test-infra
/sig release
/cc @ixdy
/assign @fejta
**Release note**:
```release-note
None
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
New github id - FengyunPan -> FengyunPan2
PanFengyun <pan_feng_yun@163.com>'s previous GitHub id was @FengyunPan.
Due to some problem with GitHub, he lost access to @FengyunPan and
is now using @FengyunPan2. So let's switch over to the new id. GitHub
has promised to release the previous id back in 6 months, so we may
have to switch it back later.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[kube-proxy] fix bad error messages in kube-proxy.log when running a local cluster
**What this PR does / why we need it**:
When running `hack/local-up-cluster.sh` locally with the `iptables` proxy mode, `/tmp/kube-proxy.log` includes some error messages like these:
```shell
E0117 09:10:45.720142 108141 proxier.go:863] Error deleting dummy device kube-ipvs0 created by IPVS proxier: error deleting a non-exist dummy device: kube-ipvs0
E0117 09:10:45.729617 108141 proxier.go:838] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed
)
E0117 09:10:45.730508 108141 proxier.go:876] Error removing ipset KUBE-LOOP-BACK, error: error destroying set KUBE-LOOP-BACK:, error: exit status 1
E0117 09:10:45.731329 108141 proxier.go:876] Error removing ipset KUBE-CLUSTER-IP, error: error destroying set KUBE-CLUSTER-IP:, error: exit status 1
E0117 09:10:45.732100 108141 proxier.go:876] Error removing ipset KUBE-LOAD-BALANCER, error: error destroying set KUBE-LOAD-BALANCER:, error: exit status 1
E0117 09:10:45.732855 108141 proxier.go:876] Error removing ipset KUBE-NODE-PORT-TCP, error: error destroying set KUBE-NODE-PORT-TCP:, error: exit status 1
E0117 09:10:45.735082 108141 proxier.go:876] Error removing ipset KUBE-NODE-PORT-UDP, error: error destroying set KUBE-NODE-PORT-UDP:, error: exit status 1
E0117 09:10:45.735829 108141 proxier.go:876] Error removing ipset KUBE-EXTERNAL-IP, error: error destroying set KUBE-EXTERNAL-IP:, error: exit status 1
E0117 09:10:45.736619 108141 proxier.go:876] Error removing ipset KUBE-LOAD-BALANCER-SOURCE-IP, error: error destroying set KUBE-LOAD-BALANCER-SOURCE-IP:, error: exit status 1
E0117 09:10:45.737360 108141 proxier.go:876] Error removing ipset KUBE-LOAD-BALANCER-SOURCE-CIDR, error: error destroying set KUBE-LOAD-BALANCER-SOURCE-CIDR:, error: exit status 1
E0117 09:10:45.738114 108141 proxier.go:876] Error removing ipset KUBE-LOAD-BALANCER-MASQ, error: error destroying set KUBE-LOAD-BALANCER-MASQ:, error: exit status 1
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [https://github.com/kubernetes/kubernetes/issues/58366](https://github.com/kubernetes/kubernetes/issues/58366)
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding kubemci e2e test for ingress spec conformance
**What this PR does / why we need it**:
Adding an e2e test case for kubemci to verify that it conforms to the ingress spec.
Not all tests will pass right now, but adding it will enable us to track the latest status.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add generic cache for Azure VM/LB/NSG/RouteTable
**What this PR does / why we need it**:
Part of #58770. This PR adds a generic cache of Azure VM/LB/NSG/RouteTable for reducing ARM calls.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Part of #58770
**Special notes for your reviewer**:
**Release note**:
```release-note
Add generic cache for Azure VM/LB/NSG/RouteTable
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bury KubeletConfiguration.ConfigTrialDuration for now
Based on discussion in https://github.com/kubernetes/kubernetes/pull/53833/files#r166669046, this PR chooses not to expose a knob for the trial duration yet. It is unclear exactly what shape this functionality should take in the API.
```release-note
The alpha KubeletConfiguration.ConfigTrialDuration field is no longer available.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add more error logs in kubectl run
**What this PR does / why we need it**:
closes issue #29307
now we get logs for the different cases:
```bash
$ kubectl run hello-node --image=aronchick/hello-node:2.0 --port=8080 --expose=true
service "hello-node" created
deployment "hello-node" created
$ kubectl run hello-node --image=aronchick/hello-node:2.0 --port=8080 --expose=true
Error from server (AlreadyExists): deployments.extensions "hello-node" already exists
Error from server (AlreadyExists): services "hello-node" already exists
$ kubectl delete deploy hello-node
deployment "hello-node" deleted
$ kubectl run hello-node --image=aronchick/hello-node:2.0 --port=8080 --expose=true
deployment "hello-node" created
Error from server (AlreadyExists): services "hello-node" already exists
$ kubectl delete svc hello-node
service "hello-node" deleted
$ kubectl run hello-node --image=aronchick/hello-node:2.0 --port=8080 --expose=true
service "hello-node" created
Error from server (AlreadyExists): deployments.extensions "hello-node" already exists
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```