**What type of PR is this?**
/kind cleanup
**What this PR does / why we need it**:
```
$ hack/verify-golint.sh
Errors from golint:
pkg/cloudprovider/providers/aws/aws_fakes.go:357:9: if block ends with a return statement, so drop this else and outdent its block
pkg/volume/util/util.go:204:9: if block ends with a return statement, so drop this else and outdent its block
```
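For reference, a minimal sketch of the pattern golint is flagging; the function below is a hypothetical illustration, not the code at the paths above:

```go
package main

import "fmt"

// Before: golint flags this shape because the if block ends in a return,
// making the else unnecessary.
func describe(n int) string {
	if n > 0 {
		return "positive"
	} else {
		return "non-positive"
	}
}

// After: drop the else and outdent its block.
func describeFixed(n int) string {
	if n > 0 {
		return "positive"
	}
	return "non-positive"
}

func main() {
	fmt.Println(describe(1), describeFixed(-1))
}
```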
**Which issue(s) this PR fixes** *(optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
This corrects a problem where valid security group ports were removed
unintentionally when a service was updated or when node changes occurred.
Fixes #60825, #64148
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we call it where necessary
- Update the other vendored repositories that made the same glog-to-klog
change:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods, as sketched below
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
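A minimal sketch of that explicit flag registration, assuming a standalone package (the surrounding code is illustrative):

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func init() {
	// Unlike glog, klog does not register its flags automatically,
	// so callers (including tests) must register them explicitly.
	klog.InitFlags(flag.CommandLine)
}

func main() {
	flag.Parse()
	klog.Info("klog flags registered via InitFlags")
	klog.Flush()
}
```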
When the cloud-controller-manager is running with the PV label initializing
controller and an NFS volume is created, it causes a nil reference error.
Fixes #68996
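A minimal sketch of the kind of nil guard that avoids such a panic, assuming an EBS-style labeler; the helper here is hypothetical, not the actual controller code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// ebsVolumeID is a hypothetical helper: an NFS PV carries no cloud-specific
// volume source, so the nil case must be handled before dereferencing.
func ebsVolumeID(pv *v1.PersistentVolume) (string, bool) {
	if pv.Spec.AWSElasticBlockStore == nil {
		// Not an EBS-backed volume (e.g. NFS); nothing to label.
		return "", false
	}
	return pv.Spec.AWSElasticBlockStore.VolumeID, true
}

func main() {
	nfsPV := &v1.PersistentVolume{} // no cloud volume source set
	if _, ok := ebsVolumeID(nfsPV); !ok {
		fmt.Println("skipping labels: not a cloud volume")
	}
}
```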
Force ELB to run ensureHealthCheck when the target pool exists.
Add an e2e test to ensure that the HC interval will be reconciled when
kube-controller-manager restarts.
Health checks with larger thresholds and intervals will not be reconciled.
Add unit tests for ILB and ELB to ensure the HC reconciles and is configurable.
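A sketch of the reconcile rule described above; the constant and function names are hypothetical stand-ins for the controller's defaults:

```go
package main

import "fmt"

// defaultHCIntervalSec is a placeholder for the controller's default
// health-check interval.
const defaultHCIntervalSec int64 = 8

// needsReconcile sketches the rule: intervals smaller than the default are
// brought back up to it, while larger intervals (and thresholds) are left
// alone, since those are assumed to be deliberate operator changes.
func needsReconcile(existingIntervalSec int64) bool {
	return existingIntervalSec < defaultHCIntervalSec
}

func main() {
	fmt.Println(needsReconcile(2))  // true: tighter than default, reconcile
	fmt.Println(needsReconcile(60)) // false: larger interval, leave as-is
}
```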
The previous version forced us to create AWS IAM Policies that are too
permissive when dealing with volumes. That's because:
1. Volumes were created without tags identifying the new resource as
managed by the cluster. So technically the resource, at creation time,
was not owned by the cluster.
2. Tags were then added to the volume, making the resource managed by the
cluster. The problem is that this could mark ANY volume as managed by the
cluster, allowing resources that aren't really part of the cluster,
or part of no cluster at all, to become resources managed by the cluster.
By combining the two operations we both make the code simpler, since we
no longer need to delete a volume when we fail to apply tags to it, and
meaningfully improve the security model.
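A minimal sketch of the combined create-and-tag call, using the EC2 API's TagSpecifications support (the region, size, and tag values are placeholders):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Create the volume and tag it in a single API call, so the resource
	// is owned by the cluster from the moment it exists.
	out, err := svc.CreateVolume(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-east-1a"),
		Size:             aws.Int64(10),
		TagSpecifications: []*ec2.TagSpecification{{
			ResourceType: aws.String(ec2.ResourceTypeVolume),
			Tags: []*ec2.Tag{{
				// Placeholder cluster ownership tag.
				Key:   aws.String("kubernetes.io/cluster/my-cluster"),
				Value: aws.String("owned"),
			}},
		}},
	})
	if err != nil {
		fmt.Println("CreateVolume failed:", err)
		return
	}
	fmt.Println("created volume:", aws.StringValue(out.VolumeId))
}
```

With this shape, the IAM policy can require the ownership tag at creation time (for example via an aws:RequestTag condition on ec2:CreateVolume) instead of granting a broad, standalone ec2:CreateTags permission.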