Automatic merge from submit-queue
Removes additional columns in test_owners.csv
**What this PR does / why we need it**:
Fixes a large number of rows in the CSV file that had the wrong number of columns, which has probably been breaking the auto-assign bot.
**Special notes for your reviewer**:
None
**Release note**:
`NONE`
Automatic merge from submit-queue
add godep manifest files to staging repos
The staging repos should have manifests that match the godeps of kube so we know what they build against. We don't need the actual vendored code, since a sync script on the other side needs to find the correct level of other staging directories and thus requires its own `godep restore && go get && godep save` cycle.
@sttts ptal
@lavalamp @caesarxuchao client-go needs a lot of unwinding to do something similar, but the idea is that you can follow an acyclic path to get this updated: copy the types and dependencies with `go list`, then generate the clients, then generate this manifest. Your sync script can then pull the proper levels and finish the actual vendoring.
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Update kubectl in addon-manager to use HPA in autoscaling/v1
Addon-manager has been broken since HPA objects were removed from the extensions API group.
Came across the logs from [the latest addon-manager on Jenkins](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/4290/artifacts/bootstrap-e2e-master/kube-addon-manager.log):
```
INFO: == Entering periodical apply loop at 2017-02-16T17:33:37+0000 ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:38+0000. 2 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:46+0000. 1 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:53+0000. 0 tries remaining. ==
WRN: == Kubernetes addon update completed with errors at 2017-02-16T17:33:58+0000 ==
```
Note that commit f66679a4e9, which landed two weeks ago, removed HorizontalPodAutoscaler from extensions/v1beta1.
Addon-manager is now only partially functional: it can still create and update addons successfully, but fails to prune objects, which means upgrade tests will mostly fail.
Pushed a new version of addon-manager with kubectl v1.6.0-alpha.2 ([released two days ago](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0-alpha.2)) to fix this, including the images below:
- gcr.io/google-containers/kube-addon-manager:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-amd64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-s390x:v6.4-alpha.2
@mikedanese
cc @wojtek-t @shyamjvs
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Lint fixes for the master and worker Python code.
**What this PR does / why we need it**: lint fixes for the Python code.
**Which issue this PR fixes**: none
**Special notes for your reviewer**: these are lint fixes for the Juju Python code.
**Release note**:
```release-note
NONE
```
Please consider these changes so we can pass flake8 lint tests in our build process.
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
client-go: don't import client auth provider packages
Both of these auth providers are useful for kubectl but not so much for everyone importing client-go. Let users optionally import them (example [0]) and reduce the overall number of imports that client-go requires.
A quick grep suggests client-go itself won't import them after this change:
```
$ grep -r 'client-go/plugin/pkg/client/auth' staging/
staging/src/k8s.io/client-go/plugin/pkg/client/auth/plugins.go: _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
staging/src/k8s.io/client-go/plugin/pkg/client/auth/plugins.go: _ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
staging/src/k8s.io/client-go/examples/third-party-resources/main.go: _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
staging/src/k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/clientset.go: _ "k8s.io/client-go/plugin/pkg/client/auth"
staging/src/k8s.io/kube-aggregator/pkg/client/clientset_generated/internalclientset/clientset.go: _ "k8s.io/client-go/plugin/pkg/client/auth"
```
closes https://github.com/kubernetes/client-go/issues/49
updates https://github.com/kubernetes/client-go/issues/79 (removes cloud.google.com/go import)
cc @kubernetes/sig-api-machinery-pr-reviews @kubernetes/sig-auth-pr-reviews
```release-note
client-go no longer imports GCP OAuth2 and OpenID Connect packages by default.
```
[0] 8b466d64c5/examples/third-party-resources/main.go (L34-L35)
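For downstream users who still want these providers, opting back in is a one-line blank import. A minimal sketch of that pattern, following the example referenced at [0] (the kubeconfig path is a placeholder):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	// Blank-import the GCP auth provider so kubeconfigs that rely on it
	// keep working; omit this line and the dependency goes away.
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)

func main() {
	// Placeholder path; real code usually takes this from a flag or $KUBECONFIG.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```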
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Enable pod level cgroups by default
**What this PR does / why we need it**:
It enables pod level cgroups by default.
**Special notes for your reviewer**:
This is intended to be enabled by default on 2/14/2017 per the plan outlined here:
https://github.com/kubernetes/community/pull/314
**Release note**:
```release-note
Each pod has its own associated cgroup by default.
```
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
NPD: Update NPD test.
For https://github.com/kubernetes/node-problem-detector/issues/58.
Update NPD e2e test based on the new behavior.
Note that before merging this PR, we need to merge all pending PRs in npd, and release the v0.3.0-alpha.1 version of NPD.
/cc @dchen1107 @kubernetes/node-problem-detector-reviewers
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
Cleanup client example
**What this PR does / why we need it**:
- The package-level `config` variable in `third-party-resources/main.go` is unused: it is shadowed by the one defined in `main()`, so it should probably be deleted (see the sketch below).
- The package-level `kubeconfig` variable in `out-of-cluster/main.go` is global; make it local to `main()`.
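A minimal illustration of the shadowing problem, with hypothetical names rather than the actual example code:
```go
package main

import "fmt"

// Package-level variable: dead code, because main() declares its own.
var config string

func main() {
	// `:=` declares a NEW local `config` that shadows the package-level
	// one above; the global is never read or written.
	config, err := loadConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println(config)
}

func loadConfig() (string, error) { return "from-flags", nil }
```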
**Which issue this PR fixes**
This fixes https://github.com/kubernetes/client-go/issues/59, except the part about global `api.Scheme`, also adds test with interface check. Supersedes https://github.com/kubernetes/client-go/pull/61.
**Special notes for your reviewer**:
This is my first PR to Kubernetes :)
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
Allow for not-ready pods in large clusters
This works around issues with non-starting pods in large clusters, which occur in roughly one third of runs.
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
test: fetch updated deployment before finding new and old rss
@krousey @janetkuo ptal
Ref https://github.com/kubernetes/kubernetes/issues/41518
Automatic merge from submit-queue
Log that debug handlers have been turned on.
**What this PR does / why we need it**: logs a message when the debug handlers are turned on. This lets the operator notice, and automate a check for, the case where debug has been left enabled.
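A minimal sketch of the idea, using only standard-library names (not the kubelet's actual handler wiring; the address is a placeholder):
```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	enableDebugHandlers := true // in the real server this comes from configuration

	mux := http.NewServeMux()
	if enableDebugHandlers {
		// Log that debug handlers are on, so an operator can notice it
		// in the logs or grep for it in an automated check.
		log.Print("Adding debug handlers to server.")
		mux.HandleFunc("/debug/pprof/", pprof.Index)
	}
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
}
```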
**Release note**:
```release-note
NONE
```
Conversions can mutate the underlying object (and ours did).
Make a deep copy before our first conversion at the very start
of the reconciler method, in order to avoid mutating the shared
informer cache during conversion.
Fixes #41768
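A toy sketch of the pattern (the type and the mutating conversion are stand-ins, not the real API machinery):
```go
package main

import "fmt"

// Widget stands in for an API object held in a shared informer cache.
type Widget struct {
	Labels map[string]string
}

// DeepCopy mirrors the generated deep-copy helpers on real API types.
func (w *Widget) DeepCopy() *Widget {
	out := &Widget{Labels: make(map[string]string, len(w.Labels))}
	for k, v := range w.Labels {
		out.Labels[k] = v
	}
	return out
}

// convert stands in for a version conversion that, as described above,
// mutates its input.
func convert(w *Widget) {
	w.Labels["converted"] = "true"
}

func main() {
	cached := &Widget{Labels: map[string]string{"app": "demo"}} // the shared cache's copy

	// The fix: deep-copy before the first conversion, so the mutation
	// never reaches the object shared via the informer cache.
	working := cached.DeepCopy()
	convert(working)

	fmt.Println(cached.Labels)  // map[app:demo] -- cache untouched
	fmt.Println(working.Labels) // map[app:demo converted:true]
}
```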
Automatic merge from submit-queue
Prompt user to use secure config in kubeadm
If the kubeconfig is not set, the default is to connect to the apiserver over the insecure port. It's necessary to tell people to use the admin kubeconfig:
```
#kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
Debug what is happening in large clusters
What I'm seeing in large clusters is:
```
I0219 19:34:29.994] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:44
I0219 19:34:29.994] ------------------------------
I0219 21:27:11.421] Dumping master and node logs to /workspace/_artifacts
I0219 21:27:11.422] Master SSH not supported for gke
```
I have no idea what is happening during those two hours, and would like to understand it.
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
Add ability to enable cache mutation detector in GCE
Add the ability to enable the cache mutation detector in GCE. The current default behavior (disabled) is retained.
When paired with https://github.com/kubernetes/test-infra/pull/1901, we'll be able to detect shared informer cache mutations in gce e2e PR jobs.
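Conceptually, the detector remembers a private deep copy of each object added to the cache and later compares the two; any drift means someone mutated shared state. A toy sketch of that comparison (not client-go's implementation, which is switched on via the `KUBE_CACHE_MUTATION_DETECTOR` environment variable and panics on detection):
```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	// The object handed out by the cache, and a private snapshot taken
	// when it was stored.
	cached := map[string]string{"app": "demo"}
	snapshot := map[string]string{"app": "demo"}

	// A consumer incorrectly mutates the shared object in place.
	cached["oops"] = "mutated"

	// Periodic check: any difference between the live object and the
	// snapshot proves a cache mutation happened.
	if !reflect.DeepEqual(cached, snapshot) {
		fmt.Println("cache mutation detected")
	}
}
```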
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
changes to cleanup the volume plugin for recycle
**What this PR does / why we need it**:
Code cleanup: instead of creating a new interface from the plugin that then calls a function to recycle a volume, the recycle function is added to the plugin itself.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26230
**Special notes for your reviewer**:
Took same approach from closed PR #28432.
Do you want the approach to be the same for NewDeleter(), NewMounter(), and NewUnMounter(), and should those changes go in this same PR, or should I submit separate PRs for them?
**Release note**:
```release-note
NONE
```
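A sketch of the shape of this change, with hypothetical interface and plugin names (the real ones live in the volume plugin packages):
```go
package main

import "fmt"

// Spec stands in for a volume spec handed to a plugin.
type Spec struct{ Name string }

// Before: the plugin manufactured a separate one-shot object whose only
// job was to call a recycle function.
type Recycler interface {
	Recycle() error
}

type RecyclerFactory interface {
	NewRecycler(spec *Spec) (Recycler, error)
}

// After: the recycle function lives on the plugin itself, so the
// intermediate interface and its single-use implementation go away.
type RecyclingPlugin interface {
	Recycle(spec *Spec) error
}

type examplePlugin struct{}

func (p *examplePlugin) Recycle(spec *Spec) error {
	fmt.Printf("recycling volume %q\n", spec.Name)
	return nil
}

func main() {
	var plugin RecyclingPlugin = &examplePlugin{}
	if err := plugin.Recycle(&Spec{Name: "pv-demo"}); err != nil {
		panic(err)
	}
}
```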