Automatic merge from submit-queue (batch tested with PRs 41146, 41486, 41482, 41538, 41784)
Switch statefulset controller to shared informers
Originally part of #40097
I *think* the controller currently makes a deep copy of a StatefulSet before it mutates it, but I'm not 100% sure. For those who are most familiar with this code, could you please confirm?
@beeps @smarterclayton @ingvagabund @sttts @liggitt @deads2k @kubernetes/sig-apps-pr-reviews @kubernetes/sig-scalability-pr-reviews @timothysc @gmarek @wojtek-t
Automatic merge from submit-queue (batch tested with PRs 41146, 41486, 41482, 41538, 41784)
Add apply view-last-applied subcommand
reopen pr https://github.com/kubernetes/kubernetes/pull/40984, implement part of https://github.com/kubernetes/community/pull/287
for now unit test all pass, the output looks like:
```console
shiywang@dhcp-140-33 template $ ./kubectl apply view-last-applied deployment nginx-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: nginx-deployment
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.12.10
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
```
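For context, the command reads back what `kubectl apply` stored in the `kubectl.kubernetes.io/last-applied-configuration` annotation; a rough equivalent using a raw annotation lookup (output is the stored JSON rather than YAML) would be:
```console
kubectl get deployment nginx-deployment -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
```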
```release-note
Support new kubectl apply view-last-applied command for viewing the last configuration file applied
```
Not sure if there are any flags I should update or any error handling I should change.
I will generate the docs when you think this is OK.
cc @pwittrock @jessfraz @AdoHe @ymqytw
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)
Move pvutil.go from e2e package to framework package
**What this PR does / why we need it**:
This PR moves pvutil.go to the e2e/framework package.
I am working on a PV upgrade test and would like to use some of the wrapper functions in pvutil.go. However, the upgrade test lives in the upgrade package, not the e2e package, and it cannot import the e2e package because that would create a circular dependency. So pvutil.go needs to be moved out of e2e to break the cycle. This is a pure move/rename; no logic has been modified.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
NONE
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)
Change taints/tolerations to api fields
This PR changes the current implementation of taints and tolerations from annotations to API fields. Taints and tolerations are now part of `NodeSpec` and `PodSpec`, respectively. The annotation keys `scheduler.alpha.kubernetes.io/tolerations` and `scheduler.alpha.kubernetes.io/taints` have been removed.
**Release note**:
Pod tolerations and node taints have moved from annotations to API fields in the PodSpec and NodeSpec, respectively. Pod tolerations and node taints that are defined in the annotations will be ignored. The annotation keys: `scheduler.alpha.kubernetes.io/tolerations` and `scheduler.alpha.kubernetes.io/taints` have been removed.
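For illustration, a minimal sketch of the new fields (names and values are made up): a taint lives under `spec.taints` on the Node, and a matching toleration under `spec.tolerations` on the Pod:
```yaml
# Node taint, equivalently set via: kubectl taint nodes node1 dedicated=gpu:NoSchedule
apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  - key: dedicated
    value: gpu
    effect: NoSchedule
---
# Pod that tolerates the taint above
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
```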
Automatic merge from submit-queue
Removes additional columns in test_owners.csv
**What this PR does / why we need it**:
Fixes a large number of rows in the CSV file that had the wrong number of columns, which probably broke the auto-assign bot.
**Special notes for your reviewer**:
None
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Update kubectl in addon-manager to use HPA in autoscaling/v1
Addon-manager is broken since HPA objects were removed from the extensions API group.
Came across the logs from [the latest addon-manager on Jenkins](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/4290/artifacts/bootstrap-e2e-master/kube-addon-manager.log):
```
INFO: == Entering periodical apply loop at 2017-02-16T17:33:37+0000 ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:38+0000. 2 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:46+0000. 1 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl apply --namespace=kube-system -f /etc/kubernetes/addons --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:53+0000. 0 tries remaining. ==
WRN: == Kubernetes addon update completed with errors at 2017-02-16T17:33:58+0000 ==
```
Also note that this commit (f66679a4e9), which removed HorizontalPodAutoscaler from extensions/v1beta1, came in two weeks ago.
Addon-manager is now only partially functional: it can still create and update addons, but it fails to prune objects, which means upgrade tests will mostly fail.
Pushed another version of addon-manager with kubectl v1.6.0-alpha.2 ([released 2 days ago](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0-alpha.2)) as a fix, including the images below:
- gcr.io/google-containers/kube-addon-manager:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-amd64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-s390x:v6.4-alpha.2
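For reference, a minimal HPA manifest in the `autoscaling/v1` group that the updated kubectl should be able to prune (names are illustrative):
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: example
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```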
@mikedanese
cc @wojtek-t @shyamjvs
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Enable pod level cgroups by default
**What this PR does / why we need it**:
It enables pod level cgroups by default.
**Special notes for your reviewer**:
This is intended to be enabled by default on 2/14/2017 per the plan outlined here:
https://github.com/kubernetes/community/pull/314
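If I remember correctly, this behavior is governed by the kubelet's `--cgroups-per-qos` flag (previously `--experimental-cgroups-per-qos`), which this change flips to true by default; operators who need the old layout can presumably still opt out explicitly:
```console
# Assumption: pod-level cgroups are controlled by --cgroups-per-qos; disable to restore the old behavior.
kubelet --cgroups-per-qos=false ...
```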
**Release note**:
```release-note
Each pod has its own associated cgroup by default.
```
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
NPD: Update NPD test.
For https://github.com/kubernetes/node-problem-detector/issues/58.
Update NPD e2e test based on the new behavior.
Note that before merging this PR, we need to merge all pending PRs in npd, and release the v0.3.0-alpha.1 version of NPD.
/cc @dchen1107 @kubernetes/node-problem-detector-reviewers
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
Allow for not-ready pods in large clusters
This is to work around issues with non-starting pods that occur in large clusters in roughly one third of runs.
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)
test: fetch updated deployment before finding new and old rss
@krousey @janetkuo ptal
Ref https://github.com/kubernetes/kubernetes/issues/41518
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
Debug what is happening in large clusters
What I'm seeing in large clusters is:
```
I0219 19:34:29.994] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:44
I0219 19:34:29.994] ------------------------------
I0219 21:27:11.421] Dumping master and node logs to /workspace/_artifacts
I0219 21:27:11.422] Master SSH not supported for gke
```
I have no idea what is happening during those 2 hours and would like to understand it.
Automatic merge from submit-queue
[Federation] Modify the comments in Federation E2E tests to use standard Go conventions for documentation comments
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41709, 41685, 41754, 41759, 37237)
Projected volume plugin
This is a WIP volume driver implementation as noted in the commit for https://github.com/kubernetes/kubernetes/pull/35313.
change to GetOriginalConfiguration
add bazel
refactor apply view-last-applied command
update some changes
minor change
add unit tests, update
update some code and generate docs
update LongDesc
Automatic merge from submit-queue (batch tested with PRs 41706, 39063, 41330, 41739, 41576)
[Kubemark] Add option to log hollow-node logs
Ref https://github.com/kubernetes/kubernetes/issues/41613
Added an option to collect kubemark hollow-node logs, which include the kubelet, kube-proxy, and NPD logs for each hollow-node.
Setting the env var `ENABLE_HOLLOW_NODE_LOGS=true` should now enable logging for tests.
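A minimal usage sketch (assuming the usual kubemark entry point under test/kubemark/):
```console
# Run the kubemark e2e suite with hollow-node log collection enabled.
ENABLE_HOLLOW_NODE_LOGS=true ./test/kubemark/run-e2e-tests.sh
```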
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @yujuhong @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 41421, 41440, 36765, 41722)
Use watch param instead of deprecated /watch/ prefix
Switches clients to use watch param instead of /watch/ prefix
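Concretely, the request shape changes roughly like this (illustrative resource paths):
```
# Before (deprecated path prefix):
GET /api/v1/watch/namespaces/default/pods
# After (query parameter):
GET /api/v1/namespaces/default/pods?watch=true
```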
```release-note
Clients now use the `?watch=true` parameter to make watch API calls, instead of the `/watch/` path prefix
```
Automatic merge from submit-queue (batch tested with PRs 41421, 41440, 36765, 41722)
ResourceQuota ability to support default limited resources
Adds the ability to configure the quota system to identify specific resources that are limited by default. A limited resource means its consumption is denied absent a covering quota; this is in contrast to the current behavior, where consumption is unlimited absent a covering quota. The intended use case is to allow operators to restrict consumption of high-cost resources by default.
Example configuration:
**admission-control-config-file.yaml**
```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: resourcequota.admission.k8s.io/v1alpha1
    kind: Configuration
    limitedResources:
    - resource: pods
      matchContains:
      - pods
      - requests.cpu
    - resource: persistentvolumeclaims
      matchContains:
      - .storageclass.storage.k8s.io/requests.storage
```
In the above configuration, if a namespace lacks a quota for any of the following:
* cpu
* any PVC associated with a particular storage class

then the attempt to consume the resource is denied with a message stating that the user has insufficient quota for the matching resources.
```
$ kubectl create -f pvc-gold.yaml
Error from server: error when creating "pvc-gold.yaml": insufficient quota to consume: gold.storageclass.storage.k8s.io/requests.storage
$ kubectl create quota quota --hard=gold.storageclass.storage.k8s.io/requests.storage=10Gi
$ kubectl create -f pvc-gold.yaml
... created
```
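The `kubectl create quota` call above is equivalent to a manifest along these lines (a sketch; the resource name follows the `<class>.storageclass.storage.k8s.io/requests.storage` convention):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 10Gi
```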
Automatic merge from submit-queue
Fix kubemark default e2e test suite's name
Seems like the suite "[Feature:performance]" doesn't trigger tests anymore. Changed it to "[Feature:Performance]" in kubemark run-e2e-tests.sh.
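Presumably the suite selection is a case-sensitive regex match against the spec name, so the lowercase tag stopped matching once tests were tagged `[Feature:Performance]`; an explicit focus run would look something like:
```console
# Hedged example: focus the e2e binary on performance-tagged specs.
./e2e.test --ginkgo.focus='\[Feature:Performance\]'
```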
cc @wojtek-t @gmarek
Automatic merge from submit-queue
Adds kube-public to the whitelist to not be deleted for e2e tests
We added the `kube-public` namespace but didn't add it to the whitelist of namespaces that should not be deleted as part of e2e cleanup.
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)
[Federation][e2e] Remove ns creation in federated clusters
**What this PR does / why we need it**:
In federation e2e, the framework creates a namespace for each test case. The same namespace is supposed to be created in the federated clusters. Due to issues in the namespace controller this was not working earlier, but it works now.
So currently the namespace is created twice: once by the namespace controller and again when we call `getRegisteredClusters`. Depending on the timing of these two calls, some [test cases fail](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-federation/1199#k8sio-federation-secrets-featurefederation-secret-objects-should-not-be-deleted-from-underlying-clusters-when-orphandependents-is-true). So this removes the unnecessary namespace creation in `getRegisteredClusters`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes flakes in federation e2e.
cc @madhusudancs @nikhiljindal @kubernetes/sig-federation-bugs
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)
Owners file related changes for kubectl and docs contributors
- adding a command to kubectl updates the root .generated_docs file requiring root level approval: move .generated_docs under docs/
- run hack/update-generated-docs.sh so the docs are up to date
- add kubectl contributors to test/OWNERS and test/fixtures/pkg/kubectl/OWNERS so they can approve kubectl e2e test changes
```release-note
NONE
```