Commit Graph

6495 Commits (c36eee2a0cb7e91826c162a277cc135fdc6a205a)

Author SHA1 Message Date
Kubernetes Submit Queue 4396f19c61 Merge pull request #41482 from ncdc/shared-informers-11-statefulset
Automatic merge from submit-queue (batch tested with PRs 41146, 41486, 41482, 41538, 41784)

Switch statefulset controller to shared informers

Originally part of #40097 

I *think* the controller currently makes a deep copy of a StatefulSet before it mutates it, but I'm not 100% sure. For those who are most familiar with this code, could you please confirm?
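For context, a minimal sketch of the pattern in question, assuming a lister-backed shared informer and today's client-go signatures (at the time of this PR the copy went through the scheme rather than a generated `DeepCopy`); the function and variable names are illustrative, not the controller's actual code:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	appslisters "k8s.io/client-go/listers/apps/v1"
)

// syncStatefulSet illustrates the rule shared informers impose: objects
// returned by the lister are shared cache entries, so they must be
// deep-copied before any mutation.
func syncStatefulSet(client kubernetes.Interface, lister appslisters.StatefulSetLister, ns, name string, ready int32) error {
	set, err := lister.StatefulSets(ns).Get(name)
	if err != nil {
		return err
	}
	set = set.DeepCopy() // never mutate the informer's cached copy directly
	set.Status.ReadyReplicas = ready
	_, err = client.AppsV1().StatefulSets(ns).UpdateStatus(context.TODO(), set, metav1.UpdateOptions{})
	return err
}
```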

@beeps @smarterclayton @ingvagabund @sttts @liggitt @deads2k @kubernetes/sig-apps-pr-reviews @kubernetes/sig-scalability-pr-reviews @timothysc @gmarek @wojtek-t
2017-02-22 21:09:35 -08:00
Kubernetes Submit Queue afd3db25cf Merge pull request #41146 from shiywang/apply-view1
Automatic merge from submit-queue (batch tested with PRs 41146, 41486, 41482, 41538, 41784)

 Add apply view-last-applied subcommand

reopen pr https://github.com/kubernetes/kubernetes/pull/40984, implement part of https://github.com/kubernetes/community/pull/287
for now unit test all pass, the output looks like:

```console
shiywang@dhcp-140-33 template $ ./kubectl apply view last-applied deployment nginx-deployment 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: nginx-deployment
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.12.10
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
```
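For anyone wondering where that output comes from: `kubectl apply` records the last applied configuration in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and this subcommand surfaces it. A rough client-go sketch of the same lookup (modern API group and signatures; the kubeconfig path and object names are just the ones from the example above):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the usual ~/.kube/config and build a client.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The last-applied configuration is just an annotation on the live object.
	d, err := client.AppsV1().Deployments("default").Get(context.TODO(), "nginx-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Annotations["kubectl.kubernetes.io/last-applied-configuration"])
}
```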

```release-note
Support new kubectl apply view-last-applied command for viewing the last configuration file applied
```

Not sure if there are any flags I should update or any error handling I should change.
I will generate docs when you think this is ok.
cc @pwittrock @jessfraz @AdoHe @ymqytw
2017-02-22 21:09:31 -08:00
Kubernetes Submit Queue 9cbaff9e0f Merge pull request #41373 from msau42/e2e-pvutil
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)

Move pvutil.go from e2e package to framework package

**What this PR does / why we need it**:  

This PR moves pvutil.go to the e2e/framework package.

I am working on a PV upgrade test and would like to use some of the wrapper functions in pvutil.go. However, the upgrade test is in the upgrade package, not the e2e package, and it cannot import the e2e package because that would create a circular dependency. So pvutil.go needs to be moved out of e2e in order to break the cycle. This is a pure move/rename; no logic has been modified.
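Roughly, the dependency direction this untangles (the package paths are real; the file and function names below are illustrative only):

```go
// Before the move: test/e2e imports the upgrade tests, so an upgrade test
// importing test/e2e back would form a cycle. After the move the shared PV
// helpers live in test/e2e/framework, which both packages may import freely.

package upgrades // e.g. an upgrade test file (illustrative)

import (
	"k8s.io/kubernetes/test/e2e/framework" // home of the former pvutil.go helpers
)

// prepareVolumes would call the moved PV/PVC wrapper functions via the
// framework package instead of the e2e package.
func prepareVolumes(f *framework.Framework) {
	_ = f // actual helper calls elided
}
```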

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

**Special notes for your reviewer**:

**Release note**:

NONE
2017-02-22 19:59:37 -08:00
Kubernetes Submit Queue 59f4c5911a Merge pull request #41819 from dchen1107/master
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)

Bump GCI to gci-stable-56-9000-84-2

Changelog since gci-beta-56-9000-80-0:

- Fixed google-accounts-daemon breaking on GCI when the network is unavailable.
- Fixed an iptables-restore performance regression.

cc/ @adityakali @Random-Liu @fabioy
2017-02-22 19:59:33 -08:00
Kubernetes Submit Queue 6024f56f80 Merge pull request #38957 from aveshagarwal/master-taints-tolerations-api-fields
Automatic merge from submit-queue (batch tested with PRs 38957, 41819, 41851, 40667, 41373)

Change taints/tolerations to api fields

This PR changes the current implementation of taints and tolerations from annotations to API fields. Taint and toleration are now part of `NodeSpec` and `PodSpec`, respectively. The annotation keys `scheduler.alpha.kubernetes.io/tolerations` and `scheduler.alpha.kubernetes.io/taints` have been removed.
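As a sketch of what the new fields look like from Go (using the current `k8s.io/api/core/v1` import path, which postdates this PR; the key/value strings are made up):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Taints are now a first-class field on NodeSpec...
	node := v1.Node{
		Spec: v1.NodeSpec{
			Taints: []v1.Taint{{
				Key:    "dedicated",
				Value:  "database",
				Effect: v1.TaintEffectNoSchedule,
			}},
		},
	}

	// ...and tolerations a first-class field on PodSpec, replacing the
	// scheduler.alpha.kubernetes.io/{taints,tolerations} annotations.
	pod := v1.Pod{
		Spec: v1.PodSpec{
			Tolerations: []v1.Toleration{{
				Key:      "dedicated",
				Operator: v1.TolerationOpEqual,
				Value:    "database",
				Effect:   v1.TaintEffectNoSchedule,
			}},
		},
	}

	fmt.Println(node.Spec.Taints, pod.Spec.Tolerations)
}
```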

**Release note**:
Pod tolerations and node taints have moved from annotations to API fields in the PodSpec and NodeSpec, respectively. Pod tolerations and node taints that are defined in the annotations will be ignored. The annotation keys `scheduler.alpha.kubernetes.io/tolerations` and `scheduler.alpha.kubernetes.io/taints` have been removed.
2017-02-22 19:59:31 -08:00
Kubernetes Submit Queue fcb234e580 Merge pull request #41920 from fejta/spiffxp
Automatic merge from submit-queue

Make spiffxp an owner of test/...
2017-02-22 18:18:29 -08:00
Kubernetes Submit Queue b164e64619 Merge pull request #41660 from calebamiles/wip-sig-owners-for-tests
Automatic merge from submit-queue

Removes additional columns in test_owners.csv

**What this PR does / why we need it**:

Fixes a large number of rows in the CSV file that had the wrong number of columns, which has probably been breaking the auto-assign bot.

**Special notes for your reviewer**:

None

**Release note**:

`NONE`
2017-02-22 18:18:14 -08:00
Erick Fejta 53377b1f20 Make spiffxp an owner of test/... 2017-02-22 12:50:17 -08:00
Kubernetes Submit Queue fe34705f8a Merge pull request #41587 from MrHohn/addon-manager-fix-hpa
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)

Update kubectl in addon-manager to use HPA in autoscaling/v1

Addon-manager has been broken since HPA objects were removed from the extensions API group.

Came across the logs from [the latest addon-manager on Jenkins](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/4290/artifacts/bootstrap-e2e-master/kube-addon-manager.log):
```
INFO: == Entering periodical apply loop at 2017-02-16T17:33:37+0000 ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:38+0000. 2 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:46+0000. 1 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:53+0000. 0 tries remaining. ==
WRN: == Kubernetes addon update completed with errors at 2017-02-16T17:33:58+0000 ==
```

Notice that commit f66679a4e9, which removed HorizontalPodAutoscaler from extensions/v1beta1, went in two weeks ago.

Addon-manager is now only partially functional: it can still create and update addons, but it fails to prune objects, which means upgrade tests will mostly fail.

Pushed another version of addon-manager built with kubectl v1.6.0-alpha.2 ([released 2 days ago](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0-alpha.2)) to fix this, including the images below:
- gcr.io/google-containers/kube-addon-manager:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-amd64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-s390x:v6.4-alpha.2

@mikedanese 

cc @wojtek-t @shyamjvs
2017-02-22 08:12:46 -08:00
Kubernetes Submit Queue d1687d2f67 Merge pull request #41349 from derekwaynecarr/enable-pod-cgroups
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)

Enable pod level cgroups by default

**What this PR does / why we need it**:
It enables pod level cgroups by default.

**Special notes for your reviewer**:
This is intended to be enabled by default on 2/14/2017 per the plan outlined here:
https://github.com/kubernetes/community/pull/314

**Release note**:
```release-note
Each pod has its own associated cgroup by default.
```
2017-02-22 08:12:37 -08:00
Avesh Agarwal b4d3d24eaf Update tests. 2017-02-22 09:27:42 -05:00
Andy Goldstein f6a186b1e1 Switch statefulset controller to shared informers 2017-02-22 08:53:51 -05:00
Kubernetes Submit Queue eef16cf141 Merge pull request #41240 from Random-Liu/update-npd-test
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)

NPD: Update NPD test.

For https://github.com/kubernetes/node-problem-detector/issues/58.

Update NPD e2e test based on the new behavior.

Note that before merging this PR, we need to merge all pending PRs in npd, and release the v0.3.0-alpha.1 version of NPD.

/cc @dchen1107 @kubernetes/node-problem-detector-reviewers
2017-02-22 05:48:45 -08:00
Kubernetes Submit Queue af4513cd3f Merge pull request #41803 from wojtek-t/allowed_not_running_pods
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)

Allow for not-ready pods in large clusters

This is to work around issues with non-starting pods that occur in large clusters in roughly a third of runs.
2017-02-22 05:48:38 -08:00
Kubernetes Submit Queue 32c88a032f Merge pull request #41844 from kargakis/upgrade-test-fix
Automatic merge from submit-queue (batch tested with PRs 41844, 41803, 39116, 41129, 41240)

test: fetch updated deployment before finding new and old rss

@krousey @janetkuo ptal

Ref https://github.com/kubernetes/kubernetes/issues/41518
2017-02-22 05:48:36 -08:00
Wojciech Tyczynski 6d303d3329 Increase cpu for kubeproxy in kubemark in large clusters 2017-02-22 08:44:34 +01:00
Michail Kargakis 58f6eb34d1 test: fetch updated deployment before finding new and old rss 2017-02-22 00:25:35 +01:00
caleb miles f3eee6af65 Update tests listed in test/test_owners.py
- Adds SIG Storage ownership of Projected* tests
  - discussion: https://github.com/kubernetes/kubernetes/issues/19762
  - proposal: https://github.com/kubernetes/kubernetes/pull/35313
2017-02-21 14:48:34 -08:00
caleb miles 86b8cb411a Cleans up test/test_owners.csv
- sorts all e2e tests (with sort (GNU coreutils) 8.25)
- moves all k8s.io/* tests to the end
- removes duplicated tests (with uniq (GNU coreutils) 8.25)
2017-02-21 14:15:02 -08:00
Derek Carr 43ae6f49ad Enable per pod cgroups, fix defaulting of cgroup-root when not specified 2017-02-21 16:34:22 -05:00
Madhusudan.C.S 2cb2200847 Move kube-dns ConfigMap creation/deletion out of federated services e2e tests to federation-up.sh/federation-down.sh where the clusters are joined/unjoined. 2017-02-21 10:27:31 -08:00
Dawn Chen 57fe26111e Update node-e2e to gci-stable-56-9000-84-2 2017-02-21 10:05:44 -08:00
Kubernetes Submit Queue 7a06e41f93 Merge pull request #41782 from wojtek-t/speedup_dns_autoscaling_test
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)

Speedup dns-autoscaling test in large clusters
2017-02-21 07:45:46 -08:00
Kubernetes Submit Queue d209b3f316 Merge pull request #41783 from wojtek-t/debug_large_clusters_hanging
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)

Debug what is happening in large clusters

What I'm seeing in large clusters is:
```
I0219 19:34:29.994]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:44
I0219 19:34:29.994] ------------------------------
I0219 21:27:11.421] Dumping master and node logs to /workspace/_artifacts
I0219 21:27:11.422] Master SSH not supported for gke
```

I have no idea what is happening during those 2 hours and would like to understand this.
2017-02-21 07:45:44 -08:00
Kubernetes Submit Queue 8e6643acd4 Merge pull request #41364 from perotinus/fix-doc-comments
Automatic merge from submit-queue

[Federation] Modify the comments in Federation E2E tests to use standard Go conventions for documentation comments

```release-note
NONE
```
2017-02-21 07:06:55 -08:00
Wojciech Tyczynski 3c6a37193a Allow for not-ready pods in large clusters 2017-02-21 15:01:08 +01:00
Kubernetes Submit Queue 9ee2ab799f Merge pull request #41717 from kargakis/add-upgrade-test-logging
Automatic merge from submit-queue

Spew replica sets in any deployment upgrade test failure

Should help identify whether the new replica set is being considered old after the upgrade (or maybe it's something else entirely).

For debugging https://github.com/kubernetes/kubernetes/issues/41518
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-latest-upgrade-master/5/

The failure seems suspiciously related to https://github.com/kubernetes/kubernetes/issues/40415, but it may not be related at all.

@kubernetes/sig-apps-bugs
2017-02-21 05:25:15 -08:00
Kubernetes Submit Queue e65ac460eb Merge pull request #37237 from jpeeler/implementation-volumeaio
Automatic merge from submit-queue (batch tested with PRs 41709, 41685, 41754, 41759, 37237)

Projected volume plugin

This is a WIP volume driver implementation as noted in the commit for https://github.com/kubernetes/kubernetes/pull/35313.
2017-02-21 04:27:51 -08:00
shiywang 557c18694a Add apply view last-applied subcommand
change to GetOriginalConfiguration

add bazel

refactor apply view-last-applied command

update some changes

minor change

add unit tests, update

update some code and generate docs

update LongDesc
2017-02-21 20:08:25 +08:00
Wojciech Tyczynski 29c417629d Speedup dns-autoscaling test in large clusters 2017-02-21 12:20:35 +01:00
Wojciech Tyczynski eec946d20c Debug what is happening in large clusters 2017-02-21 11:39:26 +01:00
Kubernetes Submit Queue 70c9eebd21 Merge pull request #41739 from shyamjvs/hollow-node-logs
Automatic merge from submit-queue (batch tested with PRs 41706, 39063, 41330, 41739, 41576)

[Kubemark] Add option to log hollow-node logs

Ref https://github.com/kubernetes/kubernetes/issues/41613

Added an option to dump kubemark hollow-node logs, which include the kubelet, kube-proxy, and NPD logs for each hollow-node.
Setting the env var `ENABLE_HOLLOW_NODE_LOGS=true` should now enable logging for tests.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @yujuhong @Random-Liu
2017-02-21 02:24:43 -08:00
Wojciech Tyczynski a21b08d00f Revert "Use watch param instead of deprecated /watch/ prefix" 2017-02-21 08:37:51 +01:00
Zihong Zheng 2c8e89820a Update kubectl in addon-manager to use HPA in autoscaling/v1 instead of extensions/v1beta1 2017-02-20 10:49:10 -08:00
Kubernetes Submit Queue dfacc61c5f Merge pull request #41722 from liggitt/watch-prefix
Automatic merge from submit-queue (batch tested with PRs 41421, 41440, 36765, 41722)

Use watch param instead of deprecated /watch/ prefix

Switches clients to use watch param instead of /watch/ prefix

```release-note
Clients now use the `?watch=true` parameter to make watch API calls, instead of the `/watch/` path prefix
```
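A rough sketch of what this means on the wire, using today's client-go signatures (the function name and namespace are illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// A watch issued through the generated clients now goes to
//   GET /api/v1/namespaces/default/pods?watch=true
// rather than the deprecated
//   GET /api/v1/watch/namespaces/default/pods
func watchPods(cfg *rest.Config) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	w, err := client.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED, MODIFIED, DELETED, ...
	}
	return nil
}
```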
2017-02-20 10:37:44 -08:00
Kubernetes Submit Queue 506950ada0 Merge pull request #36765 from derekwaynecarr/quota-precious-resources
Automatic merge from submit-queue (batch tested with PRs 41421, 41440, 36765, 41722)

ResourceQuota ability to support default limited resources

Add the ability to configure the quota system to identify specific resources that are limited by default. A limited resource is one whose consumption is denied absent a covering quota; this is in contrast to the current behavior, where consumption is unlimited absent a covering quota. The intended use case is to allow operators to restrict consumption of high-cost resources by default.

Example configuration:

**admission-control-config-file.yaml**
```
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: resourcequota.admission.k8s.io/v1alpha1
    kind: Configuration
    limitedResources:
    - resource: pods
      matchContains:
      - pods
      - requests.cpu
    - resource: persistentvolumeclaims
      matchContains:
      - .storageclass.storage.k8s.io/requests.storage
```

In the above configuration, if a namespace lacks a quota for any of the following:
* cpu
* any PVC associated with a particular storage class

then the attempt to consume the resource is denied with a message stating that the user has insufficient quota for the matching resource.

```
$ kubectl create -f pvc-gold.yaml 
Error from server: error when creating "pvc-gold.yaml": insufficient quota to consume: gold.storageclass.storage.k8s.io/requests.storage
$ kubectl create quota quota --hard=gold.storageclass.storage.k8s.io/requests.storage=10Gi
$ kubectl create -f pvc-gold.yaml 
... created
```
2017-02-20 10:37:42 -08:00
Jeff Peeler ec701a65e8 Generated files for projected volume driver 2017-02-20 13:09:41 -05:00
Jeff Peeler 8fb1b71c66 Implements projected volume driver
Proposal: kubernetes/kubernetes#35313
2017-02-20 12:56:04 -05:00
Kubernetes Submit Queue eb755a3306 Merge pull request #41750 from wojtek-t/speedup_density_test
Automatic merge from submit-queue (batch tested with PRs 41751, 41750)

Speedup density test
2017-02-20 09:45:38 -08:00
Kubernetes Submit Queue 5fb6b91faf Merge pull request #41751 from shyamjvs/fix-kubemark-default-suite
Automatic merge from submit-queue

Fix kubemark default e2e test suite's name

Seems like the suite "[Feature:performance]" doesn't trigger tests anymore. Changed it to "[Feature:Performance]" in kubemark run-e2e-tests.sh.
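For context on why the capitalization matters: the suite is selected by matching `--ginkgo.focus` (a case-sensitive regular expression) against the spec strings, so `[Feature:performance]` simply matches nothing. A hedged sketch of how such a tag appears in a spec (ginkgo v1 style; the test names are made up):

```go
package e2e

import (
	. "github.com/onsi/ginkgo"
)

// e2e tests carry their feature tags inside the spec string; the runner
// selects them with --ginkgo.focus="\[Feature:Performance\]", which is a
// case-sensitive regular expression, hence the fix in this PR.
var _ = Describe("Load capacity [Feature:Performance]", func() {
	It("should scale to N pods per node", func() {
		// test body elided
	})
})
```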

cc @wojtek-t @gmarek
2017-02-20 09:27:22 -08:00
Shyam Jeedigunta 7802c82671 Fix kubemark default e2e test suite's name 2017-02-20 16:08:28 +01:00
Wojciech Tyczynski f17765ab72 Speedup density test 2017-02-20 16:06:05 +01:00
Shyam Jeedigunta ed0ab3cd8e [Kubemark] Add option to log hollow-node logs 2017-02-20 11:52:49 +01:00
Wojciech Tyczynski 4426156aa6 More resources for hollowproxy in large kubemarks 2017-02-20 09:26:17 +01:00
Jordan Liggitt f950171003 Switch watch prefixes to params 2017-02-19 23:51:58 -05:00
Jordan Liggitt 308fdcd13f Pass typed options to dynamic client 2017-02-19 22:12:55 -05:00
Kubernetes Submit Queue bd1a222173 Merge pull request #41420 from jbeda/add-public-to-e2e
Automatic merge from submit-queue

Adds kube-public to the whitelist to not be deleted for e2e tests

We added the `kube-public` namespace but didn't add it to the whitelist of namespaces that are not deleted as part of e2e cleanup.

```release-note
```
2017-02-19 14:38:01 -08:00
Kubernetes Submit Queue 0dc52d7919 Merge pull request #41707 from shashidharatd/federation-service-e2e-2
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)

[Federation][e2e] Remove ns creation in federated clusters

**What this PR does / why we need it**:
In federation e2e, the framework creates a namespace for each test case. The same namespace is supposed to be created in the federated clusters. Due to issues in the namespace controller, this was not working earlier, but now it is.
So currently the namespace is created twice: once by the namespace controller and again when we call `getRegisteredClusters`. Depending on the timing of these two calls, some [test cases fail](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-federation/1199#k8sio-federation-secrets-featurefederation-secret-objects-should-not-be-deleted-from-underlying-clusters-when-orphandependents-is-true). This PR therefore removes the unnecessary namespace creation in `getRegisteredClusters`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes flakes in federation e2e.

cc @madhusudancs @nikhiljindal @kubernetes/sig-federation-bugs
2017-02-19 13:50:41 -08:00
Kubernetes Submit Queue a962f5d2e4 Merge pull request #41585 from pwittrock/owners
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)

Owners file related changes for kubectl and docs contributors

- adding a command to kubectl updates the root .generated_docs file, which requires root-level approval, so move .generated_docs under docs/
- run hack/update-generated-docs.sh so the docs are up to date
- add kubectl contributors to test/OWNERS and test/fixtures/pkg/kubectl/OWNERS so they can approve kubectl e2e test changes


```release-note
NONE
```
2017-02-19 13:50:38 -08:00
Michail Kargakis 7b8f95080c Spew replica sets in any deployment upgrade test failure 2017-02-19 14:35:32 +01:00