Commit Graph

220 Commits (52f3380ef3a5d5de4d75ff92defd1c9af9d775bc)

Author SHA1 Message Date
Clayton Coleman 596d9de8fa
update: linted packages 2016-12-10 18:05:37 -05:00
Derek Carr 459a7a05f1 Ability to quota storage by storage class 2016-12-09 13:26:59 -05:00
Kubernetes Submit Queue 67669f1fd0 Merge pull request #36643 from kzwang/ingress-controller
Automatic merge from submit-queue (batch tested with PRs 37870, 36643, 37664, 37545)

Add option to disable federation ingress controller

**What this PR does / why we need it**:
Added an option to enable/disable the federation ingress controller, as federated ingresses currently don't work in environments other than GCE/GKE. Also skip reconciling ConfigMaps when no federated ingresses exist.

**Which issue this PR fixes** 
fixes #33943

@quinton-hoole

**Release note**:
```release-note
Add a `--controllers` flag to the federation controller manager to enable/disable the federation ingress controller
```
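
A rough sketch of how the new flag might be used (the exact value syntax is an assumption, not taken from this PR):
```sh
# Hypothetical invocation: disable the federation ingress controller
# via the new --controllers flag on the federation controller manager.
federation-controller-manager --controllers=ingress=false
```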
2016-12-06 00:22:54 -08:00
Kubernetes Submit Queue 2708f5c7dd Merge pull request #37870 from madhusudancs/fed-svc-no-cascading-delete-e2e-fix
Automatic merge from submit-queue

[Federation] Separate the cleanup phases of service and service shards so that service shards can be cleaned up even after the service is deleted elsewhere.

Fixes Federated Service e2e test.

This separation is necessary because the "Federated Service DNS should be
able to discover a federated service" e2e test recently added a case where
it deletes the service from the federation but not the shards from the
underlying clusters.

Because of the way cleanup was implemented in the AfterEach block, we did
not clean up any of the underlying shards in that case. Supporting that case
requires separating the two cleanup phases.

cc @kubernetes/sig-cluster-federation @nikhiljindal
2016-12-05 23:50:33 -08:00
Kevin Wang 007d7a802f Add option to disable federation ingress controller 2016-12-06 15:14:21 +11:00
Eric Paris 78798f6191 Remove girishkalele from most places
This also updates the maintainers list and reassigns his tests
2016-12-05 19:29:34 -05:00
Madhusudan.C.S 28a7ff3cd0 Test owners update. 2016-12-05 10:33:16 -08:00
Kubernetes Submit Queue 5e41d0904f Merge pull request #37830 from sttts/sttts-stratify-cert-generation
Automatic merge from submit-queue

Stratify apiserver cert generation

- move self-signed cert generation to `SecureServingOptions.MaybeDefaultWithSelfSignedCerts`
- make cert generation only depend on `ServerRunOptions`, not on an unfinished `Config` (this breaks the chicken-egg problem of a finished config in https://github.com/kubernetes/kubernetes/pull/35387#pullrequestreview-5368176)
- move loopback client config code into `config_selfclient.go`

Replaces https://github.com/kubernetes/kubernetes/pull/35387#event-833649341 by getting rid of duplicated `Complete`.
2016-12-05 10:15:47 -08:00
Dr. Stefan Schimanski 3f01c37b9d Update generated files 2016-12-05 16:05:52 +01:00
Dr. Stefan Schimanski 2dff13f332 Update generated files 2016-12-05 12:42:31 +01:00
Derek McQuay 644a0ceec9 kubeadm: adding test owner and bazel update 2016-12-02 08:42:46 -08:00
Kubernetes Submit Queue edefe66c78 Merge pull request #36106 from apprenda/kubeadm-unit-tests-pkg-node
Automatic merge from submit-queue

Kubeadm unit tests pkg node

Added unit tests for the kubeadm/app/node package, covering the functionality of bootstrap.go, csr.go, and discovery.go.

This PR is part of the ongoing effort to add tests (#35025)

/cc @pires @jbeda
2016-12-02 05:45:01 -08:00
Kubernetes Submit Queue 6abb472357 Merge pull request #37720 from freehan/lb-src-update
Automatic merge from submit-queue

Fix Service Update on LoadBalancerSourceRanges Field

Fixes: https://github.com/kubernetes/kubernetes/issues/33033
Also expands: https://github.com/kubernetes/kubernetes/pull/32748
2016-12-01 18:21:39 -08:00
Minhan Xia 0ea7b950d3 add e2e test for updating lb source range 2016-12-01 13:56:21 -08:00
Derek McQuay 4ab42db17e kubeadm: unit tests for app/node/ pkg 2016-12-01 09:30:19 -08:00
David Ashpole a232c15a45 The InodeEviction test verifies that when some pods create many empty files, both in their containers and in volumes, they are evicted before pods that behave normally. 2016-11-28 13:09:40 -08:00
Derek Carr 1ec69f658c Fix cross-build for memcg notification 2016-11-23 12:36:04 -05:00
Kubernetes Submit Queue 21c3b3028f Merge pull request #34414 from yarntime/add_defaults_test
Automatic merge from submit-queue

add test file for autoscaling defaults

add test file `pkg/apis/autoscaling/v1/defaults_test.go`
2016-11-21 05:23:13 -08:00
Kubernetes Submit Queue c183664c55 Merge pull request #36630 from nikhiljindal/E2edsi
Automatic merge from submit-queue

Adding e2e tests for validating cascading deletion of federation resources

Ref https://github.com/kubernetes/kubernetes/issues/33612

Adding e2e tests for code in https://github.com/kubernetes/kubernetes/pull/36330 and https://github.com/kubernetes/kubernetes/pull/36476.

Adding test cases to ensure that cascading deletion happens as expected.
Also adding code to ensure that all resources are cleaned up in AfterEach.

cc @kubernetes/sig-cluster-federation @caesarxuchao @madhusudancs
2016-11-18 17:03:27 -08:00
nikhiljindal 90645ce2b7 Automated bazel and owner changes 2016-11-18 14:44:06 -08:00
Saad Ali 172e7562fb Revert "add e2e test for kubectl in a Pod" 2016-11-16 21:38:19 -08:00
Kubernetes Submit Queue f0aba3d6fe Merge pull request #35811 from dashpole/garbage_collect_testing
Automatic merge from submit-queue

Garbage collection tests the MaxPerPodContainers and MaxContainers constraints

This is the first version of this test.  It tests that containers are garbage collected according to the default configuration.
2016-11-15 11:22:52 -08:00
David Ashpole f6224590f7 Test Container Garbage Collection 2016-11-15 09:15:31 -08:00
Rajat Ramesh Koujalagi c0f62b2186 Add test for: mount a secret in the presence of another secret in a
different namespace
2016-11-14 10:52:15 -08:00
Kubernetes Submit Queue 44f672e5e2 Merge pull request #34877 from resouer/e2e-log-path
Automatic merge from submit-queue

Add e2e node test for log path

fixes #34661

A node e2e test to check whether container log files are properly created with the right content.

Since the files under `/var/log/containers` are actually symbolic links to the Docker container log files, we cannot mount them into a pod to check them (symbolic links are not supported by Docker volumes).

cc @Random-Liu
2016-11-10 08:35:59 -08:00
nikhiljindal 675da90d51 autogenerated bazel and test owner changes 2016-11-09 21:41:19 -08:00
Kubernetes Submit Queue b504d4d991 Merge pull request #36445 from soltysh/update_owners
Automatic merge from submit-queue

Updated test_owners.csv

@kargakis you asked for it
2016-11-09 07:32:32 -08:00
Kubernetes Submit Queue c52efa570d Merge pull request #36079 from apprenda/windows_kube_proxy
Automatic merge from submit-queue

Add Windows support to kube-proxy

**What this PR does / why we need it**:
This is the first stab at supporting kube-proxy (userspace mode) on Windows.

**Which issue this PR fixes**:
fixes #30278

**Special notes for your reviewer**:
The MVP uses `netsh portproxy` to redirect traffic from `ServiceIP:ServicePort` to a `LocalIP:LocalPort`.
For the next version we expect to have guidance from the Microsoft Container Networking team.

**Limitations**:
The current implementation does not support DNS queries over UDP, as `netsh portproxy` currently only supports TCP. We are working with Microsoft to remediate this.
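
For illustration only, this is the kind of `netsh portproxy` rule the userspace proxy relies on; the addresses and ports below are hypothetical:
```sh
netsh interface portproxy add v4tov4 listenaddress=10.0.170.25 listenport=80 connectaddress=10.244.1.7 connectport=8080
```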

cc: @brendandburns @dcbw 

**Release note**:
```release-note
```
2016-11-09 01:26:27 -08:00
Kubernetes Submit Queue b3e4083f49 Merge pull request #36133 from luomiao/photon-support-PR-v2
Automatic merge from submit-queue

Support persistent volume usage for kubernetes running on Photon Controller platform

**What this PR does / why we need it:**
Enables persistent volume usage for Kubernetes running on the Photon platform.
Photon Controller: https://vmware.github.io/photon-controller/

_Only the first commit includes the real code change.
The following commits are for third-party vendor dependencies and auto-generated code/docs updates._

Two components are added:
- `pkg/cloudprovider/providers/photon`: supports Photon Controller as a cloud provider
- `pkg/volume/photon_pd`: supports Photon persistent disk as a volume source for persistent volumes

Usage introduction:
a. Photon Controller is supported as a cloud provider.
When choosing Photon Controller as the cloud provider, `--cloud-provider=photon --cloud-config=[path_to_config_file]` is required for the kubelet, kube-controller-manager, and kube-apiserver. The Photon Controller config file should use the following format:

```
[Global]
target = http://[photon_controller_endpoint_IP]
ignoreCertificate = true
tenant = [tenant_name]
project = [project_name]
overrideIP = true
```

b. Photon persistent disk is supported as a volume source/persistent volume source.
YAML usage:

```
volumes:
  - name: photon-storage-1
    photonPersistentDisk:
        pdID: "643ed4e2-3fcc-482b-96d0-12ff6cab2a69"
```
pdID is the persistent disk ID from Photon Controller.

c. Photon Controller can be enabled as a volume provisioner.
YAML usage:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold-sc
provisioner: kubernetes.io/photon-pd
parameters:
  flavor: persistent-disk-gold
```

The flavor "persistent-disk-gold" needs to be created by Photon platform admin before hand.
2016-11-09 00:10:22 -08:00
Kubernetes Submit Queue ee029d9f3f Merge pull request #33850 from ymqytw/add_e2e_test_for_kubectl_in_pod
Automatic merge from submit-queue

add e2e test for kubectl in a Pod

Add an e2e test to make sure kubectl can talk to the API server when it is mounted in a pod.
Fixes: #33138
2016-11-08 21:00:53 -08:00
Harry Zhang fad1990eaa Fix verify bazel
Remove rootfs and chroot in scripts
2016-11-08 13:01:28 -05:00
Miao Luo b22ccc6780 Support persistent volume on Photon Controller platform
1. Enable Photon Controller as cloud provider
2. Support Photon persistent disk as volume source/persistent volume
source
2016-11-08 09:36:16 -08:00
Maciej Szulik a4caa51056 Updated test_owners.csv 2016-11-08 16:28:46 +01:00
Kubernetes Submit Queue 866293b704 Merge pull request #33366 from rhcarvalho/execincontainer-timeout-argument
Automatic merge from submit-queue

Add timeout argument to ExecInContainer

**What this PR does / why we need it**: This is related to https://github.com/kubernetes/kubernetes/issues/26895. It brings a timeout to the signature of `ExecInContainer` so that we can take timeouts into account in the future. Unlike my first attempt in https://github.com/kubernetes/kubernetes/pull/27956, it doesn't immediately observe the timeout, because it is impossible to do it with the current state of the Docker Remote API (the default exec handler implementation).

**Special notes for your reviewer**: This shares commits with https://github.com/kubernetes/kubernetes/pull/27956, but without some of them that have more controversial implications (actually supporting the timeouts). The original PR shall be closed in the current state to preserve the history (instead of dropping commits in that PR).

Pinging the original people working on this change: @ncdc @sttts @vishh @dims 

**Release note**:

``` release-note
NONE
```
2016-11-08 01:41:19 -08:00
Kubernetes Submit Queue d8fa6a99a2 Merge pull request #36296 from nikhiljindal/cascDelFedSecret
Automatic merge from submit-queue

Adding cascading deletion support for federated secrets

Ref https://github.com/kubernetes/kubernetes/issues/33612

Adding cascading deletion support for federated secrets.
The code is the same as that for namespaces, just ensuring that the DeletionHelper functions are called at the right places in secret_controller.
Also added e2e tests.

cc @kubernetes/sig-cluster-federation @caesarxuchao

```release-note
federation: Adding support for DeleteOptions.OrphanDependents for federated secrets. Setting it to false while deleting a federated secret also deletes the corresponding secrets from all registered clusters.
```
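
A hedged sketch of triggering the cascading deletion against the federation apiserver; the server address and secret name are hypothetical, and authentication flags are omitted:
```sh
# Setting orphanDependents=false in DeleteOptions deletes the federated secret
# and the corresponding secrets in all registered clusters.
curl -X DELETE "https://federation-apiserver:443/api/v1/namespaces/default/secrets/mysecret" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","orphanDependents":false}'
```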
2016-11-07 22:15:08 -08:00
Kubernetes Submit Queue a0c34eee35 Merge pull request #33239 from MrHohn/dns-autoscaler
Automatic merge from submit-queue

Deploy kube-dns with cluster-proportional-autoscaler

This PR integrates [cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler) with kube-dns for DNS horizontal autoscaling. 

Fixes #28648 and #27781.
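
A minimal sketch of tuning the autoscaler, assuming the addon's ConfigMap is named `kube-dns-autoscaler` in `kube-system` and uses the linear-mode parameters from the incubator project (values are illustrative):
```sh
# The autoscaler watches this ConfigMap and resizes kube-dns accordingly.
kubectl -n kube-system create configmap kube-dns-autoscaler \
  --from-literal=linear='{"coresPerReplica":256,"nodesPerReplica":16,"min":1}'
```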
2016-11-07 19:31:31 -08:00
nikhiljindal 11ede23257 bazel changes 2016-11-07 11:43:00 -08:00
Zihong Zheng 452e6d8c11 Adds e2e tests for DNS horizontal autoscaling feature
The e2e tests cover cases such as cluster size changes, parameter changes,
ConfigMap deletion, and autoscaler pod deletion. They are separated into a
fast part (which can run in parallel) and a slow part (put in [Serial]).
The fast part of the e2e tests takes around 50 seconds to run.
2016-11-07 11:28:52 -08:00
Kubernetes Submit Queue 1866e1862e Merge pull request #36021 from soltysh/cronjobs
Automatic merge from submit-queue

Rename ScheduledJobs to CronJobs

I went with @smarterclayton's idea of registering named types in the schema. This way we can support both the new (CronJobs) and old (ScheduledJobs) resource names. Fixes #32150.

fyi @erictune @caesarxuchao @janetkuo 

Not ready yet, but getting close there...

**Release note**:
```release-note
Rename ScheduledJobs to CronJobs.
```
2016-11-07 07:12:17 -08:00
Rodolfo Carvalho 506129ba4e Add timeout argument to ExecInContainer
This allows us to interrupt/kill the executed command if it exceeds the
timeout (not implemented by this commit).

Set timeout in Exec probes. HTTPGet and TCPSocket probes respect the
timeout, while Exec probes used to ignore it.

Add e2e test for exec probe with timeout. However, the test is skipped
while the default exec handler doesn't support timeouts.
2016-11-07 13:00:59 +01:00
Kubernetes Submit Queue e6fadcbf4b Merge pull request #36283 from nikhiljindal/nscascdelTests
Automatic merge from submit-queue

Adding more e2e tests for federated namespace cascading deletion and fixing bugs

Ref https://github.com/kubernetes/kubernetes/issues/33612

Adding more e2e tests for testing cascading deletion of federated namespace.
New tests verify that cascading deletion happens when DeleteOptions.OrphanDependents=false and does not happen when DeleteOptions.OrphanDependents=true.

Also updated the deletion helper to always add the OrphanFinalizer; the generic registry will remove it if DeleteOptions.OrphanDependents=false. Also updated the namespace registry to do the same.

We need to add the orphan finalizer to keep the orphan-by-default behavior. We assume that the dependents are going to be orphaned and hence add that finalizer. If the user does not want the orphan behavior, they can say so using DeleteOptions, and the registry will then remove that finalizer.

cc @kubernetes/sig-cluster-federation @caesarxuchao @derekwaynecarr
2016-11-07 01:37:14 -08:00
Maciej Szulik 41d88d30dd Rename ScheduledJob to CronJob 2016-11-07 10:14:12 +01:00
Paulo Pires acf3f368bc
Added new userspace proxy mode specifically for Windows. 2016-11-07 09:11:35 +00:00
Kubernetes Submit Queue cc7070d5d8 Merge pull request #35583 from justinsb/replace_ratelimit
Automatic merge from submit-queue

Create simple version of ratelimit package

Allows for better testing.
2016-11-07 00:01:18 -08:00
Kubernetes Submit Queue c02a9c6aad Merge pull request #36080 from ncdc/lister-gen
Automatic merge from submit-queue

lister-gen updates

- Remove "zz_generated." prefix from generated lister file names
- Add support for expansion interfaces
- Switch to new generated JobLister

@deads2k @liggitt @sttts @mikedanese @caesarxuchao for the lister-gen changes
@soltysh @deads2k for the informer / job controller changes
2016-11-06 06:05:23 -08:00
Kubernetes Submit Queue 5e8b22fdcb Merge pull request #36013 from bowei/kubedns-logging
Automatic merge from submit-queue

Kubedns logging

Fixes https://github.com/kubernetes/kubernetes/issues/29053

May resolve https://github.com/kubernetes/kubernetes/issues/29054, but that depends on what the specific ask is.
2016-11-06 03:38:27 -08:00
Kubernetes Submit Queue 741ef71fa9 Merge pull request #36012 from jlowdermilk/cmd-auth-provider
Automatic merge from submit-queue

Add cmd support to gcp auth provider plugin

**What this PR does / why we need it**:

Adds the ability for the gcp auth provider plugin to get an access token by shelling out to an external command. We need this because, for GKE, kubectl should be using gcloud credentials. It currently uses Google application default credentials, which causes confusion if the user has configured both with different permissions (previously the two were almost always identical).

**Which issue this PR fixes**:
Addresses #35530 with a gcp-only solution, as a generic cmd plugin was deemed not useful for other providers.

**Special notes for your reviewer**:

The configuration options are there to support whatever future command gcloud provides for printing the access token of the active user. It also works with the existing command (`gcloud auth print-access-token`).
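
A hedged sketch of wiring the plugin to an external command via kubeconfig; the `cmd-path`/`cmd-args` key names are assumptions based on this PR, and the paths are illustrative:
```sh
# Configure the gcp auth provider to shell out to gcloud for tokens.
kubectl config set-credentials gke-user \
  --auth-provider=gcp \
  --auth-provider-arg=cmd-path=/usr/lib/google-cloud-sdk/bin/gcloud \
  --auth-provider-arg=cmd-args="auth print-access-token"
```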

```release-note
```
2016-11-06 01:45:48 -08:00
nikhiljindal dc1414fcb9 Autogenerated bazel and test owner changes 2016-11-05 20:57:58 -07:00
Kubernetes Submit Queue afa99c68b8 Merge pull request #35144 from pipejakob/generate-token
Automatic merge from submit-queue

New command: "kubeadm token generate"

As part of #33930, this PR adds a new top-level command to kubeadm to just generate a token for use with the init/join commands. Otherwise, users are left to either figure out how to generate a token on their own, or let `kubeadm init` generate a token, capture and parse the output, and then use that token for `kubeadm join`.

At this point, I was hoping for feedback on the CLI experience, and then I can add tests. I spoke with @mikedanese and he didn't like the original proposal of `kubeadm util generate-token`, so here are the runners-up:

```
$ kubeadm generate-token          # <--- current implementation
$ kubeadm generate token          # in case kubeadm might generate other things in the future?
$ kubeadm init --generate-token   # possibly as a subcommand of an existing one
```

Currently, the output is simply the token on one line without any padding/formatting:

```
$ kubeadm generate-token
1087fd.722b60cdd39b1a5f
```

CC: @kubernetes/sig-cluster-lifecycle 

**Release note**:

``` release-note
New kubeadm command: generate-token
```
2016-11-05 16:12:52 -07:00
Kubernetes Submit Queue 17fda0a135 Merge pull request #35806 from bdbauer/new_deletion
Automatic merge from submit-queue

Made changes to DELETE API to let v1.DeleteOptions be passed in as a queryParameter

**Which issue this PR fixes**: fixes #34856

```release-note
DELETE requests can now pass in their DeleteOptions as a query parameter or a body parameter, rather than just as a body parameter.
```
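
Illustrative only (server address, resource, and the particular fields are chosen for the example), the same options expressed as query parameters:
```sh
# Previously DeleteOptions could only be sent as a request body; after this
# change its fields can also be supplied in the query string.
curl -X DELETE \
  "https://localhost:6443/api/v1/namespaces/default/pods/mypod?gracePeriodSeconds=0&orphanDependents=false"
```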
2016-11-05 08:49:34 -07:00
Kubernetes Submit Queue a811515d34 Merge pull request #35691 from kargakis/controller-changes-for-perma-failed
Automatic merge from submit-queue

Controller changes for perma failed deployments

This PR adds support for reporting failed deployments based on a timeout
parameter defined in the spec. If there is no progress for the amount
of time defined as progressDeadlineSeconds then the deployment will be
marked as failed by a Progressing condition with a ProgressDeadlineExceeded
reason.
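
A minimal sketch, assuming an existing deployment named `nginx`, of setting the new timeout:
```sh
# Once the deadline passes without progress, the deployment gets a
# Progressing condition with reason ProgressDeadlineExceeded.
kubectl patch deployment nginx -p '{"spec":{"progressDeadlineSeconds":600}}'
```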

Follow-up to https://github.com/kubernetes/kubernetes/pull/19343

Docs at kubernetes/kubernetes.github.io#1337

Fixes https://github.com/kubernetes/kubernetes/issues/14519

@kubernetes/deployment @smarterclayton
2016-11-04 14:49:43 -07:00
Kubernetes Submit Queue 157b9279da Merge pull request #35635 from mwielgus/configmap-ctrl
Automatic merge from submit-queue

Federated ConfigMap controller

Based on the secrets controller. E2e tests will come in the next PR.

**Release note**:

``` release-note
Federated ConfigMap controller. Supports all the API that regular ConfigMap has.
```

cc: @quinton-hoole @kubernetes/sig-cluster-federation
2016-11-04 10:29:30 -07:00
Marcin d010d1d897 Autogen updates for configmap controller 2016-11-04 16:44:40 +01:00
deads2k 61673c4b39 make kubectl get generic with respect to objects 2016-11-04 09:04:57 -04:00
Michail Kargakis de8214ad4d test: e2e tests for perma-failed deployments 2016-11-04 13:36:46 +01:00
Kubernetes Submit Queue 929d3f74e8 Merge pull request #34645 from kargakis/rs-conditions-controller-changes
Automatic merge from submit-queue

Replica set conditions controller changes

Follow-up to https://github.com/kubernetes/kubernetes/pull/33905, partially addresses https://github.com/kubernetes/kubernetes/issues/32863.

@smarterclayton @soltysh @bgrant0607 @mfojtik I just need to add e2e tests
2016-11-04 04:21:10 -07:00
yarntime@163.com 5416c3133d add defaults test 2016-11-04 15:21:14 +08:00
Bowei Du a06fc6ab7a Adds TCPCloseWaitTimeout option to kube-proxy for sysctl nf_conntrack_tcp_timeout_time_wait
Fixes issue-32551
2016-11-03 22:07:02 -07:00
Andy Goldstein 8c923faf74 Switch to JobLister 2016-11-03 20:41:40 -04:00
Benjamin Bauer 76c3804859 Made changes to DELETE API to let v1.DeleteOptions be passed in as a QueryParameter 2016-11-03 15:53:04 -07:00
Kubernetes Submit Queue f0ca9fbd9e Merge pull request #35567 from mwielgus/allowed_disruptions_b2
Automatic merge from submit-queue

Switch DisruptionBudget api from bool to int allowed disruptions [only v1beta1]

Continuation of #34546. Apparently there is some bug that prevents us from having 2 different incompatible versions of the API in integration tests, so in this PR v1alpha1 is removed until the testing infrastructure is fixed.

Base PR comment:

Currently there is a single bool in the disruption budget API that denotes whether 1 pod can be deleted or not. Every time a pod is deleted the apiserver flips the bool to false, and the disruption budget controller sets it back to true if more deletions are allowed. This works, but it is far from optimal when the user wants to delete multiple pods (for example, by decreasing the replicaset size from 10000 to 8000).
This PR adds a new API version v1beta1 and changes the bool to an int that contains the number of pods that can be deleted at once.

cc: @davidopp @mml @wojtek-t @fgrzadkowski @caesarxuchao
2016-11-03 15:50:19 -07:00
Bowei Du d9557d4eaf kube-dns logging cleanup
--v=2 is low noise (records changes) and can be the default
--v=3 shows per-request logging

Note: due to the code path with which we integrate with
skydns, we don't see non-PILLAR_DOMAIN requests, so these
will never be logged.
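
Illustrative flag usage; the surrounding manifest and other flags are omitted, and values are taken from the typical kube-dns deployment rather than this commit:
```sh
# --v=2: low-noise default; --v=3: per-request logging.
kube-dns --domain=cluster.local --dns-port=10053 --v=3
```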
2016-11-03 12:38:07 -07:00
Kubernetes Submit Queue 758c56cc85 Merge pull request #35856 from madhusudancs/federation-kubefed-init-01
Automatic merge from submit-queue

[Federation] Add unit tests for `kubefed init`'s certificate generator.

Please review only the last commit here. This is based on PR #35594 which will be reviewed independently.

These are a subset of unit tests for code introduced in PR #35594

Design Doc: PR #34484

cc @kubernetes/sig-cluster-federation @quinton-hoole
2016-11-03 11:19:01 -07:00
Marcin 3872a47074 Autogenerated code and docs 2016-11-03 18:36:32 +01:00
Kubernetes Submit Queue 41b5fe86b6 Merge pull request #31546 from derekwaynecarr/systemd-pod-cgroups
Automatic merge from submit-queue

pod and qos level cgroup support

```release-note
[Kubelet] Add alpha support for `--cgroups-per-qos` using the configured `--cgroup-driver`. Disabled by default.
```
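
A hedged example of opting a kubelet into the feature (values are illustrative; the feature is alpha and off by default):
```sh
# Creates pod- and QoS-level cgroups using the configured cgroup driver.
kubelet --cgroups-per-qos=true --cgroup-driver=systemd
```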
2016-11-03 03:56:56 -07:00
Madhusudan.C.S 331ef53b69 [Federation][init-01] Add unit tests for `kubefed init`'s certificate generator. 2016-11-02 19:27:39 -07:00
Jeff Lowdermilk 283bb31ada Add cmd support to gcp auth provider plugin 2016-11-02 13:57:30 -07:00
Kubernetes Submit Queue df8db653da Merge pull request #35493 from madhusudancs/federation-kubefed-01
Automatic merge from submit-queue

[Federation][join-01] Implement `kubefed join` command.

Supersedes PR #35155.

Please review only the last commit here. This is based on PR #35492 which will be reviewed independently.

I will add a release note separately for this entire feature, so please don't worry too much about the release note here in the PR.

Design Doc: PR #34484

cc @kubernetes/sig-cluster-federation @quinton-hoole @mwielgus
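
A rough usage sketch; the cluster name, context name, and exact flag are assumptions rather than details taken from this PR:
```sh
# Joins an existing cluster to the federation control plane.
kubefed join mycluster --host-cluster-context=federation-host
```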
2016-11-02 10:35:55 -07:00
derekwaynecarr 42289c2758 pod and qos level cgroup support 2016-11-02 08:07:04 -04:00
Michail Kargakis 2491216222 Replica set/rc controller changes for Conditions 2016-11-02 10:30:09 +01:00
Kubernetes Submit Queue 49e7d640d9 Merge pull request #35235 from foxish/node-controller-no-force-deletion
Automatic merge from submit-queue

Node controller to not force delete pods

Fixes https://github.com/kubernetes/kubernetes/issues/35145

- [x] e2e tests to test Petset, RC, Job.
- [x] Remove and cover other locations where we force-delete pods within the NodeController.

**Release note**:

<!--  Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access) 
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`. 
-->

``` release-note
Node controller no longer force-deletes pods from the api-server.

* For StatefulSet (previously PetSet), this change means creation of replacement pods is blocked until old pods are definitely not running (indicated either by the kubelet returning from partitioned state, or deletion of the Node object, or deletion of the instance in the cloud provider, or force deletion of the pod from the api-server). This has the desirable outcome of "fencing" to prevent "split brain" scenarios.
* For all other existing controllers except StatefulSet, this has no effect on the ability of the controller to replace pods because the controllers do not reuse pod names (they use generate-name).
* User-written controllers that reuse names of pod objects should evaluate this change.
```
2016-11-01 20:08:57 -07:00
Madhusudan.C.S 2342f6eefb [Federation][join-01] Implement `kubefed join` command.
Also, add unit tests for `kubefed join`.
2016-11-01 12:45:28 -07:00
Anirudh 71941016c1 Fix old e2e tests, refactor and add new e2e tests. 2016-11-01 11:46:13 -07:00
Kubernetes Submit Queue 4eb1c2baa9 Merge pull request #35795 from deads2k/api-33-clean-master.go
Automatic merge from submit-queue

remove non-reusable bits of MasterServer

Scrub `master.go` again.  I think I'm pretty happy with this shape.  I may promote `InstallAPIs` since we're likely to want it downstream.
2016-11-01 06:50:23 -07:00
ymqytw 89cd3f9b84 add a gke-only e2e test for kubectl in a Pod 2016-10-31 16:05:16 -07:00
Jordan Liggitt 1a7f7c5399
Allow apiserver to choose preferred kubelet address type 2016-10-31 16:02:38 -04:00
Jacob Beacham cf6b6778dc Adding CLI tests for kubeadm. 2016-10-31 11:12:51 -07:00
deads2k 7e65d5693b remove non-reusable bits of MasterServer 2016-10-31 08:50:05 -04:00
Justin Santa Barbara cebfc821a4 Create simple version of ratelimit package
Allows for more testing.
2016-10-30 20:55:03 -04:00
Chao Xu 2044927034 move watch.ListWatchUntil to its own package to avoid future import cycle 2016-10-30 13:14:20 -07:00
Kubernetes Submit Queue 9e71a65335 Merge pull request #35326 from apprenda/kubeadm-unit-tests-pkg-preflight
Automatic merge from submit-queue

kubeadm: added unit test for app/preflight pkg

Added unit tests for the kubeadm/app/preflight package, testing the functionality of checks.go.

This PR is part of the ongoing effort to add tests (#35025)

/cc @pires @jbeda
2016-10-30 10:31:56 -07:00
Chao Xu 850729bfaf include multiple versions in clientset
update client-gen to use the term "internalversion" rather than "unversioned";
leave internal one unqualified;
cleanup client-gen
2016-10-29 13:30:47 -07:00
Kubernetes Submit Queue 3bda6884b8 Merge pull request #35432 from ncdc/protobuf-dash-package-names
Automatic merge from submit-queue

Convert - to _ for protobuf package names

Convert - to _ for protobuf package names to allow protobuf code generation
support for go packages that have - in their names.

@smarterclayton @deads2k @liggitt @sttts @lavalamp @nikhiljindal @kubernetes/sig-api-machinery
2016-10-29 13:25:04 -07:00
Kubernetes Submit Queue 4ec036c8af Merge pull request #35452 from deads2k/auth-02-front-proxy
Automatic merge from submit-queue

allow authentication through a front-proxy

This allows a front proxy to set a request header and have that be a valid `user.Info` in the authentication chain. To secure this power, a client certificate may be used to confirm the identity of the front proxy.
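
A hedged sketch of the corresponding kube-apiserver configuration; the flag names are assumptions based on this PR, and the file paths and values are illustrative:
```sh
# Trust identities asserted in X-Remote-User only from clients presenting a
# cert signed by the front-proxy CA with an allowed common name.
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User
```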

@kubernetes/sig-auth fyi
@erictune per-request
@liggitt you wrote the openshift one, ptal.
2016-10-29 07:52:09 -07:00
Paulo Pires 1157548e3c
kubeadm: added tests owner. 2016-10-29 09:40:13 -04:00
Kubernetes Submit Queue 58457daf63 Merge pull request #31652 from intelsdi-x/poc-opaque-int-resources
Automatic merge from submit-queue

[PHASE 1] Opaque integer resource accounting.

## [PHASE 1] Opaque integer resource accounting.

This change provides a simple way to advertise some amount of arbitrary countable resource for a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes. See the example at the bottom of the PR description for more info.

Summary of changes:

- Defines opaque integer resources as any resource with prefix `pod.alpha.kubernetes.io/opaque-int-resource-`.
- Prevent kubelet from overwriting capacity.
- Handle opaque resources in scheduler.
- Validate integer-ness of opaque int quantities in API server.
- Tests for above.

Feature issue: https://github.com/kubernetes/features/issues/76

Design: http://goo.gl/IoKYP1

Issues:

kubernetes/kubernetes#28312
kubernetes/kubernetes#19082

Related:

kubernetes/kubernetes#19080

CC @davidopp @timothysc @balajismaniam 

**Release note**:
```release-note
Added support for accounting opaque integer resources.

Allows cluster operators to advertise new node-level resources that would be
otherwise unknown to Kubernetes. Users can consume these resources in pod
specs just like CPU and memory. The scheduler takes care of the resource
accounting so that no more than the available amount is simultaneously
allocated to pods.
```

## Usage example

```sh
$ echo '[{"op": "add", "path": "pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```

```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 11 Aug 2016 16:44:55 GMT
Transfer-Encoding: chunked

{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "annotations": {
            "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "creationTimestamp": "2016-07-12T04:07:43Z",
        "labels": {
            "beta.kubernetes.io/arch": "amd64",
            "beta.kubernetes.io/os": "linux",
            "kubernetes.io/hostname": "localhost.localdomain"
        },
        "name": "localhost.localdomain",
        "resourceVersion": "12837",
        "selfLink": "/api/v1/nodes/localhost.localdomain/status",
        "uid": "2ee9ea1c-47e6-11e6-9fb4-525400659b2e"
    },
    "spec": {
        "externalID": "localhost.localdomain"
    },
    "status": {
        "addresses": [
            {
                "address": "10.0.2.15",
                "type": "LegacyHostIP"
            },
            {
                "address": "10.0.2.15",
                "type": "InternalIP"
            }
        ],
        "allocatable": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "capacity": {
            "alpha.kubernetes.io/nvidia-gpu": "0",
            "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
            "cpu": "2",
            "memory": "8175808Ki",
            "pods": "110"
        },
        "conditions": [
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient disk space available",
                "reason": "KubeletHasSufficientDisk",
                "status": "False",
                "type": "OutOfDisk"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-07-12T04:07:43Z",
                "message": "kubelet has sufficient memory available",
                "reason": "KubeletHasSufficientMemory",
                "status": "False",
                "type": "MemoryPressure"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:11Z",
                "message": "kubelet is posting ready status",
                "reason": "KubeletReady",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastHeartbeatTime": "2016-08-11T16:44:47Z",
                "lastTransitionTime": "2016-08-10T06:27:01Z",
                "message": "kubelet has no disk pressure",
                "reason": "KubeletHasNoDiskPressure",
                "status": "False",
                "type": "DiskPressure"
            }
        ],
        "daemonEndpoints": {
            "kubeletEndpoint": {
                "Port": 10250
            }
        },
        "images": [],
        "nodeInfo": {
            "architecture": "amd64",
            "bootID": "1f7e95ca-a4c2-490e-8ca2-6621ae1eb5f0",
            "containerRuntimeVersion": "docker://1.10.3",
            "kernelVersion": "4.5.7-202.fc23.x86_64",
            "kubeProxyVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "kubeletVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
            "machineID": "cac4063395254bc89d06af5d05322453",
            "operatingSystem": "linux",
            "osImage": "Fedora 23 (Cloud Edition)",
            "systemUUID": "D6EE0782-5DEB-4465-B35D-E54190C5EE96"
        }
    }
}
```

After patching, the kubelet's next sync fills in allocatable:

```
$ kubectl get node localhost.localdomain -o json | jq .status.allocatable
```

```json
{
  "alpha.kubernetes.io/nvidia-gpu": "0",
  "pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
  "cpu": "2",
  "memory": "8175808Ki",
  "pods": "110"
}
```

Create two pods, one that needs a single banana and another that needs a truck load:

```
$ kubectl create -f chimp.yaml
$ kubectl create -f superchimp.yaml
```

Inspect the scheduler result and pod status:

```
$ kubectl describe pods chimp
Name:           chimp
Namespace:      default
Node:           localhost.localdomain/10.0.2.15
Start Time:     Thu, 11 Aug 2016 19:58:46 +0000
Labels:         <none>
Status:         Running
IP:             172.17.0.2
Controllers:    <none>
Containers:
  nginx:
    Container ID:       docker://46ff268f2f9217c59cc49f97cc4f0f085d5ac0e251f508cc08938601117c0cec
    Image:              nginx:1.10
    Image ID:           docker://sha256:82e97a2b0390a20107ab1310dea17f539ff6034438099384998fd91fc540b128
    Port:               80/TCP
    Limits:
      cpu:                                      500m
      memory:                                   64Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   3
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   1
    State:                                      Running
      Started:                                  Thu, 11 Aug 2016 19:58:51 +0000
    Ready:                                      True
    Restart Count:                              0
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath           Type            Reason                  Message
  ---------     --------        -----   ----                            -------------           --------        ------                  -------
  9m            9m              1       {default-scheduler }                                    Normal          Scheduled               Successfully assigned chimp to localhost.localdomain
  9m            9m              2       {kubelet localhost.localdomain}                         Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Pulled                  Container image "nginx:1.10" already present on machine
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Created                 Created container with docker id 46ff268f2f92
  9m            9m              1       {kubelet localhost.localdomain} spec.containers{nginx}  Normal          Started                 Started container with docker id 46ff268f2f92
```

```
$ kubectl describe pods superchimp
Name:           superchimp
Namespace:      default
Node:           /
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  nginx:
    Image:      nginx:1.10
    Port:       80/TCP
    Requests:
      cpu:                                      250m
      memory:                                   32Mi
      pod.alpha.kubernetes.io/opaque-int-resource-bananas:   10Ki
    Volume Mounts:                              <none>
    Environment Variables:                      <none>
Conditions:
  Type          Status
  PodScheduled  False 
No volumes.
QoS Class:      Burstable
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  3m            1s              15      {default-scheduler }                    Warning         FailedScheduling        pod (superchimp) failed to fit in any node
fit failure on node (localhost.localdomain): Insufficient pod.alpha.kubernetes.io/opaque-int-resource-bananas
```
2016-10-28 22:25:18 -07:00
Connor Doyle b0421c1ba4 Updated test owners file. 2016-10-28 10:28:50 -07:00
Andy Goldstein 72cec547cd Convert - to _ for protobuf package names
Convert - to _ for protobuf package names to allow protobuf code generation
support for go packages that have - in their names.
2016-10-28 11:08:13 -04:00
deads2k 557e653785 add front proxy authenticator 2016-10-28 08:36:46 -04:00
bprashanth 37bc34c567 periodically GC pod ips 2016-10-27 22:15:35 -07:00
Janet Kuo e0252f9be0 Update test owners 2016-10-27 17:25:46 -07:00
Kubernetes Submit Queue bbe5fe327f Merge pull request #35650 from rmmh/verify-test-owners
Automatic merge from submit-queue

Add hack/verify-test-owners.sh to ensure tests always have owners.

This ensures that new tests or changed tests are assigned appropriate owners.
2016-10-27 16:46:50 -07:00
Ryan Hitchman 8e4e8944b6 Add hack/verify-test-owners.sh to ensure tests always have owners. 2016-10-27 12:35:43 -07:00
Piotr Szczesniak c0e55a9235 Swap in tests ownership 2016-10-26 09:34:16 +02:00
Ryan Hitchman 78eeb76386 Make hack/update_owners.py get list from local repo, add --check option. 2016-10-25 12:26:21 -07:00
Mik Vyatskov 10c0c9fa02 Self-assign logging tests 2016-10-20 15:36:11 +02:00
Lucas Käldström 0bba65ca1a Remove old references to contrib/mesos 2016-10-01 16:46:48 +03:00
Kubernetes Submit Queue 5af04d1dd1 Merge pull request #32876 from errordeveloper/more-cert-utils
Automatic merge from submit-queue

Refactor cert utils into one pkg, add funcs from bootkube for kubeadm to use

**What this PR does / why we need it**:

We have ended up with a rather incomplete and fragmented collection of utils for handling certificates. It may be worth considering using `cfssl` for doing all of these things, but for now there is some functionality that we need in `kubeadm` that we can borrow from bootkube. It makes sense to move the utils from bootkube into core, as discussed in #31221.

**Special notes for your reviewer**: I've taken the opportunity to review names of existing funcs and tried to make some improvements in that area (with help from @peterbourgon).

**Release note**:

```release-note
NONE
```
2016-09-22 01:29:46 -07:00
Maciej Szulik 9fe5a67c52 Add myself to job/scheduledjob tests 2016-09-19 17:06:29 +02:00
Ilya Dmitrichenko 386fae4592
Refactor utils that deal with certs
- merge `pkg/util/{crypto,certificates}`
- add funcs from `github.com/kubernetes-incubator/bootkube/pkg/tlsutil`
- ensure naming of funcs is fairly consistent
2016-09-19 09:03:42 +01:00
Kubernetes Submit Queue 83e887fae3 Merge pull request #32773 from kargakis/test-ownership
Automatic merge from submit-queue

test: add/remove myself from tests appropriately

Added/removed myself from tests and ran the python script that updates the csv

@fejta ptal
2016-09-16 21:14:58 -07:00
Mike Danese a765d59932 move informer and controller to pkg/client/cache
Signed-off-by: Mike Danese <mikedanese@google.com>
2016-09-15 12:50:08 -07:00
Michail Kargakis 69bb4e4c84 test: add/remove myself from tests appropriately 2016-09-15 12:27:05 +02:00
Kubernetes Submit Queue f2951a54f9 Merge pull request #30674 from ivan4th/add-e2e-tests-for-wrapped-volume-race
Automatic merge from submit-queue

Add e2e tests that check for wrapped volume race

This PR adds two new e2e tests that reproduce the race condition fixed in #29641 (see e.g. #29297)

In order to observe the race, you need to revert the PR that fixes it, via e.g.
```
git revert -n df1e925143
```
or
```
curl -sL https://github.com/kubernetes/kubernetes/pull/29641.patch | patch -p1 -R
```

The tests are `[Slow]` because they need to run several passes that involve creating pods with many volumes. They also are `[Serial]` because the load on the cluster may affect reproducibility of the race. They take about ~450s each when they fail on standard GCE cluster created by `go run hack/e2e.go -v --up`. `git_repo` test takes about 66s to run when it succeeds (fix PR not reverted) and `configmap` test takes about 546s in this case because configmap mounting is slower and still requires 3 passes x 5 pods x 50 configmap volumes to fail constantly with fix PR reverted. Probably these times can be reduced but frankly I've already spent quite a bit of time on tuning the numbers to find a balance between reproducibility and speed.

Managed to reproduce the problem in a more or less reliable way for `configMap` and `gitRepo` volumes. Tried to reproduce it for `secret` volumes too, but without success so far because they use the tmpfs-based `emptyDir` variety. For `downwardAPI` volumes I expect the same problems with race reproducibility as with `secret` volumes, although I think some e2e races were caused by the bug, e.g. #29633.

The tests operate by creating several pods (via an RC) with many volumes and waiting for them to become Running. It sets node affinity for pods so that they all get created on a single node (the first one in the node list). The race condition leads to volume mount failures with slow retries, thus causing the test to time out.

The test failures look like this:

configmap:
```
• Failure [435.547 seconds]
[k8s.io] Wrapped EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:709
  should not cause race condition when used for configmaps [Serial] [Slow] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/wrapped_empty_dir.go:170

  Failed waiting for pod wrapped-volume-race-8c097734-6376-11e6-9ffa-5254003793ad-acbtt to enter running state
  Expected error:
      <*errors.errorString | 0xc8201758d0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/wrapped_empty_dir.go:395
```
You'll see errors like this in kubelet log on the first node in the cluster:
```
E0816 00:27:23.319431    3510 configmap.go:174] Error creating atomic writer: stat /var/lib/kubelet/pods/e5986355-6347-11e6-a5d7-42010af00002/volumes/kubernetes.io~configmap/racey-configmap-14: no such file or directory
E0816 00:27:23.319478    3510 nestedpendingoperations.go:232] Operation for "\"kubernetes.io/configmap/e5986355-6347-11e6-a5d7-42010af00002-racey-configmap-14\" (\"e5986355-6347-11e6-a5d7-42010af00002\")" failed. No retries permitted until 2016-08-16 00:28:27.319450118 +0000 UTC (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kubernetes.io/configmap/e5986355-6347-11e6-a5d7-42010af00002-racey-configmap-14" (spec.Name: "racey-configmap-14") pod "e5986355-6347-11e6-a5d7-42010af00002" (UID: "e5986355-6347-11e6-a5d7-42010af00002") with: stat /var/lib/kubelet/pods/e5986355-6347-11e6-a5d7-42010af00002/volumes/kubernetes.io~configmap/racey-configmap-14: no such file or directory
```

git_repo:
```
• Failure [455.035 seconds]                                                                                                                                                                                                                           [0/1882]
[k8s.io] Wrapped EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:709
  should not cause race condition when used for git_repo [Serial] [Slow] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/wrapped_empty_dir.go:179

  Failed waiting for pod wrapped-volume-race-71b12b3d-6375-11e6-9ffa-5254003793ad-b0slz to enter running state
  Expected error:
      <*errors.errorString | 0xc8201758d0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/wrapped_empty_dir.go:395
```
Errors in kubelet log:
```
E0815 23:41:08.670203    3510 nestedpendingoperations.go:232] Operation for "\"kubernetes.io/git-repo/97636bd8-6341-11e6-a5d7-42010af00002-racey-git-repo-8\" (\"97636bd8-6341-11e6-a5d7-42010af00002\")" failed. No retries permitted until 2016-08-15 23:42:12.670181604 +0000 UTC (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kubernetes.io/git-repo/97636bd8-6341-11e6-a5d7-42010af00002-racey-git-repo-8" (spec.Name: "racey-git-repo-8") pod "97636bd8-6341-11e6-a5d7-42010af00002" (UID: "97636bd8-6341-11e6-a5d7-42010af00002") with: failed to exec 'git clone http://10.0.68.35:2345 test': : chdir /var/lib/kubelet/pods/97636bd8-6341-11e6-a5d7-42010af00002/volumes/kubernetes.io~git-repo/racey-git-repo-8: no such file or directory
```

Generally, the races cause unexpected "no such directory" errors in kubelet logs with subsequent volume mount failures.

I've added race tests to e2e test `empty_dir_wrapper.go` ("EmptyDir wrapper volumes"). This test was added in #18445, the same PR that introduced the race bug. The original purpose of the test was making sure that no conflicts occur between different wrapped emptyDir volumes, so I've replaced "should becomes" with "should not conflict" in the first `It(...)`.
2016-09-11 03:39:21 -07:00
Kubernetes Submit Queue 8780961e94 Merge pull request #32112 from soltysh/test_owners
Automatic merge from submit-queue

Updated test owners and assigned ScheduledJobs to soltysh

I've updated test owners by running `hack/update_owners.py` and assigned all ScheduledJob related issues to myself. 

@fejta ptal
2016-09-09 00:48:14 -07:00
Kubernetes Submit Queue ddcbdcb8c8 Merge pull request #31535 from aveshagarwal/master-e2e-downward-api-issues
Automatic merge from submit-queue

Fix downward api tests to output node allocatable not node capacity

@kubernetes/rh-cluster-infra @derekwaynecarr
2016-09-07 16:25:19 -07:00
Maciej Szulik ac1335c979 Updated test owners and assigned ScheduledJobs to soltysh 2016-09-06 11:38:57 +02:00
Ryan Hitchman 0c80bce7a7 Fix test owners for horizontal pod autoscaling. 2016-08-30 13:30:45 -07:00
Erick Fejta fdb085ff61 Add missing tests 2016-08-29 15:22:06 -07:00
Avesh Agarwal db74d4dbc2 Fix downward api tests to output node allocatable not node capacity 2016-08-26 16:13:24 -04:00
Erick Fejta 5c821c1fed Update test assignments 2016-08-19 18:43:40 -07:00
Ivan Shvedunov 8ff00d17d8 Add e2e tests that check for wrapped volume race
See #29641 for details.
2016-08-17 12:14:14 +03:00
Erick Fejta 17d91dd2ec Assign Probing Container tests to Random-Liu 2016-08-09 17:20:00 -07:00
Kubernetes Submit Queue 7da75631f6 Merge pull request #29956 from david-mcmahon/test_owners
Automatic merge from submit-queue

Remove myself from test ownership.

These are almost certainly not correct, but probably more likely owners than myself.
@rmmh @dchen1107 @timstclair @erictune @mtaufen @caesarxuchao @fgrzadkowski @krousey @lavalamp
2016-08-04 00:01:51 -07:00
David McMahon 3a88747ef8 Remove myself from test ownership. 2016-08-03 14:34:31 -07:00
gmarek f1167e9b9c Change the owner of JSON NodeAffinity test 2016-08-03 10:42:07 +02:00
Alex Robinson 0ed8fa5693 Give away my e2e tests. 2016-08-02 22:43:20 +00:00
Ryan Hitchman 5d53b3a686 Update test-owners with new tests, add catch-all assignment to
test-infra team.

A future update to the munger will use this to assign any flake without
an explicit owner to a member of the test-infra team.
2016-08-01 16:02:39 -07:00
Ryan Hitchman 616e938662 Address PR comments, randomly assign owners for new tests. 2016-07-06 13:22:53 -07:00
Ryan Hitchman 3d485098c3 Add test/test_owners.csv, for automatic assignment of test failures.
This file will be read by the munger -- see kubernetes/contrib#1264

This also includes a simple script to do minor automatic updates to the
CSV.
2016-07-01 17:39:14 -07:00