Commit Graph

32622 Commits (fe66c5c5fae7273c85d69b5d8fa3e64f0ea0ea88)

Author SHA1 Message Date
k8s-merge-robot fe66c5c5fa Merge pull request #29628 from dims/fix-kubectl-version-extract
Automatic merge from submit-queue

kubectl container - Extract version better

1. Use --client since -c is deprecated now
2. The command (./kubectl version --client | grep -o 'GitVersion:"[^"]*"')
   now returns:
      GitVersion:"v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty"
   so parse out the version better using sed

Related to #23708
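
For illustration, the same parsing idea as a Go sketch (the PR itself uses the shell pipeline above; the `./kubectl` path is an assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Run the same command the commit describes; the ./kubectl path is
	// an assumption for illustration.
	out, err := exec.Command("./kubectl", "version", "--client").Output()
	if err != nil {
		panic(err)
	}
	// Capture just the version inside GitVersion:"...", i.e. what the
	// grep | sed pipeline extracts.
	re := regexp.MustCompile(`GitVersion:"([^"]*)"`)
	if m := re.FindSubmatch(out); m != nil {
		fmt.Println(string(m[1])) // e.g. v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty
	}
}
```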
2016-07-27 21:07:14 -07:00
k8s-merge-robot 5427e8aa02 Merge pull request #29544 from lixiaobing10051267/masterFuncNote
Automatic merge from submit-queue

Function comment was copied from another function
2016-07-27 21:07:03 -07:00
k8s-merge-robot 40a6d68efb Merge pull request #29541 from lixiaobing10051267/masterTimeOut2
Automatic merge from submit-queue

Log message is wrong when waiting for wait.ForeverTestTimeout
2016-07-27 21:06:52 -07:00
k8s-merge-robot 1ae9b73cd3 Merge pull request #29673 from pmorie/mount-collision
Automatic merge from submit-queue

Fix mount collision timeout issue

Short- or medium-term workaround for #29555.  The root issue being fixed here is that the recent attach/detach work in the kubelet uses a unique volume name as a key that tracks the work that has to be done for each volume in a pod to attach/mount/umount/detach.  However, the non-attachable volume plugins do not report unique names for themselves, which causes collisions when a single secret or configmap is mounted multiple times in a pod.

This is still a WIP -- I need to add a couple of E2E tests that ensure tests break in the future if there is a regression -- but I am posting it for early review.

cc @kubernetes/sig-storage 

Ultimately, I would like to refine this a bit further.  A couple things I would like to change:

1.  `GetUniqueVolumeName` should be a property ONLY of attachable volumes
2.  I would like to see the kubelet apparatus for attach/mount/umount/detach handle non-attachable volumes specifically to avoid things like the `WaitForControllerAttach` call that has to be done for those volume types now
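
A minimal sketch of the naming idea from this PR (the exact function and format in the kubelet differ; this is illustrative only):

```go
package volumehelper

import "fmt"

// uniqueVolumeNameForNonAttachable sketches the fix described above:
// non-attachable plugins don't report a globally unique name, so derive one
// from the pod's identity plus the volume's name within the pod. The exact
// format here is an assumption, not the kubelet's actual scheme.
func uniqueVolumeNameForNonAttachable(pluginName, podUID, volumeName string) string {
	return fmt.Sprintf("%s/%s-%s", pluginName, podUID, volumeName)
}
```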
2016-07-27 21:06:47 -07:00
k8s-merge-robot 313272a22c Merge pull request #29508 from k82cn/add_whitspace
Automatic merge from submit-queue

Remove unnecessary empty line.
2016-07-27 19:52:47 -07:00
k8s-merge-robot 1f9c41dc3a Merge pull request #29495 from xiangpengzhao/fix-defer-fclose
Automatic merge from submit-queue

defer file.Close() in resource_printer.go
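
The pattern being applied, for reference (generic sketch, not the actual resource_printer.go diff):

```go
package printer

import "os"

// printFile shows the pattern: defer file.Close() immediately after a
// successful Open, so the descriptor is released on every return path,
// including early error returns.
func printFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// ... read from f and print ...
	return nil
}
```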
2016-07-27 19:19:34 -07:00
k8s-merge-robot e008087e0a Merge pull request #29457 from derekwaynecarr/service-node-port-quota-fix
Automatic merge from submit-queue

Quota was not counting services with multiple nodeports properly

```release-note
If a service of type node port declares multiple ports, quota on "services.nodeports" will charge for each port in the service.
```

Fixes https://github.com/kubernetes/kubernetes/issues/29456

/cc @kubernetes/rh-cluster-infra @sdminonne
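
A hedged sketch of the counting change (type and function names are illustrative, not the actual quota evaluator):

```go
package quota

// servicePort stands in for api.ServicePort; only the NodePort field matters.
type servicePort struct{ NodePort int32 }

// nodePortUsage charges quota per declared node port rather than one per
// service, which is the behavioral change in the release note above.
func nodePortUsage(ports []servicePort) int64 {
	var used int64
	for _, p := range ports {
		if p.NodePort != 0 {
			used++ // each port consumes one unit of "services.nodeports"
		}
	}
	return used
}
```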
2016-07-27 18:09:40 -07:00
k8s-merge-robot 75c93b4063 Merge pull request #29439 from matttproud/cleanups_volumeflocker
Automatic merge from submit-queue

volume/flocker: plug time.Ticker resource leak

This commit ensures that `flockerMounter.updateDatasetPrimary` does not leak
running `time.Ticker` instances.  Upon termination of the consuming routine, we
stop the tickers.

```release-note
* flockerMounter.updateDatasetPrimary no longer leaks running time.Ticker instances.
  Upon termination of the consuming routine, we stop the tickers.
```
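
The general shape of the fix (a minimal sketch, not the flocker plugin code):

```go
package flocker

import "time"

// pollUntilDone shows the leak-free shape: a time.Ticker owns a runtime
// timer, so it must be stopped when the consuming routine terminates.
func pollUntilDone(done <-chan struct{}, poll func()) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // without this, the ticker keeps running after return
	for {
		select {
		case <-ticker.C:
			poll()
		case <-done:
			return
		}
	}
}
```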
2016-07-27 17:18:34 -07:00
k8s-merge-robot ab7d039c81 Merge pull request #29388 from ronnielai/image-gc-check
Automatic merge from submit-queue

Avoid trying to GC images with no tags that are still in use

#29325
2016-07-27 16:44:50 -07:00
k8s-merge-robot 3301f6d14f Merge pull request #29356 from smarterclayton/init_containers
Automatic merge from submit-queue

LimitRanger and PodSecurityPolicy need to check more on init containers

Container limits were not applied to init containers, and HostPorts were not checked by PodSecurityPolicy.

@pweil- @derekwaynecarr
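
A sketch of the underlying pattern (stand-in types, not the real admission code): checks must walk init containers as well as regular containers:

```go
package admission

// container and podSpec are minimal stand-ins for the api types.
type container struct{ HostPorts []int32 }

type podSpec struct {
	InitContainers []container
	Containers     []container
}

// allContainers gathers both lists so a check (limits, host ports, ...)
// cannot silently skip init containers.
func allContainers(s podSpec) []container {
	out := make([]container, 0, len(s.InitContainers)+len(s.Containers))
	out = append(out, s.InitContainers...)
	return append(out, s.Containers...)
}
```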
2016-07-27 16:09:34 -07:00
k8s-merge-robot 78f7b32e66 Merge pull request #29693 from bprashanth/healthz_limits
Automatic merge from submit-queue

Give healthz more memory to mitigate #29688

This will recreate the rc but not the pods. At least on the clusters we patched, if the pods get recreated they'll come back up with the updated limits.
#29688
2016-07-27 15:34:49 -07:00
Paul Morie f15684e0c4 Add e2e coverage for mount collisions 2016-07-27 17:54:01 -04:00
Paul Morie c884297990 Fix collisions issues / timeouts for mounts
For non-attachable volumes, do not call GetVolumeName on the plugin and instead
generate a unique name based on the identity of the pod and the name of the volume
within the pod.
2016-07-27 17:53:50 -04:00
k8s-merge-robot e86b3f266c Merge pull request #29641 from ivan4th/fix-configmap-race
Automatic merge from submit-queue

Fix wrapped volume race

**EDIT:** now covers configmap, secret, downwardapi & git_repo volume plugins.

Fixes #29297.

wrappedVolumeSpec used by configMapVolumeMounter and
configMapVolumeUnmounter contained a pointer to api.Volume which was
being patched by NewWrapperMounter/NewWrapperUnmounter, causing race
condition during configmap volume mounts.

See https://github.com/kubernetes/kubernetes/issues/29297#issuecomment-235403806 for complete explanation.
The subtle bug was introduced by #18445. It can also affect other volume plugins utilizing the `wrappedVolumeSpec` technique; if this PR is correct/accepted, I will make more PRs for secrets etc. Although the tmpfs variety of the inner `emptyDir` volume appears to be less susceptible to this race, there's a chance it can fail too.

The errors produced by this race look like this:
```
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: I0719 17:05:21.854303   17097 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/configmap/foo-files" (spec.Name: "files") to pod "11786582-4dbf-11e6-9fc9-64cca009c636" (UID: "11786582-4dbf-11e6-9fc9-64cca009c636").
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: I0719 17:05:21.854842   17097 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/configmap/bar-files" (spec.Name: "files") to pod "117d2c22-4dbf-11e6-9fc9-64cca009c636" (UID: "117d2c22-4dbf-11e6-9fc9-64cca009c636").
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.860796   17097 configmap.go:171] Error creating atomic writer: stat /var/lib/kubelet/pods/117d2c22-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.861070   17097 goroutinemap.go:155] Operation for "kubernetes.io/configmap/bar-files" failed. No retries permitted until 2016-07-19 17:07:21.861036886 +0200 CEST (durationBeforeRetry 2m0s). error: MountVolume.SetUp failed for volume "kubernetes.io/configmap/bar-files" (spec.Name: "files") pod "117d2c22-4dbf-11e6-9fc9-64cca009c636" (UID: "117d2c22-4dbf-11e6-9fc9-64cca009c636") with: stat /var/lib/kubelet/pods/117d2c22-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.861271   17097 configmap.go:171] Error creating atomic writer: stat /var/lib/kubelet/pods/11786582-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.862284   17097 goroutinemap.go:155] Operation for "kubernetes.io/configmap/foo-files" failed. No retries permitted until 2016-07-19 17:07:21.862275753 +0200 CEST (durationBeforeRetry 2m0s). error: MountVolume.SetUp failed for volume "kubernetes.io/configmap/foo-files" (spec.Name: "files") pod "11786582-4dbf-11e6-9fc9-64cca009c636" (UID: "11786582-4dbf-11e6-9fc9-64cca009c636") with: stat /var/lib/kubelet/pods/11786582-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
```

Note "Error creating atomic writer" errors.
This problem can be reproduced by making kubelet mount multiple config map volumes in parallel.
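
The bug class in miniature (a hedged sketch, not the actual plugin code): sharing one spec pointer across concurrent mounts races, while copying the spec per mount does not:

```go
package wrapped

type volumeSpec struct{ Name string }

// Bug pattern: every mounter shares one *volumeSpec, and each mount
// "patches" it, so concurrent mounts race on the same memory.
func mountShared(shared *volumeSpec, name string) {
	shared.Name = name // data race under parallel mounts
}

// Fix pattern: each mounter gets its own copy of the spec to patch.
func mountCopied(template volumeSpec, name string) volumeSpec {
	spec := template // value copy; no pointer shared between goroutines
	spec.Name = name
	return spec
}
```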
2016-07-27 14:24:14 -07:00
Prashanth Balasubramanian 79d7519f67 Give healthz more memory to mitigate #29688 2016-07-27 12:22:36 -07:00
Daniel Smith 617b614e49 Merge pull request #29697 from kubernetes/revert-29284-hamaster-etcd
Revert "Modified etcd manifest to support clustering."
2016-07-27 12:03:38 -07:00
Daniel Smith fb3f02fb68 Revert "Modified etcd manifest to support clustering." 2016-07-27 12:03:21 -07:00
Dawn Chen 1aaea5fe09 Merge pull request #29687 from cjcullen/customuser
Fix potential unbound KUBE_USER variable in gci/trusty.
2016-07-27 11:31:12 -07:00
CJ Cullen 6d2c411757 Fix potential unbound KUBE_USER variable in gci/trusty. 2016-07-27 10:50:44 -07:00
Ron Lai 64981aaf46 Avoid trying to GC images with no tags that are still in use 2016-07-27 10:31:47 -07:00
Daniel Smith 6456d67bd0 Merge pull request #29676 from kubernetes/revert-29380-kill-setpgid
Revert "Fix killing child sudo process in e2e_node tests"
2016-07-27 09:56:34 -07:00
Erick Fejta 12d923ed15 Revert "Fix killing child sudo process in e2e_node tests" 2016-07-27 21:53:05 +05:30
k8s-merge-robot 03fe6b962c Merge pull request #29380 from bboreham/kill-setpgid
Automatic merge from submit-queue

Fix killing child sudo process in e2e_node tests

Fixes #29211.

The context is we are trying to kill a process started as `sudo kube-apiserver`, but `sudo` ignores signals from the same process group. Applying `Setpgid` means the `sudo kill` process won't be in the same process group, so will not fall foul of this nifty feature.

I also took the liberty of removing some code setting `Pdeathsig`, because it claims to be doing something in the same area but actually doesn't do that at all. The setting is applied to the forked process, i.e. `sudo`, and it means `sudo` will get killed if we (`e2e_node.test`) die. This (a) isn't what the comment says and (b) doesn't help, because sending SIGKILL to the sudo process leaves sudo's child alive.

I didn't use the "hack for linux-only" approach because I think `Setpgid` is available on all platforms that `e2e_node` builds on.
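
The mechanism, sketched (command and arguments are assumptions; the `Setpgid` pattern is the point):

```go
package e2enode

import (
	"os/exec"
	"strconv"
	"syscall"
)

// killWithSudo sketches the fix: run the kill command in its own process
// group via Setpgid, since sudo ignores signals that originate from its own
// process group.
func killWithSudo(pid int) error {
	cmd := exec.Command("sudo", "kill", "-TERM", strconv.Itoa(pid))
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	return cmd.Run()
}
```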
2016-07-27 03:18:15 -07:00
k8s-merge-robot 5b7f7e7bd3 Merge pull request #29365 from lixiaobing10051267/masterLen
Automatic merge from submit-queue

len(vmList) output format not correct

The output format of len(vmList) is not correct: len() returns an int, so the format verb should be "%d", not "%s".
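
The nature of the fix, as a generic illustration:

```go
package main

import "fmt"

func main() {
	vmList := []string{"vm-0", "vm-1"}
	// len() returns an int, so the verb must be %d; with %s the output
	// would contain the error marker %!s(int=2).
	fmt.Printf("vmList has %d entries\n", len(vmList))
}
```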
2016-07-27 02:41:58 -07:00
Ivan Shvedunov df1e925143 Fix wrapped volume race
This fixes race conditions in configmap, secret, downwardapi & git_repo
volume plugins.
wrappedVolumeSpec vars used by volume mounters and unmounters contained
a pointer to api.Volume structs which were being patched by
NewWrapperMounter/NewWrapperUnmounter, causing race condition during
volume mounts.
2016-07-27 12:24:46 +03:00
k8s-merge-robot 3a29863d36 Merge pull request #29284 from jszczepkowski/hamaster-etcd
Automatic merge from submit-queue

Modified etcd manifest to support clustering.
2016-07-27 02:00:09 -07:00
k8s-merge-robot 5064306808 Merge pull request #29254 from ping035627/ping035627-patch-0718
Automatic merge from submit-queue

Check the cloud isn't nil before using it in server.go

This PR adds a check that the cloud is not nil before using it, because cloudprovider.InitCloudProvider may return nil for the cloud.
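
The shape of the added guard (stand-in types; not the actual server.go):

```go
package app

import "fmt"

// cloudProvider stands in for cloudprovider.Interface; InitCloudProvider
// may legitimately return a nil provider with a nil error.
type cloudProvider interface{}

// useCloud shows the added guard: verify the provider is non-nil before use.
func useCloud(cloud cloudProvider) error {
	if cloud == nil {
		return fmt.Errorf("no cloud provider configured")
	}
	// ... safe to use cloud here ...
	return nil
}
```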
2016-07-27 01:24:21 -07:00
k8s-merge-robot 9045dfef8f Merge pull request #29249 from aveshagarwal/master-node-e2e-configmap-selinux-fix
Automatic merge from submit-queue

Fix ConfigMap related node e2e tests on selinux enabled systems

On SELinux-enabled systems, it might be necessary to relabel
/var/lib/kubelet; otherwise the following tests fail:

Summarizing 7 Failures:

```
[Fail] [k8s.io] ConfigMap [It] updates should be reflected in volume [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/configmap.go:131

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings as non-root [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volumpe [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume as non-root [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115
```
@kubernetes/rh-cluster-infra
2016-07-27 00:15:54 -07:00
k8s-merge-robot 540e992e08 Merge pull request #28850 from MHBauer/faster-test
Automatic merge from submit-queue

Faster test

In attempting to troubleshoot flakes with this test case I actually wanted to understand how it worked.
There are some poor comments that need work.
I added some additional output which may or may not help in debugging the flakes.
I doubt this fixes the flake.

My major concern is the 'refactor' I did of the test case to batch up runs by sub-test-case. As it stood there was a 200ms pause between each sub, so they should not have interfered with each other. Now they are just started as fast as possible, but only 20 run at a time before moving on to the next 20. I am not sure if I am violating the ethos of the original test case.

Runs on my computer are down from 2m40s -> 40s.
Getting rid of the arbitrary client limiting brings it down to ~12 seconds. 11 to fetch the image and <1 to actually run the tests against the proxies. I can add a zero to the number of loops if you want to hit it harder. It would result in 10x as much text output though.
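
The batching described above, as a generic sketch (the batch size of 20 comes from the PR text; names are illustrative):

```go
package proxytest

import "sync"

// runInBatches launches jobs batchSize at a time: each batch starts as fast
// as possible, but the next batch waits for the current one to drain.
func runInBatches(jobs []func(), batchSize int) {
	for i := 0; i < len(jobs); i += batchSize {
		end := i + batchSize
		if end > len(jobs) {
			end = len(jobs)
		}
		var wg sync.WaitGroup
		for _, job := range jobs[i:end] {
			wg.Add(1)
			go func(j func()) {
				defer wg.Done()
				j()
			}(job)
		}
		wg.Wait() // drain this batch before moving on to the next
	}
}
```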


2016-07-26 23:38:09 -07:00
k8s-merge-robot d897db4ac5 Merge pull request #28933 from smarterclayton/accept_content_types
Automatic merge from submit-queue

Use response content-type on restclient errors

Also allow a new AcceptContentTypes field to allow the client to ask for
a fallback serialization when getting responses from the server. This
allows a new client to ask for protobuf and JSON, falling back to JSON
when necessary.

The changes to request.go allow error responses from non-JSON servers to
be properly decoded.

@wojtek-t - also alters #28910 slightly (this is better output)
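
A hedged sketch of how a client might opt in (field names follow this description; the exact package path for the 1.4-era tree is an assumption):

```go
package client

import restclient "k8s.io/kubernetes/pkg/client/restclient"

// newConfig asks for protobuf first and lists JSON as the fallback, so error
// responses from non-protobuf servers can still be decoded.
func newConfig(host string) *restclient.Config {
	return &restclient.Config{
		Host: host,
		ContentConfig: restclient.ContentConfig{
			AcceptContentTypes: "application/vnd.kubernetes.protobuf, application/json",
			ContentType:        "application/vnd.kubernetes.protobuf",
		},
	}
}
```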
2016-07-26 22:56:53 -07:00
k8s-merge-robot 994239dc00 Merge pull request #28821 from colemickens/azure-cloudprovider-pr
Automatic merge from submit-queue

Add an Azure CloudProvider Implementation

This PR adds `Azure` as a cloudprovider provider for Kubernetes. It specifically adds support for native pod networking (via Azure User Defined Routes) and L4 Load Balancing (via Azure Load Balancers).

I did have to add `clusterName` as a parameter to the `LoadBalancers` methods. This is because Azure only allows one "LoadBalancer" object per set of backend machines. This means a single "LoadBalancer" object must be shared across the cluster. The "LoadBalancer" is named via the `cluster-name` parameter passed to `kube-controller-manager` so as to enable multiple clusters per resource group if the user desires such a configuration.

There are few things that I'm a bit unsure about:

1. The implementation of the `Instances` interface. It's not extensively documented, it's not really clear what the different functions are used for, and my questions on the ML didn't get an answer.

2. Counter to the comments on the `LoadBalancers` Interface, I modify the `api.Service` object in `EnsureLoadBalancerDeleted`, but not with the intention of affecting Kube's view of the Service. I simply do it so that I can remove the `Port`s on the `Service` object and then re-use my reconciliation logic that can handle removing stale/deleted Ports. 

3. The logging is a bit verbose. I'm looking for guidance on the appropriate log level to use for the chattier bits.

Due to the (current) lack of Instance Metadata Service and lack of Virtual Machine Identity in Azure, the user is required to do a few things to opt-in to this provider. These things are called-out as they are in contrast to AWS/GCE:

1. The user must provision an Azure Active Directory ServicePrincipal with `Contributor` level access to the resource group that the cluster is deployed in. This creation process is documented [by Hashicorp](https://www.packer.io/docs/builders/azure-setup.html) or [on the MSDN Blog](https://blogs.msdn.microsoft.com/arsen/2016/05/11/how-to-create-and-test-azure-service-principal-using-azure-cli/).

2. The user must place a JSON file somewhere on each Node that conforms to the `AzureConfig` struct defined in `azure.go`. (This is automatically done in the Azure flavor of [Kubernetes-Anywhere](https://github.com/kubernetes/kubernetes-anywhere).)

3. The user must specify `--cloud-config=/path/to/azure.json` as an option to `kube-apiserver` and `kube-controller-manager` similarly to how the user would need to pass `--cloud-provider=azure`.
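
For orientation, a hypothetical sketch of the kind of fields such a file carries (the real `AzureConfig` struct in `azure.go` is authoritative; these field names are assumptions):

```go
package azure

// azureConfigSketch is a hypothetical illustration of what azure.json might
// contain; consult azure.go for the actual struct and tags.
type azureConfigSketch struct {
	TenantID        string `json:"tenantId"`
	SubscriptionID  string `json:"subscriptionId"`
	AADClientID     string `json:"aadClientId"`
	AADClientSecret string `json:"aadClientSecret"`
	ResourceGroup   string `json:"resourceGroup"`
	Location        string `json:"location"`
}
```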

I've been running approximately this code for a month and a half. I only encountered one bug which has since been fixed and covered by a unit test. I've just deployed a new cluster (and a Type=LoadBalancer nginx Service) using this code (via `kubernetes-anywhere`) and have posted [the `kube-controller-manager` logs](https://gist.github.com/colemickens/1bf6a26e7ef9484a72a30b1fcf9fc3cb) for anyone who is interested in seeing the logs of the logic.

If you're interested in this PR, you can use the instructions in my [`azure-kubernetes-demo` repository](https://github.com/colemickens/azure-kubernetes-demo) to deploy a cluster with minimal effort via [`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere). (There is currently [a pending PR in `kubernetes-anywhere` that is needed](https://github.com/kubernetes/kubernetes-anywhere/pull/172) in conjunction with this PR). I also have a pre-built `hyperkube` image: `docker.io/colemickens/hyperkube-amd64:v1.4.0-alpha.0-azure`, which will be kept in sync with the branch this PR stems from.

I'm hoping this can land in the Kubernetes 1.4 timeframe.

CC (potential code reviewers from Azure): @ahmetalpbalkan @brendandixon @paulmey

CC (other interested Azure folk): @brendandburns @johngossman @anandramakrishna @jmspring @jimzim

CC (others who've expressed interest): @codefx9 @edevil @thockin @rootfs
2016-07-26 21:56:49 -07:00
k8s-merge-robot d82e404a00 Merge pull request #28351 from sttts/sttts-kubectl-create-quota
Automatic merge from submit-queue

Add support for kubectl create quota command

Follow-up of https://github.com/kubernetes/kubernetes/pull/19625

```
Create a resourcequota with the specified name, hard limits and optional scopes

Usage:
  kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=bool] [flags]

Aliases:
  quota, q


Examples:
  // Create a new resourcequota named my-quota
  $ kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10

  // Create a new resourcequota named best-effort
  $ kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
```
2016-07-26 21:20:04 -07:00
k8s-merge-robot 5a7b52b8d2 Merge pull request #26942 from xiangpengzhao/fix_testcase
Automatic merge from submit-queue

Fix panic in schema test

If the swagger files for testing are lost, the func `loadSchemaForTest` or `NewSwaggerSchemaFromBytes` will return a non-nil error and a nil schema. In this case, calling `ValidateBytes` will result in a panic. So, call Fatalf instead of Errorf.

Also fix minor typos.
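
The pattern in brief (generic test code with stand-in helpers, not the actual schema_test.go):

```go
package validation

import "testing"

// After a failed load, Fatalf stops the test immediately, while Errorf
// would record the failure but fall through and dereference the nil schema.
func TestValidateOk(t *testing.T) {
	schema, err := loadSchemaForTest()
	if err != nil {
		t.Fatalf("Failed to load: %v", err) // was t.Errorf -> nil deref below
	}
	if err := schema.ValidateBytes([]byte(`{}`)); err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}

// Stand-ins for the real helpers named in the PR.
type fakeSchema struct{}

func (*fakeSchema) ValidateBytes([]byte) error { return nil }

func loadSchemaForTest() (*fakeSchema, error) { return &fakeSchema{}, nil }
```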

Test logs:

```
--- FAIL: TestLoad (0.01s)
	schema_test.go:131: Failed to load: open ../../../api/swagger-spec/v1.json: no such file or directory
--- FAIL: TestValidateOk (0.00s)
	schema_test.go:138: Failed to load: open ../../../api/swagger-spec/v1.json: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x4d52df]

goroutine 10 [running]:
panic(0x15fffa0, 0xc8200100a0)
	/usr/local/go/src/runtime/panic.go:481 +0x3e6
testing.tRunner.func1(0xc820085a70)
	/usr/local/go/src/testing/testing.go:467 +0x192
panic(0x15fffa0, 0xc8200100a0)
	/usr/local/go/src/runtime/panic.go:443 +0x4e9
k8s.io/kubernetes/pkg/api/validation.TestValidateOk(0xc820085a70)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/api/validation/schema_test.go:159 +0x79f
testing.tRunner(0xc820085a70, 0x22aad68)
	/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
	/usr/local/go/src/testing/testing.go:582 +0x892
FAIL	k8s.io/kubernetes/pkg/api/validation	0.048s
```
2016-07-26 20:35:32 -07:00
k8s-merge-robot ffff1ab63c Merge pull request #28319 from grodrigues3/revert-comments-tLogf
Automatic merge from submit-queue

reverted the code from #23688 that caused a race condition with older versions of Go

```release-note
NONE
```


2016-07-26 19:56:47 -07:00
lixiaobing10051267 77f133dc84 Function comment was copied from another function
Delete the duplicated function comment
2016-07-27 10:15:18 +08:00
k8s-merge-robot b7e8791746 Merge pull request #29635 from derekwaynecarr/disable_test_namespace_autoprovision
Automatic merge from submit-queue

Disable flaky unit test in admission plugin in NamespaceAutoProvision

Ref: https://github.com/kubernetes/kubernetes/issues/29473

Disables the test until the full fix is resolved in https://github.com/kubernetes/kubernetes/pull/29634

This admission controller is not in our default set, but is flaking and is a p0.

/cc @ncdc @liggitt @hodovska
2016-07-26 16:14:02 -07:00
k8s-merge-robot a78c8e1635 Merge pull request #29602 from lixiaobing10051267/masterWebsite
Automatic merge from submit-queue

Redirect the website to new location in gpu-support.md

The website has been changed; the link should be redirected to the new location.
2016-07-26 15:34:18 -07:00
k8s-merge-robot b8e78b3310 Merge pull request #29558 from janetkuo/deployment-rollover-minreadyseconds-e2e
Automatic merge from submit-queue

Use nonexistent image instead of minReadySeconds in deployment rollover e2e test

Fixes #26834 

@kubernetes/deployment
2016-07-26 15:34:14 -07:00
k8s-merge-robot 92e22b424e Merge pull request #29592 from xiangpengzhao/add-fed-make-target
Automatic merge from submit-queue

Add rules for all directories in federation/cmd/

Federation-related targets are not included in the Makefile. Add them.
/cc @thockin 

BTW, `make help` is still WIP.
2016-07-26 15:01:27 -07:00
k8s-merge-robot c8d1ddfc80 Merge pull request #29586 from kubernetes/childsb-patch-1
Automatic merge from submit-queue

Update pull-requests.md to fix a typo

Fix the make target for `make test-integration`
2016-07-26 15:01:23 -07:00
k8s-merge-robot bc92126d20 Merge pull request #27700 from xiangpengzhao/fix_oncallusersupportlinks
Automatic merge from submit-queue

Fix broken links in on-call-user-support.md

Links in `Example response` are broken.
2016-07-26 15:01:18 -07:00
k8s-merge-robot 9014ceb9d8 Merge pull request #29286 from soltysh/wait_pod2
Automatic merge from submit-queue

Rework pod waiting mechanism in e2e tests to accept pod and watch based

This PR re-applies #28212 which was reverted in #29223. The only difference is that the initial PR also contained `PodStartTimeout` shortening (see [here](4b0c0bd924)), which might have caused the problems. Let's give it a 2nd try. I've tested all the flakes and they were passing on my machine.

@smarterclayton @apelisse ptal
2016-07-26 15:01:13 -07:00
Cole Mickens 2ebffb431d implement azure cloudprovider 2016-07-26 14:50:33 -07:00
Cole Mickens 6ad9dc659f add clusterName to Loadbalancer methods 2016-07-26 14:50:33 -07:00
Cole Mickens e31b8de2e1 vendor azure-sdk-for-go, go-autorest 2016-07-26 14:50:28 -07:00
Cole Mickens 6d9494eff4 godep: add azure-sdk-for-go, go-autorest 2016-07-26 14:50:16 -07:00
derekwaynecarr 09c97a2acc Disable flaky unit test in admission plugin in NamespaceAutoProvision 2016-07-26 17:36:14 -04:00
Jerzy Szczepkowski 827ee794d6 Modified etcd manifest to support clustering.
Modified etcd manifest to support clustering.
2016-07-26 23:24:14 +02:00
Davanum Srinivas ef0581d1e9 kubectl container - Extract version better
1. Use --client since -c is deprecated now
2. The command (./kubectl version --client | grep -o 'GitVersion:"[^"]*"')
   now returns:
      GitVersion:"v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty"
   so parse out the version better using sed

Related to #23708
2016-07-26 13:44:42 -04:00
Dr. Stefan Schimanski 199f991f6a Add --scopes to kubectl-create-quota and add tests 2016-07-26 14:12:35 +02:00