Commit Graph

32762 Commits (a82c55213bca6d97f9fe9a2b89dc49219e4412e6)

Author SHA1 Message Date
k8s-merge-robot 2480ef5f1d Merge pull request #28178 from mikedanese/cni-reload
Automatic merge from submit-queue

periodically reload the cni plugin configuration

Might fix #28787
2016-07-28 02:27:43 -07:00
k8s-merge-robot 5c6c8eb9a6 Merge pull request #29531 from AdoHe/rolling_update_panic
Automatic merge from submit-queue

fix kubectl rolling update panic caused by an empty file

 ```release-note
Fix issue with kubectl panicking when passed files that do not exist.
```

Fix #29398 
@pwittrock @justinsb ptal. This just fixes it at the cmd layer; I am wondering whether we should return an err from marshal & unmarshal if the reader is empty.
2016-07-28 01:54:56 -07:00
k8s-merge-robot 685fc92f65 Merge pull request #29492 from Random-Liu/bumpup-cadvisor-version
Automatic merge from submit-queue

Bump cadvisor dependencies to latest head. 

Fixes #28619 
Fixes #28997 

This is another try of https://github.com/kubernetes/kubernetes/pull/29153.

To update cadvisor godeps, we did:
* Bump up docker version to v1.11.2 for both cadvisor [https://github.com/google/cadvisor/pull/1388] and k8s.
* Bump up cadvisor's `go-systemd` version to be the same as k8s's [https://github.com/google/cadvisor/pull/1390]. Otherwise, the package `github.com/coreos/pkg/dlopen` would be removed by Godep, because it is used by the new `go-systemd` in k8s but not by the old `go-systemd` in cadvisor.
* Bump up runc version to be the same with docker v1.11.2 just in case.
* Add `github.com/Azure/go-ansiterm` dependency which is needed by docker v1.11.2.
* Change `pkg/util/term/`, because `SetWinsize` is removed from the Windows platform in docker v1.11.2. [The first commit]

@vishh 
/cc @ncdc for the `pkg/util/term` change.
2016-07-28 01:19:42 -07:00
Harry Zhang cb14b35bde Refactor util clock into its own pkg 2016-07-28 02:29:04 -04:00
k8s-merge-robot 1c72ba6810 Merge pull request #29520 from hongchaodeng/serr
Automatic merge from submit-queue

storage error: precondition failure should return invalid object error

The preconditions introduced by @caesarxuchao return a resource version conflict error when the precondition check fails. This is the wrong error to return; it should instead return an invalid object error. We need to separate these two types of errors.
See the implementation in etcd3 [https://github.com/kubernetes/kubernetes/blob/master/pkg/storage/etcd3/store.go#L467].

Also renames "ErrCodeResourceVersionConflicts" to "ErrCodeVersionConflicts" for easier reading.
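
A hedged sketch of the distinction in Go; the names here are illustrative, and the real constants and error type live in pkg/storage/errors.go:

```go
package storage

import "fmt"

// Illustrative error codes for the two failure modes discussed above.
const (
	ErrCodeVersionConflicts = iota // stale resourceVersion on update
	ErrCodeInvalidObj              // object failed a precondition check
)

// StorageError mirrors the rough shape of the storage package's error type.
type StorageError struct {
	Code int
	Key  string
	Msg  string
}

func (e *StorageError) Error() string {
	return fmt.Sprintf("StorageError: code=%d, key=%q: %s", e.Code, e.Key, e.Msg)
}

// checkPreconditions returns an invalid-object error, not a version
// conflict, when the stored object's UID fails the precondition.
func checkPreconditions(key, expectedUID, actualUID string) error {
	if expectedUID != "" && expectedUID != actualUID {
		msg := fmt.Sprintf("precondition failed: expected UID %s, got %s", expectedUID, actualUID)
		return &StorageError{Code: ErrCodeInvalidObj, Key: key, Msg: msg}
	}
	return nil
}
```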
2016-07-27 22:12:11 -07:00
k8s-merge-robot fe66c5c5fa Merge pull request #29628 from dims/fix-kubectl-version-extract
Automatic merge from submit-queue

kubectl container - Extract version better

1. Use --client, since -c is deprecated now.
2. The command (./kubectl version --client | grep -o 'GitVersion:"[^"]*"')
   now returns:
      GitVersion:"v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty"
   so parse out the version more robustly using sed (see the sketch below).

Related to #23708
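
For illustration, the same extraction done in Go with a regexp; the PR itself uses grep and sed, so this sketch is only an analogy:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample output line from `kubectl version --client`.
	out := `Client Version: version.Info{GitVersion:"v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty"}`

	// Equivalent of grep -o 'GitVersion:"[^"]*"' plus a sed step that
	// strips the field name and quotes, leaving only the version.
	re := regexp.MustCompile(`GitVersion:"([^"]*)"`)
	if m := re.FindStringSubmatch(out); m != nil {
		fmt.Println(m[1]) // v1.4.0-alpha.1.784+ed3a29bd6aeb98-dirty
	}
}
```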
2016-07-27 21:07:14 -07:00
k8s-merge-robot 5427e8aa02 Merge pull request #29544 from lixiaobing10051267/masterFuncNote
Automatic merge from submit-queue

Function comment is copied from another function
2016-07-27 21:07:03 -07:00
k8s-merge-robot 40a6d68efb Merge pull request #29541 from lixiaobing10051267/masterTimeOut2
Automatic merge from submit-queue

Log message is wrong when waiting on wait.ForeverTestTimeout
2016-07-27 21:06:52 -07:00
k8s-merge-robot 1ae9b73cd3 Merge pull request #29673 from pmorie/mount-collision
Automatic merge from submit-queue

Fix mount collision timeout issue

Short- or medium-term workaround for #29555.  The root issue being fixed here is that the recent attach/detach work in the kubelet uses a unique volume name as a key that tracks the work that has to be done for each volume in a pod to attach/mount/umount/detach.  However, the non-attachable volume plugins do not report unique names for themselves, which causes collisions when a single secret or configmap is mounted multiple times in a pod.

This is still a WIP -- I need to add a couple of E2E tests that will break in the future if there is a regression -- but I am posting it for early review.

cc @kubernetes/sig-storage 

Ultimately, I would like to refine this a bit further.  A couple things I would like to change:

1.  `GetUniqueVolumeName` should be a property ONLY of attachable volumes
2.  I would like to see the kubelet apparatus for attach/mount/umount/detach handle non-attachable volumes specifically to avoid things like the `WaitForControllerAttach` call that has to be done for those volume types now
2016-07-27 21:06:47 -07:00
k8s-merge-robot 313272a22c Merge pull request #29508 from k82cn/add_whitspace
Automatic merge from submit-queue

Remove unnecessary empty line.
2016-07-27 19:52:47 -07:00
k8s-merge-robot 1f9c41dc3a Merge pull request #29495 from xiangpengzhao/fix-defer-fclose
Automatic merge from submit-queue

defer file.Close() in resource_printer.go
2016-07-27 19:19:34 -07:00
Yu-Ju Hong 03d11bcf4e Add a dockershim package
Add a new docker integration with kubelet using the new runtime API.
This change adds the package with some skeletons, and implements some
of the basic operations.
2016-07-27 18:30:25 -07:00
Lantao Liu cb1b3c86d3 Bump cadvisor dependencies to latest head. 2016-07-27 18:29:19 -07:00
k8s-merge-robot e008087e0a Merge pull request #29457 from derekwaynecarr/service-node-port-quota-fix
Automatic merge from submit-queue

Quota was not counting services with multiple nodeports properly

```release-note
If a service of type node port declares multiple ports, quota on "services.nodeports" will charge for each port in the service.
```

Fixes https://github.com/kubernetes/kubernetes/issues/29456

/cc @kubernetes/rh-cluster-infra @sdminonne
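
A minimal sketch of the corrected charging rule, assuming a simplified Service shape rather than the real quota evaluator:

```go
package main

import "fmt"

// ServicePort and Service mirror just enough of the API types for the
// example; they are not the real definitions.
type ServicePort struct {
	Name     string
	NodePort int32
}

type Service struct {
	Type  string
	Ports []ServicePort
}

// nodePortUsage returns the charge against "services.nodeports":
// one unit per declared port of a NodePort service, not one per service.
func nodePortUsage(svc Service) int {
	if svc.Type != "NodePort" {
		return 0
	}
	return len(svc.Ports)
}

func main() {
	svc := Service{Type: "NodePort", Ports: []ServicePort{
		{Name: "http", NodePort: 30080},
		{Name: "https", NodePort: 30443},
	}}
	fmt.Println(nodePortUsage(svc)) // 2
}
```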
2016-07-27 18:09:40 -07:00
Lantao Liu 01a5ddd782 Not to use SetWinsize in windows 2016-07-27 17:22:30 -07:00
k8s-merge-robot 75c93b4063 Merge pull request #29439 from matttproud/cleanups_volumeflocker
Automatic merge from submit-queue

volume/flocker: plug time.Ticker resource leak

This commit ensures that `flockerMounter.updateDatasetPrimary` does not leak
running `time.Ticker` instances.  Upon termination of the consuming routine, we
stop the tickers.

```release-note
* flockerMounter.updateDatasetPrimary no longer leaks running time.Ticker instances.
  Upon termination of the consuming routine, we stop the tickers.
```
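
A self-contained sketch of the leak-free pattern the note describes; the flocker plugin's actual code differs:

```go
package main

import (
	"errors"
	"time"
)

// waitUntilPrimary polls on a ticker and, crucially, stops the ticker
// when the routine returns; an abandoned time.Ticker keeps running and
// is never reclaimed until stopped.
func waitUntilPrimary(isPrimary func() bool, timeout time.Duration) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop() // the fix: no running Ticker outlives this call

	deadline := time.After(timeout)
	for {
		select {
		case <-ticker.C:
			if isPrimary() {
				return nil
			}
		case <-deadline:
			return errors.New("timed out waiting for dataset to become primary")
		}
	}
}

func main() {
	_ = waitUntilPrimary(func() bool { return true }, time.Second)
}
```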
2016-07-27 17:18:34 -07:00
k8s-merge-robot ab7d039c81 Merge pull request #29388 from ronnielai/image-gc-check
Automatic merge from submit-queue

Avoid trying to GC images that have no tags but are still in use

#29325
2016-07-27 16:44:50 -07:00
k8s-merge-robot 3301f6d14f Merge pull request #29356 from smarterclayton/init_containers
Automatic merge from submit-queue

LimitRanger and PodSecurityPolicy need to check more on init containers

Container limits were not applied to init containers, and HostPorts were not checked by PodSecurityPolicy.

@pweil- @derekwaynecarr
2016-07-27 16:09:34 -07:00
k8s-merge-robot 78f7b32e66 Merge pull request #29693 from bprashanth/healthz_limits
Automatic merge from submit-queue

Give healthz more memory to mitigate #29688

This will recreate the rc but not the pods. At least on the clusters we patched, if the pods get recreated they'll come back up with the updated limits.
#29688
2016-07-27 15:34:49 -07:00
Paul Morie f15684e0c4 Add e2e coverage for mount collisions 2016-07-27 17:54:01 -04:00
Paul Morie c884297990 Fix collisions issues / timeouts for mounts
For non-attachable volumes, do not call GetVolumeName on the plugin and instead
generate a unique name based on the identity of the pod and the name of the volume
within the pod.
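
A sketch of the naming scheme; the helper name and format below are hypothetical stand-ins for the real volume util helper:

```go
package main

import "fmt"

// uniqueVolumeNameForNonAttachable derives a per-pod key so two pods
// mounting the same secret or configmap no longer collide on one key.
func uniqueVolumeNameForNonAttachable(pluginName, podUID, volumeName string) string {
	return fmt.Sprintf("%s/%s-%s", pluginName, podUID, volumeName)
}

func main() {
	// Two pods mounting the same configmap volume map to distinct keys,
	// so their mount/unmount work is tracked independently.
	fmt.Println(uniqueVolumeNameForNonAttachable("kubernetes.io/configmap", "pod-a-uid", "files"))
	fmt.Println(uniqueVolumeNameForNonAttachable("kubernetes.io/configmap", "pod-b-uid", "files"))
}
```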
2016-07-27 17:53:50 -04:00
Yu-Ju Hong 0ac247c6a7 Add kuberuntime.go 2016-07-27 14:34:30 -07:00
k8s-merge-robot e86b3f266c Merge pull request #29641 from ivan4th/fix-configmap-race
Automatic merge from submit-queue

Fix wrapped volume race

**EDIT:** now covers configmap, secret, downwardapi & git_repo volume plugins.

Fixes #29297.

wrappedVolumeSpec used by configMapVolumeMounter and
configMapVolumeUnmounter contained a pointer to api.Volume which was
being patched by NewWrapperMounter/NewWrapperUnmounter, causing race
condition during configmap volume mounts.

See https://github.com/kubernetes/kubernetes/issues/29297#issuecomment-235403806 for complete explanation.
The subtle bug was introduced by #18445; it can also affect other volume plugins utilizing the `wrappedVolumeSpec` technique. If this PR is correct/accepted, I will make more PRs for secrets etc. Although the tmpfs variety of the inner `emptyDir` volume appears to be less susceptible to this race, there's a chance it can fail too.

The errors produced by this race look like this:
```
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: I0719 17:05:21.854303   17097 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/configmap/foo-files" (spec.Name: "files") to pod "11786582-4dbf-11e6-9fc9-64cca009c636" (UID: "11786582-4dbf-11e6-9fc9-64cca009c636").
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: I0719 17:05:21.854842   17097 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/configmap/bar-files" (spec.Name: "files") to pod "117d2c22-4dbf-11e6-9fc9-64cca009c636" (UID: "117d2c22-4dbf-11e6-9fc9-64cca009c636").
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.860796   17097 configmap.go:171] Error creating atomic writer: stat /var/lib/kubelet/pods/117d2c22-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.861070   17097 goroutinemap.go:155] Operation for "kubernetes.io/configmap/bar-files" failed. No retries permitted until 2016-07-19 17:07:21.861036886 +0200 CEST (durationBeforeRetry 2m0s). error: MountVolume.SetUp failed for volume "kubernetes.io/configmap/bar-files" (spec.Name: "files") pod "117d2c22-4dbf-11e6-9fc9-64cca009c636" (UID: "117d2c22-4dbf-11e6-9fc9-64cca009c636") with: stat /var/lib/kubelet/pods/117d2c22-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.861271   17097 configmap.go:171] Error creating atomic writer: stat /var/lib/kubelet/pods/11786582-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
Jul 19 17:05:21 ubuntu1604 kubelet[17097]: E0719 17:05:21.862284   17097 goroutinemap.go:155] Operation for "kubernetes.io/configmap/foo-files" failed. No retries permitted until 2016-07-19 17:07:21.862275753 +0200 CEST (durationBeforeRetry 2m0s). error: MountVolume.SetUp failed for volume "kubernetes.io/configmap/foo-files" (spec.Name: "files") pod "11786582-4dbf-11e6-9fc9-64cca009c636" (UID: "11786582-4dbf-11e6-9fc9-64cca009c636") with: stat /var/lib/kubelet/pods/11786582-4dbf-11e6-9fc9-64cca009c636/volumes/kubernetes.io~configmap/files: no such file or directory
```

Note "Error creating atomic writer" errors.
This problem can be reproduced by making kubelet mount multiple config map volumes in parallel.
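
The shape of the fix, as a self-contained sketch with stand-in types: hand each caller a fresh copy instead of sharing one value whose inner pointer gets patched.

```go
package main

import "fmt"

// EmptyDirSource and Volume are stand-ins for the api types.
type EmptyDirSource struct{ Medium string }

type Volume struct {
	Name     string
	EmptyDir *EmptyDirSource
}

// Buggy shape: a single shared value; NewWrapperMounter patched the
// inner volume source through this pointer, so concurrent mounts raced.
var sharedWrappedVolumeSpec = Volume{Name: "wrapped", EmptyDir: &EmptyDirSource{}}

// Fixed shape: construct a fresh value per call, so no two mounts can
// observe or overwrite each other's patches.
func wrappedVolumeSpec() Volume {
	return Volume{Name: "wrapped", EmptyDir: &EmptyDirSource{}}
}

func main() {
	a, b := wrappedVolumeSpec(), wrappedVolumeSpec()
	a.EmptyDir.Medium = "Memory"
	fmt.Println(b.EmptyDir.Medium == "") // true: no shared state
	_ = sharedWrappedVolumeSpec
}
```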
2016-07-27 14:24:14 -07:00
Clayton Coleman 958d78cb10
Init container quota is inaccurate
Usage charged should be the greater of the largest init container and the
sum of all regular containers. Also, init container inputs need to be validated.
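
Sketching the charging rule in Go, simplified to a single resource expressed as a plain int64 (the real code works on full resource lists):

```go
package main

import "fmt"

// effectiveRequest returns what a pod should be charged for one
// resource (e.g. CPU millicores): init containers run sequentially,
// so only the largest one competes with the sum of regular containers.
func effectiveRequest(initContainers, containers []int64) int64 {
	var maxInit, sum int64
	for _, r := range initContainers {
		if r > maxInit {
			maxInit = r
		}
	}
	for _, r := range containers {
		sum += r
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}

func main() {
	fmt.Println(effectiveRequest([]int64{500}, []int64{100, 200})) // 500
	fmt.Println(effectiveRequest([]int64{250}, []int64{100, 200})) // 300
}
```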
2016-07-27 15:44:18 -04:00
Prashanth Balasubramanian 79d7519f67 Give healthz more memory to mitigate #29688 2016-07-27 12:22:36 -07:00
Daniel Smith 617b614e49 Merge pull request #29697 from kubernetes/revert-29284-hamaster-etcd
Revert "Modified etcd manifest to support clustering."
2016-07-27 12:03:38 -07:00
Daniel Smith fb3f02fb68 Revert "Modified etcd manifest to support clustering." 2016-07-27 12:03:21 -07:00
Dawn Chen 1aaea5fe09 Merge pull request #29687 from cjcullen/customuser
Fix potential unbound KUBE_USER variable in gci/trusty.
2016-07-27 11:31:12 -07:00
CJ Cullen 6d2c411757 Fix potential unbound KUBE_USER variable in gci/trusty. 2016-07-27 10:50:44 -07:00
Ron Lai 64981aaf46 Avoid trying to GC images that have no tags but are still in use 2016-07-27 10:31:47 -07:00
Mike Danese 792868c743 periodically reload the cni plugin config
Signed-off-by: Mike Danese <mikedanese@google.com>
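
A minimal sketch of such a reload loop, assuming a hypothetical syncNetworkConfig helper; this is not the plugin's actual code:

```go
package main

import (
	"log"
	"time"
)

// syncNetworkConfig is a hypothetical stand-in for re-scanning the CNI
// conf directory and swapping in the newly found default plugin.
func syncNetworkConfig() {
	log.Println("reloading CNI plugin configuration")
}

func main() {
	// Re-read the config on a fixed period so a plugin config dropped
	// in after kubelet start is eventually picked up.
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		syncNetworkConfig()
	}
}
```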
2016-07-27 10:07:52 -07:00
Daniel Smith 6456d67bd0 Merge pull request #29676 from kubernetes/revert-29380-kill-setpgid
Revert "Fix killing child sudo process in e2e_node tests"
2016-07-27 09:56:34 -07:00
Erick Fejta 12d923ed15 Revert "Fix killing child sudo process in e2e_node tests" 2016-07-27 21:53:05 +05:30
Wojciech Tyczynski a63cccfafc Cache pods with pod (anti)affinity constraints 2016-07-27 17:31:53 +02:00
Avesh Agarwal cb7766de19 Fix kubelet to not accept negative eviction (hard, soft) thresholds
and add unit tests
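
A small validation sketch under stated assumptions; the helper below is hypothetical, and the kubelet's real threshold parsing is more involved:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseThresholdPercentage parses a value like "10%" and rejects
// negative thresholds instead of silently accepting them.
func parseThresholdPercentage(val string) (float64, error) {
	pct, err := strconv.ParseFloat(strings.TrimSuffix(val, "%"), 64)
	if err != nil {
		return 0, err
	}
	if pct < 0 {
		return 0, fmt.Errorf("eviction threshold %q must be positive", val)
	}
	return pct / 100, nil
}

func main() {
	fmt.Println(parseThresholdPercentage("10%"))  // 0.1 <nil>
	fmt.Println(parseThresholdPercentage("-10%")) // 0 error
}
```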
2016-07-27 10:56:31 -04:00
Clayton Coleman d67187856f
No PetSet client in client/unversioned
Also add fakes
2016-07-27 10:08:58 -04:00
deads2k aa3db4d995 make the resource prefix in etcd configurable for cohabitation 2016-07-27 07:51:40 -04:00
k8s-merge-robot 03fe6b962c Merge pull request #29380 from bboreham/kill-setpgid
Automatic merge from submit-queue

Fix killing child sudo process in e2e_node tests

Fixes #29211.

The context is we are trying to kill a process started as `sudo kube-apiserver`, but `sudo` ignores signals from the same process group. Applying `Setpgid` means the `sudo kill` process won't be in the same process group, so will not fall foul of this nifty feature.

I also took the liberty of removing some code setting `Pdeathsig` because it claims to be doing something  in the same area, but actually it doesn't do that at all.  The setting is applied to the forked process, i.e. `sudo`, and it means the `sudo` will get killed if we (`e2e_node.test`) die.  This (a) isn't what the comment says and (b) doesn't help because sending SIGKILL to the sudo process leaves sudo's child alive.

I didn't use the "hack for linux-only" approach because I think `Setpgid` is available on all platforms that `e2e_node` builds on.
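
For illustration, the mechanism sketched in Go (Unix-only): start the child in its own process group via Setpgid, then signal the whole group with a negative PID so sudo's child dies too. This is not the e2e_node code itself:

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sudo", "sleep", "60")
	// Place the child in its own process group; sudo ignores signals
	// coming from its own process group, so we must not share one.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	time.Sleep(time.Second)

	// A negative PID signals the whole process group, reaching sudo's
	// child as well (plain SIGKILL to sudo would leave the child alive).
	if err := syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL); err != nil {
		log.Fatal(err)
	}
	cmd.Wait()
}
```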
2016-07-27 03:18:15 -07:00
k8s-merge-robot 5b7f7e7bd3 Merge pull request #29365 from lixiaobing10051267/masterLen
Automatic merge from submit-queue

len(vmList) output format is not correct

len(vmList) is an int, so the format verb should be "%d", not "%s".
2016-07-27 02:41:58 -07:00
Ivan Shvedunov df1e925143 Fix wrapped volume race
This fixes race conditions in configmap, secret, downwardapi & git_repo
volume plugins.
wrappedVolumeSpec vars used by volume mounters and unmounters contained
a pointer to api.Volume structs which were being patched by
NewWrapperMounter/NewWrapperUnmounter, causing race condition during
volume mounts.
2016-07-27 12:24:46 +03:00
k8s-merge-robot 3a29863d36 Merge pull request #29284 from jszczepkowski/hamaster-etcd
Automatic merge from submit-queue

Modified etcd manifest to support clustering.
2016-07-27 02:00:09 -07:00
k8s-merge-robot 5064306808 Merge pull request #29254 from ping035627/ping035627-patch-0718
Automatic merge from submit-queue

Check that the cloud isn't nil before using it in server.go

This PR adds a nil check for the cloud before using it, because cloudprovider.InitCloudProvider may return nil for the cloud.
2016-07-27 01:24:21 -07:00
k8s-merge-robot 9045dfef8f Merge pull request #29249 from aveshagarwal/master-node-e2e-configmap-selinux-fix
Automatic merge from submit-queue

Fix ConfigMap related node e2e tests on selinux enabled systems

On SELinux-enabled systems, it might be necessary to relabel
/var/lib/kubelet; otherwise the following tests fail:

Summarizing 7 Failures:

```
[Fail] [k8s.io] ConfigMap [It] updates should be reflected in volume [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e_node/configmap.go:131

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings as non-root [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volumpe [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115

[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume as non-root [Conformance]
/root/upstream-code/gocode/src/k8s.io/kubernetes/test/e2e/framework/util.go:2115
```
@kubernetes/rh-cluster-infra
2016-07-27 00:15:54 -07:00
k8s-merge-robot 540e992e08 Merge pull request #28850 from MHBauer/faster-test
Automatic merge from submit-queue

Faster test

In attempting to troubleshoot flakes with this test case I actually wanted to understand how it worked.
There are some poor comments that need work.
I added some additional output which may or may not help in debugging the flakes.
I doubt this fixes the flake.

My major concern is the 'refactor' I did of the test case to batch up runs by sub-test-case. As it stood there was a 200ms pause between each sub, so they should not have interfered with each other. Now they are just started as fast as possible, but only 20 run at a time before moving on to the next 20. I am not sure if I am violating the ethos of the original test case.

Runs on my computer are down from 2m40s -> 40s.
Getting rid of the arbitrary client limiting brings it down to ~12 seconds. 11 to fetch the image and <1 to actually run the tests against the proxies. I can add a zero to the number of loops if you want to hit it harder. It would result in 10x as much text output though.
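
The batching idea, sketched with a buffered-channel semaphore; the sub-case body is a stand-in for one request against the proxies:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const batchSize = 20 // at most 20 sub-cases in flight at once
	sem := make(chan struct{}, batchSize)
	var wg sync.WaitGroup

	for i := 0; i < 100; i++ {
		wg.Add(1)
		sem <- struct{}{} // blocks once batchSize runs are in flight
		go func(n int) {
			defer wg.Done()
			defer func() { <-sem }()
			fmt.Printf("sub-case %d\n", n) // stand-in for one proxy check
		}(i)
	}
	wg.Wait()
}
```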


2016-07-26 23:38:09 -07:00
Kevin 57d22a156c Remove redundant pod deletion in scheduler predicates tests and fix taints-tolerations e2e 2016-07-27 14:01:13 +08:00
k8s-merge-robot d897db4ac5 Merge pull request #28933 from smarterclayton/accept_content_types
Automatic merge from submit-queue

Use response content-type on restclient errors

Also add a new AcceptContentTypes field that lets the client ask for a
fallback serialization when getting responses from the server. This
allows a new client to ask for protobuf and JSON, falling back to JSON
when necessary.

The changes to request.go allow error responses from non-JSON servers to
be properly decoded.

@wojtek-t - also alters #28910 slightly (this is better output)
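
A hedged sketch of how a client might opt in; the field placement follows my reading of pkg/client/restclient and should be treated as an assumption:

```go
package main

import (
	"fmt"

	restclient "k8s.io/kubernetes/pkg/client/restclient"
)

func main() {
	cfg := &restclient.Config{
		Host: "https://127.0.0.1:6443",
		ContentConfig: restclient.ContentConfig{
			// Prefer protobuf, but accept JSON as a fallback from
			// servers (and error paths) that cannot speak protobuf.
			ContentType:        "application/vnd.kubernetes.protobuf",
			AcceptContentTypes: "application/vnd.kubernetes.protobuf,application/json",
		},
	}
	fmt.Println(cfg.AcceptContentTypes)
}
```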
2016-07-26 22:56:53 -07:00
k8s-merge-robot 994239dc00 Merge pull request #28821 from colemickens/azure-cloudprovider-pr
Automatic merge from submit-queue

Add an Azure CloudProvider Implementation

This PR adds `Azure` as a cloudprovider provider for Kubernetes. It specifically adds support for native pod networking (via Azure User Defined Routes) and L4 Load Balancing (via Azure Load Balancers).

I did have to add `clusterName` as a parameter to the `LoadBalancers` methods. This is because Azure only allows one "LoadBalancer" object per set of backend machines. This means a single "LoadBalancer" object must be shared across the cluster. The "LoadBalancer" is named via the `cluster-name` parameter passed to `kube-controller-manager` so as to enable multiple clusters per resource group if the user desires such a configuration.

There are few things that I'm a bit unsure about:

1. The implementation of the `Instances` interface. It's not extensively documented, it's not really clear what the different functions are used for, and my questions on the ML didn't get an answer.

2. Counter to the comments on the `LoadBalancers` Interface, I modify the `api.Service` object in `EnsureLoadBalancerDeleted`, but not with the intention of affecting Kube's view of the Service. I simply do it so that I can remove the `Port`s on the `Service` object and then re-use my reconciliation logic that can handle removing stale/deleted Ports. 

3. The logging is a bit verbose. I'm looking for guidance on the appropriate log level to use for the chattier bits.

Due to the (current) lack of Instance Metadata Service and lack of Virtual Machine Identity in Azure, the user is required to do a few things to opt-in to this provider. These things are called-out as they are in contrast to AWS/GCE:

1. The user must provision an Azure Active Directory ServicePrincipal with `Contributor` level access to the resource group that the cluster is deployed in. This creation process is documented [by Hashicorp](https://www.packer.io/docs/builders/azure-setup.html) or [on the MSDN Blog](https://blogs.msdn.microsoft.com/arsen/2016/05/11/how-to-create-and-test-azure-service-principal-using-azure-cli/).

2. The user must place a JSON file somewhere on each Node that conforms to the `AzureConfig` struct defined in `azure.go`. (This is automatically done in the Azure flavor of [Kubernetes-Anywhere](https://github.com/kubernetes/kubernetes-anywhere).)

3. The user must specify `--cloud-config=/path/to/azure.json` as an option to `kube-apiserver` and `kube-controller-manager` similarly to how the user would need to pass `--cloud-provider=azure`.

I've been running approximately this code for a month and a half. I only encountered one bug which has since been fixed and covered by a unit test. I've just deployed a new cluster (and a Type=LoadBalancer nginx Service) using this code (via `kubernetes-anywhere`) and have posted [the `kube-controller-manager` logs](https://gist.github.com/colemickens/1bf6a26e7ef9484a72a30b1fcf9fc3cb) for anyone who is interested in seeing the logs of the logic.

If you're interested in this PR, you can use the instructions in my [`azure-kubernetes-demo` repository](https://github.com/colemickens/azure-kubernetes-demo) to deploy a cluster with minimal effort via [`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere). (There is currently [a pending PR in `kubernetes-anywhere` that is needed](https://github.com/kubernetes/kubernetes-anywhere/pull/172) in conjunction with this PR). I also have a pre-built `hyperkube` image: `docker.io/colemickens/hyperkube-amd64:v1.4.0-alpha.0-azure`, which will be kept in sync with the branch this PR stems from.

I'm hoping this can land in the Kubernetes 1.4 timeframe.

CC (potential code reviewers from Azure): @ahmetalpbalkan @brendandixon @paulmey

CC (other interested Azure folk): @brendandburns @johngossman @anandramakrishna @jmspring @jimzim

CC (others who've expressed interest): @codefx9 @edevil @thockin @rootfs
2016-07-26 21:56:49 -07:00
k8s-merge-robot d82e404a00 Merge pull request #28351 from sttts/sttts-kubectl-create-quota
Automatic merge from submit-queue

Add support for kubectl create quota command

Follow-up of https://github.com/kubernetes/kubernetes/pull/19625

```
Create a resourcequota with the specified name, hard limits and optional scopes

Usage:
  kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=bool] [flags]

Aliases:
  quota, q


Examples:
  // Create a new resourcequota named my-quota
  $ kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10

  // Create a new resourcequota named best-effort
  $ kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
```
2016-07-26 21:20:04 -07:00
k8s-merge-robot 5a7b52b8d2 Merge pull request #26942 from xiangpengzhao/fix_testcase
Automatic merge from submit-queue

Fix panic in schema test

If the swagger files for testing are missing, the func `loadSchemaForTest` or `NewSwaggerSchemaFromBytes` will return a non-nil error and a nil schema. In this case, calling `ValidateBytes` will panic. So, call Fatalf instead of Errorf.

Also fix minor typos.
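
In miniature, why Fatalf is needed here: with Errorf the test keeps running into the nil dereference. The test below is a made-up stand-in for schema_test.go:

```go
package validation

import (
	"errors"
	"testing"
)

// Schema and loadSchemaForTest are simplified stand-ins for the real types.
type Schema struct{}

func (s *Schema) ValidateBytes(b []byte) error { return nil }

func loadSchemaForTest() (*Schema, error) {
	// Simulates the missing swagger spec file.
	return nil, errors.New("open ../../../api/swagger-spec/v1.json: no such file or directory")
}

func TestValidateOk(t *testing.T) {
	schema, err := loadSchemaForTest()
	if err != nil {
		// With Errorf the test would keep running and the next call
		// would dereference the nil schema and panic; Fatalf stops here.
		t.Fatalf("Failed to load: %v", err)
	}
	if err := schema.ValidateBytes([]byte(`{}`)); err != nil {
		t.Errorf("unexpected error: %v", err)
	}
}
```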

Test logs:

```
--- FAIL: TestLoad (0.01s)
	schema_test.go:131: Failed to load: open ../../../api/swagger-spec/v1.json: no such file or directory
--- FAIL: TestValidateOk (0.00s)
	schema_test.go:138: Failed to load: open ../../../api/swagger-spec/v1.json: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x4d52df]

goroutine 10 [running]:
panic(0x15fffa0, 0xc8200100a0)
	/usr/local/go/src/runtime/panic.go:481 +0x3e6
testing.tRunner.func1(0xc820085a70)
	/usr/local/go/src/testing/testing.go:467 +0x192
panic(0x15fffa0, 0xc8200100a0)
	/usr/local/go/src/runtime/panic.go:443 +0x4e9
k8s.io/kubernetes/pkg/api/validation.TestValidateOk(0xc820085a70)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/api/validation/schema_test.go:159 +0x79f
testing.tRunner(0xc820085a70, 0x22aad68)
	/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
	/usr/local/go/src/testing/testing.go:582 +0x892
FAIL	k8s.io/kubernetes/pkg/api/validation	0.048s
```
2016-07-26 20:35:32 -07:00
k8s-merge-robot ffff1ab63c Merge pull request #28319 from grodrigues3/revert-comments-tLogf
Automatic merge from submit-queue

reverted the code from #23688 that caused a race condition with older versions of Go

```release-note
* release-note-None
```


2016-07-26 19:56:47 -07:00