Commit Graph

48502 Commits (952ad3f4a749e60a5cf9b5a09bb6e3c4d1653c7f)

Author SHA1 Message Date
billy2180 952ad3f4a7 test/images/network-tester: bump rc/pod image version to 1.9 2017-05-22 17:11:23 +08:00
Kubernetes Submit Queue 3999de6d43 Merge pull request #46108 from Mashimiao/fix-missing-arg
Automatic merge from submit-queue

fix missing argument

Signed-off-by: Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>

**What this PR does / why we need it**:
The format string reads argument 3, but only 2 arguments are supplied.
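For illustration only (a hypothetical call, not the patched code): Go's `fmt` renders a verb with no matching argument as `%!verb(MISSING)` at runtime, and `go vet` flags the mismatch.

```go
package main

import "fmt"

func main() {
	name, got := "finishRequest", 2
	// Three verbs but only two arguments: the last %d prints as %!d(MISSING).
	fmt.Printf("%s expected %d results, got %d\n", name, got)
	// Fixed: supply the missing third argument.
	fmt.Printf("%s expected %d results, got %d\n", name, got, 3)
}
```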

```release-note
```
2017-05-22 01:15:53 -07:00
Kubernetes Submit Queue cdbfec3148 Merge pull request #43501 from xingzhou/kube-43126
Automatic merge from submit-queue

Use TabWriter to keep output of "kubectl get xx -w" aligned.

fixed #43126
2017-05-22 00:22:40 -07:00
Kubernetes Submit Queue 06c12e717a Merge pull request #46071 from emaildanwilson/fedClusterSelectorIntegration
Automatic merge from submit-queue

[Federation] ClusterSelector Integration Testing

This pull request adds integration testing for the federated ClusterSelector (ref: design #29887, merged pull #40234).

cc: @nikhiljindal @marun
2017-05-21 23:18:44 -07:00
Kubernetes Submit Queue f0e5a999e9 Merge pull request #46179 from zjj2wry/pvc_example
Automatic merge from submit-queue

fix typo

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-21 08:31:10 -07:00
Kubernetes Submit Queue c1f8fcd9fe Merge pull request #45496 from andyxning/fix_pleg_relist_time
Automatic merge from submit-queue

fix pleg relist time

This PR fixes the PLEG relist time. In the current implementation, a `Healthy` method periodically checks the relist time: if the current timestamp minus the latest relist time is longer than `relistThreshold` (default is 3 minutes), we return an error to indicate a runtime problem.

The `relist` method is also called periodically. If the runtime (docker) hangs, `relist` should return immediately without updating the latest relist time. If we updated the latest relist time regardless of whether the runtime (docker) hung (the default timeout is 2 minutes), the `Healthy` method would never return an error.
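The pattern described above boils down to the following sketch (names such as `relistThreshold` and `lastRelistTime` are illustrative, not the kubelet's exact code): the timestamp only advances after a successful relist, so a hung runtime eventually trips the health check.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

const relistThreshold = 3 * time.Minute

type pleg struct {
	lastRelistTime atomic.Value // holds time.Time
}

// Healthy reports an error if too much time has passed since the last
// successful relist, signalling that the runtime may be hung.
func (p *pleg) Healthy() (bool, error) {
	last := p.lastRelistTime.Load().(time.Time)
	if elapsed := time.Since(last); elapsed > relistThreshold {
		return false, fmt.Errorf("pleg was last seen active %v ago; threshold is %v", elapsed, relistThreshold)
	}
	return true, nil
}

// relist advances the timestamp only after the runtime call succeeds.
func (p *pleg) relist(listPods func() error) {
	if err := listPods(); err != nil {
		return // runtime hung or failed: keep the old timestamp
	}
	p.lastRelistTime.Store(time.Now())
}

func main() {
	p := &pleg{}
	p.lastRelistTime.Store(time.Now().Add(-5 * time.Minute)) // simulate a stale relist
	ok, err := p.Healthy()
	fmt.Println(ok, err)
}
```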

```release-note
Kubelet PLEG updates the relist timestamp only after successfully relisting.
```

/cc @yujuhong @Random-Liu @dchen1107
2017-05-21 04:17:14 -07:00
Kubernetes Submit Queue 39308b8980 Merge pull request #38959 from Gradiant/master
Automatic merge from submit-queue

Adapt loadbalancer deleting/updating when using cloudprovider openstack in openstack/liberty

**What this PR does / why we need it**:
Add an extra verification on the returned listeners and pools, because the gophercloud query doesn't filter the results by loadbalancerID / listenerID respectively when using **openstack/liberty**.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#33759 
**Special notes for your reviewer**:
#33759 is supposed to have a pull request that fixes this problem, but in release 1.5 load balancers don't use that patched code.
**Release note**:

NONE
```release-note
```
2017-05-21 03:22:02 -07:00
Kubernetes Submit Queue ba4c8b8db2 Merge pull request #46025 from billy2180/bump-netexec-pod-xml-to-1.7
Automatic merge from submit-queue

Bump e2e netexec pod.xml image version to 1.7

Changing the image version from 1.5 to 1.7
2017-05-21 02:27:05 -07:00
zhengjiajin 5ab80f8551 fix typo 2017-05-21 17:02:42 +08:00
Kubernetes Submit Queue 336fb2f508 Merge pull request #45933 from smarterclayton/secret_reuse
Automatic merge from submit-queue

Move the remaining controllers to shared informers

Completes work done in 1.6 to move the last two holdouts to shared informers - the tokens controller and the scheduler. Adds a few more tools to allow informer reuse (like filtering the informer, or maintaining a mutation cache).

The mutation cache is identical to #45838 and will be removed when that merges

@ncdc @deads2k extracted from openshift/origin#14086
2017-05-20 23:08:09 -07:00
Clayton Coleman ad720cc651
generated: bazel 2017-05-20 21:58:38 -04:00
Kubernetes Submit Queue 95ce463e95 Merge pull request #46020 from marun/fed-override-server-image-default
Automatic merge from submit-queue

[Federation][kubefed]: Move server image definition to cmd

This enables consumers like openshift to provide a different default without editing the kubefed init logic.

cc: @kubernetes/sig-federation-pr-reviews
2017-05-20 14:30:55 -07:00
Kubernetes Submit Queue cad2ed0960 Merge pull request #46080 from wojtek-t/prepare_for_periodic_runner
Automatic merge from submit-queue

Prepare for periodic runner

Rerevert https://github.com/kubernetes/kubernetes/pull/46004 with the fix.

For now, built on top of #45816
2017-05-20 13:39:27 -07:00
Clayton Coleman 479f01d340
Scheduler should use shared informer for pods
Previously, the scheduler created two separate list watchers. This
changes the scheduler to be able to leverage a shared informer, whether
passed in externally or spawned using the new in place method. This
removes the last use of a "special" informer in the codebase.

Allows someone wrapping the scheduler to use a shared informer if they
have more information available.
2017-05-20 14:19:49 -04:00
Clayton Coleman 784e3ae5fa
Switch the tokens controller to use shared informers
Tokens controller previously needed a bit of extra help in order to be
safe for concurrent use. The new MutationCache allows it to keep a local
cache and still use a shared informer. The filtering event handler lets
it only see changes to secrets it cares about.
2017-05-20 14:19:49 -04:00
Clayton Coleman 5ac3214c42
Mutation cache should support retrieving items from ByIndex()
Allows tokens controller to observe updates
2017-05-20 14:19:49 -04:00
Clayton Coleman 5439cfd245
Add a filtering resource handler for informers
Allows an informer consumer to easily filter a set of changes out,
possibly to maintain a smaller cache or to only operate on a known set
of objects.
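A hedged usage sketch of such a filtering handler: the `cache.FilteringResourceEventHandler` name and the import paths below reflect present-day client-go and may differ from this revision; the token-secret filter is illustrative of how the tokens controller could narrow what it sees.

```go
// Package tokenfilter sketches wiring a filtered handler onto a shared informer.
package tokenfilter

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

func addFilteredHandler(informer cache.SharedInformer) {
	informer.AddEventHandler(cache.FilteringResourceEventHandler{
		// FilterFunc decides which objects the wrapped handler ever sees,
		// keeping the handler (and any local cache it maintains) small.
		FilterFunc: func(obj interface{}) bool {
			secret, ok := obj.(*v1.Secret)
			return ok && secret.Type == v1.SecretTypeServiceAccountToken
		},
		Handler: cache.ResourceEventHandlerFuncs{
			AddFunc:    func(obj interface{}) { fmt.Println("token secret added") },
			UpdateFunc: func(old, new interface{}) { fmt.Println("token secret updated") },
			DeleteFunc: func(obj interface{}) { fmt.Println("token secret deleted") },
		},
	})
}
```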
2017-05-20 14:19:48 -04:00
Clayton Coleman 3e095d12b4
Refactor move of client-go/util/clock to apimachinery 2017-05-20 14:19:48 -04:00
Clayton Coleman 8013212db5
Move client-go/util/clock to apimachinery/pkg/util/clock
For reuse
2017-05-20 14:19:47 -04:00
Clayton Coleman bb8c00583a
Update consumers of LRUExpireCache 2017-05-20 14:19:47 -04:00
Clayton Coleman 8e1639a71b
Change LRUExpireCache to use hashicorp cache to expose Keys()
Removes the spawning of goroutines in the cache (which could be a
hotspot for anything in the critical path) as well.
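A hedged usage sketch of the cache after this change; the `Add`/`Get`/`Keys` signatures below are assumptions about the `LRUExpireCache` API rather than code verified against this revision. Entries expire lazily on access instead of via a background goroutine.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/cache"
)

func main() {
	// Bounded LRU cache whose entries carry a TTL; no goroutine is spawned.
	c := cache.NewLRUExpireCache(100)
	c.Add("token-a", "secret-a", time.Minute)
	c.Add("token-b", "secret-b", time.Minute)

	if v, ok := c.Get("token-a"); ok {
		fmt.Println("hit:", v)
	}
	// Keys() is the newly exposed enumeration a consumer can iterate over.
	fmt.Println("cached keys:", c.Keys())
}
```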
2017-05-20 14:19:47 -04:00
Clayton Coleman 529e627c8a
Move pkg/util/cache to apimachinery
Will be used by client-go as well
2017-05-20 14:19:46 -04:00
Kubernetes Submit Queue a8bff0ed9a Merge pull request #45836 from mbohlool/openapi_pb
Automatic merge from submit-queue

Add protobuf binary version of OpenAPI spec

Fixes #45833
Partially fixes #42841

```release-note
OpenAPI spec is now available in protobuf binary and gzip format (with ETag support)
```
2017-05-20 11:01:04 -07:00
Wojciech Tyczynski 7ba30afbed Fix codestyle 2017-05-20 18:46:29 +02:00
Wojciech Tyczynski 758c9666e5 Call syncProxyRules when really needed and remove reasons 2017-05-20 18:46:28 +02:00
Kubernetes Submit Queue 8b706690fb Merge pull request #45816 from wojtek-t/no_early_return_sync_proxy_rules
Automatic merge from submit-queue

Change kube-proxy so that we won't perform no-op calls
2017-05-20 09:06:59 -07:00
Wojciech Tyczynski c0c41aa083 Check whether service changed 2017-05-20 14:22:56 +02:00
Wojciech Tyczynski 05ffcccdc1 Check whether endpoints change 2017-05-20 14:22:07 +02:00
Wojciech Tyczynski 37a6989c79 Cleanup iptables proxier 2017-05-20 14:17:03 +02:00
Kubernetes Submit Queue 2c2b5f7379 Merge pull request #45085 from sttts/sttts-aggregator-upgrade
Automatic merge from submit-queue

kube-apiserver: check upgrade header to detect upgrade connections

Without this, every connection with a "Connection" header but without an upgrade request is rejected. A simple curl will set "Connection" but does not intend to upgrade.
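A minimal sketch of the distinction (a hypothetical helper, not the apiserver's exact check): a request only counts as an upgrade when the `Connection` header carries the `Upgrade` token *and* an `Upgrade` header names a protocol.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isUpgradeRequest is a hypothetical helper mirroring the check described above.
func isUpgradeRequest(req *http.Request) bool {
	if req.Header.Get("Upgrade") == "" {
		return false // a plain curl: Connection header alone is not an upgrade
	}
	for _, v := range req.Header.Values("Connection") {
		for _, token := range strings.Split(v, ",") {
			if strings.EqualFold(strings.TrimSpace(token), "upgrade") {
				return true
			}
		}
	}
	return false
}

func main() {
	req, _ := http.NewRequest("GET", "https://example-apiserver/api", nil)
	req.Header.Set("Connection", "keep-alive") // what a simple curl might send
	fmt.Println(isUpgradeRequest(req))         // false: must not be rejected

	req.Header.Set("Connection", "Upgrade")
	req.Header.Set("Upgrade", "SPDY/3.1")
	fmt.Println(isUpgradeRequest(req)) // true: genuine upgrade request
}
```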
2017-05-20 02:08:00 -07:00
Kubernetes Submit Queue 03495f12c3 Merge pull request #46152 from shashidharatd/federation
Automatic merge from submit-queue (batch tested with PRs 46014, 46152)

Updated test/test_owners.csv for federation test cases

To the best of my knowledge, I have updated the test owners for the federation e2e test cases. PTAL and comment if there are any concerns.

**Release note**:
```release-note
NONE
```
cc @kubernetes/sig-federation-pr-reviews @fejta 
/assign @madhusudancs
2017-05-20 00:59:27 -07:00
Kubernetes Submit Queue bdeac66adc Merge pull request #46014 from YuPengZTE/devFinishRequest
Automatic merge from submit-queue (batch tested with PRs 46014, 46152)

format reads arg 3, have only 2 args, add i

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-20 00:59:26 -07:00
Kubernetes Submit Queue 8fe818b2a1 Merge pull request #45981 from fabianofranz/kubectl_plugins_v1_part1
Automatic merge from submit-queue (batch tested with PRs 46033, 46122, 46053, 46018, 45981)

Command tree and exported env in kubectl plugins

This is part of `kubectl` plugins V1:
- Adds support for several env vars that pass context information to the plugin. Plugins can use them to connect to the REST API, access global flags, get the path of the plugin caller (so that `kubectl` can be invoked), and so on. Exported env vars include (a usage sketch follows this list):
  - `KUBECTL_PLUGINS_DESCRIPTOR_*`: the plugin descriptor fields
  - `KUBECTL_PLUGINS_GLOBAL_FLAG_*`: one for each global flag, useful to access namespace, context, etc
  - ~`KUBECTL_PLUGINS_REST_CLIENT_CONFIG_*`: one for most fields in `rest.Config` so that a REST client can be built.~
  - `KUBECTL_PLUGINS_CALLER`: path to `kubectl`
  - `KUBECTL_PLUGINS_CURRENT_NAMESPACE`: namespace in use
- Adds support for plugins as child of other plugins so that a tree of commands can be built (e.g. `kubectl myplugin list`, `kubectl myplugin add`, etc)
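
For illustration, a plugin binary might consume the exported variables like this (a hedged sketch; only the `KUBECTL_PLUGINS_*` names listed above come from the PR):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Context exported by kubectl into every plugin's environment.
	namespace := os.Getenv("KUBECTL_PLUGINS_CURRENT_NAMESPACE")
	caller := os.Getenv("KUBECTL_PLUGINS_CALLER")

	fmt.Printf("operating in namespace %q\n", namespace)

	// A plugin can shell back out to kubectl via the caller path.
	out, err := exec.Command(caller, "get", "pods", "--namespace", namespace).CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
```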

**Release note**:

```release-note
Added support to a hierarchy of kubectl plugins (a tree of plugins as children of other plugins).

Added exported env vars to kubectl plugins so that plugin developers have access to global flags, namespace, the plugin descriptor and the full path to the caller binary.
```
@kubernetes/sig-cli-pr-reviews
2017-05-19 23:29:32 -07:00
Kubernetes Submit Queue af5d057339 Merge pull request #46018 from YuPengZTE/devBaseCommand
Automatic merge from submit-queue (batch tested with PRs 46033, 46122, 46053, 46018, 45981)

ineffectual assignment to baseCommand, delete it

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-19 23:29:30 -07:00
Kubernetes Submit Queue 112ed869c7 Merge pull request #46053 from dashpole/test_eviction_metrics
Automatic merge from submit-queue (batch tested with PRs 46033, 46122, 46053, 46018, 45981)

Log age of stats used for evictions during eviction tests

I recently added prometheus metrics for the age of the metrics used for evictions #43031.  It would be nice to surface these during eviction tests, so I can better assess how old stats are, and whether or not the age of stats causes extra evictions.

This isn't super-high priority and can be done after code freeze, since it is a testing improvement. Feel free to take a look whenever either of you has time.

/assign @mtaufen 
/assign @Random-Liu
2017-05-19 23:29:28 -07:00
Kubernetes Submit Queue 078d820de9 Merge pull request #46122 from ncdc/kube-proxy-version-flag
Automatic merge from submit-queue (batch tested with PRs 46033, 46122, 46053, 46018, 45981)

Restore kube-proxy --version

Accidentally removed by #34727.

Fixes #46026
2017-05-19 23:29:26 -07:00
Kubernetes Submit Queue 3456d4d239 Merge pull request #46033 from wojtek-t/reduce_memory_allocations_in_kube_proxy
Automatic merge from submit-queue

Reduce memory allocations in kube proxy

Memory allocation (and Go garbage collection) seems to be one of the most expensive operations in kube-proxy (I've seen profiles where it accounted for more than 50%).

The commits are mostly independent from each other and all of them are mostly about reusing already allocated memory.
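The general technique is sketched below (illustrative only, not the PR's actual code): keep a scratch buffer alive across sync passes and `Reset()` it rather than rebuilding strings and slices each time, so later passes reuse the already-grown backing array.

```go
package main

import (
	"bytes"
	"fmt"
)

// proxier keeps scratch buffers for the lifetime of the process so that each
// sync pass reuses the same backing array instead of reallocating it.
type proxier struct {
	natRules bytes.Buffer
}

func (p *proxier) syncProxyRules(chains []string) []byte {
	p.natRules.Reset() // reuse the previously grown buffer
	for _, c := range chains {
		p.natRules.WriteString("-A ")
		p.natRules.WriteString(c)
		p.natRules.WriteByte('\n')
	}
	return p.natRules.Bytes()
}

func main() {
	p := &proxier{}
	for i := 0; i < 3; i++ {
		rules := p.syncProxyRules([]string{"KUBE-SERVICES", "KUBE-NODEPORTS"})
		fmt.Printf("pass %d: %d bytes written, buffer cap %d\n", i, len(rules), p.natRules.Cap())
	}
}
```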

This PR reduces memory allocation by ~5x (results below from a 100-node load test):

before:
```
(pprof) top
38.64GB of 39.11GB total (98.79%)
Dropped 249 nodes (cum <= 0.20GB)
Showing top 10 nodes out of 61 (cum >= 0.20GB)
      flat  flat%   sum%        cum   cum%
   15.10GB 38.62% 38.62%    15.10GB 38.62%  bytes.makeSlice
    9.48GB 24.25% 62.87%     9.48GB 24.25%  runtime.rawstringtmp
    8.30GB 21.21% 84.07%    32.47GB 83.02%  k8s.io/kubernetes/pkg/proxy/iptables.(*Proxier).syncProxyRules
    2.08GB  5.31% 89.38%     2.08GB  5.31%  fmt.(*fmt).padString
    1.90GB  4.86% 94.24%     3.82GB  9.77%  strings.Join
    0.67GB  1.72% 95.96%     0.67GB  1.72%  runtime.hashGrow
    0.36GB  0.92% 96.88%     0.36GB  0.92%  runtime.stringtoslicebyte
    0.31GB  0.79% 97.67%     0.62GB  1.58%  encoding/base32.(*Encoding).EncodeToString
    0.24GB  0.62% 98.29%     0.24GB  0.62%  strings.genSplit
    0.20GB   0.5% 98.79%     0.20GB   0.5%  runtime.convT2E
```

after:
```
7.94GB of 8.13GB total (97.75%)
Dropped 311 nodes (cum <= 0.04GB)
Showing top 10 nodes out of 65 (cum >= 0.11GB)
      flat  flat%   sum%        cum   cum%
    3.32GB 40.87% 40.87%     8.05GB 99.05%  k8s.io/kubernetes/pkg/proxy/iptables.(*Proxier).syncProxyRules
    2.85GB 35.09% 75.95%     2.85GB 35.09%  runtime.rawstringtmp
    0.60GB  7.41% 83.37%     0.60GB  7.41%  runtime.hashGrow
    0.31GB  3.76% 87.13%     0.31GB  3.76%  runtime.stringtoslicebyte
    0.28GB  3.43% 90.56%     0.55GB  6.80%  encoding/base32.(*Encoding).EncodeToString
    0.19GB  2.29% 92.85%     0.19GB  2.29%  strings.genSplit
    0.18GB  2.17% 95.03%     0.18GB  2.17%  runtime.convT2E
    0.10GB  1.28% 96.31%     0.71GB  8.71%  runtime.mapassign
    0.10GB  1.21% 97.51%     0.10GB  1.21%  syscall.ByteSliceFromString
    0.02GB  0.23% 97.75%     0.11GB  1.38%  syscall.SlicePtrFromStrings
```
2017-05-19 23:21:49 -07:00
Kubernetes Submit Queue 9f5849a05e Merge pull request #45975 from php-coder/psp_and_projected_volume
Automatic merge from submit-queue (batch tested with PRs 45346, 45903, 45958, 46042, 45975)

examples/podsecuritypolicy/rbac: allow to use projected volumes in restricted PSP

**What this PR does / why we need it**:
This PR modifies `restricted` PSP to allow `projected` volume type. No need to modify `privileged` PSP because it already allows all volume types.

It should not do any harm because `projected` uses configmaps, downward API, and secrets, which are already permitted.

**Special notes for your reviewer**:
This was inspired by similar change in the OpenShift: https://github.com/openshift/origin/pull/14147

**Release note**:
```release-note
NONE
```

PTAL @pweil- @derekwaynecarr 
CC @mfojtik
2017-05-19 22:29:36 -07:00
Kubernetes Submit Queue 4f55f49035 Merge pull request #46042 from derekwaynecarr/quota-admission-registry
Automatic merge from submit-queue (batch tested with PRs 45346, 45903, 45958, 46042, 45975)

ResourceQuota admission control injects registry

**What this PR does / why we need it**:
The `ResourceQuota` admission controller works with a registry that maps a GroupKind to an Evaluator.  The registry used in the existing plug-in is not injectable, which makes usage of the ResourceQuota plug-in in other API server contexts difficult.  This PR updates the code to support late injection of the registry via a plug-in initializer.
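Roughly, the late-injection pattern looks like this (interface and method names here are hypothetical, shown only to illustrate the shape of a plug-in initializer rather than the exact admission plumbing):

```go
package main

import "fmt"

// Registry maps a GroupKind to a quota evaluator (details elided).
type Registry interface{ Name() string }

// WantsQuotaRegistry is a hypothetical marker interface: plug-ins that need the
// registry implement it, and the plug-in initializer injects it after construction.
type WantsQuotaRegistry interface {
	SetQuotaRegistry(Registry)
}

type quotaAdmission struct {
	registry Registry
}

func (q *quotaAdmission) SetQuotaRegistry(r Registry) { q.registry = r }

// initialize walks all admission plug-ins and injects dependencies into those
// that ask for them, instead of having each plug-in construct its own registry.
func initialize(plugins []interface{}, r Registry) {
	for _, p := range plugins {
		if wants, ok := p.(WantsQuotaRegistry); ok {
			wants.SetQuotaRegistry(r)
		}
	}
}

type simpleRegistry struct{}

func (simpleRegistry) Name() string { return "generic" }

func main() {
	q := &quotaAdmission{}
	initialize([]interface{}{q}, simpleRegistry{})
	fmt.Println("injected registry:", q.registry.Name())
}
```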
2017-05-19 22:29:34 -07:00
Kubernetes Submit Queue 46a38b0e2f Merge pull request #45958 from k82cn/k8s_45925
Automatic merge from submit-queue (batch tested with PRs 45346, 45903, 45958, 46042, 45975)

Ignored mirror pods in PodPreset admission plugin

**What this PR does / why we need it**:
Ignored mirror pods in PodPreset admission plugin.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45925 

**Release note**:

```release-note
Ignored mirror pods in PodPreset admission plugin.
```
2017-05-19 22:29:33 -07:00
Kubernetes Submit Queue 113cf85612 Merge pull request #45903 from brendandburns/azure-disk-api
Automatic merge from submit-queue (batch tested with PRs 45346, 45903, 45958, 46042, 45975)

Azure disk api

This is to update the AzureDiskApi and split it from the implementation which is caught in rebase hell...

Once this is merged, we'll get the implementation in.

@smarterclayton suggested this as a way to break the rebase hell logjam. request for a quick review.

Thanks!
2017-05-19 22:29:30 -07:00
Kubernetes Submit Queue f499606bfe Merge pull request #45346 from codablock/fix_double_attach
Automatic merge from submit-queue

Don't try to attach volumes which are already attached to other nodes

This PR is a replacement for https://github.com/kubernetes/kubernetes/pull/40148. I was not able to push fixes and rebases to the original branch as I don't have access to the Github organization anymore.

CC @saad-ali You probably have to update the PR link in [Q2 2017 (v1.7)](https://docs.google.com/spreadsheets/d/1t4z5DYKjX2ZDlkTpCnp18icRAQqOE85C1T1r2gqJVck/edit#gid=14624465)

I assume the PR will need a new "ok to test" 

**ORIGINAL PR DESCRIPTION**

This PR fixes an issue with the attach/detach volume controller. There are cases where the `desiredStateOfWorld` contains the same volume for multiple nodes, resulting in the attach/detach controller attaching this volume to multiple nodes. This of course fails for volumes like AWS EBS, Azure Disks, ...

I observed this situation on Azure when using Azure Disks and replication controllers which start to reschedule PODs. When you delete a POD that belongs to a RC, the RC will immediately schedule a new POD on another node. This results in a short time (max a few seconds) where you have 2 PODs which try to attach/mount the same volume on different nodes. As the old POD is still alive, the attach/detach controller does not try to detach the volume and starts to attach the volume to the new POD immediately.

This behavior was probably not noticed before on other clouds, as the bogus attach attempt fails quickly there and thus goes unnoticed. Since the situation with the 2 PODs disappears after a few seconds, a detach for the old POD is initiated and the new POD can then attach successfully.

On Azure however, attaching and detaching takes quite long, resulting in the first bogus attach attempt to already eat up much time.
When attaching fails on Azure and reports that it is already attached somewhere else, the cloud provider immediately does a detach call for the same volume+node it tried to attach to. This is done to make sure the failed attach request is aborted immediately. You can find this here: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure_storage.go#L74

The complete flow of attach->fail->abort eats up valuable time and the attach/detach controller can not proceed with other work while this is happening. This means, if the old POD disappears in the meantime, the controller can't even start the detach for the volume which delays the whole process of rescheduling and reattaching.

Also, I and other people have observed very strange behavior where disks ended up being "attached" to multiple VMs at the same time as reported by Azure Portal. This results in the controller to fail reattaching forever. It's hard to figure out why and when this happens and there is no reproducer known yet. I can imagine however that the described behavior correlates with what I described above.

I was not sure if there are actually cases where it is perfectly fine to have a volume mounted to multiple PODs/nodes. At least technically, this should be possible with network based volumes, e.g. nfs. Can someone with more knowledge about volumes help me here? I may need to add a check before skipping attaching in `reconcile`.

CC @colemickens @rootfs

```release-note
Don't try to attach volume to new node if it is already attached to another node and the volume does not support multi-attach.
```
2017-05-19 21:54:42 -07:00
shashidharatd b4792014d1 Updated test/test_owners.csv for federation test cases 2017-05-20 08:43:30 +05:30
Kubernetes Submit Queue 2473c24f81 Merge pull request #45979 from bowei/owners
Automatic merge from submit-queue

Add bowei to OWNERS: e2e/test dns,network; cloud route, node, service…
2017-05-19 19:39:05 -07:00
Kubernetes Submit Queue a9d0403858 Merge pull request #38169 from caseydavenport/calico-daemonset
Automatic merge from submit-queue

Update Calico add-on

**What this PR does / why we need it:**

Updates Calico to the latest version using self-hosted install as a DaemonSet, removes Calico's dependency on etcd.

- [x] Remove [last bits of Calico salt](175fe62720/cluster/saltbase/salt/calico/master.sls (L3))
- [x] Failing on the master since no kube-proxy to access API.
- [x] Fix outgoing NAT
- [x] Tweak to work on both debian / GCI (not just GCI)
- [x] Add the portmap plugin for host port support

Maybe:
- [ ] Add integration test

**Which issue this PR fixes:**

https://github.com/kubernetes/kubernetes/issues/32625

**Try it out**

Clone the PR, then:

```
make quick-release
export NETWORK_POLICY_PROVIDER=calico
export NODE_OS_DISTRIBUTION=gci
export MASTER_SIZE=n1-standard-4
./cluster/kube-up.sh 
```

**Release note:**

```release-note
The Calico version included in kube-up for GCE has been updated to v2.2.
```
2017-05-19 19:38:59 -07:00
Kubernetes Submit Queue 83a1a863ad Merge pull request #45564 from whitlockjc/admission-api-group
Automatic merge from submit-queue (batch tested with PRs 45996, 46121, 45707, 46011, 45564)

add "admission" API group

This commit is an initial pass at providing an admission API group.
The API group is required by the webhook admission controller being
developed as part of https://github.com/kubernetes/community/pull/132
and could be used more as that proposal comes to fruition.

**Note:** This PR was created by following the [Adding an API Group](https://github.com/kubernetes/community/blob/master/contributors/devel/adding-an-APIGroup.md) documentation.

cc @smarterclayton
2017-05-19 18:57:38 -07:00
Kubernetes Submit Queue 73e7ef1f8c Merge pull request #46011 from MrHohn/e2e-fix-return-podnames
Automatic merge from submit-queue (batch tested with PRs 45996, 46121, 45707, 46011, 45564)

Fix waitForNPods in restart.go

From https://github.com/kubernetes/kubernetes/issues/45991#issuecomment-302292404.

Don't redefine `pods`, so we can return real pod names instead of an empty array.
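
The underlying Go pitfall, for reference (illustrative code, not the test's): `:=` inside a nested block declares a new variable that shadows the outer one, so the outer slice is never populated.

```go
package main

import "fmt"

func fetch() ([]string, error) { return []string{"pod-a", "pod-b"}, nil }

func main() {
	var pods []string
	{
		// Bug: ':=' declares a new 'pods' that shadows the outer variable.
		pods, err := fetch()
		fmt.Println("inner:", pods, err)
	}
	fmt.Println("outer (still empty):", pods)

	// Fix: plain assignment updates the outer variable.
	var err error
	pods, err = fetch()
	fmt.Println("outer (fixed):", pods, err)
}
```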

/assign @dchen1107 @bowei 

**Release note**:

```release-note
NONE
```
2017-05-19 18:57:36 -07:00
Kubernetes Submit Queue e2a9327999 Merge pull request #45707 from Crazykev/cri-experimental
Automatic merge from submit-queue (batch tested with PRs 45996, 46121, 45707, 46011, 45564)

Remove flag `experimental-cri` in e2e-node test

Signed-off-by: Crazykev <crazykev@zju.edu.cn>



**What this PR does / why we need it**: 
This patch removes the deprecated flag from the node e2e test script, because kubelet has already removed it. Leaving it in will make kubelet fail to start.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: /cc @feiskyer 

**Release note**:

```release-note
NONE
```
2017-05-19 18:57:34 -07:00
Kubernetes Submit Queue 1c8d255819 Merge pull request #46121 from Random-Liu/fix-kuberuntime-getpods
Automatic merge from submit-queue (batch tested with PRs 45996, 46121, 45707, 46011, 45564)

Fix kuberuntime GetPods.

The `ImageID` is not populated from `GetPods` in kuberuntime.

Image garbage collector is using this field, https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/image_gc_manager.go#L204.

Without this fix, the image garbage collector will try to garbage collect all images every time. Because docker will not allow that, it should be fine. However, I'm not sure whether the unnecessary removals will cause any problems, e.g. overloading docker's image management and making docker hang.

@dchen1107 @yujuhong @feiskyer Do you think we should cherry-pick this?
2017-05-19 18:57:33 -07:00
Kubernetes Submit Queue ee9bab1111 Merge pull request #45996 from cblecker/hack-owner
Automatic merge from submit-queue

Add cblecker to hack/ reviewers

**What this PR does / why we need it**:
I've done a number of reviews in this part of the code base, and would like to continue helping out and formally be assigned PRs that change things in hack/

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-19 16:06:27 -07:00