Commit Graph

47919 Commits (3b0b7399f545f6cb3f4372911851dd70dcad055e)

Author SHA1 Message Date
Kubernetes Submit Queue 3b0b7399f5 Merge pull request #45062 from FengyunPan/fix-e2e-daemonset-secret
Automatic merge from submit-queue

[federation][e2e] Distinguish local vars and global vars

None
2017-05-12 00:59:18 -07:00
Kubernetes Submit Queue 6c50ffcf7b Merge pull request #45291 from yaxinlx/feature-request/fix-kubelet-channel-close
Automatic merge from submit-queue

There is a rule when using Go channels: never close a channel on the receiver side.

fix https://github.com/kubernetes/kubernetes/issues/45215
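
For illustration, a minimal sketch of the sender-side-close rule (hypothetical code, not from this PR):

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	go func() {
		defer close(ch) // only the sender closes the channel
		for i := 0; i < 3; i++ {
			ch <- i
		}
	}()
	// The receiver ranges until the channel is closed; it never calls close(ch).
	for v := range ch {
		fmt.Println(v)
	}
}
```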
2017-05-12 00:16:59 -07:00
Kubernetes Submit Queue 316876060a Merge pull request #45286 from gnufied/fix-terminated-pods-detach
Automatic merge from submit-queue

detach the volume when pod is terminated

When pods are terminated we should detach the volume. 

Fixes https://github.com/kubernetes/kubernetes/issues/45191

**Release note**:
```
Detach the volume when pods are terminated.  
```
2017-05-11 21:46:29 -07:00
Kubernetes Submit Queue ee4e5e79f3 Merge pull request #45437 from kewu1992/cos-docker-validation
Automatic merge from submit-queue

Add properties file for cos-docker-validation test job

**What this PR does / why we need it**: 
This is forked from test/e2e_node/jenkins/docker_validation/jenkins-validation.properties. It is used for COS docker validation test.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```NONE
```
2017-05-11 20:58:49 -07:00
Kubernetes Submit Queue ed4b25e46e Merge pull request #45406 from xilabao/fix-impersonate-in-create-role
Automatic merge from submit-queue

fix specialized verbs in create role
2017-05-11 20:18:12 -07:00
Erick Fejta a95b5c0441 Merge pull request #44621 from caarlos0/patch-1
Improvement on Kubectl CheatSheet base64 examples
2017-05-11 19:39:40 -07:00
Hemant Kumar 951a36aac7 Add KeepTerminatedPodVolumes as an annotation on node
and let's make sure that the controller respects it
and doesn't detach mounted volumes.
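
A sketch of how a controller might read such a node annotation; the key below is an assumption for illustration, not taken verbatim from this commit:

```go
import v1 "k8s.io/api/core/v1"

// Assumed annotation key, for illustration only.
const keepTerminatedPodVolumesAnn = "volumes.kubernetes.io/keep-terminated-pod-volumes"

// keepTerminatedPodVolumes reports whether the node asks the attach/detach
// controller not to detach volumes of terminated pods.
func keepTerminatedPodVolumes(node *v1.Node) bool {
	return node.Annotations[keepTerminatedPodVolumesAnn] == "true"
}
```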
2017-05-11 22:31:14 -04:00
Hemant Kumar 9a1a9cbe08 detach the volume when pod is terminated
Make sure the volume is detached when a pod is terminated for any
reason but not deleted from the API server.
2017-05-11 22:18:22 -04:00
Kubernetes Submit Queue 7408f6b3a7 Merge pull request #45661 from deads2k/cli-11-delete
Automatic merge from submit-queue

orphan when kubectl delete --cascade=false

The default for new objects is to propagate deletes (use GC) when no DeleteOptions are passed.  In addition, the vast majority of kube objects use this default.  Only a few controller resources (sts, rc, deploy, jobs, rs) orphan by default.  This means that when you do `kubectl delete sa/foo --cascade=false` you do *not* orphan.  That doesn't fulfill the intent of the command.  This explicitly orphans when `--cascade=false` so we don't use GC.
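
For illustration, a sketch of what an explicit orphaning delete looks like through client-go (hypothetical helper, not this PR's exact code):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaning deletes a service account while orphaning its dependents,
// i.e. without letting the garbage collector cascade the delete.
func deleteOrphaning(client kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationOrphan
	return client.CoreV1().ServiceAccounts(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```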

@fabianofranz 
@jwforres I liked this easter egg :)

@kubernetes/sig-cli-bugs we should backport this to 1.6
2017-05-11 18:27:52 -07:00
Kubernetes Submit Queue 86eb18944f Merge pull request #45495 from deads2k/server-24-stop
Automatic merge from submit-queue

plumb stopch to post start hook index since many of them are starting go funcs

Many post-start hooks require a stop channel to properly terminate their go funcs.
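
A sketch of the pattern, assuming the genericapiserver hook API (hook name hypothetical):

```go
import genericapiserver "k8s.io/apiserver/pkg/server"

// addWorkerHook registers a post-start hook whose goroutine exits when the
// server's stop channel is closed, instead of leaking forever.
func addWorkerHook(s *genericapiserver.GenericAPIServer) error {
	return s.AddPostStartHook("hypothetical-start-workers", func(ctx genericapiserver.PostStartHookContext) error {
		go func() {
			// Background work would go here; block on StopCh to terminate cleanly.
			<-ctx.StopCh
		}()
		return nil
	})
}
```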

@p0lyn0mial I think you need this for https://github.com/kubernetes/kubernetes/pull/45355 ptal.
@ncdc per request
@sttts can you review too since Andy is out?
2017-05-11 16:50:21 -07:00
Kubernetes Submit Queue c93c50b46a Merge pull request #45672 from wojtek-t/bump_l7_threshold
Automatic merge from submit-queue

Bump l7-lb-controller resource usage threshold in tests

Fix #45512
2017-05-11 16:04:15 -07:00
Kubernetes Submit Queue 69ad6addcc Merge pull request #45559 from rmmh/no-xss
Automatic merge from submit-queue

HTML escape apiserver errors to avoid triggering vulnerability scanners.

Simple XSS scans might fetch /<script>alert('vulnerable')</script>, and
fail when the response body includes the script tag verbatim, despite
the headers directing the browser to interpret the response as text.

This isn't a real vulnerability, but it's easier to fix this here than
it is to fix the scanners.
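
A minimal sketch of the idea using the standard library (not the apiserver's actual handler):

```go
package main

import (
	"fmt"
	"html"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Escape the request path before echoing it into the error body, so
		// /<script>alert('vulnerable')</script> comes back as &lt;script&gt;...
		msg := fmt.Sprintf("page not found: %s", html.EscapeString(r.URL.Path))
		http.Error(w, msg, http.StatusNotFound)
	})
	http.ListenAndServe(":8080", nil)
}
```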


**Release note**:
```release-note
NONE
```
2017-05-11 13:17:40 -07:00
Kubernetes Submit Queue 3dfffac7f9 Merge pull request #41684 from gyliu513/kubelet-types-labels
Automatic merge from submit-queue

Improved code coverage for pkg/kubelet/types/labels

The test coverage improved from 0% to 100%.
This fixed part of #40780



**What this PR does / why we need it**:
Increase test coverage.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
release-note-none

**Release note**:

```NONE
```
2017-05-11 12:20:17 -07:00
Wojciech Tyczynski 4d8ee3a1b9 Bump l7-lb-controller resource usage threshold in tests 2017-05-11 20:05:55 +02:00
deads2k be39283923 plumb stopch to post start hook index since many of them are starting go funcs 2017-05-11 09:16:13 -04:00
deads2k e91716a2db orphan when kubectl delete --cascade=false 2017-05-11 09:11:07 -04:00
Kubernetes Submit Queue 48caf95a6c Merge pull request #45631 from nilebox/nilebox/remove-doc-insecure
Automatic merge from submit-queue

Remove mention of the insecure server (which is no longer supported) from API server docs

**What this PR does / why we need it**:
Remove mention of insecure serving from the docs, since only secure serving is now supported.
2017-05-11 05:36:27 -07:00
Kubernetes Submit Queue 640373da10 Merge pull request #45641 from xilabao/update-token-ttl-description
Automatic merge from submit-queue (batch tested with PRs 44626, 45641)

update token ttl description
2017-05-11 03:59:38 -07:00
Kubernetes Submit Queue 15df7fedca Merge pull request #44626 from madhusudancs/fed-dns-paged-list
Automatic merge from submit-queue (batch tested with PRs 44626, 45641)

Update Google Cloud DNS provider Rrset.Get(name) method to return a list and change the `Rrset.List()` implementation to perform a paged walk

Some federated service e2e tests and a few ingress tests would become flaky after a few hundred runs. @csbell spent quite a lot of time debugging this and found out that this flakiness was due to a bug in the federated service controller deletion logic. Deletion of a federated service object triggers logic in the controller to update the DNS records corresponding to that object. This DNS record update logic would return an error in failed runs, which would in turn cause the controller to reschedule the operation. This led to an infinite retry-failure cycle that never gave the API server a chance to garbage collect the deleted service object.

A couple of days ago we started seeing a correlation between the number of resource records in a DNS managed zone and these test failures. If you look at the test runs before and after run 2900 in the test grid - https://k8s-testgrid.appspot.com/cluster-federation#gce, you will notice that the grid became super green at 2900. That's when I deleted all the dangling DNS records from the past runs.

After some investigation yesterday, we found that `ResourceRecordSet.Get()` interface and its implementation, and `ResourceRecordSet.List()` implementation at least for Google Cloud DNS were incorrect.

This PR makes minimal set of changes (read: least invasive) in Google Cloud DNS provider implementation to fix these problems:

1. Modifies DNS provider Rrset.Get(name) interface to return multiple records and updates federated service controller.

    There can be multiple DNS resource records for a given name. They can vary by type, ttl, rrdata and a number of various other parameters. It is incorrect to return a single resource record for a given name.

    This change updates the Get interface to return multiple records for a given name and uses this list in the federated service controller to perform DNS operations.

2. Update Google Cloud DNS List implementation to perform a paged walk of lists to aggregate all the DNS records.

    The current `List()` implementation just lists the DNS resource records in a given managed zone once and returns the list. It neither performs a paged walk nor does it consider the `page_token` in the returned response.

    This change walks all the pages and aggregates the records in the pages and returns the aggregated list. This is potentially dangerous as it can blow up memory if there are a huge number of records in the given managed zone. But this is the best we can do without changing the provider interface too much. 

    Next step is to define a new paged list interface and implement it.
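
A sketch of such a paged walk against the Google Cloud DNS client (google.golang.org/api/dns/v1); treat the details as an approximation of the change, not the PR's exact code:

```go
import dns "google.golang.org/api/dns/v1"

// listAll walks every page of resource record sets in a managed zone
// and returns the aggregated list.
func listAll(svc *dns.Service, project, zone string) ([]*dns.ResourceRecordSet, error) {
	var all []*dns.ResourceRecordSet
	call := svc.ResourceRecordSets.List(project, zone)
	for token := ""; ; {
		if token != "" {
			call = call.PageToken(token)
		}
		resp, err := call.Do()
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Rrsets...)
		// Keep walking until the service stops returning a next-page token.
		if token = resp.NextPageToken; token == "" {
			return all, nil
		}
	}
}
```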

**Release note**:
```release-note
NONE
```

/assign @csbell 

cc @justinsb @shashidharatd @quinton-hoole @kubernetes/sig-federation-pr-reviews
2017-05-11 03:59:35 -07:00
Kubernetes Submit Queue 6288c4e96c Merge pull request #44861 from sttts/sttts-dynamic-client-listoptions-fallback
Automatic merge from submit-queue

apimachinery: NotRegisteredErr for known kinds not registered in target GV

Fixes the fall back to core v1 for *Options in the parameter encoder of the dynamic client.

The dynamic client uses NotRegisteredErr to fall back to core v1 if ListOptions is not known
in the given GV. This commit fixes the case that ListOptions is known in some group, but not
in the given one.
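
Roughly the fallback pattern, sketched with apimachinery helpers (hypothetical wrapper, not the dynamic client's exact code):

```go
import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// convertOptions tries the target group-version first and falls back to the
// legacy core "v1" representation only when the kind is not registered there.
func convertOptions(scheme *runtime.Scheme, opts runtime.Object, gv schema.GroupVersion) (runtime.Object, error) {
	out, err := scheme.ConvertToVersion(opts, gv)
	if runtime.IsNotRegisteredError(err) {
		// ListOptions may be registered in some group but not this one.
		out, err = scheme.ConvertToVersion(opts, schema.GroupVersion{Version: "v1"})
	}
	return out, err
}
```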
2017-05-11 03:06:25 -07:00
Kubernetes Submit Queue 33356a18df Merge pull request #45630 from zjj2wry/e2e
Automatic merge from submit-queue

small change to view more test info

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
small change to view more test info, thank you very much

**Release note**:

```release-note
```
2017-05-11 01:51:30 -07:00
Dr. Stefan Schimanski 2ece9e4dec NotRegisteredErr for known kinds not registered in target GV
The dynamic client uses NotRegisteredErr to fall back to core v1 if ListOptions is not known
in the given GV. This commit fixes the case that ListOptions is known in some group, but not
in the given one.
2017-05-11 09:59:04 +02:00
Madhusudan.C.S 4bde13ac62 Remove all the existing records before creating new ones to avoid DNS misconfiguration.
When we fetch the dns records by name, we get a list of records that match
the given name. As an optimization we look up to see if the new record we
want to create is already in the returned list to avoid performing any updates.

However, when the new record we want to create isn't in the returned list, it
is hard to say if the returned list contains the list of records that we want
to retain. For example, we might get a list of A records and we want to create
a CNAME record. Creating a new CNAME record without removing the A records is
a DNS misconfiguration. So to play it safe we just remove all the existing records
in the list and create the new desired record.
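
A sketch of that remove-then-create step as a single Cloud DNS change set (helper name hypothetical):

```go
import dns "google.golang.org/api/dns/v1"

// replaceRecords removes every existing record set returned for a name and
// creates the single desired record in one atomic change.
func replaceRecords(svc *dns.Service, project, zone string, existing []*dns.ResourceRecordSet, desired *dns.ResourceRecordSet) error {
	change := &dns.Change{
		Deletions: existing, // all records currently returned for the name
		Additions: []*dns.ResourceRecordSet{desired},
	}
	_, err := svc.Changes.Create(project, zone, change).Do()
	return err
}
```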

**Note**: This is the opposite of what I said here - https://reviewable.kubernetes.io/reviews/kubernetes/kubernetes/44626#-Ki9xQOzybryHvsxNrra.
2017-05-11 00:47:11 -07:00
xilabao 7f5e8fdedd update token ttl description 2017-05-11 15:23:57 +08:00
Kubernetes Submit Queue 9a0f5ccb33 Merge pull request #45480 from xiangpengzhao/scheduledjob-cronjob
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)

Rename vars scheduledJob to cronJob in describe.go

**What this PR does / why we need it**:
Rename vars scheduledJob to cronJob in describe.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
There might still be some leftovers in other places.
@soltysh 

**Release note**:

```release-note
NONE
```
2017-05-11 00:12:40 -07:00
Kubernetes Submit Queue c92f94aa0d Merge pull request #45634 from zjj2wry/e2e-l
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)

Fix BY() format

**What this PR does / why we need it**:
I read the other By() calls; this is just a formatting change, thank you.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-11 00:12:38 -07:00
Kubernetes Submit Queue 947db16df7 Merge pull request #45579 from shiywang/refactor-visit-patch
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Refactor functions in editoptions.go to use less arguments

Fixes https://github.com/kubernetes/kubernetes/issues/45521
/assign @mengqiy 
will rebase pr https://github.com/kubernetes/kubernetes/pull/42256 after this get merged
2017-05-10 23:20:42 -07:00
Kubernetes Submit Queue 873ce9ca4a Merge pull request #45515 from derekwaynecarr/ignore-openrc
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)

Ignore openrc cgroup

**What this PR does / why we need it**:
It is a work-around for the following: https://github.com/opencontainers/runc/issues/1440

**Special notes for your reviewer**:
I am open to a cleaner way to do this, but we have many developer users on Macs who run containerized kubelets and are not able to run them right now due to the inclusion of openrc tripping up our existence checks.  Ideally, runc could give us a call to ask "does this exist according to what runc knows about".  Or we could add a whitelist check.  Right now, this was the smallest hack pending more discussion.
2017-05-10 23:20:40 -07:00
Kubernetes Submit Queue c2f6ccf0ef Merge pull request #45256 from perotinus/rs_noindexer
Automatic merge from submit-queue (batch tested with PRs 45556, 45561, 45256)

[Federation] Replace the indexing lister with a regular store in the replicaset controller

This is part of the refactoring work to allow the replicaset controller to use the generic sync controller.

None of the other controllers use a lister, including the deployment controller

**Release note**:
```release-note
NONE
```
2017-05-10 22:24:43 -07:00
Kubernetes Submit Queue 7ac1936cc6 Merge pull request #45561 from deads2k/tpr-11-defaulting
Automatic merge from submit-queue (batch tested with PRs 45556, 45561, 45256)

add defaulting for customresources

This adds the promised defaulting for customresources.  Namespaced by default, listkind=kind+List, singular=toLower(kind).
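
The stated defaults, sketched against the apiextensions v1beta1 types (an approximation, not the PR's exact code):

```go
import (
	"strings"

	apiextensions "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

func setDefaults(crd *apiextensions.CustomResourceDefinition) {
	if crd.Spec.Scope == "" {
		crd.Spec.Scope = apiextensions.NamespaceScoped // namespaced by default
	}
	if crd.Spec.Names.ListKind == "" {
		crd.Spec.Names.ListKind = crd.Spec.Names.Kind + "List"
	}
	if crd.Spec.Names.Singular == "" {
		crd.Spec.Names.Singular = strings.ToLower(crd.Spec.Names.Kind)
	}
}
```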
2017-05-10 22:24:41 -07:00
Kubernetes Submit Queue 3126e73400 Merge pull request #45556 from deads2k/tpr-10-validation
Automatic merge from submit-queue

add validation for customresourcedefinitions

Add basic validation for customresource definitions.

@adohe if you had review bandwidth, this is a relatively small one.
2017-05-10 22:21:21 -07:00
Kubernetes Submit Queue 4b2ab4e116 Merge pull request #45550 from jacekn/fix45547
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Don't append :443 to registry domain in the kubernetes-worker layer registry action

**What this PR does / why we need it**: Fixes #45547

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45547

**Special notes for your reviewer**:

**Release note**:

```
Fix #45547 - don't append :443 to juju created docker registry config
```
2017-05-10 21:34:45 -07:00
Kubernetes Submit Queue fc7ae99327 Merge pull request #45478 from HardySimpson/fix-endpoints-del
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

fix endpoints controller del lead-election endpoints

When there are multiple controller-manager instances, we observe that the endpoints controller deletes the leader-election endpoints after 5 minutes, causing re-election. This adds a check to avoid that.

Fixes #45585
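
The added check plausibly amounts to skipping endpoints that carry the client-go leader-election annotation; a sketch (assumed shape, not verbatim from this PR):

```go
import (
	"k8s.io/api/core/v1"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// isLeaderElectionRecord reports whether an endpoints object is a
// leader-election lock rather than a service-backed endpoints object,
// so the controller must not garbage-collect it.
func isLeaderElectionRecord(ep *v1.Endpoints) bool {
	_, ok := ep.Annotations[resourcelock.LeaderElectionRecordAnnotationKey]
	return ok
}
```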

error log

```
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "GET /api/v1/endpoints HTTP/1.1" 200 1175 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.5 - - [02/May/2017:15:10:13 +0000] "DELETE /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 200 46 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0/endpoint-controller"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler HTTP/1.1" 404 123 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 398 "-" "kube-scheduler/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 404 141 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.6 - - [02/May/2017:15:10:14 +0000] "POST /api/v1/namespaces/kube-system/endpoints HTTP/1.1" 201 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) kubernetes/bede5a0"
192.168.0.7 - - [02/May/2017:15:10:14 +0000] "GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager HTTP/1.1" 200 416 "-" "kube-controller-manager/V100R001C00B012 (linux/amd64) ku
```



release-note

```release-note
none
```
2017-05-10 21:34:43 -07:00
Kubernetes Submit Queue e939019900 Merge pull request #45604 from shyamjvs/start-km-master-fix
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Minor bug fix in start-kubemark-master script

cc @wojtek-t @gmarek
2017-05-10 21:34:41 -07:00
Kubernetes Submit Queue a507d30833 Merge pull request #45602 from dashpole/enable_memcg_for_all_tests
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Enable kernel memcg notification for node and cluster GCI/COS testing.

Sets --experimental-kernel-memcg-notification=true when running on the GCI/COS image.  It sets this for master and nodes for cluster e2e tests, and for the node in node e2e tests.

Issue #42676 

cc @dchen1107 @Random-Liu
2017-05-10 21:34:39 -07:00
Kubernetes Submit Queue b0d024fee1 Merge pull request #45569 from vmware/fix_VolumesAreAttached
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Fixing VolumesAreAttached and DisksAreAttached functions in vSphere

**What this PR does / why we need it**:

In vSphere HA, when node failover happens, the node VM momentarily goes into a "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map, with the status for each volume set to false - the detached state.

Volumes attached to the previous node must be detached before they can attach to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or for any reason we cannot determine whether a disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes volumes to remain attached to the previous node, and pod creation stays stuck in the "ContainerCreating" state. Since both nodes are powered on, the volumes cannot be attached to the new node.

**Logs before fix**

```
{"log":"E0508 21:31:20.902501       1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552       1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575       1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596       1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```



In this change, we make sure the correct volume attachment map is returned, and in case an error occurs while checking a disk's status, we return a nil map.
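
A simplified sketch of that corrected shape (signatures invented for illustration):

```go
// volumesAttached returns the attachment status per volume path, or a nil
// map plus an error when status cannot be determined, so callers never
// mistake "unknown" for "detached".
func volumesAttached(volPaths []string, node string, isAttached func(volPath, node string) (bool, error)) (map[string]bool, error) {
	attached := make(map[string]bool)
	for _, p := range volPaths {
		ok, err := isAttached(p, node)
		if err != nil {
			return nil, err // unknown status: surface the error, not "false"
		}
		attached[p] = ok
	}
	return attached, nil
}
```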


**Logs after fix**
```
{"log":"E0509 20:25:37.982152       1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190       1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220       1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367       1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```

```
{"log":"I0509 20:25:41.267425       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}

{"log":"I0509 20:26:12.535962       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580       1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}

{"log":"I0509 20:26:40.355656       1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988       1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}

```




**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116

**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty

**Performed many failovers with large volumes (30GB) attached to the pod.**

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-3xcvc
Node:		node3/172.1.87.0
Status:		Running

Powered Off node3's host. pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-qx0b0
Node:		node2/172.1.9.0
Status:		Running

Powered Off node2's host. pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-7849s
Node:		node3/172.1.87.0
Status:		Running

Powered Off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-26lp1
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-4pdtl
Node:		node3/172.1.87.0
Status:		Running


Powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.

$ kubectl describe pod
Name:		wordpress-mysql-2789807967-t375f
Node:		node1/172.1.98.0
Status:		Running

Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-pn6ps
Node:		node3/172.1.87.0
Status:		Running

powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-0wqc1
Node:		node1/172.1.98.0
Status:		Running

powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.

$ kubectl describe pods
Name:		wordpress-mysql-2789807967-821nc
Node:		node3/172.1.87.0
Status:		Running


**Release note**:

```release-note
NONE
```

CC:  @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
2017-05-10 21:34:37 -07:00
zhengjiajin a3a619463e Fix BY() format 2017-05-11 12:26:40 +08:00
Kubernetes Submit Queue 1f3b158a10 Merge pull request #45194 from yujuhong/rm-cri-flag
Automatic merge from submit-queue

Remove the deprecated `--enable-cri` flag

Except for rkt, CRI is the default and only integration point for
container runtimes.

```release-note
Remove the deprecated `--enable-cri` flag. CRI is now the default, 
and the only way to integrate with kubelet for the container runtimes.
```
2017-05-10 20:46:24 -07:00
Nail Islamov 6c448319ac Remove mention of the insecure server (which is no longer supported) 2017-05-11 13:18:58 +10:00
zhengjiajin 6f29fd99b4 small change to view more test info 2017-05-11 11:01:06 +08:00
Kubernetes Submit Queue 6328ca6fc1 Merge pull request #45530 from zhangxiaoyu-zidif/e2e-delete-redundant-para
Automatic merge from submit-queue

util.go: format for

**What this PR does / why we need it**:
Format a `for` statement.
Delete a redundant parameter.
Make the code cleaner.

**Release note**:

```release-note
NONE
```
2017-05-10 19:54:47 -07:00
Kubernetes Submit Queue f6f2b2156e Merge pull request #45599 from gmarek/scheduler_predicates
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

Make SchedulerPredicates test more resilient to recent Node restarts

cc @kubernetes/sig-scheduling-pr-reviews
2017-05-10 19:31:47 -07:00
Kubernetes Submit Queue b0399114fe Merge pull request #38636 from dhawal55/internal-elb
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0. 

fixes #38633

Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also, remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally it should be a boolean, but for backward compatibility we leave it as any non-empty value.
2017-05-10 19:31:45 -07:00
Kubernetes Submit Queue 7cbe729a97 Merge pull request #45590 from gmarek/print_to_file
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

Move rest of performance data gathered by tests to Summaries

cc @shyamjvs
2017-05-10 19:31:42 -07:00
Kubernetes Submit Queue d1fef17408 Merge pull request #45586 from shyamjvs/yolo
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)

Fix bug in hollow-node deletion in stop-kubemark script

Just noticed.

cc @wojtek-t @gmarek
2017-05-10 19:31:40 -07:00
yaxinlx c280b7cab7 There is a rule when using Go channels: never close a channel on the
receiver side.

fix https://github.com/kubernetes/kubernetes/issues/45215

delete the channel close line

change the event channel element type to struct{}

go fmt

the eventCh channel does not need to be buffered
2017-05-11 10:19:28 +08:00
Kubernetes Submit Queue b040513aab Merge pull request #43067 from xilabao/dedup-in-printer
Automatic merge from submit-queue

De-duplication in printer
2017-05-10 19:08:59 -07:00
xilabao 02deeb224e fix specialized verbs in create role 2017-05-11 09:32:43 +08:00
Kubernetes Submit Queue 35837d80ce Merge pull request #45605 from rmmh/fix-ip-alias
Automatic merge from submit-queue

Fix ip-alias testing.

IP aliases are an alpha feature, and node accelerators are a beta
feature. $gcloud determines which is appropriate.

Before, this would try to run "gcloud alpha beta", which is incoherent.

```release-note
NONE
```
2017-05-10 18:22:53 -07:00
Kubernetes Submit Queue aba95a169b Merge pull request #45543 from sttts/sttts-external-roundtrip
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)

Do roundtrip testing with external kinds in client-go TPR example

This tests that our serialization machinery works for TPR types, i.e. without internal counterpart and without generated code.
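
The shape of such a roundtrip test, reduced to plain JSON marshalling (type and values hypothetical, not the example's actual code):

```go
package tpr_test

import (
	"encoding/json"
	"reflect"
	"testing"
)

// Example stands in for an external-only TPR type: no internal counterpart,
// no generated code, just JSON tags.
type Example struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	Spec       string `json:"spec,omitempty"`
}

func TestRoundtrip(t *testing.T) {
	in := &Example{Kind: "Example", APIVersion: "example.com/v1", Spec: "hello"}
	data, err := json.Marshal(in)
	if err != nil {
		t.Fatal(err)
	}
	out := &Example{}
	if err := json.Unmarshal(data, out); err != nil {
		t.Fatal(err)
	}
	if !reflect.DeepEqual(in, out) {
		t.Errorf("roundtrip mismatch: %#v != %#v", in, out)
	}
}
```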

/cc @nilebox
2017-05-10 17:47:47 -07:00