Automatic merge from submit-queue
detach the volume when pod is terminated
When pods are terminated, we should detach the volume.
Fixes https://github.com/kubernetes/kubernetes/issues/45191
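A minimal sketch of the idea (not the actual kubelet code; the helper name and the era-specific import path are assumptions): treat a pod whose phase is Succeeded or Failed as terminated, so its volumes can be dropped from the desired state and detached.

```go
package main

import "k8s.io/kubernetes/pkg/api/v1"

// isPodTerminated reports whether a pod has reached a terminal phase.
// Volumes belonging to such pods no longer need to stay attached.
func isPodTerminated(pod *v1.Pod) bool {
	return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed
}
```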
**Release note**:
```
Detach the volume when pods are terminated.
```
Automatic merge from submit-queue
Add properties file for cos-docker-validation test job
**What this PR does / why we need it**:
This is forked from test/e2e_node/jenkins/docker_validation/jenkins-validation.properties. It is used for the COS docker validation test.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
orphan when kubectl delete --cascade=false
The default for new objects is to propagate deletes (use GC) when no DeleteOptions are passed. In addition, the vast majority of kube objects use this default. Only a few controller resources (sts, rc, deploy, jobs, rs) orphan by default. This means that when you do `kubectl delete sa/foo --cascade=false` you do *not* orphan. That doesn't fulfill the intent of the command. This change explicitly orphans when `--cascade=false` is passed, so we don't use GC.
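Roughly, the intent is the following (a hedged sketch, not the exact kubectl code): when `--cascade=false` is set, send explicit delete options that request orphaning rather than relying on the server-side default.

```go
package main

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// orphanDeleteOptions builds delete options that explicitly orphan
// dependents, overriding the server-side default of propagating the delete.
func orphanDeleteOptions() *metav1.DeleteOptions {
	orphan := metav1.DeletePropagationOrphan
	return &metav1.DeleteOptions{PropagationPolicy: &orphan}
}
```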
@fabianofranz
@jwforres I liked this easter egg :)
@kubernetes/sig-cli-bugs we should backport this to 1.6
Automatic merge from submit-queue
plumb stopch to post start hook index since many of them are starting go funcs
Many post-start hooks require a stop channel to properly terminate their go funcs.
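For illustration, a hedged sketch of the pattern (the context type only mirrors the shape of the API server's post-start hook context; the real type carries more fields):

```go
package main

import "fmt"

// PostStartHookContext mirrors the shape of the hook context for this
// sketch; the key part is the stop channel plumbed through to the hook.
type PostStartHookContext struct {
	StopCh <-chan struct{}
}

// examplePostStartHook starts a background goroutine that exits cleanly
// when the server's stop channel is closed, instead of leaking forever.
func examplePostStartHook(ctx PostStartHookContext) error {
	go func() {
		<-ctx.StopCh
		fmt.Println("post-start hook worker shutting down")
	}()
	return nil
}
```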
@p0lyn0mial I think you need this for https://github.com/kubernetes/kubernetes/pull/45355 ptal.
@ncdc per request
@sttts can you review too since Andy is out?
Automatic merge from submit-queue
HTML escape apiserver errors to avoid triggering vulnerability scanners.
Simple XSS scans might fetch /<script>alert('vulnerable')</script>, and
fail when the response body includes the script tag verbatim, despite
the headers directing the browser to interpret the response as text.
This isn't a real vulnerability, but it's easier to fix this here than
it is to fix the scanners.
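A minimal sketch of the idea, assuming Go's standard `html` package (the handler shape is illustrative, not the apiserver's actual error path): escape request-derived text before echoing it in the error body.

```go
package main

import (
	"fmt"
	"html"
	"net/http"
)

// notFoundHandler echoes the requested path in the error message, but
// HTML-escapes it first so an injected <script> tag is never reflected
// verbatim into the response body.
func notFoundHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusNotFound)
	fmt.Fprintf(w, "the server could not find the requested resource: %s",
		html.EscapeString(r.URL.Path))
}
```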
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Improved code coverage for pkg/kubelet/types/labels
Test coverage improved from 0% to 100%.
This fixes part of #40780.
**What this PR does / why we need it**:
Increase test coverage.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
release-note-none
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Remove mention of the insecure server (which is no longer supported) from API server docs
**What this PR does / why we need it**:
Remove mentions of insecure serving from the docs, since only secure serving is supported now.
Automatic merge from submit-queue (batch tested with PRs 44626, 45641)
Update Google Cloud DNS provider Rrset.Get(name) method to return a list and change the `Rrset.List()` implementation to perform a paged walk
Some federated service e2e tests and a few ingress tests would become flaky after a few hundred runs. @csbell spent quite a lot of time debugging this and found out that this flakiness was due to a bug in the federated service controller deletion logic. Deletion of a federated service object triggers logic in the controller to update the DNS records corresponding to that object. This DNS record update logic would return an error in failed runs, which would in turn cause the controller to reschedule the operation. This led to an infinite retry-failure cycle that never gave the API server a chance to garbage collect the deleted service object.
A couple of days ago we started seeing a correlation between the number of resource records in a DNS managed zone and these test failures. If you look at the test runs before and after run 2900 in the test grid - https://k8s-testgrid.appspot.com/cluster-federation#gce, you will notice that the grid became super green at 2900. That's when I deleted all the dangling DNS records from the past runs.
After some investigation yesterday, we found that `ResourceRecordSet.Get()` interface and its implementation, and `ResourceRecordSet.List()` implementation at least for Google Cloud DNS were incorrect.
This PR makes a minimal set of changes (read: least invasive) in the Google Cloud DNS provider implementation to fix these problems:
1. Modifies DNS provider Rrset.Get(name) interface to return multiple records and updates federated service controller.
There can be multiple DNS resource records for a given name. They can vary by type, ttl, rrdata and a number of various other parameters. It is incorrect to return a single resource record for a given name.
This change updates the Get interface to return multiple records for a given name and uses this list in the federated service controller to perform DNS operations.
2. Update Google Cloud DNS List implementation to perform a paged walk of lists to aggregate all the DNS records.
The current `List()` implementation just lists the DNS resource records in a given managed zone once and returns the list. It neither performs a paged walk nor does it consider the `page_token` in the returned response.
This change walks all the pages and aggregates the records in the pages and returns the aggregated list. This is potentially dangerous as it can blow up memory if there are a huge number of records in the given managed zone. But this is the best we can do without changing the provider interface too much.
Next step is to define a new paged list interface and implement it.
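A hedged sketch of the paged walk (the record and page types and the `listPage` helper are illustrative, not the actual Google Cloud DNS client API): keep requesting pages until no next-page token is returned, aggregating the records along the way.

```go
package main

// record and recordPage stand in for the provider's resource record set
// types in this sketch.
type record struct{ Name, Type string }

type recordPage struct {
	Records       []record
	NextPageToken string
}

// listAllRecords walks every page of results for a managed zone and
// aggregates the records, instead of returning only the first page.
func listAllRecords(listPage func(zone, pageToken string) (recordPage, error), zone string) ([]record, error) {
	var all []record
	token := ""
	for {
		page, err := listPage(zone, token)
		if err != nil {
			return nil, err
		}
		all = append(all, page.Records...)
		if page.NextPageToken == "" {
			return all, nil
		}
		token = page.NextPageToken
	}
}
```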
**Release note**:
```release-note
NONE
```
/assign @csbell
cc @justinsb @shashidharatd @quinton-hoole @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue
apimachinery: NotRegisteredErr for known kinds not registered in target GV
Fixes the fall back to core v1 for *Options in the parameter encoder of the dynamic client.
The dynamic client uses NotRegisteredErr to fall back to core v1 if ListOptions is not known
in the given GV. This commit fixes the case that ListOptions is known in some group, but not
in the given one.
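Schematically, the fallback looks like this (a simplified sketch under the assumption that `runtime.IsNotRegisteredError` is the check in use; the helper is illustrative, not the dynamic client's actual code):

```go
package main

import "k8s.io/apimachinery/pkg/runtime"

// encodeWithFallback tries to encode obj (e.g. ListOptions) with the
// requested group-version's encoder and falls back to a core-v1 encoder
// when the kind is not registered in that group-version.
func encodeWithFallback(primary, coreV1 runtime.Encoder, obj runtime.Object) ([]byte, error) {
	data, err := runtime.Encode(primary, obj)
	if runtime.IsNotRegisteredError(err) {
		return runtime.Encode(coreV1, obj)
	}
	return data, err
}
```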
Automatic merge from submit-queue
small change to view more test info
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Small change to view more test info, thank you very much.
**Release note**:
```release-note
```
When we fetch the dns records by name, we get a list of records that match
the given name. As an optimization we look up to see if the new record we
want to create is already in the returned list to avoid performing any updates.
However, when the new record we want to create isn't in the returned list, it
is hard to say if the returned list contains the list of records that we want
to retain. For example, we might get a list of A records and we want to create
a CNAME record. Creating a new CNAME record without removing the A records is
a DNS misconfiguration. So to play safe we just remove all the existing records
in the list and create the new desired record.
**Note**: This is the opposite of what I said here - https://reviewable.kubernetes.io/reviews/kubernetes/kubernetes/44626#-Ki9xQOzybryHvsxNrra.
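A hedged sketch of the resulting update rule (types and helpers are illustrative, not the actual dnsprovider interface): if the desired record is already present, do nothing; otherwise delete every record returned for that name and create the desired one.

```go
package main

// dnsRecord stands in for a resource record set in this sketch.
type dnsRecord struct {
	Name, Type, Rrdata string
	TTL                int64
}

// ensureRecord makes the DNS name resolve to exactly the desired record.
// Mixing record types under one name (e.g. A alongside CNAME) would be a
// misconfiguration, so any existing records that are not the desired one
// are removed before the desired record is created.
func ensureRecord(existing []dnsRecord, desired dnsRecord,
	del func(dnsRecord) error, create func(dnsRecord) error) error {
	for _, rec := range existing {
		if rec == desired {
			return nil // already in the desired state
		}
	}
	for _, rec := range existing {
		if err := del(rec); err != nil {
			return err
		}
	}
	return create(desired)
}
```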
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)
Rename vars scheduledJob to cronJob in describe.go
**What this PR does / why we need it**:
Rename vars scheduledJob to cronJob in describe.go
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
There might still be some leftovers in other places.
@soltysh
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)
Fix BY() format
**What this PR does / why we need it**:
I read the other By() calls; this is just a formatting fix. Thank you.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)
Ignore openrc cgroup
**What this PR does / why we need it**:
It is a work-around for the following: https://github.com/opencontainers/runc/issues/1440
**Special notes for your reviewer**:
I am open to a cleaner way to do this, but we have many developers on Macs who run containerized kubelets and cannot run them right now because the inclusion of openrc trips up our existence checks. Ideally, runc could give us a call to say "does this exist according to what runc knows about". Or we could add a whitelist check. Right now, this was the smallest hack pending more discussion.
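One possible shape of such a check, purely for illustration (the helper and the ignore list are assumptions, not the code in this PR): filter out the openrc entry before running the existence check.

```go
package main

// filterIgnoredSubsystems drops cgroup entries that should not trip the
// kubelet's existence checks. "openrc" can show up in containerized
// environments but is not a subsystem the kubelet manages.
func filterIgnoredSubsystems(subsystems []string) []string {
	ignored := map[string]bool{"openrc": true}
	out := make([]string, 0, len(subsystems))
	for _, s := range subsystems {
		if !ignored[s] {
			out = append(out, s)
		}
	}
	return out
}
```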
Automatic merge from submit-queue (batch tested with PRs 45556, 45561, 45256)
[Federation] Replace the indexing lister with a regular store in the replicaset controller
This is part of the refactoring work to allow the replicaset controller to use the generic sync controller.
None of the other controllers, including the deployment controller, uses a lister.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45556, 45561, 45256)
add defaulting for customresources
This adds the promised defaulting for custom resources. Namespaced by default, listkind=kind+List, singular=toLower(kind).
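A hedged sketch of that defaulting (the struct is a trimmed-down stand-in, not the apiextensions types themselves):

```go
package main

import "strings"

// crdSpec is a trimmed-down stand-in for the CustomResourceDefinition spec
// fields that this defaulting touches.
type crdSpec struct {
	Scope    string
	Kind     string
	ListKind string
	Singular string
}

// setDefaults fills in the promised defaults: namespaced scope, ListKind
// derived from Kind, and a lower-cased singular name.
func setDefaults(spec *crdSpec) {
	if spec.Scope == "" {
		spec.Scope = "Namespaced"
	}
	if spec.ListKind == "" {
		spec.ListKind = spec.Kind + "List"
	}
	if spec.Singular == "" {
		spec.Singular = strings.ToLower(spec.Kind)
	}
}
```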
Automatic merge from submit-queue
add validation for customresourcedefinitions
Add basic validation for custom resource definitions.
@adohe if you had review bandwidth, this is a relatively small one.
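A minimal sketch of what "basic validation" might cover (the specific checks are assumptions for illustration, not the exact rule set in this PR): required fields must be set and names must be well formed.

```go
package main

import (
	"fmt"
	"strings"
)

// validateCRD performs a few illustrative structural checks on a trimmed
// CustomResourceDefinition-like spec: group, version, and names must be
// present, and the plural name should be lower case.
func validateCRD(group, version, kind, plural string) error {
	if group == "" {
		return fmt.Errorf("spec.group is required")
	}
	if version == "" {
		return fmt.Errorf("spec.version is required")
	}
	if kind == "" || plural == "" {
		return fmt.Errorf("spec.names.kind and spec.names.plural are required")
	}
	if plural != strings.ToLower(plural) {
		return fmt.Errorf("spec.names.plural must be lower case")
	}
	return nil
}
```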
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Don't append :443 to registry domain in the kubernetes-worker layer registry action
**What this PR does / why we need it**: Fixes #45547
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45547
**Special notes for your reviewer**:
**Release note**:
```
Fix #45547 - don't append :443 to juju created docker registry config
```
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Minor bug fix in start-kubemark-master script
cc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Enable kernel memcg notification for node and cluster GCI/COS testing.
Sets --experimental-kernel-memcg-notification=true when running on the GCI/COS image. It sets this for the master and nodes in cluster e2e tests, and for the node in node e2e tests.
Issue #42676
cc @dchen1107 @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Fixing VolumesAreAttached and DisksAreAttached functions in vSphere
**What this PR does / why we need it**:
In vSphere HA, when node failover happens, the node VM momentarily goes into a "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we were returning an incorrect map, with the status for each volume set to false (detached state).
Volumes attached to the previous node need to be detached before they can attach to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or for any other reason we cannot determine whether a disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This caused the volumes to remain attached to the previous node, and pod creation stayed stuck in the "ContainerCreating" state. Since both nodes are powered on, the volumes cannot be attached to the new node.
**Logs before fix**
```
{"log":"E0508 21:31:20.902501 1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552 1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575 1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596 1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```
In this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
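A hedged sketch of the changed behavior (helper names are illustrative, not the actual vSphere cloud provider code): any error while checking a disk's status now aborts with a nil map instead of silently recording the volume as detached.

```go
package main

// volumesAreAttachedSketch illustrates the fix: if attachment cannot be
// determined for any volume, return a nil map and the error, so callers do
// not misread "unknown" as "detached".
func volumesAreAttachedSketch(volPaths []string,
	diskIsAttached func(path string) (bool, error)) (map[string]bool, error) {
	attached := make(map[string]bool)
	for _, path := range volPaths {
		ok, err := diskIsAttached(path)
		if err != nil {
			return nil, err // previously: attached[path] = false, masking the error
		}
		attached[path] = ok
	}
	return attached, nil
}
```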
**Logs after fix**
```
{"log":"E0509 20:25:37.982152 1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190 1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220 1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```
```
{"log":"I0509 20:25:41.267425 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}
{"log":"I0509 20:26:12.535962 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}
{"log":"I0509 20:26:40.355656 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}
```
**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116
**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty
**Performed many failovers with large volumes (30GB) attached to the pod.**
$ kubectl describe pod
Name: wordpress-mysql-2789807967-3xcvc
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-qx0b0
Node: node2/172.1.9.0
Status: Running
Powered off node2's host. Pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-7849s
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-26lp1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-4pdtl
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-t375f
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-pn6ps
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. Pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-0wqc1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. Pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-821nc
Node: node3/172.1.87.0
Status: Running
**Release note**:
```release-note
NONE
```
CC: @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
Automatic merge from submit-queue
Remove the deprecated `--enable-cri` flag
Except for rkt, CRI is the default and only integration point for
container runtimes.
```release-note
Remove the deprecated `--enable-cri` flag. CRI is now the default,
and the only way for container runtimes to integrate with the kubelet.
```
Automatic merge from submit-queue
util.go: format for
**What this PR does / why we need it**:
Format the `for` statements, remove the redundancy, and make the code cleaner.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
Make SchedulerPredicates test more resilient to recent Node restarts
cc @kubernetes/sig-scheduling-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0.
fixes #38633
Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally it would be a boolean, but for backward compatibility it is left as any non-empty value.
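Schematically, the relaxed interpretation looks like this (a sketch; the annotation key is the real one, the surrounding code is illustrative):

```go
package main

// awsLoadBalancerInternalAnnotation is the service annotation that marks an
// ELB as internal.
const awsLoadBalancerInternalAnnotation = "service.beta.kubernetes.io/aws-load-balancer-internal"

// isInternalELB treats any non-empty annotation value as "internal" for
// backward compatibility, instead of requiring the literal "0.0.0.0/0".
func isInternalELB(annotations map[string]string) bool {
	return annotations[awsLoadBalancerInternalAnnotation] != ""
}
```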
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
Move rest of performance data gathered by tests to Summaries
cc @shyamjvs
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
Fix bug in hollow-node deletion in stop-kubemark script
Just noticed.
cc @wojtek-t @gmarek
Automatic merge from submit-queue
Fix ip-alias testing.
IP aliases are an alpha feature, and node accelerators are a beta
feature. $gcloud determines which is appropriate.
Before, this would try to run "gcloud alpha beta", which is incoherent.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
Do roundtrip testing with external kinds in client-go TPR example
This tests that our serialization machinery works for TPR types, i.e. types without an internal counterpart and without generated code.
/cc @nilebox
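As a rough illustration of what a roundtrip check for such an external-only type amounts to (simplified to plain JSON here; the example type is an assumption, and the real test goes through the client-go serialization machinery):

```go
package main

import (
	"encoding/json"
	"reflect"
	"testing"
)

// Example is a stand-in for a TPR-style external type with no internal
// counterpart and no generated conversion or deep-copy code.
type Example struct {
	Spec string `json:"spec"`
}

// TestRoundTrip encodes the object, decodes it back, and requires the
// result to be deeply equal to the original.
func TestRoundTrip(t *testing.T) {
	original := Example{Spec: "hello"}
	data, err := json.Marshal(original)
	if err != nil {
		t.Fatal(err)
	}
	var decoded Example
	if err := json.Unmarshal(data, &decoded); err != nil {
		t.Fatal(err)
	}
	if !reflect.DeepEqual(original, decoded) {
		t.Fatalf("roundtrip mismatch: got %+v, want %+v", decoded, original)
	}
}
```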