Automatic merge from submit-queue
Fix hardcoded tmp dir path in kubectl test.
**What this PR does / why we need it**:
The current test case uses a hardcoded tmp dir path, and it does not delete the tmp dir after the test run.
This means: 1. the case cannot be run by different users (no permission); 2. the /tmp dir keeps growing.
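A minimal sketch of the usual fix (names illustrative, not the PR's exact code): create a unique, per-run temp dir and remove it when the test finishes.

```go
package kubectl_test

import (
	"io/ioutil"
	"os"
	"testing"
)

func TestWithTempDir(t *testing.T) {
	// Unique, user-writable dir per run instead of a hardcoded /tmp path.
	dir, err := ioutil.TempDir("", "kubectl-test")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	// Make sure the dir is removed, so /tmp does not keep growing.
	defer os.RemoveAll(dir)
	// ... run the test against files under dir ...
}
```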
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
When a volume's status is 'attaching', its attachments will be None, so the controllermanager can't get the device path and emits spurious failure events.
But this state is normal; let's fix the spurious events.
Automatic merge from submit-queue (batch tested with PRs 45691, 45667, 45698, 45715)
testName to head
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Putting testName at the head may make it quicker to locate the failing test.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 45684, 45266, 45669, 44787, 44984)
Fix XDG-based kubectl plugin dirs
XDGDataPluginLoader messed up its default-value handling for `XDG_DATA_DIRS` and ended up scanning *all of /usr/share* looking for plugins if you don't have that variable set :-O
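A sketch of the intended default handling (function name hypothetical; the XDG spec default for `XDG_DATA_DIRS` is `/usr/local/share:/usr/share`): only fall back when the variable is unset, and always scan a plugins subdirectory rather than the data dirs themselves.

```go
package pluginloader

import (
	"os"
	"path/filepath"
)

// xdgPluginDirs is a hypothetical illustration of the fixed behavior.
func xdgPluginDirs() []string {
	dataDirs := os.Getenv("XDG_DATA_DIRS")
	if dataDirs == "" {
		dataDirs = "/usr/local/share:/usr/share" // XDG spec default
	}
	var dirs []string
	for _, dir := range filepath.SplitList(dataDirs) {
		// Scan only the kubectl plugins subdirectory, never the data dir itself.
		dirs = append(dirs, filepath.Join(dir, "kubectl", "plugins"))
	}
	return dirs
}
```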
/release-note-none
/assign @fabianofranz
Automatic merge from submit-queue (batch tested with PRs 45684, 45266, 45669, 44787, 44984)
[CRI] Return success if ImageNotFound in RemoveImage()
Signed-off-by: Crazykev <crazykev@zju.edu.cn>
**What this PR does / why we need it**:
**Sorry for closing the [old one](https://github.com/kubernetes/kubernetes/pull/44381) by mistake; rebased and moved here.**
RemoveImage() operation should be idempotent, [ref](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/api/v1alpha1/runtime/api.proto#L89-L92)
@feiskyer @Random-Liu PTAL
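A minimal sketch of the idempotent contract (types and the not-found error are stand-ins, not the real CRI shim code): if the image is already gone, the desired end state holds, so report success.

```go
package images

import "errors"

// errImageNotFound stands in for the runtime's "image not found" error.
var errImageNotFound = errors.New("image not found")

type client interface {
	RemoveImage(ref string) error
}

type imageService struct{ c client }

// RemoveImage is idempotent: if the image is already absent, the desired
// end state already holds, so we report success instead of an error.
func (s *imageService) RemoveImage(ref string) error {
	if err := s.c.RemoveImage(ref); err != nil && err != errImageNotFound {
		return err
	}
	return nil
}
```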
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45571, 45657, 45638, 45663, 45622)
Use real proxier inside hollow-proxy but with mocked syscalls
Fixes https://github.com/kubernetes/kubernetes/issues/43701
This should make hollow-proxy better mimic the real kube-proxy in performance.
Maybe next we should have a more realistic implementation even for fake iptables (adding/updating/deleting rules/chains in a table, just not in the real one)? Though I'm not sure how important that is.
cc @kubernetes/sig-scalability-misc @kubernetes/sig-network-misc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 45571, 45657, 45638, 45663, 45622)
rkt: Improve the Garbage Collection
**What this PR does / why we need it**:
This PR improves the garbage collection of files written inside `/var/lib/kubelet/pods/<pod: id>`.
It removes the `finished-<pod: id>` file touched during the `ExecStopPost` of the systemd unit.
It also removes the `/dev/termination-log` file mounted into containers.
The termination log is used to produce a message from the container and is collected by the kubelet when the Pod stops.
Especially for the termination log, removing these files frees the associated space on the filesystem.
**Release note**:
`NONE`
Automatic merge from submit-queue
Fix AssertCalls usage for kubelet fake runtimes unit tests
Despite its name, AssertCalls() does not assert anything. It returns an error that should be checked. This was causing false negatives for a handful of unit tests, which are also fixed here.
Tests for the image manager needed to be rearranged in order to accommodate a potentially different sequence of calls each tick because the image puller changes behavior based on prior errors.
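A small sketch of the fix (fake names assumed; the real fakes live in the kubelet test packages): the error returned by AssertCalls must be checked inside the test.

```go
package kubelet_test

import (
	"fmt"
	"reflect"
	"testing"
)

// fakeRuntime mimics the fakes' contract: AssertCalls returns an error
// describing any mismatch; it does not fail the test by itself.
type fakeRuntime struct{ calls []string }

func (f *fakeRuntime) AssertCalls(expected []string) error {
	if !reflect.DeepEqual(expected, f.calls) {
		return fmt.Errorf("expected calls %v, got %v", expected, f.calls)
	}
	return nil
}

func TestCallSequence(t *testing.T) {
	f := &fakeRuntime{calls: []string{"PullImage"}}
	// The fix: actually check the returned error.
	if err := f.AssertCalls([]string{"PullImage"}); err != nil {
		t.Error(err)
	}
}
```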
**What this PR does / why we need it**: Fixes broken unit tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
detach the volume when pod is terminated
When pods are terminated we should detach the volume.
Fixes https://github.com/kubernetes/kubernetes/issues/45191
**Release note**:
```release-note
Detach the volume when pods are terminated.
```
Automatic merge from submit-queue
orphan when kubectl delete --cascade=false
The default for new objects is to propagate deletes (use GC) when no deleteoptions are passed. In addition, the vast majority of kube objects use this default. Only a few controller resources (sts, rc, deploy, jobs, rs) orphan by default. This means that when you do `kubectl delete sa/foo --cascade=false` you do *not* orphan. That doesn't fulfill the intent of the command. This change explicitly orphans when `--cascade=false` is set, so we don't use GC.
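A sketch of what this amounts to in client terms (client-go of that era; names assumed): explicitly request orphaning instead of relying on the server-side default.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithoutCascade deletes an object while explicitly orphaning its
// dependents, mirroring the intent of `kubectl delete --cascade=false`.
func deleteWithoutCascade(c kubernetes.Interface, ns, name string) error {
	orphan := true
	return c.CoreV1().ServiceAccounts(ns).Delete(name, &metav1.DeleteOptions{
		OrphanDependents: &orphan, // do not let the GC cascade the delete
	})
}
```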
@fabianofranz
@jwforres I liked this easter egg :)
@kubernetes/sig-cli-bugs we should backport this to 1.6
change import of client-go/api/helper to kubernetes/api/helper
remove unnecessary use of client-go/api.registry
change use of client-go/pkg/util to kubernetes/pkg/util
remove dependency on client-go/pkg/apis/extensions
remove unnecessary invocation of k8s.io/client-go/extension/install
change use of k8s.io/client-go/pkg/apis/authentication to v1
Automatic merge from submit-queue
Improved code coverage for pkg/kubelet/types/labels
The test coverage improved from 0% to 100%.
This fixes part of #40780.
**What this PR does / why we need it**:
Increase test coverage.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
release-note-none
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45634, 45480)
Rename vars scheduledJob to cronJob in describe.go
**What this PR does / why we need it**:
Rename vars scheduledJob to cronJob in describe.go
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
There might still be some leftovers in other places.
@soltysh
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45515, 45579)
Ignore openrc cgroup
**What this PR does / why we need it**:
It is a work-around for the following: https://github.com/opencontainers/runc/issues/1440
**Special notes for your reviewer**:
I am open to a cleaner way to do this, but we have many developer users on Macs who run containerized kubelets and are currently unable to run them, because the inclusion of openrc trips up our existence checks. Ideally, runc could give us a call to ask "does this exist according to what runc knows about". Or we could add a whitelist check. Right now, this was the smallest hack pending more discussion.
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Fixing VolumesAreAttached and DisksAreAttached functions in vSphere
**What this PR does / why we need it**:
In vSphere HA, when a node fail-over happens, the node VM momentarily goes into a “not connected” state. During this time, if Kubernetes calls the VolumesAreAttached function, we return an incorrect map, with the status for each volume set to false - the detached state.
Volumes attached to the previous node need to be detached before they can attach to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or we cannot for any reason determine whether a disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes the volumes to remain attached to the previous node, and pod creation stays stuck in the “ContainerCreating” state. Since both nodes are powered on, the volumes cannot be attached to the new node.
**Logs before fix**
```
{"log":"E0508 21:31:20.902501 1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552 1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575 1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596 1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```
With this change, we make sure the correct volume attachment map is returned, and if any error occurs while checking a disk's status, we return a nil map.
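A sketch of that contract (function shape assumed, not the exact vSphere code): on any verification error, return a nil map plus the error, never a map that guesses "detached".

```go
package vsphere

// diskChecker abstracts the "is this disk attached to this node?" query.
type diskChecker func(volPath, nodeName string) (bool, error)

// disksAreAttached returns a nil map on any error so callers cannot
// mistake "we could not check" for "the disk is detached".
func disksAreAttached(check diskChecker, volPaths []string, node string) (map[string]bool, error) {
	attached := make(map[string]bool)
	for _, p := range volPaths {
		ok, err := check(p, node)
		if err != nil {
			return nil, err // do not return a half-built, misleading map
		}
		attached[p] = ok
	}
	return attached, nil
}
```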
**Logs after fix**
```
{"log":"E0509 20:25:37.982152 1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190 1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220 1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```
```
{"log":"I0509 20:25:41.267425 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}
{"log":"I0509 20:26:12.535962 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}
{"log":"I0509 20:26:40.355656 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}
```
**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116
**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty
**Performed many failovers with large volumes (30GB) attached to the pod.**
$ kubectl describe pod
Name: wordpress-mysql-2789807967-3xcvc
Node: node3/172.1.87.0
Status: Running
Powered Off node3's host. pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-qx0b0
Node: node2/172.1.9.0
Status: Running
Powered Off node2's host. pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-7849s
Node: node3/172.1.87.0
Status: Running
Powered Off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-26lp1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-4pdtl
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-t375f
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-pn6ps
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-0wqc1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-821nc
Node: node3/172.1.87.0
Status: Running
**Release note**:
```release-note
NONE
```
CC: @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
Automatic merge from submit-queue
Remove the deprecated `--enable-cri` flag
Except for rkt, CRI is the default and only integration point for
container runtimes.
```release-note
Remove the deprecated `--enable-cri` flag. CRI is now the default,
and the only way to integrate with kubelet for the container runtimes.
```
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0.
fixes #38633
Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also, remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally, it should be a boolean, but for backward compatibility we leave it as any non-empty value.
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
azure: improve user agent string
**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: none
**Release note**:
```release-note
NONE
```
cc: @brendandburns
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
Ensure desired state of world populator runs before volume reconstructor
If the kubelet's volumemanager reconstructor for the actual state of world runs before the desired state of world has been populated, the pods in the actual state of world will have some incorrect volume information: namely outerVolumeSpecName, which, if incorrect, leads to part of the issue in https://github.com/kubernetes/kubernetes/issues/43515, because WaitForVolumeAttachAndMount searches the actual state of world with the correct outerVolumeSpecName, won't find it, and so reports 'timeout waiting....', etc. forever for existing pods. The comments acknowledge that this is a known issue.
The all-sources-ready check doesn't work because the sources being ready doesn't necessarily mean the desired state of world populator has added pods from those sources. So instead, let's put the all-sources-ready check in the *populator*: when the sources are ready, it will be able to populate the desired state of world and make "HasAddedPods()" return true. THEN, the reconstructor may run.
@jingxu97 PTAL, you wrote all of the reconstruction stuff
```release-note
NONE
```
Automatic merge from submit-queue
Edge based winuserspace proxy
Last PR in the series of making kube-proxy event-based.
This is a sibling PR to https://github.com/kubernetes/kubernetes/pull/45356 that is already merged.
The second commit is removing the code that is no longer used.
Automatic merge from submit-queue
Enable shared PID namespace by default for docker pods
**What this PR does / why we need it**: This PR enables PID namespace sharing for docker pods by default, bringing the behavior of docker in line with the other CRI runtimes when used with docker >= 1.13.1.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #1615
**Special notes for your reviewer**: cc @dchen1107 @yujuhong
**Release note**:
```release-note
Kubernetes now shares a single PID namespace among all containers in a pod when running with docker >= 1.13.1. This means processes can now signal processes in other containers in a pod, but it also means that the `kubectl exec {pod} kill 1` pattern will cause the pod to be restarted rather than a single container.
```
Automatic merge from submit-queue
azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity
**What this PR does / why we need it**:
1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support for the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683
**Special notes for your reviewer**: n/a
**Release note**:
```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
Automatic merge from submit-queue
Add support for PodPreset in `kubectl get` command
**What this PR does / why we need it**:
PR title
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44736
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)
Migrate the docker client code from dockertools to dockershim
Move docker client code from dockertools to dockershim/libdocker. This includes
DockerInterface (renamed to Interface), FakeDockerClient, etc.
This is part of #43234
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)
Init cache with assigned non-terminated pods before scheduling
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45220
**Release note**:
```release-note
The fix makes the scheduling goroutine wait for the cache (e.g., of Pods) to be synced.
```
Automatic merge from submit-queue
Filter out IPV6 addresses from NodeAddresses() returned by vSphere
The vSphere CP returns both IPV6 and IPV4 addresses for a Node as part of NodeAddresses() implementation. However, Kubelet fails due to duplicate api.NodeAddress value when the node has an IPV6 address associated with it. This issue is tracked in #42690. The following are observed:
- when we enabled the logs and checked the addresses sent by vSphere CP to Kubelet, we don't see any duplicate addresses at all.
- Also, kubelet_node_status doesn’t receive any duplicate address from cloud provider.
However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine.
Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue in how the kubelet propagates these addresses to the API server, or the API server may be unable to handle IPV6 addresses.
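A sketch of the work-around (v1 import path is today's; the code of that era used pkg/api/v1): keep only IPv4 entries in what NodeAddresses() returns.

```go
package vsphere

import (
	"net"

	v1 "k8s.io/api/core/v1"
)

// filterIPv4Addresses drops IPv6 entries from a NodeAddresses result.
func filterIPv4Addresses(addrs []v1.NodeAddress) []v1.NodeAddress {
	var out []v1.NodeAddress
	for _, a := range addrs {
		if ip := net.ParseIP(a.Address); ip != nil && ip.To4() != nil {
			out = append(out, a) // IPv4 only
		}
	}
	return out
}
```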
@divyenpatel @abrarshivani @pdhamdhere @tusharnt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
rkt: Generate a new Network Namespace for each Pod
**What this PR does / why we need it**:
This PR concerns the Kubelet with the Container runtime rkt.
Currently, when a Pod stops and the kubelet restarts it, the Pod will use the **same network namespace**, based on its PodID.
When the Garbage Collection is triggered, it deletes all the old resources along with the current network namespace.
The Pod and all containers inside it lose the _eth0_ interface.
I explained in more detail in #45149 how to reproduce this behavior.
This PR generates a new, unique network namespace name for each new/restarting Pod.
The Garbage Collection retrieves the correct network namespace and removes it safely.
**Which issue this PR fixes** :
fixes #45149
**Special notes for your reviewer**:
Following @yifan-gu guidelines, so maybe expecting him for the final review.
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 45304, 45006, 45527)
increase the QPS for namespace controller
The namespace controller is really chatty, especially to discovery, since that involves two requests for every API version available. This bumps the QPS and burst on the namespace controller to avoid being stuck waiting.
Automatic merge from submit-queue (batch tested with PRs 44798, 45537, 45448, 45432)
nfs.go: cleancode err
**What this PR does / why we need it**:
The modification makes code clean, simple, and easy to inspect.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones
**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storageclass, the cinder volume is provisioned to a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones. Currently this is not possible with statefulsets and the cinder storageclass. With this new solution, if we leave the zone empty, the algorithm will choose the zone for the cinder drive in a similar style to the aws and gce storageclass solutions.
**Which issue this PR fixes** fixes #44735
**Special notes for your reviewer**:
example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```
If this example is deployed it will automatically create one replica per AZ. This helps us a lot making HA databases.
Current storageclass for cinder is not perfect in case of statefulsets. Lets assume that cinder storageclass is defined to be in zone called nova, but because labels are not added to pv - pods can be started in any zone. The problem is that at least in our openstack it is not possible to use cinder drive located in zone x from zone y. However, should we have possibility to choose between cross-zone cinder mounts or not? Imo it is not good way of doing things that they mount volume from another zone where the pod is located(means more network traffic between zones)? What you think? Current new solution does not allow that anymore (should we have possibility to allow it? it means removing the labels from pv).
There might be some things that needs to be fixed still in this release and I need help for that. Some parts of the code is not perfect.
Issues I am thinking about (I need some help with these):
1) Can everybody in openstack see what AZ their servers are in? Can there be an access policy that hides that? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really needed to make a new serviceclient using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from some openstack tenant (project). However, in some cases kubernetes is maybe deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, a cinder drive might be provisioned to zone-2 but the pod cannot start, because kubernetes does not have any nodes in zone-2. Could we have a better way to fetch the zones of the kubernetes nodes? Currently that information is not added to kubernetes node labels automatically in openstack (which it should be, I think). I have added those labels manually to the nodes. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot target them.
cc @rootfs @anguslees @jsafrane
```release-note
The default behaviour of the cinder storageclass is changed. If availability is not specified, the zone is chosen by an algorithm. This makes it possible to spread stateful pods across many zones.
```
Automatic merge from submit-queue
apiserver: injectable default watch cache size
This makes it possible to override the default watch capacity in the REST options getter. Before this PR, the default was written into the storage struct explicitly, and if it was the default, the REST options getter didn't know. With this PR, the default is applied late and can be injected from the outside.
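A sketch of the pattern (names hypothetical): keep the per-resource size as a pointer whose nil means "unset", so the getter can apply its injected default late and still distinguish "unset" from an explicit value.

```go
package registry

// storageConfig's WatchCacheSize is nil while "unset", so the default is
// not baked into the struct up front and can be applied late.
type storageConfig struct {
	WatchCacheSize *int
}

type restOptionsGetter struct {
	defaultWatchCacheSize int // injectable from the outside
}

func (g *restOptionsGetter) effectiveWatchCacheSize(c storageConfig) int {
	if c.WatchCacheSize != nil {
		return *c.WatchCacheSize // explicit override wins
	}
	return g.defaultWatchCacheSize // default applied late
}
```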
Automatic merge from submit-queue
Remove leaked tmp file in unit tests
Some unit tests leave a temp file in the work space:
pkg/util/iptables/xtables.lock
This patch removes that file.
@dcbw
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
add rootfs gnufied and childsb to volume approver
**What this PR does / why we need it**:
add me, @gnufied, and @childsb as volume approvers
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Fixes support for multiple instances of loadBalancerSourceRanges.
Previously, the names of the rules for each address range conflicted,
causing only one to be applied. Now each gets a unique name.
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)
Add exponential backoff to openstack loadbalancer functions
Using exponential backoff to lower openstack load and reduce API call throttling
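A sketch of the pattern using apimachinery's wait package (the backoff parameters here are illustrative, not the PR's values): retry the OpenStack call with exponentially growing pauses instead of hammering the API.

```go
package openstack

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// withBackoff retries doUpdate with exponentially growing pauses so we
// stop hammering the OpenStack API and tripping its rate limits.
func withBackoff(doUpdate func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first pause
		Factor:   2.0,                    // double each retry
		Steps:    5,                      // give up after 5 attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := doUpdate(); err != nil {
			return false, nil // not done; retry after the next pause
		}
		return true, nil // success
	})
}
```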
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)
Clean up for qos.go
**What this PR does / why we need it**:
Seems we are not using any of those functions.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39148
**Release note**:
```release-note
A small clean up to remove unnecessary functions.
```
Automatic merge from submit-queue (batch tested with PRs 45200, 45203)
Allow certificate manager to be initialized with no certs.
Adds support to the certificate manager so it can be initialized with no
certs and only a connection to the certificate request signing API. This
specifically covers the scenario for the kubelet server certificate,
where there is a request signing client but on first boot there is no
bootstrapping or local certs.
Automatic merge from submit-queue (batch tested with PRs 45508, 44258, 44126, 45441, 45320)
Print a newline after ginkgo tests so the test infra doesn't think th…
Fixes #45279
Print a newline after ginkgo tests so the test infra doesn't think that they fail
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45508, 44258, 44126, 45441, 45320)
Use existing global var criSupportedLogDrivers
**What this PR does / why we need it**:
Use existing global var `criSupportedLogDrivers` defined in docker_service.go. If CRI supports other log drivers in the future, we will only need to modify that global var.
cc @Random-Liu
Automatic merge from submit-queue (batch tested with PRs 45508, 44258, 44126, 45441, 45320)
cloud initialize node in external cloud controller
@thockin This PR adds support in the `cloud-controller-manager` to initialize nodes (instead of kubelet, which did it previously)
This also adds support in the kubelet to skip node cloud initialization when `--cloud-provider=external`
Specifically,
Kubelet
1. The kubelet has a new flag called `--provider-id` which uniquely identifies a node in an external DB
2. The kubelet sets a node taint - called "ExternalCloudProvider=true:NoSchedule" if cloudprovider == "external"
Cloud-Controller-Manager
1. The cloud-controller-manager listens for "AddNode" events and then processes nodes that have the above taint. It performs the cloud node initialization steps that were previously done by the kubelet.
2. On addition of a node, it figures out the zone, region, and instance-type, removes the above taint, and updates the node.
3. It then periodically queries the cloudprovider for node addresses (which was previously done by the kubelet) and updates the node if there are new addresses.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)
Adds a helper to convert componentconfig into a configmap
**What this PR does / why we need it**:
Adds a utility function that will be used by self-hosted components such as `kubeadm` but is also a step towards https://github.com/kubernetes/kubernetes/issues/44857
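A rough sketch of such a helper (not the PR's exact signature; the yaml import path is today's, used here as an assumption): serialize the component config and wrap it in a ConfigMap.

```go
package componentconfig

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// configToConfigMap marshals an arbitrary component configuration to YAML
// and stores it under a single key, so a self-hosted component such as
// kubeadm can publish or consume it as a ConfigMap.
func configToConfigMap(name string, cfg interface{}) (*v1.ConfigMap, error) {
	data, err := yaml.Marshal(cfg)
	if err != nil {
		return nil, err
	}
	return &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Data:       map[string]string{"config.yaml": string(data)},
	}, nil
}
```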
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews @bsalamat
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)
Display <none> when port is empty.
**What this PR does / why we need it**:
If container ports are not specified, `kubectl describe` displays `<none>` instead of empty.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)
Fetch VM UUID from - /sys/class/dmi/id/product_serial
**What this PR does / why we need it**:
The current code fetches the VM UUID using the UUID reported at `/sys/devices/virtual/dmi/id/product_uuid`. This doesn't work on all distros, such as Ubuntu 16.04 and Fedora.
This updates the code to fetch the VM UUID from `/sys/class/dmi/id/product_serial`.
**Which issue this PR fixes**
fixes #
**Special notes for your reviewer**:
Verified the UUID matches the VM UUID on Ubuntu 16.04, CentOS 7.3, and Photon OS.
@BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44727, 45409, 44968, 45122, 45493)
Separate healthz server from metrics server in kube-proxy
From #14661, proposal is on kubernetes/community#552.
A couple of bullet points, as in the commit:
- /healthz will be served on 0.0.0.0:10256 by default.
- /metrics and /proxyMode will be served on port 10249 as before.
- Healthz handler will verify timestamp in iptables mode.
/assign @nicksardo @bowei @thockin
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
adds log when gpuManager.start() failed
If gpuManager.start() returns an error, there is no log.
We were confused when the scheduler did not schedule any pods (with GPU) to one node.
`kubectl describe node xxx` showed there was no GPU on that node: the GPU driver did not work there and gpuManager.start() failed, but we could not see anything in the log.
Automatic merge from submit-queue
Remove unnecessary constants and add type to secret
**What this PR does / why we need it**:
Adds the type field to the secret for the `persistent-volume-provisioning` example of Quobyte. Also removes unnecessary constants in the Quobyte code base.
FYI
@rootfs @saad-ali @quolix
Automatic merge from submit-queue
refactor names for the apiserver handling chain
The names and structure around the handling chain got a bit confused. This simplifies it back out into a single struct with three parts: overall handler, gorestful handler, and pathrecording mux, and makes the delegate wiring simpler.
Automatic merge from submit-queue
Clean up petset
**What this PR does / why we need it**:
Rename legacy petset to statefulset.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
util/iptables: grab iptables locks if iptables-restore doesn't support --wait
When iptables-restore doesn't support --wait (which < 1.6.2 don't), it may
conflict with other iptables users on the system, like docker, because it
doesn't acquire the iptables lock before changing iptables rules. This causes
sporadic docker failures when starting containers.
To ensure those don't happen, essentially duplicate the iptables locking
logic inside util/iptables when we know iptables-restore doesn't support
the --wait option.
Unfortunately iptables uses two different locking mechanisms, one until
1.4.x (abstract socket based) and another from 1.6.x (/run/xtables.lock
flock() based). We have to grab both locks, because we don't know what
version of iptables-restore exists since iptables-restore doesn't have
a --version option before 1.6.2. Plus, distros (like RHEL) backport the
/run/xtables.lock patch to 1.4.x versions.
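A minimal sketch of grabbing both locks before running iptables-restore without --wait (assuming golang.org/x/sys/unix; Go's net package maps a leading "@" to a Linux abstract-namespace socket):

```go
package iptables

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/sys/unix"
)

const xtablesLockFile = "/run/xtables.lock"

// grabIptablesLocks takes both locks iptables may use: the 1.6.x-style
// flock() on /run/xtables.lock and the 1.4.x-style abstract unix socket.
// The caller runs iptables-restore, then calls the returned release func.
func grabIptablesLocks() (release func(), err error) {
	f, err := os.OpenFile(xtablesLockFile, os.O_CREATE, 0600)
	if err != nil {
		return nil, fmt.Errorf("open %s: %v", xtablesLockFile, err)
	}
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		f.Close()
		return nil, fmt.Errorf("flock %s: %v", xtablesLockFile, err)
	}
	// "@" asks Go for an abstract-namespace socket (leading NUL byte),
	// the same lock older iptables binaries contend on.
	l, err := net.Listen("unix", "@xtables")
	if err != nil {
		f.Close()
		return nil, fmt.Errorf("abstract @xtables lock: %v", err)
	}
	return func() { l.Close(); f.Close() }, nil
}
```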
Related: https://github.com/kubernetes/kubernetes/pull/43575
See also: https://github.com/openshift/origin/pull/13845
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@kubernetes/rh-networking @kubernetes/sig-network-misc @eparis @knobunc @danwinship @thockin @freehan
Automatic merge from submit-queue
fix the typos of e.g.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 43006, 45305, 45390, 45412, 45392)
[GCE] Collect latency metric on get/list calls
**What this PR does / why we need it**:
Collects latency & count measurements on GET and LIST operations to GCE cloud.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43006, 45305, 45390, 45412, 45392)
Update go-restful dependency
This is required by #44787. But because both this and the changes in #44787 need constant rebasing, I am trying to get this one in separately to reduce the number of rebases.
The change is only a dependency update.
Automatic merge from submit-queue
Fix crash on Pods().Get() failure
**What this PR does / why we need it**:
Fixes a potential crash in syncPod when Pods().Get() returns an error other than NotFound. This is unlikely to occur with the standard client, but easily shows up with a stub kube client that returns Unimplemented to everything. Updates the unit test as well.
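A sketch of the corrected handling (client-go's Get signature of that era; names assumed): only NotFound may be treated as "the pod is absent", anything else must be surfaced instead of being acted on as absence.

```go
package kubelet

import (
	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getPodIfExists distinguishes "pod genuinely absent" from "the lookup
// failed"; only a NotFound error may be treated as absence.
func getPodIfExists(c kubernetes.Interface, ns, name string) (*v1.Pod, bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil, false, nil // absent, but not an error
	}
	if err != nil {
		return nil, false, err // e.g. Unimplemented: do not assume absence
	}
	return pod, true, nil
}
```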
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 44590, 44969, 45325, 45208, 44714)
Use dedicated UnixUserID and UnixGroupID types
**What this PR does / why we need it**:
DRYs up type definitions by using the dedicated types in apimachinery
**Which issue this PR fixes**
#38120
**Release note**:
```release-note
UIDs and GIDs now use apimachinery types
```
Automatic merge from submit-queue (batch tested with PRs 44590, 44969, 45325, 45208, 44714)
Fix onlylocal endpoint's healthcheck nodeport logic
I was in the middle of rebasing #41162, surprisingly found the healthcheck nodeport logic in kube-proxy is still buggy. Separate this fix out as it isn't GA related.
/assign @freehan @thockin
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44590, 44969, 45325, 45208, 44714)
Refactor volume operation log and error messages
**What this PR does / why we need it**:
Adds wrappers for volume-specific error and log messages. Each message has a simple version that can be displayed to the user and a detailed version that can be used in logs. The messages that are used for events were also cleaned up. @msau42
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40905
**Special notes for your reviewer**:
pkg/kubelet/volumemanager/reconciler/reconciler.go can be refactored. I can do that refactoring after this one.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Kubectl taint node based on label selector
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44522
**Release note**:
```release-note
Taints the node based on label selector
```
Automatic merge from submit-queue (batch tested with PRs 45322, 44770, 45411)
Fix and make TaintManager harder to break before we move it out of NC
Fixes #45342
cc @gyliu513
Automatic merge from submit-queue
add set rolebinding/clusterrolebinding command
add command to set user/group/serviceaccount in rolebinding/clusterrolebinding /cc @liggitt @deads2k
Automatic merge from submit-queue
OWNERS: add directxman12 to pkg/apis/autoscaling
Added directxman12 (current SIG lead of SIG-autoscaling) as a reviewer for pkg/apis/autoscaling.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43732, 45413)
Extend timeouts in timed_workers_test
Fixes #45375
If that isn't enough, I'll rewrite it to allow injectable timers.
Automatic merge from submit-queue (batch tested with PRs 43732, 45413)
Handle maxUnavailable larger than spec.replicas
**What this PR does / why we need it**:
Handle maxUnavailable larger than spec.replicas
**Which issue this PR fixes**
fixes #42479
**Special notes for your reviewer**:
None
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Edge based userspace proxy
The second-to-last PR from my changes to make kube-proxy event-based.
This switches the userspace proxy to be event-based, similarly to what we already did with iptables.
Automatic merge from submit-queue (batch tested with PRs 45362, 45159, 45321, 45238)
Remove redundent GetObjectKind() defined on types
Embedding TypeMeta is enough.
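A minimal illustration of why embedding suffices: metav1.TypeMeta already implements GetObjectKind() (returning itself as a schema.ObjectKind), so a per-type definition of the method is redundant.

```go
package api

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// Example embeds TypeMeta, which already implements GetObjectKind(),
// so the type needs no redundant method of its own.
type Example struct {
	metav1.TypeMeta `json:",inline"`
}

// *Example satisfies schema.ObjectKind via the embedded TypeMeta.
var _ schema.ObjectKind = &Example{}
```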
Automatic merge from submit-queue
remove useless code in kubelet
**What this PR does / why we need it**:
This code has a logical error: the etc-hosts file will be recreated even if it already exists. Moreover, if we did not recreate the etc-hosts file when it exists, the pod IP in it would be out of date when pod IPs change. So remove this code, as it is not needed.
**Which issue this PR fixes**:
**Special notes for your reviewer**:
xrefer: #44481, #44473
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
use our own serve mux that directs how we want
alternative to https://github.com/kubernetes/kubernetes/pull/44405
I really wanted to avoid writing my own, but the gorilla mux works via redirect, which would be a change. This does exact pattern matches only unless someone explicitly requests a prefix match.
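A sketch of that matching behavior (types hypothetical, not the real apiserver mux): exact matches by default, prefix matches only for explicitly registered prefixes.

```go
package mux

import (
	"net/http"
	"strings"
)

// pathMux illustrates the idea: exact path matches unless a handler was
// explicitly registered as a prefix handler.
type pathMux struct {
	exact    map[string]http.Handler
	prefixes map[string]http.Handler
}

func (m *pathMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if h, ok := m.exact[r.URL.Path]; ok {
		h.ServeHTTP(w, r) // exact match wins
		return
	}
	for p, h := range m.prefixes {
		if strings.HasPrefix(r.URL.Path, p) {
			h.ServeHTTP(w, r) // explicitly requested prefix match
			return
		}
	}
	http.NotFound(w, r)
}
```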
@liggitt happier?
Automatic merge from submit-queue (batch tested with PRs 45316, 45341)
Pass NoOpLegacyHost to dockershim in --experimental-dockershim mode
This allows dockershim to use network plugins, if needed.
/cc @Random-Liu
Automatic merge from submit-queue
use of --local should completely eliminate communication with API server
This PR is a bug fix for #45223
It allows the --local flag to completely avoid communication with the api server.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
fixes #45223
This is a simple change: set the value of the boolean flag "local" on the o.Local variable.
Automatic merge from submit-queue
Use Docker API Version instead of docker version
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #42492
**Special notes for your reviewer**:
**Release note**:
`Update cadvisor to latest head to use the Docker API version exposed by cadvisor`
Automatic merge from submit-queue (batch tested with PRs 45056, 44904, 45312)
CRI: clarify the behavior of PodSandboxStatus and ContainerStatus
**What this PR does / why we need it**:
Currently, we define that ImageStatus should return `nil, nil` when the requested image doesn't exist, and kubelet relies on this behavior now.
However, we haven't clearly defined the behavior of PodSandboxStatus and ContainerStatus. Currently, they return an error when the requested sandbox/container doesn't exist, and kubelet also relies on this behavior.
**Which issue this PR fixes**
Fixes#44885.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
bump(golang.org/x/oauth2): a6bd8cefa1811bd24b86f8902872e4e8225f74c4
As I tackle https://github.com/kubernetes/kubernetes/issues/42654 kubectl's OpenID Connect plugin will start using golang.org/x/oauth2 for refreshing, instead of go-oidc's own hand rolled oauth2 implementation. In preparation, update golang.org/x/oauth2 to include 7374b3f1ec which fixes refreshing with Okta.
We also somehow removed the dependency on `google.golang.org/appengine`. Maybe 8cf58155e4?
cc @kubernetes/sig-auth-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 45314, 45250, 41733)
CRI: add ImageFsInfo API
**What this PR does / why we need it**:
kubelet currently relies on cadvisor to get the ImageFS info for supported runtimes, i.e., docker and rkt. This PR adds ImageFsInfo API to CRI so kubelet could get the ImageFS correctly for all runtimes.
**Which issue this PR fixes**
First step for #33048 ~~also reverts temporary ImageStats in #33870~~.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```