At present, the endpoints and services created for glusterfs PVCs are named
`glusterfs-dynamic-<PVC name>`. This can cause an issue if a user deletes a
PVC and immediately creates a new one with the same name: the PV controller
will try to delete the old PV and its endpoint while at the same time trying
to create the new PV and an endpoint with the same name. Depending on which
event reaches the controller first, it may create the new PV, see that the
endpoint exists, then delete the old PV and with it the endpoint already in
use by the new PV.
This patch changes the endpoint/service name to the following format:
`glusterfs-dynamic-<PVC UUID>`.
Because UUIDs are unique, the above issue is no longer present.
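A minimal sketch of the naming change (the helper name is hypothetical; the real change is in the glusterfs provisioner):
```go
package glusterfs

import "fmt"

// dynamicEndpointName derives the endpoint/service name for a PVC.
// Old scheme: "glusterfs-dynamic-" + pvcName, which collides when a PVC
// is deleted and recreated under the same name.
// New scheme: key on the PVC UID, which is unique per API object.
func dynamicEndpointName(pvcUID string) string {
	return fmt.Sprintf("glusterfs-dynamic-%s", pvcUID)
}
```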
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
update bazel and fix gofmt
use defaultStorageAccountKind
fix test failure
update godep license file
fix staging godeps issue
update staging godeps
fix comments, use one API call for file creation
As #69219 outlines, the unit tests in `csi_client_test.go` were not
testing the actual implementation of the `csiDriverClient` but were
testing the fake.
To fix this, we changed the `csiDriverClient` to use a
`nodeClientCreator` which is responsible for creating a new
`NodeClient`, a real one in prod and a fake one in the tests.
The setup of the gRPC connection has been pushed into that creator. The
node client uses that connection; that's transparent to the driver
client. It's the responsibility of the driver client to close the
connection when it is done with the node client. To achieve this, we
have the node client creator return a closer which handles the
connection teardown.
In the tests we now also check if the driver client actually calls
this closer, thus closing the gRPC connection.
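A minimal sketch of that shape, assuming the CSI v0 generated client (type and function names are illustrative, not the exact ones in pkg/volume/csi):
```go
package csi

import (
	"io"

	csipb "github.com/container-storage-interface/spec/lib/go/csi/v0"
	"google.golang.org/grpc"
)

// nodeClientCreator returns a NodeClient plus an io.Closer that tears
// down the underlying gRPC connection once the caller is done.
type nodeClientCreator func(addr string) (csipb.NodeClient, io.Closer, error)

// newGrpcNodeClient is the production creator: dial the driver endpoint
// and return the generated client; *grpc.ClientConn is the closer.
// (The real code dials a unix socket via a custom dialer.)
func newGrpcNodeClient(addr string) (csipb.NodeClient, io.Closer, error) {
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		return nil, nil, err
	}
	return csipb.NewNodeClient(conn), conn, nil
}
```
The tests substitute a creator that returns a fake NodeClient together with a closer that records whether it was called.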
Closes: #69219
Co-authored-by: Rosie Bloxsom <rbloxsom@pivotal.io>
Co-authored-by: Maria Ntalla <mntalla@pivotal.io>
Don't mount a single path instead of a multipath volume, and always wait until
at least 2 paths are available. Try up to 3 times to get all paths. Try 5
times to get at least 2 paths.
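A rough sketch of that retry policy, with path discovery abstracted as a callback (hypothetical helper; the real logic lives in the iSCSI attach path):
```go
package iscsi

import "time"

// waitForPaths retries discovery: up to 3 attempts for all expected
// paths, then up to 5 attempts total for at least 2 paths.
func waitForPaths(discover func() []string, expected int) []string {
	var paths []string
	for attempt := 1; attempt <= 5; attempt++ {
		paths = discover()
		if len(paths) >= expected {
			return paths // all expected paths are present
		}
		if attempt > 3 && len(paths) >= 2 {
			return paths // settle for a usable multipath set
		}
		time.Sleep(time.Second)
	}
	return paths
}
```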
We were not sorting them previously, which made the order
non-deterministic. If we believe the order doesn't matter, let's pick
a consistent order to minimize the chances of a rare flake.
This also simplifies the unit tests, which were flaking
not-very-rarely, e.g. with
`bazel test //pkg/volume/awsebs/... --runs_per_test=8`
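The fix amounts to sorting before returning; roughly (illustrative shape, not the exact code):
```go
package awsebs

import "sort"

// devicePaths returns the map's values in a deterministic order; Go
// randomizes map iteration, which made the previous order flaky.
func devicePaths(m map[string]string) []string {
	paths := make([]string, 0, len(m))
	for _, p := range m {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	return paths
}
```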
Individual implementations are not yet being moved.
Fixed all dependencies which call the interface.
Fixed golint exceptions to reflect the move.
Added project info as per @dims and
https://github.com/kubernetes/kubernetes-template-project.
Added dims to the security contacts.
Fixed minor issues.
Added missing template files.
Copied ControllerClientBuilder interface to cp.
This allows us to break the only dependency on K8s/K8s.
Added TODO to ControllerClientBuilder.
Fixed GoDeps.
Factored in feedback from JustinSB.
- Do not skip errors from GetLoopDevice other than DeviceNotFound
- Add a comment explaining the order of descriptor lock release and TearDownDevice
go test in 1.11 now performs some validation of format strings. This
addresses the issues it highlighted, allowing go test to pass with 1.11.
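A typical instance of the class of error flagged (a hedged example, not one of the actual call sites):
```go
package main

import "fmt"

func main() {
	volName := "vol-1"
	// Before: fmt.Printf("volume %d not found\n", volName)
	// go vet (run by `go test` since Go 1.11) reports:
	//   Printf format %d has arg volName of wrong type string
	fmt.Printf("volume %s not found\n", volName)
}
```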
Fixes pertaining to storage.
The glusterfs plugin/driver sets log-level to ERROR by default while
mounting glusterfs shares. However, at times it's required to supply other
log-level options such as INFO or TRACE. This patch enables support for
providing other log-level values from the storageclass.
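A minimal sketch of the parameter handling, assuming the StorageClass parameters reach the plugin as a string map (the parameter key shown is an assumption for illustration):
```go
package glusterfs

// glusterLogLevel picks an admin-supplied log level from StorageClass
// parameters, falling back to ERROR, the previous hard-coded default.
func glusterLogLevel(params map[string]string) string {
	if level, ok := params["loglevel"]; ok && level != "" {
		return level // e.g. INFO, TRACE, DEBUG
	}
	return "ERROR"
}
```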
Additional Ref#
https://docs.gluster.org/en/v3/Administrator%20Guide/Setting%20Up%20Clients/
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This PR checks if NodeGetInfo returns error. If so, it returns
the error. Without this change, it always returns no error (nil)
regardless of whether NodeGetInfo returns error.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Node info registration in kubelet
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67683
**Special notes for your reviewer**:
Feature issue: https://github.com/kubernetes/features/issues/557
Design doc: https://github.com/kubernetes/community/pull/2034
Missing pieces:
* CSI client retry and exponential backoff logic.
* CSINodeInfo object validation
* e2e test with all the CSI machinery.
An RBAC rule is also added to support external-provisioner topology updates.
**Release note**:
```release-note
Registers volume topology information reported by a node-level Container Storage Interface (CSI) driver. This enables Kubernetes support of CSI topology mechanisms.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix metricsStatFS volume path for local volume
**What this PR does / why we need it**:
Fix metricsStatFS volume path for local volume
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/kind bug
/sig storage
/assign @msau42
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Resolves #59015, Scheduler: Add support for EBS types t3, r5, & z1d
Fixes #59015
The new t3, r5, r5d, and z1d instance types need to be matched as well, per
the current AWS documentation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html
T3, C5, C5d, M5, M5d, R5, R5d, and z1d instances support a maximum of
28 attachments, and every instance has at least one network interface
attachment. If you have no additional network interface attachments on
these instances, you could attach 27 EBS volumes.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
as hostpathtype owner, adds myself to OWNERS file
**What this PR does / why we need it**:
As the owner of HostPathType, I would like to add myself to OWNERS file.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
/cc thockin saad-ali
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 64283, 67910, 67803, 68100). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Cluster Registry and Node Info CRDs
**What this PR does / why we need it**:
Introduces the new `CSIDriver` and `CSINodeInfo` API Object as proposed in https://github.com/kubernetes/community/pull/2514 and https://github.com/kubernetes/community/pull/2034
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/features/issues/594
**Special notes for your reviewer**:
Per the discussion in https://groups.google.com/d/msg/kubernetes-sig-storage-wg-csi/x5CchIP9qiI/D_TyOrn2CwAJ the API is being added to the staging directory of the `kubernetes/kubernetes` repo because the consumers will be attach/detach controller and possibly kubelet, but it will be installed as a CRD (because we want to move in the direction where the API server is Kubernetes agnostic, and all Kubernetes specific types are installed).
**Release note**:
```release-note
Introduce CSI Cluster Registration mechanism to ease CSI plugin discovery and allow CSI drivers to customize Kubernetes' interaction with them.
```
CC @jsafrane
Automatic merge from submit-queue (batch tested with PRs 68051, 68130, 67211, 68065, 68117). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
support cross resource group for azure file
**What this PR does / why we need it**:
support cross resource group for azure file: via the `resourceGroup` field, the Azure cloud provider will create the azure file share in a user-specified resource group
```
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-rg
provisioner: kubernetes.io/azure-file
parameters:
  resourceGroup: RESOURCE_GROUP_NAME
  storageAccount: EXISTING_STORAGE_ACCOUNT
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64428
**Special notes for your reviewer**:
**Release note**:
```
The `resourceGroup` parameter is added to the AzureFile storage class to support azure file dynamic provisioning across resource groups.
```
/kind bug
/sig azure
/assign @feiskyer
cc @khenidak
Automatic merge from submit-queue (batch tested with PRs 67745, 67432, 67569, 67825, 67943). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Move volume dynamic provisioning scheduling to beta
**What this PR does / why we need it**:
* Combine feature gate VolumeScheduling and DynamicProvisioningScheduling into one
* Add allowedTopologies description in kubectl
**Special notes for your reviewer**:
Wait until related e2e tests and downstream plugins are ready.
/hold
**Release note**:
```release-note
Move volume dynamic provisioning scheduling to beta (ACTION REQUIRED: The DynamicProvisioningScheduling alpha feature gate has been removed. The VolumeScheduling beta feature gate is still required for this feature)
```
Automatic merge from submit-queue (batch tested with PRs 67745, 67432, 67569, 67825, 67943). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix panic when choosing zone or zones for volume
**What this PR does / why we need it**:
Fix panic when choosing the zone or zones for a volume, so that the zone slice computation no longer divides by zero.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
cc @ddebroy @andyzhangx
Automatic merge from submit-queue (batch tested with PRs 67766, 67642, 67772). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable dynamic azure disk volume limits
**What this PR does / why we need it**:
Enable dynamic azure disk volume limits,
This is an azure cloud provider implementation related to feature: [Dynamic Maximum volume count](https://github.com/kubernetes/features/issues/554)
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #66269
**Special notes for your reviewer**:
This PR uses `az.VirtualMachineSizesClient.List` to list all VM sizes in the region, matches the VM size with the current node's size, and then gets `MaxDataDiskCount`. The `GetVolumeLimits` call happens in kubelet and will return `attachable-volumes-azure-disk` in node status, as in the following example:
```
agentpool-22082114-0
...
allocatable:
  attachable-volumes-azure-disk: "8"
  cpu: "2"
  ephemeral-storage: "28043041951"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 7034772Ki
  pods: "30"
```
**Release note**:
```
Enable dynamic azure disk volume limits
```
/sig azure
/kind feature
Automatic merge from submit-queue (batch tested with PRs 67822, 67835). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove provisioner config from log message.
Signed-off-by: hchiramm <hchiramm@redhat.com>
```
release-note-none
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add DynamicProvisioningScheduling support for GCE PD and RePD
**What this PR does / why we need it**:
This PR adds support for the DynamicProvisioningScheduling feature for GCE PD and RePD. With this in place, if VolumeBindingMode: WaitForFirstConsumer is specified in a GCE storageclass and DynamicProvisioningScheduling is enabled, the GCE PD provisioner will use the selected node's LabelZoneFailureDomain as (1) the zone to provision a GCE PD volume in, and (2) one of the zones to provision a GCE RePD volume in.
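Sketched under those assumptions (hypothetical helper; the label key is the standard beta failure-domain zone label):
```go
package gce

// chooseZoneForVolume prefers the scheduler-selected node's zone label;
// without a selected node, the provisioner falls back to its own pick.
func chooseZoneForVolume(nodeLabels map[string]string, fallback string) string {
	if zone, ok := nodeLabels["failure-domain.beta.kubernetes.io/zone"]; ok && zone != "" {
		return zone
	}
	return fallback
}
```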
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
E2E tests for DynamicProvisioningScheduling scenarios for GCE PD to follow
**Release note**:
```release-note
none
```
/sig storage
/assign @msau42
Automatic merge from submit-queue (batch tested with PRs 66916, 67252, 67794, 67619, 67328). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Using a fixed set of locks, then we don't need to free unused locks anymore.
**What this PR does / why we need it**:
Using a fixed set of locks, then we don't need to free unused locks anymore.
See kubernetes/kubernetes/pull/66442 for discussions.
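The idea, roughly (a hashed key-mutex sketch; the real implementation lives in the keymutex utility package):
```go
package keymutex

import (
	"hash/fnv"
	"sync"
)

// hashedKeyMutex maps every key onto one of a fixed pool of mutexes,
// so nothing ever has to be freed when a key goes away.
type hashedKeyMutex struct {
	mutexes []sync.Mutex
}

func newHashed(n int) *hashedKeyMutex {
	return &hashedKeyMutex{mutexes: make([]sync.Mutex, n)}
}

func (km *hashedKeyMutex) LockKey(id string)   { km.mutexes[km.index(id)].Lock() }
func (km *hashedKeyMutex) UnlockKey(id string) { km.mutexes[km.index(id)].Unlock() }

func (km *hashedKeyMutex) index(id string) int {
	h := fnv.New32a()
	h.Write([]byte(id))
	return int(h.Sum32()) % len(km.mutexes)
}
```
Two keys may hash to the same mutex, which only costs occasional extra serialization; correctness is unaffected.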
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #65113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @msau42
/assign @thockin
Automatic merge from submit-queue (batch tested with PRs 66980, 67604, 67741, 67715). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support of Azure cross resource group nodes
**What this PR does / why we need it**:
Part of feature [Cross resource group nodes](https://github.com/kubernetes/features/issues/604).
This PR adds support of Azure cross resource group nodes that are labeled with `kubernetes.azure.com/resource-group=<rg-name>` and `alpha.service-controller.kubernetes.io/exclude-balancer=true`
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
See designs [here](https://github.com/kubernetes/community/pull/2479).
**Release note**:
```release-note
Azure cloud provider now supports cross resource group nodes that are labeled with `kubernetes.azure.com/resource-group=<rg-name>` and `alpha.service-controller.kubernetes.io/exclude-balancer=true`
```
/sig azure
/kind feature
Automatic merge from submit-queue (batch tested with PRs 59230, 66233, 67483, 67713). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
change default value of kind for azure disk
**What this PR does / why we need it**:
change the default value of kind for azure disk: since we are suggesting that users use managed disks, the default value should be managed disk.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67480
**Special notes for your reviewer**:
assign @feiskyer
FYI @khenidak @brendandburns
**Release note**:
```
change default value of kind for azure disk
```
/kind feature
/sig azure
Automatic merge from submit-queue (batch tested with PRs 67332, 66737, 67281, 67173). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support mount options for cephfs with ceph-fuse mount
**What this PR does / why we need it**:
When cephfs uses ceph-fuse for the mount command, the mount options and
the readOnly option are disregarded. This patch passes them to ceph-fuse
as well.
**Special notes for your reviewer**:
N/A
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66884, 67410, 67229, 67409). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add node affinity for Azure unzoned managed disks
**What this PR does / why we need it**:
Continue of [Azure Availability Zone feature](https://github.com/kubernetes/features/issues/586).
Add node affinity for Azure unzoned managed disks, so that unzoned disks only scheduled to unzoned nodes.
This is required because Azure doesn't allow attaching unzoned disks to zoned VMs.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
Unzoned nodes are labeled `failure-domain.beta.kubernetes.io/zone=<fault-domain>`, where the value is the fault domain (while the availability zone is used for zoned nodes). So the fault domain is used to populate node affinity for unzoned disks.
Since there are at most 3 fault domains in each region, the PR adds 3 terms for them:
```sh
$ kubectl describe pv pvc-bdf93a67-9c45-11e8-ba6f-000d3a07de8c
Name:            pvc-bdf93a67-9c45-11e8-ba6f-000d3a07de8c
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    azuredisk-unzoned
Status:          Bound
Claim:           default/unzoned-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [0]
    Term 1:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [1]
    Term 2:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-bdf93a67-9c45-11e8-ba6f-000d3a07de8c
    DiskURI:      /subscriptions/<subscription>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-bdf93a67-9c45-11e8-ba6f-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:          <none>
```
**Release note**:
```release-note
Add node affinity for Azure unzoned managed disks
```
/sig azure
/kind feature
/cc @brendandburns @khenidak @andyzhangx @msau42
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Attacher/Detacher refactor for local storage
Proposal link: https://github.com/kubernetes/community/pull/2438
**What this PR does / why we need it**:
Attacher/Detacher refactor for the plugins which just need to mount device, but do not need to attach, such as local storage plugin.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
```release-note
Attacher/Detacher refactor for local storage
```
/sig storage
/kind feature
Automatic merge from submit-queue (batch tested with PRs 67396, 67097, 67395, 67365, 67099). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Ignore EIO error in unmount path
**What this PR does / why we need it**:
This PR ignores EIO in the unmount path. XFS shuts down the filesystem when the target is down, and it returns EIO for the stat calls used in the unmount path.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #66868
**Special notes for your reviewer**:
We already handle ESTALE & ENOTCONN errors in the isCorruptedMnt call. Adding EIO to that list covers the XFS shutdown case.
Also, Flexvolume doesn't check for these errors in its current form; the Flexvolume code is updated to handle them.
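The check is roughly of this shape (a sketch of the errno test; the real helper also unwraps a few more error types):
```go
package mount

import (
	"os"
	"syscall"
)

// isCorruptedMnt reports whether err indicates a mount point that is
// unreachable and should be cleaned up anyway.
func isCorruptedMnt(err error) bool {
	if pe, ok := err.(*os.PathError); ok {
		switch pe.Err {
		case syscall.ENOTCONN, syscall.ESTALE, syscall.EIO:
			return true
		}
	}
	return false
}
```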
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66780, 67330). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Changed admission controller to allow volume expansion for all volume plugins
**What this PR does / why we need it**:
There are two motivations for this change:
1. CSI plugins are soon going to support volume expansion. For such plugins, the admission controller doesn't know whether the plugins are capable of supporting volume expansion or not.
2. Currently, admission controller rejects PVC updates for in-tree plugins that don't support volume expansion (e.g., NFS, iSCSI). This change allows external controllers to expand volumes similar to how external provisioners are accommodated.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
This PR mimics the behavior of the PV controller when PVs are provisioned externally by logging and setting a new event for PVs that are being expanded externally. As SIG Storage is planning new types of operations on PVs, it may make more sense to have a single event for all actions taken by external controllers.
**Release note**:
```release-note
The check for unsupported plugins during volume resize has been moved from the admission controller to the two controllers that handle volume resize.
```
/sig storage
/assign @gnufied @jsafrane @wongma7
Automatic merge from submit-queue (batch tested with PRs 67017, 67190, 67110, 67140, 66873). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add wait loop for multipath devices to appear
It takes a variable amount of time for the multipath daemon
to create /dev/dm-XX in response to new LUNs being discovered.
The old iscsi_util code only discovered the multipath device
if it was created quickly enough, but in a significant number
of cases, kubelet would grab one of the individual paths and
put a filesystem on it before multipathd could construct a
multipath device.
This change waits for the multipath device to get created for
up to 10 seconds, but only if the PV actually had more than
one portal.
Fixes #60894
```release-note
Dynamic provisioners that create iSCSI PVs can ensure that multipath is used by specifying 2 or more target portals in the PV, which will cause kubelet to wait up to 10 seconds for the multipath device. PVs with just one portal continue to work as before, with kubelet not waiting for the multipath device and just using the first disk it finds.
```
Automatic merge from submit-queue (batch tested with PRs 67017, 67190, 67110, 67140, 66873). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
CSI plugin now calls NodeGetInfo() to get driver's node ID
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67040
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig storage
@sbezverk @vladimirvivien @saad-ali
Automatic merge from submit-queue (batch tested with PRs 66602, 67178, 67207, 67125, 66332). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add UT to RBD volume test of TestGetAccessModes and TestRequiresRemount
**What this PR does / why we need it**:
Add UT to RBD volume test of TestGetAccessModes and TestRequiresRemount
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67058, 67083, 67220, 67222, 67209). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixed unsafe type cast in vSphere volume plugin
**What this PR does / why we need it**: Fixes the controller manager panic caused by vSphere volumes being used on the wrong cloud provider.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67218
**Release note**:
```release-note
NONE
```
/assign @saad-ali
Automatic merge from submit-queue (batch tested with PRs 67195, 67184). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove redundant code in aws_ebs_block.go
**What this PR does / why we need it**:
Remove redundant code in aws_ebs_block.go
There is the same code in aws_ebs.go
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add availability zones support to Azure managed disks
**What this PR does / why we need it**:
Continue of [Azure Availability Zone feature](https://github.com/kubernetes/features/issues/586).
This PR adds availability zone support for Azure managed disks and their storage class. Zoned managed disks are enabled by default if there are zoned nodes in the cluster.
The zone could also be customized by `zone` or `zones` parameter, e.g.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-disk-zone-1
parameters:
  zone: "southeastasia-1"
  # zones: "southeastasia-1,southeastasia-2"
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
All zoned AzureDisk PV will also be labeled with its availability zone, e.g.
```sh
$ kubectl get pvc pvc-azuredisk-az-1
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-azuredisk-az-1 Bound pvc-5ad0c7b8-8f0b-11e8-94f2-000d3a07de8c 5Gi RWO managed-disk-zone-1 2h
$ kubectl get pv pvc-5ad0c7b8-8f0b-11e8-94f2-000d3a07de8c --show-labels
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pvc-5ad0c7b8-8f0b-11e8-94f2-000d3a07de8c 5Gi RWO Delete Bound default/pvc-azuredisk-az-1 managed-disk-zone-1 2h failure-domain.beta.kubernetes.io/region=southeastasia,failure-domain.beta.kubernetes.io/zone=southeastasia-1
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
See also the [KEP](https://github.com/kubernetes/community/pull/2364).
DynamicProvisioningScheduling feature would be added in a following PR.
**Release note**:
```release-note
Azure managed disks now support availability zones and new parameters `zoned`, `zone` and `zones` are added for AzureDisk storage class.
```
/kind feature
/sig azure
/assign @brendandburns @khenidak @andyzhangx
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
error out empty hostname
**What this PR does / why we need it**:
For Linux, the hostname is read directly from the file `/proc/sys/kernel/hostname`, which can be overwritten with whitespace.
Such invalid hostnames should be rejected with an error.
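A sketch of the validation (hedged; the real change lives in the node hostname helper):
```go
package node

import (
	"fmt"
	"strings"
)

// getHostname trims whitespace from the raw kernel-reported hostname
// and rejects it if nothing is left.
func getHostname(raw string) (string, error) {
	hostname := strings.TrimSpace(raw)
	if len(hostname) == 0 {
		return "", fmt.Errorf("empty hostname is invalid")
	}
	return strings.ToLower(hostname), nil
}
```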
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes kubernetes/kubeadm#835
**Special notes for your reviewer**:
/cc luxas timothysc
**Release note**:
```release-note
nodes: improve handling of erroneous host names
```
Automatic merge from submit-queue (batch tested with PRs 61389, 66817, 66903, 66675, 66965). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Provide an option to supply log-file mount option for gluster plugin.
At present, the `log-file` location of a glusterfs client mount is decided
by the plugin; however, at times users/admins want to configure the log-file
of the gluster fuse client process on their own in some setups. Even though
they can pass the log-file mount option through the storage class, before
this patch the glusterfs plugin always discarded it. This patch gives
preference to the admin-supplied log-file mount option if specified in the
storage class. If the log-file mount option is incomplete or wrong, the
plugin falls back to the location carved out from the combination of PVC
and pod name.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
csiAttacher: check deviceMountPath before hasStageUnstageCapability
csiAttacher#MountDevice: it is better to check `deviceMountPath` before `hasStageUnstageCapability`
**Release note**:
```
NONE
```
/release-note-none
/kind cleanup
/sig storage
Automatic merge from submit-queue (batch tested with PRs 64645, 66832). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Detect if GCE PD udev link is wrong and try to correct it
**What this PR does / why we need it**:
udev can miss scsi events and not update device links properly when disks are detached and reattached to the same node in a different order. This PR works around the issue by comparing the udev link name, which is the PD disk name, with the scsi disk serial number returned from scsi_id, and prevents mounting of the disk if the link is wrong. It will also run `udevadm trigger` on the disk to try to correct the bad link.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
This fix prevents a GCE PD volume from being mounted if the udev device link is stale and tries to correct the link.
```
Automatic merge from submit-queue (batch tested with PRs 65730, 66615, 66684, 66519, 66510). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add DynamicProvisioningScheduling support for EBS
**What this PR does / why we need it**:
This PR adds support for the DynamicProvisioningScheduling feature in EBS. With this in place, if VolumeBindingMode: WaitForFirstConsumer is specified in an EBS storageclass and DynamicProvisioningScheduling is enabled, the EBS provisioner will use the selected node's LabelZoneFailureDomain as the zone to provision the EBS volume in.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
Related to #63232
Sample `describe pv` output with NodeAffinity populated:
```
~$ kubectl describe pv pvc-f9d2138b-7e3e-11e8-a4ea-064124617820
Name:            pvc-f9d2138b-7e3e-11e8-a4ea-064124617820
Labels:          failure-domain.beta.kubernetes.io/region=us-west-2
                 failure-domain.beta.kubernetes.io/zone=us-west-2a
Annotations:     kubernetes.io/createdby=aws-ebs-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/aws-ebs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    slow3
Status:          Bound
Claim:           default/pvc3
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        6Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/zone in [us-west-2a]
             failure-domain.beta.kubernetes.io/region in [us-west-2]
Message:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://us-west-2a/vol-0fc1cdae7d10860f6
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:          <none>
```
**Release note**:
```release-note
none
```
/sig storage
/assign @msau42 @jsafrane
Automatic merge from submit-queue (batch tested with PRs 66225, 66648, 65799, 66630, 66619). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update Azure Go SDK to v19.0.0 and get availability zone for VirtualMachineScaleSetVM
**What this PR does / why we need it**:
Continue of #66242. This PR updates Azure Go SDK to v19.0.0 (with compute API 2018-04-01) and gets availability zones for VirtualMachineScaleSetVM.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Azure Go SDK has been upgraded to v19.0.0 and VirtualMachineScaleSetVM now supports availability zones.
```
/sig azure
/assign @brendandburns @khenidak @andyzhangx
There are two motivations for this change:
(1) CSI plugins are soon going to support volume expansion. For such
plugins, admission controller doesn't know whether the plugins are
capable of supporting volume expansion or not.
(2) Currently, admission controller rejects PVC updates for in-tree plugins
that don't support volume expansion (e.g., NFS, iSCSI). This change allows
external controllers to expand volumes similar to how external provisioners
operate.
Automatic merge from submit-queue (batch tested with PRs 64844, 63176). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix discovery/deletion of iscsi block devices
This PR modifies the iSCSI attach/detach codepaths in the following
ways:
1) After unmounting a filesystem on an iSCSI block device, always
flush the multipath device mapper entry (if it exists) and delete
all block devices so the kernel forgets about them.
2) When attaching an iSCSI block device, instead of blindly
attempting to scan for the new LUN, first determine if the target
is already logged into, and if not, do the login first. Once every
portal is logged into, the scan is done.
3) Scans are now done for specific devices, instead of the whole
bus. This avoids discovering LUNs that kubelet has no interest in.
4) Additions to the underlying utility interfaces, with new tests
for the new functionality.
5) Some existing code was shifted up or down, to make the new logic
work.
6) A typo in an existing exec call on the attach path was fixed.
Fixes #59946
```release-note
When attaching iSCSI volumes, kubelet now scans only the specific
LUNs being attached, and also deletes them after detaching. This avoids
dangling references to LUNs that no longer exist, which used to be the
cause of random I/O errors/timeouts in kernel logs, slowdowns during
block-device related operations, and very rare cases of data corruption.
```
Automatic merge from submit-queue (batch tested with PRs 66373, 66612). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove `auto_unmount` mount option from pv spec annotation.
At present, the `auto_unmount` option is recorded in the PV annotation of a glusterfs PV.
Due to the preference given in MountOptionFromSpec() to annotation mount options
over SC-supplied mount options (ref: https://github.com/kubernetes/kubernetes/pull/66576),
the SC-supplied mount options are not honoured in the glusterfs plugin
even though the driver returns `true` for storage class mount-options
support at probe.
This patch removes the `auto_unmount` option from the annotation of the PV spec.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add UT Test to cephfs
**What this PR does / why we need it**:
Add UT Test to cephfs
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix wrong description
**What this PR does / why we need it**:
fix wrong description
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
This change ensures that iSCSI block devices are deleted after
unmounting, and implements scanning of individual LUNs rather
than scanning the whole iSCSI bus.
In cases where an iSCSI bus is in use by more than one attachment,
detaching used to leave behind phantom block devices, which could
cause I/O errors, long timeouts, or even corruption in the case
when the underlying LUN number was recycled. This change makes
sure to flush references to the block devices after unmounting.
The original iSCSI code scanned the whole target every time a LUN
was attached. On storage controllers that export multiple LUNs on
the same target IQN, this led to a situation where nodes would
see SCSI disks that they weren't supposed to -- possibly dozens or
hundreds of extra SCSI disks. This caused 3 significant problems:
1) The large number of disks wasted resources on the node and
caused a minor drag on performance.
2) The scanning of all the devices caused a huge number of uevents
from the kernel, causing udev to bog down for multiple minutes in
some cases, triggering timeouts and other transient failures.
3) Because Kubernetes was not tracking all the "extra" LUNs that
got discovered, they would not get cleaned up until the last LUN
on a particular target was detached, causing a logout. This led
to significant complications:
In the time window between when a LUN was unintentionally scanned,
and when it was removed due to a logout, if it was deleted on the
backend, a phantom reference remained on the node. In the best
case, the phantom LUN would cause I/O errors and timeouts in the
udev system. In the worst case, the backend could reuse the LUN
number for a new volume, and if that new volume were to be
scheduled to a pod with a phantom reference to the old LUN by the
same number, the initiator could get confused and possibly corrupt
data on that volume.
To avoid these problems, the new implementation only scans for
the specific LUN number it expects to see. It's worth noting that
the default behavior of iscsiadm is to automatically scan the
whole bus on login. That behavior can be disabled by setting
node.session.scan = manual
in iscsid.conf, and for the reasons mentioned above, it is
strongly recommended to set that option. This change still works
regardless of the setting in iscsid.conf, and while automatic
scanning will cause some problems, this change doesn't make the
problems any worse, and can make things better in some cases.
Automatic merge from submit-queue (batch tested with PRs 66464, 66488). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use glog instead of fmt
**What this PR does / why we need it**:
Use glog instead of fmt
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue (batch tested with PRs 66464, 66488). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Avoid overflowing int64 in RoundUpSize and return an error on int overflow
**What this PR does / why we need it**:
There are many places in plugins (some I may have missed) where we naively convert a resource.Quantity.Value(), which is an int64, to an int, which may be only 32 bits long.
Background, optional to read :): Kubernetes canonicalizes resource.Quantities, and from what I have seen testing creating PVCs, decimalSI is the default. If a quantity is in `decimalSI` format and its value in bytes would overflow an int64, e.g. `10E`, nothing happens. If it is in binarySI and its value in bytes would overflow an int64, e.g. `10Ei`, it is set down to 2^63-1 and there's no overflow of the field value. But there may be overflow later in the code which is what this PR is addressing.
* Change `RoundUpSize` implementation to avoid overflowing `int64`
* Add `RoundUp*Int` functions for use when an `int` is expected instead of an `int64`, because `int` may be 32bits and naively doing `int($INT64_VALUE)` can lead to silent overflow. These functions return an error if overflow has occurred.
* Rename `*GB` variables to `*GiB` where appropriate for maximum clarity
* Use `RoundUpToGiB` instead of `RoundUpSize` where possible
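A sketch of the two helpers under those constraints (names approximate; dividing first avoids the overflow-prone `(size + unit - 1) / unit` form, and the int variant refuses to truncate):
```go
package volumeutil

import (
	"fmt"
	"math"
)

// roundUpSize rounds volumeSizeBytes up to whole allocation units
// without computing volumeSizeBytes + allocationUnitBytes - 1, which
// can overflow int64 for very large quantities.
func roundUpSize(volumeSizeBytes, allocationUnitBytes int64) int64 {
	roundedUp := volumeSizeBytes / allocationUnitBytes
	if volumeSizeBytes%allocationUnitBytes > 0 {
		roundedUp++
	}
	return roundedUp
}

// roundUpSizeInt is for callers that need an int, which may be only
// 32 bits wide; it errors out instead of silently overflowing.
func roundUpSizeInt(volumeSizeBytes, allocationUnitBytes int64) (int, error) {
	roundedUp := roundUpSize(volumeSizeBytes, allocationUnitBytes)
	if roundedUp > math.MaxInt32 {
		return 0, fmt.Errorf("rounded-up size %d overflows int", roundedUp)
	}
	return int(roundedUp), nil
}
```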
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**: please review carefully as we don't have e2e tests for most plugins!
**Release note**:
```release-note
NONE
```
edit: remove 'we do not need to worry about...'. yes we do, i worded that badly :))
Automatic merge from submit-queue (batch tested with PRs 66152, 66406, 66218, 66278, 65660). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor kubelet node status setters, add test coverage
This internal refactor moves the node status setters to a new package, explicitly injects dependencies to facilitate unit testing, and adds individual unit tests for the setters.
I gave each setter a distinct commit to facilitate review.
Non-goals:
- I intentionally excluded the class of setters that return a "modified" boolean, as I want to think more carefully about how to cleanly handle the behavior, and this PR is already rather large.
- I would like to clean up the status update control loops as well, but that belongs in a separate PR.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 64690, 66174). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check presence of /dev/disk/by-id before reading it in ScaleIO
**What this PR does / why we need it**:
In certain OS environments, like RHEL 7.4 using Xen PV driver `xen_blkfront` for boot and data disks, the path `/dev/disk/by-id` does not exist in the OS post boot. When trying to dynamically provision PVs from a ScaleIO storageclass in such an environment, ScaleIO fails when trying to create the PV with:
```
E0711 08:02:50.441964 25246 sio_client.go:342] scaleio: failed to ReadDir /dev/disk/by-id: open /dev/disk/by-id: no such file or directory
E0711 08:02:50.441989 25246 sio_volume.go:123] scaleio: setup of volume k8svol-282bd4fa84d: open /dev/disk/by-id: no such file or directory
```
This PR avoids the above failure by first checking for the presence of `/dev/disk/by-id` before attempting to read from it as done in other drivers like gce_pd.
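The guard is simple (a sketch; the constant and helper names are illustrative):
```go
package sio

import (
	"io/ioutil"
	"os"
)

const diskIDPath = "/dev/disk/by-id"

// devicesByID lists /dev/disk/by-id, treating an absent directory as
// "no devices yet" rather than an error, as gce_pd already does.
func devicesByID() ([]os.FileInfo, error) {
	if _, err := os.Stat(diskIDPath); os.IsNotExist(err) {
		return nil, nil
	}
	return ioutil.ReadDir(diskIDPath)
}
```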
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue (batch tested with PRs 66172, 66254). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Reverting commit #56600 as GCE PD is allocated in chunks of GiB inste…
**What this PR does / why we need it:**
This PR reverts the changes made in https://github.com/kubernetes/kubernetes/pull/56600, which assumed GCE PDs are allocated in chunks of GB. The following set of operations demonstrates that the allocation is in GiB.
Manually create a PD in GB, and manually attach it to a node:
```
$ gcloud compute disks create msau-test --zone=us-central1-b --size=1GB
```
Run lsblk on it, and it shows the size of the disk in bytes is 1 GiB:
```
$ lsblk -b
sdc 8:32 0 1073741824 0 disk
```
**Which issue(s) this PR fixes**:
[65285](https://github.com/kubernetes/kubernetes/issues/65285)
**Special notes for your reviewer**:
```release-note
none
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix a missing return bug
**What this PR does / why we need it**:
This PR fixes a missing return statement.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
typo fix: fromat->format
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66076, 65792, 65649). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kubernetes: fix printf format errors
These are all flagged by Go 1.11's
more accurate printf checking in go vet,
which runs as part of go test.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add region label to dynamic provisioned cinder PVs
**What this PR does / why we need it**:
This PR adds region label to dynamic provisioned Cinder PVs at the time of the PV creation.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #65977
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66038, 65992, 66008). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Stop sorting downward api file lines
Fixes #65159
```release-note
fixes an issue with multi-line annotations injected via downward API files getting scrambled
```
These are all flagged by Go 1.11's
more accurate printf checking in go vet,
which runs as part of go test.
Lubomir I. Ivanov <neolit123@gmail.com>
applied an amend for:
pkg/cloudprovider/providers/vsphere/nodemanager.go
Automatic merge from submit-queue (batch tested with PRs 65931, 65705, 66033). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Block volumes should have empty FSType
FSType in block PVs has no meaning and it should be empty in provisioned PVs.
**Which issue(s) this PR fixes**
Fixes #65704
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55023, 65499). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bugfix/csi default fs type
This PR addresses the issue mentioned in the following ticket: https://github.com/kubernetes/kubernetes/issues/65122
The FSType string is no longer defaulted to ext4: it remains an empty string if not provided. The CSI spec allows an empty fstype string, which is intended for non-block plugins like NFS and Gluster where filesystems are not separately created on the volume. Previously the default file system was overridden to ext4, which made that case redundant; this commit prevents such overriding. CSI plugins that depended on the ext4 default need to be updated.
```release-note
ACTION REQUIRED: Removes defaulting of CSI file system type to ext4. All the production drivers listed under https://kubernetes-csi.github.io/docs/Drivers.html were tested and work as expected after this change. If you are using a driver not in that list, please test the drivers on an updated test cluster first.
```
Automatic merge from submit-queue (batch tested with PRs 65456, 65549). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix flexvolume in containerized kubelets
Fixes flex volumes in containerized kubelets.
cc @jsafrane @chakri-nelluri @verult
Note to reviewers: e2e tests pass in a local containerized cluster.
```release-note
Fix flexvolume in containerized kubelets
```
Automatic merge from submit-queue (batch tested with PRs 65456, 65549). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add volume mode field to constructed volume spec for CSI plugin
Add volume mode field to constructed Volume Spec for CSI plugin
```release-note
Add volume mode field to constructed volume spec for CSI plugin
```
Automatic merge from submit-queue (batch tested with PRs 64593, 65117, 65629, 65827, 65686). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
clean unused function in file pkg/volume/projected/projected.go
**What this PR does / why we need it**:
It was introduced by https://github.com/kubernetes/kubernetes/pull/37237
and has been unused since it was introduced.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 65628, 65573). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
run test TestAttacherMountDevice in temp directory
This change fixes two unit tests:
1. After running `make test WHAT=k8s.io/kubernetes/pkg/volume/csi KUBE_TEST_ARGS='-run ^TestAttacherMountDevice$'`,
a file is leaked in the workspace:
pkg/volume/csi/vol_data.json
2. `make test WHAT=k8s.io/kubernetes/pkg/volume/csi KUBE_TEST_ARGS='-run ^TestAttacherUnmountDevice$'`
fails if it does not run along with TestAttacherMountDevice.
This change fixes both.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 65348, 65599, 65635, 65688, 65691). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
azure: Add validation of resourceGroup option.
ResourceGroup can be set only on `kind: Managed` disks.
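A minimal Go sketch of the validation (helper name and error wording are assumptions, not the PR's code):
```go
package main

import (
	"fmt"
	"strings"
)

// validateResourceGroup rejects the resourceGroup option unless the
// storage class provisions managed disks.
func validateResourceGroup(kind, resourceGroup string) error {
	if resourceGroup != "" && !strings.EqualFold(kind, "managed") {
		return fmt.Errorf("resourceGroup is only supported with kind: Managed")
	}
	return nil
}

func main() {
	fmt.Println(validateResourceGroup("Shared", "my-rg"))  // error
	fmt.Println(validateResourceGroup("Managed", "my-rg")) // <nil>
}
```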
**Release note**:
```release-note
NONE
```
/assign @andyzhangx
Automatic merge from submit-queue (batch tested with PRs 65299, 65524, 65154, 65329, 65536). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make various fixes to flex tests and fix some crashes
* Fixes two controller-manager crashes when a flex plugin gets removed from flex directory.
* Also enables e2e tests to run in local clusters and other environments.
* Removes disruptive from flex e2e tests because flex can be installed in a running cluster and does not require kubelet or controller-manager restart anymore.
/sig storage
cc @verult @jsafrane
```release-note
Fix controller-manager crashes when flex plugin is removed from flex plugin directory
```
Automatic merge from submit-queue (batch tested with PRs 65582, 65480, 65310, 65644, 65645). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix local volume directory can't be deleted issue
**What this PR does / why we need it**:
We need to add the volume mode field to the constructed PV spec.
**Special notes for your reviewer**:
I hit an issue:
1) kubelet logs many errors related to volume mode:
```
Jun 21 10:31:18 kubelet[19333]: E0621 10:31:18.422321 19333 reconciler.go:179] operationExecutor.NewVolumeHandler for UnmountVolume failed for volume "lv-e57cf589-4658-4881-b125-7b9f35c2c8eb" (UniqueName: "kubernetes.io/local-volume/4103e613-656c-11e8-8c20-74dbd180ddb4-lv-e57cf589-4658-4881-b125-7b9f35c2c8eb") pod "4103e613-656c-11e8-8c20-74dbd180ddb4" (UID: "4103e613-656c-11e8-8c20-74dbd180ddb4") : cannot get volumeMode for volume: lv-e57cf589-4658-4881-b125-7b9f35c2c8eb
Jun 21 10:31:18 kubelet[19333]: E0621 10:31:18.422351 19333 reconciler.go:179] operationExecutor.NewVolumeHandler for UnmountVolume failed for volume "lv-b1e788ac-78eb-4d26-819a-263cef5337ea" (UniqueName: "kubernetes.io/local-volume/4082c1da-656c-11e8-8c20-74dbd180ddb4-lv-b1e788ac-78eb-4d26-819a-263cef5337ea") pod "4082c1da-656c-11e8-8c20-74dbd180ddb4" (UID: "4082c1da-656c-11e8-8c20-74dbd180ddb4") : cannot get volumeMode for volume: lv-b1e788ac-78eb-4d26-819a-263cef5337ea
```
2) The pod is an orphan pod and has its volume directory left on the node.
3) Because of the errors, the volume directory will never be deleted.
**Release note**:
```release-note
Fix local volume directory can't be deleted because of volumeMode error
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support for plugin directory hierarchy
**What this PR does / why we need it**:
Add hierarchy support for the plugin directory: the watcher traverses and
watches the plugin directory and its subdirectories recursively.
A plugin socket file only needs to be unique within one directory:
```
plugin socket directory
|
---->sub directory 1
| |
| -----> socket1, socket2 ...
----->sub directory 2
|
------> socket1, socket2 ...
```
The design itself allows a subdirectory to be anything,
but in practice each plugin type should just use one subdirectory.
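For illustration, a minimal Go sketch of recursive directory watching using github.com/fsnotify/fsnotify (this is a sketch of the idea, not the PR's implementation; the plugin directory path is illustrative):
```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// watchTree adds root and every existing subdirectory to the watcher.
func watchTree(w *fsnotify.Watcher, root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			return w.Add(path)
		}
		return nil
	})
}

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := watchTree(w, "/var/lib/kubelet/plugins"); err != nil { // illustrative root
		log.Fatal(err)
	}
	for ev := range w.Events {
		// Watch subdirectories as they appear, so sockets created inside
		// them are noticed too.
		if ev.Op&fsnotify.Create != 0 {
			if fi, err := os.Stat(ev.Name); err == nil && fi.IsDir() {
				_ = watchTree(w, ev.Name)
			}
		}
	}
}
```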
**Which issue(s) this PR fixes**:
Fixes #64003
**Special notes for your reviewer**:
Two bonus changes are added as below:
1) Propose to let pluginWatcher do the bookkeeping of registered plugins,
to make sure a plugin name is unique within one plugin type.
Arguably, we could let each handler do the same work, but that would require
every handler to repeat the same thing.
2) Extract the example handler out from the test; the code is easier to read with the
separation.
**Release note**:
```release-note
N/A
```
/sig node
/cc @vikaschoudhary16 @jiayingz @RenaudWasTaken @vishh @derekwaynecarr @saad-ali @vladimirvivien @dchen1107 @yujuhong @tallclair @Random-Liu @anfernee @akutz
Automatic merge from submit-queue (batch tested with PRs 64246, 65489, 65443). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
azure: Move configuration of resource group into storage class.
**What this PR does / why we need it**:
This moves configuration of the Azure resource group into the storage class. Users shouldn't configure dynamic provisioning details in PVCs, because that makes the PVC not portable to other Kubernetes installations, possibly on other clouds.
/sig storage
/assign @andyzhangx
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 65492, 65516, 65447). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Resolve potential devicePath symlink when MapVolume in containerized kubelet
**What this PR does / why we need it**: Ensures local block volumes will work in case kubelet is running in a container
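A minimal Go sketch of the resolution step (the device path is illustrative; the real plumbing lives in the volume code):
```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// In a containerized kubelet, devicePath may be a symlink created on the
	// host; resolve it before handing the device to the block mapper.
	devicePath := "/dev/disk/by-id/example" // illustrative path
	resolved, err := filepath.EvalSymlinks(devicePath)
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("using device:", resolved)
}
```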
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #65445
**Special notes for your reviewer**: Code is mostly plumbing. If there is a better way to do it, let me know :)
I assume there will be e2e tests for the non-containerized case. I will need to test the containerized case myself, which may take a while.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 65404, 65323, 65468). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix cleanup of volume metadata json file.
Create the json file with metadata as the last item, when everything else is ready, so we don't need to clean up the file in all error cases in this function.
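A minimal Go sketch of the ordering idea (type and field names are illustrative, not the real vol_data.json schema):
```go
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// volumeData is an illustrative stand-in for the metadata stored in
// vol_data.json.
type volumeData struct {
	DriverName string `json:"driverName"`
	VolumeID   string `json:"volumeID"`
}

// saveVolumeData writes the metadata file as the very last step: if any
// earlier step fails, there is no file to clean up.
func saveVolumeData(dir string, data volumeData) error {
	b, err := json.Marshal(data)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "vol_data.json"), b, 0600)
}

func main() {
	if err := saveVolumeData(os.TempDir(), volumeData{"example.csi.driver", "vol-1"}); err != nil {
		log.Fatal(err)
	}
}
```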
Fixes #65322
**Release note**:
```release-note
Fixed cleanup of CSI metadata files.
```
/assign @saad-ali @vladimirvivien
It traverses and watches the plugin directory and its subdirectories recursively;
a plugin socket file only needs to be unique within one directory:
plugin socket directory
|
---->sub directory 1
|    |
|    -----> socket1, socket2 ...
----->sub directory 2
     |
     ------> socket1, socket2 ...
The design itself allows a subdirectory to be anything,
but in practice each plugin type should just use one subdirectory.
Four bonus changes are added as below:
1. Extract the example handler out from the test; the code is easier to read
with the separation.
2. There are two variables here: "Watcher" and "watcher".
"Watcher" is the plugin watcher, and "watcher" is the fsnotify watcher,
so rename "watcher" to "fsWatcher" to make the code easier to
understand.
3. Change the RegisterCallbackFn() return value order; it is
conventional to return the error last. After this change,
pkg/volume/csi is compliant with golint, so remove it
from hack/.golint_failures.
4. Refactor the error handling in invokeRegistrationCallbackAtHandler()
to make the error messages clearer.
Automatic merge from submit-queue (batch tested with PRs 65064, 65218, 65260, 65241, 64372). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Added attach/mount/check steps to CSI Driver E2E tests
This PR makes the CSI Volume E2E tests actually go through the entire dynamic provisioning pipeline and test attach/mount/check file etc.
Fixes #64927
```release-note
None
```
Create the json file with metadata as the last item, when everything
else is ready, so we don't need to clean up the file in all error cases
in this function.
Automatic merge from submit-queue (batch tested with PRs 64895, 64938, 63700, 65050, 64957). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
In case storage class parameters are empty, create a new map for Portworx volume labels
Signed-off-by: Harsh Desai <harsh@portworx.com>
**What this PR does / why we need it**:
Fixes #64894
**Release note**:
```release-note
Fixes an issue where Portworx PVCs remain in pending state when created using a StorageClass with empty parameters
```
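For illustration, a minimal Go sketch of the nil-map guard described in this fix (names are illustrative, not the Portworx plugin's code):
```go
package main

import "fmt"

// volumeLabels guards against nil StorageClass parameters: writing to a nil
// map panics in Go, so allocate a fresh map before adding labels.
func volumeLabels(scParams map[string]string) map[string]string {
	labels := scParams
	if labels == nil {
		labels = make(map[string]string)
	}
	labels["pvc"] = "pvc-name" // illustrative label
	return labels
}

func main() {
	fmt.Println(volumeLabels(nil)) // no nil-map panic
}
```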
Automatic merge from submit-queue (batch tested with PRs 65256, 64236, 64919, 64879, 57932). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add block volume support to Cinder volume plugin
**What this PR does / why we need it**:
This PR adds block volume support to Cinder volume plugin.
**Release note**:
```release-note
Added block volume support to Cinder volume plugin.
```
Automatic merge from submit-queue (batch tested with PRs 65254, 64837, 64782, 64555, 64850). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix RBD device in block mode not getting mapped to the container
In my environment, restarting docker causes all of the containers to restart (our kernel is a bit old).
Kubelet is also restarted.
After the containers came up, I checked and found the RBD block device was not mapped to the container.
When I inspected the container, its {HostConfig.Devices[]} field was empty.
I did some debugging and found that at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet_pods.go#L113
the volName is empty.
```release-note
Fix issues for block device not mapped to container.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix UnmountDevice with deleted pod.
When a pod is deleted, kubelet can't read VolumeAttachment objects. It should cache all information in a json file.
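A minimal Go sketch of the cache-read side (the file path and field names are assumptions, not the real vol_data.json schema):
```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// attachmentData holds the subset of VolumeAttachment information kubelet
// needs at teardown; the field names here are assumptions.
type attachmentData struct {
	AttachmentName string `json:"attachmentName"`
	DriverName     string `json:"driverName"`
}

// loadVolumeData reads the cached metadata so UnmountDevice can proceed
// even when the pod and its API objects are already gone.
func loadVolumeData(path string) (*attachmentData, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var d attachmentData
	if err := json.Unmarshal(b, &d); err != nil {
		return nil, err
	}
	return &d, nil
}

func main() {
	d, err := loadVolumeData("/var/lib/kubelet/plugins/example/vol_data.json") // illustrative path
	fmt.Println(d, err)
}
```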
Fixes #63827
~~Work in progress: missing (unit?) tests~~
**Release note**:
```release-note
NONE
```
@saad-ali @vladimirvivien @sbezverk
/sig storage
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix #64119: RBD volumes cannot be mapped read-only to more than 1 container
**What this PR does / why we need it**:
RBD volumes cannot be mapped read only to more than 1 container.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64119
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 64416, 63625, 60967, 64767, 64588). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix some log issues in flexvolume
**What this PR does / why we need it**:
This PR fixes some log errors in flexvolume's code. Currently some log statements log the function call `spec.Name()` as `spec.Name`, which causes a function address to appear in the log.
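A minimal Go sketch of this bug class (the types are illustrative; flexvolume uses its own spec type):
```go
package main

import "fmt"

type volumeSpec struct{ name string }

func (s *volumeSpec) Name() string { return s.name }

func main() {
	spec := &volumeSpec{name: "my-volume"}
	fmt.Printf("mounting %v\n", spec.Name)   // bug: prints a function address
	fmt.Printf("mounting %v\n", spec.Name()) // fix: prints "mounting my-volume"
}
```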
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 64416, 63625, 60967, 64767, 64588). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove mount.GetMountRefs in favor of mounter.GetMountRefs
**What this PR does / why we need it**:
Currently, there are two `GetMountRefs` functions:
- `mount.GetMountRefs`: used in various volume plugins
- `<mounter>.GetMountRefs` (previously `mount.GetMountRefsByDev` introduced in [#49988](https://github.com/kubernetes/kubernetes/pull/49988/files#diff-0c0020e71c995790a90ad9c61ede7632R154), moved to `Mounter` interface in #62903)
This is confusing, and it's better to implement `GetMountRefs` on the mounter interface, because different mounters can have their own implementations (especially for nsenter).
This PR removes `mount.GetMountRefs` in favor of `mounter.GetMountRefs`.
More discussions: https://github.com/kubernetes/kubernetes/pull/62102#issuecomment-390081884 and https://github.com/kubernetes/kubernetes/pull/62102#issuecomment-390123022.
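A minimal Go sketch of the interface shape (simplified; the real Mounter interface has many more methods):
```go
package main

import "fmt"

// mounter is a simplified stand-in for the Mounter interface: GetMountRefs
// lives on the interface so each mounter (standard, nsenter, ...) supplies
// its own implementation.
type mounter interface {
	GetMountRefs(pathname string) ([]string, error)
}

type fakeMounter struct{}

func (fakeMounter) GetMountRefs(pathname string) ([]string, error) {
	return []string{pathname + "-ref"}, nil // illustrative only
}

func main() {
	var m mounter = fakeMounter{}
	refs, _ := m.GetMountRefs("/var/lib/kubelet/pods/uid/volumes/vol")
	fmt.Println(refs)
}
```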
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 65032, 63471, 64104, 64672, 64427). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add external resource group support for azure disk
**What this PR does / why we need it**:
add external resource group support for azure disk:
- without this PR, users could only create dynamic azure disks in the same resource group as the cluster
- with this PR, users can specify an external resource group in the PVC:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk
  annotations:
    volume.beta.kubernetes.io/resource-group: "USER-SPECIFIED-RG"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hdd
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64388
**Special notes for your reviewer**:
Please note the above config won't change the resource group for azure disks permanently; next time, if the user doesn't specify a resource group, the default resource group will be used.
**Release note**:
```
add external resource group support for azure disk
```
/sig azure
/assign @feiskyer @karataliu
/cc @khenidak
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
CSI block map file path fix
**What this PR does / why we need it**:
This PR is a bug fix that addresses the way CSI communicates the block volume path. Instead of sending a directory to the external CSI driver, this PR fixes it to send the path to a pre-existing file used for block mapping.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64854
**Special notes for your reviewer**:
/kind bug
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 64142, 64426, 62910, 63942, 64548). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clean up fake mounters.
**What this PR does / why we need it**:
Fixes https://github.com/kubernetes/kubernetes/issues/61502
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
list of fake mounters:
- (keep) pkg/util/mount.FakeMounter
- (removed) pkg/kubelet/cm.fakeMountInterface
- (inherits from mount.FakeMounter) pkg/util/mount.fakeMounter
- (inherits from mount.FakeMounter) pkg/util/removeall.fakeMounter
- (removed) pkg/volume/host_path.fakeFileTypeChecker
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
CSI implementation of raw block volume support
**What this PR does / why we need it**:
This PR implements support for block volumes feature.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64722
**Special notes for your reviewer**:
**Release note**:
```release-note
Provides API support for external CSI storage drivers to support block volumes.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Set GCE PD attachable volume limit based on machineType
This PR implements the function to return the attachable volume limit based
on machineType for GCE PD. This is part of the design in kubernetes/community#2051.
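For illustration, a minimal Go sketch of the idea (the limits and machine-type prefixes here are illustrative assumptions, not the implemented values):
```go
package main

import (
	"fmt"
	"strings"
)

// attachableLimit sketches the idea: shared-core machine types support far
// fewer attached disks than standard ones. The numbers are assumptions.
func attachableLimit(machineType string) int {
	if strings.HasPrefix(machineType, "f1-") || strings.HasPrefix(machineType, "g1-") {
		return 15
	}
	return 127
}

func main() {
	fmt.Println(attachableLimit("f1-micro"), attachableLimit("n1-standard-4"))
}
```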
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Volume topology aware dynamic provisioning: work based on new API
**What this PR does / why we need it**:
The PR has been split into 3 parts:
Part1: https://github.com/kubernetes/kubernetes/pull/63232 for basic scheduler and PV controller plumbing
Part2: https://github.com/kubernetes/kubernetes/pull/63233 for API change
and the PR itself includes work based on the API change:
- Dynamic provisioning allowed topologies scheduler work
- Update provisioning interface to be aware of selected node and topology
**Which issue(s) this PR fixes**
Feature: https://github.com/kubernetes/features/issues/561
Design: https://github.com/kubernetes/community/issues/2168
**Special notes for your reviewer**:
/sig storage
/sig scheduling
/assign @msau42 @jsafrane @saad-ali @bsalamat
@kubernetes/sig-storage-pr-reviews
@kubernetes/sig-scheduling-pr-reviews
**Release note**:
```release-note
Volume topology aware dynamic provisioning
```
This PR implements the function to return the attachable volume limit based
on machineType for GCE PD. This is part of the design in kubernetes/community#2051.
Automatic merge from submit-queue (batch tested with PRs 64276, 64094, 64719, 64766, 64750). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delegate map operation to BlockVolumeMapper plugin
**What this PR does / why we need it**:
This PR refactors the volume controller's operation generator for block mapping to delegate the core block mounting sequence to the `volume.BlockVolumeMapper` plugin instead of keeping it in the operation generator. This is to ensure better customization of block volume logic for existing internal volume plugins.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64093
```release-note
NONE
```
/sig storage
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add azuredisk PV size grow feature
**What this PR does / why we need it**:
According to kubernetes/features#284, this adds the size grow feature for azure disk.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56463
**Special notes for your reviewer**:
- This feature is only for azure managed disks, and if the disk is already attached to a running VM, disk resize will fail as follows:
```
$ kubectl describe pvc pvc-azuredisk
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning VolumeResizeFailed 51s (x3 over 3m) volume_expand Error expanding volume "default/pvc-azuredisk" of plugin kubernetes.io/azure-disk : disk.DisksClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Cannot resize disk andy-mg1102-dynamic-pvc-d2d00dd9-6185-11e8-a6c3-000d3a0643a8 while it is attached to running VM /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/k8s-agentpool-17607330-0."
```
**How to use this feature**
- `kubectl edit pvc pvc-azuredisk` to change azuredisk PVC size from 6GB to 10GB
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    ...
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
  creationTimestamp: 2018-05-27T08:13:23Z
  finalizers:
  - kubernetes.io/pvc-protection
  name: pvc-azuredisk
  ...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: hdd
  volumeMode: Filesystem
  volumeName: pvc-d2d00dd9-6185-11e8-a6c3-000d3a0643a8
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 6Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-05-27T08:14:34Z
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound
```
- After the resize, `/mnt/disk` is still 6GB:
```
$ kubectl exec -it nginx-azuredisk -- bash
# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sdf 5.8G 15M 5.5G 1% /mnt/disk
...
```
- After the user runs `sudo resize2fs /dev/sdf` on the agent node, `/mnt/disk` becomes 10GB:
```
$ kubectl exec -it nginx-azuredisk -- bash
# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sdf 9.8G 16M 9.3G 1% /mnt/disk
...
```
**Release note**:
```
Add azuredisk size grow feature
```
/sig azure
/assign @feiskyer @karataliu @gnufied
cc @khenidak
Automatic merge from submit-queue (batch tested with PRs 62266, 64351, 64366, 64235, 64560). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding CSI driver registration with plugin watcher
Adding CSI driver registration bits. The registration process will leverage the driver-registrar sidecar, which will open the `registration` socket and listen for the plugin watcher's GetInfo calls.
```release-note
Adding CSI driver registration code.
```
/sig storage
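A minimal Go sketch of the GetInfo handshake (the types are simplified stand-ins for the real pluginregistration API, and the name and endpoint are illustrative):
```go
package main

import "fmt"

// pluginInfo is a simplified stand-in for the pluginregistration API's
// PluginInfo message.
type pluginInfo struct {
	Type              string
	Name              string
	Endpoint          string
	SupportedVersions []string
}

// getInfo is what the driver-registrar answers on the registration socket
// when the plugin watcher calls GetInfo.
func getInfo() pluginInfo {
	return pluginInfo{
		Type:              "CSIPlugin",
		Name:              "example.csi.driver",                    // illustrative
		Endpoint:          "/var/lib/kubelet/plugins/example.sock", // illustrative
		SupportedVersions: []string{"v1alpha1"},
	}
}

func main() { fmt.Printf("%+v\n", getInfo()) }
```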
Automatic merge from submit-queue (batch tested with PRs 62266, 64351, 64366, 64235, 64560). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add log and fs stats for Windows containers
**What this PR does / why we need it**:
Add log and fs stats for Windows containers.
Without this, kubelet will report errors continuously:
```
Unable to fetch container log stats for path \var\log\pods\2a70ed65-37ae-11e8-8730-000d3a14b1a0\echo: Du not supported for this build.
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60180 #62047
**Special notes for your reviewer**:
**Release note**:
```release-note
Add log and fs stats for Windows containers
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix data loss issue if using existing azure disk with partitions in disk mount
**What this PR does / why we need it**:
When using an existing azure disk (also called [static provisioning](https://github.com/andyzhangx/demo/tree/master/linux/azuredisk#static-provisioning-for-azure-disk)) in a pod, if that disk has multiple partitions, the disk will be formatted during pod mounting.
This PR removes the `formatIfNotFormatted` func in `WaitForAttach`, which used the `lsblk` command to check whether the disk is formatted or not:
b87a392b1a/pkg/volume/azure_dd/azure_common_linux.go (L213-L215)
The format disk operation now happens in `MountDevice`, which uses common k8s code (`SafeFormatAndMount.GetDiskFormat`) with `blkid` to detect the disk format; `blkid` can detect multiple partitions (a sketch of this check appears at the end of this entry):
b87a392b1a/pkg/util/mount/mount_linux.go (L541-L543)
- so if we use the common k8s code (`SafeFormatAndMount.GetDiskFormat`), the following error will be returned for multiple-partition disks, which is expected:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #63235
**Special notes for your reviewer**:
This PR depends on https://github.com/kubernetes/kubernetes/pull/63248
**Release note**:
```
fix data loss issue if using existing azure disk with partitions in disk mount
```
/sig azure
/assign @khenidak
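For reference, a minimal Go sketch of the blkid-based partition check described above (the helper name is illustrative; a real implementation, like `GetDiskFormat`, must also treat blkid's "nothing found" exit status as "no partition table" rather than as an error):
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasPartitionTable checks whether blkid reports a PTTYPE for the device;
// if so, the disk carries a partition table and must not be formatted.
func hasPartitionTable(device string) (bool, error) {
	out, err := exec.Command("blkid", "-p", "-s", "PTTYPE", "-o", "export", device).CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "PTTYPE="), nil
}

func main() {
	ok, err := hasPartitionTable("/dev/sdf") // illustrative device
	fmt.Println(ok, err)
}
```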