Automatic merge from submit-queue (batch tested with PRs 40777, 43673)
remove an unnecessary variable assignment in glusterfs_test
**What this PR does / why we need it**:
`path` is exactly the same variable as `volumePath`, which is defined in line 122, so there is no need to assign it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 44362, 44421, 44468, 43878, 44480)
Delete EmptyDir volume directly instead of renaming the directory.
**What this PR does / why we need it**:
The volume operation executor can now handle duplicate requests on the same volume, so it is no longer necessary to rename the directory. This change can make pod deletion take longer for large emptydir volumes, because the kubelet now waits for the volume to be deleted before it continues pod cleanup. This is actually required for local disk scheduling, so that we don't schedule new pods that need emptydir volumes onto the node if the previous emptydir has not been fully reclaimed yet.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43534
**Special notes for your reviewer**:
**Release note**:
NONE
cc @kubernetes/sig-storage-pr-reviews
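As an illustration, a minimal sketch (assuming a hypothetical `teardownPath` helper, not the actual kubelet code) of deleting the emptydir contents directly instead of renaming the directory aside:
```go
package emptydir

import (
	"fmt"
	"os"
)

// teardownPath is a hypothetical helper: instead of renaming the emptydir
// directory aside and cleaning it up later, remove it synchronously so pod
// cleanup only continues once the space is actually reclaimed.
func teardownPath(dir string) error {
	if err := os.RemoveAll(dir); err != nil {
		return fmt.Errorf("failed to delete emptydir volume at %s: %v", dir, err)
	}
	return nil
}
```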
Automatic merge from submit-queue
Update owners to include kerneltime
**What this PR does / why we need it**: Update owners to include kerneltime to help with PRs
Automatic merge from submit-queue
Catch error when directory creation fails in the NFS volume plugin
NFS: catch error when directory creation fails
Currently, the NFS volume plugin doesn't check the error returned by
os.MkdirAll, which makes it difficult to debug why the directory could
not be created. This patch adds error handling for os.MkdirAll.
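A minimal sketch of the kind of check the patch adds (the helper name is illustrative, not the exact plugin code):
```go
package nfs

import (
	"fmt"
	"os"
)

// makeMountDir creates the mount point directory and surfaces the error
// instead of silently ignoring it, so a failed mkdir is easy to debug.
func makeMountDir(dir string) error {
	if err := os.MkdirAll(dir, 0750); err != nil {
		return fmt.Errorf("failed to create mount directory %s: %v", dir, err)
	}
	return nil
}
```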
Automatic merge from submit-queue
azure disk: add logging on disk attach
**What this PR does / why we need it**:
While we were debugging a failed azure disk attach, we were missing the logging information needed to identify the root cause. This fix logs information at each stage of the attach to help identify where the problem is if it happens again.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
NONE
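A hedged sketch of stage-by-stage attach logging; the function and message text are illustrative, not the actual Azure code (Kubernetes used glog for this kind of logging at the time):
```go
package azure

import "github.com/golang/glog"

// attachDisk is an illustrative outline: log before and after each stage of
// the attach so a failure can be localized from the kubelet/controller logs.
func attachDisk(diskName, diskURI, nodeName string, attach func() error) error {
	glog.V(2).Infof("azureDisk - begin attach of disk %q (%s) to node %q", diskName, diskURI, nodeName)
	if err := attach(); err != nil {
		glog.Errorf("azureDisk - attach of disk %q to node %q failed: %v", diskName, nodeName, err)
		return err
	}
	glog.V(2).Infof("azureDisk - disk %q successfully attached to node %q", diskName, nodeName)
	return nil
}
```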
Automatic merge from submit-queue (batch tested with PRs 44119, 42538, 43802, 42336, 43396)
iSCSI CHAP support
**What this PR does / why we need it**:
To support CHAP authentication in a multi-tenant setup
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support iSCSI CHAP authentication
```
Automatic merge from submit-queue
relocate FC multipath readme to examples from pkg/volume
Signed-off-by: rootfs <hchen@redhat.com>
**What this PR does / why we need it**:
`pkg/volume/README.md` is not a good place for Fibre Channel specific documentation. Move the section into the FC README.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44008, 41929)
vSphere Cloud Provider: Fstype in storage class
This PR does the following:
1. Adds fstype support in storage class for vSphere Cloud Provider.
2. Modifies examples to include fstype in storage class.
3. Adds fstype support in storage class for Photon Controller Cloud Provider (@luomiao)
Internally reviewed [here](https://github.com/vmware/kubernetes/pull/88).
cc @pdhamdhere @tusharnt @kerneltime @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue (batch tested with PRs 42038, 42083)
Add backup-volfile-servers to mount option.
This feature ensures that the `backup servers` in the trusted pool are contacted if there is a failure in the connected server.
Mount option becomes:
mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/glustermount/glusterpod-glusterfs.log,backup-volfile-servers=192.168.100.0:192.168.200.0:192.168.43.149 ..
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Automatic merge from submit-queue
Remove unused method from operation_generator
This is only a removal of the GerifyVolumeIsSafeToDetach [sic] method from operation_executor. The method is not called from anywhere; moreover, there is a private method named verifyVolumeIsSafeToDetach (which is being used). This looks like a copy-and-paste mistake that deserves to be cleaned up.
```release-note
NONE
```
Automatic merge from submit-queue
Curate owners for pkg/volume/aws_ebs
The previous list was algorithmically generated; applying some curation.
```release-note
NONE
```
The cloud provider is being refactored out of Kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The kubelet relies on the cloud provider to detect
information about the node that it is running on. Some of the cloud providers
obtained this information by querying local, on-node services. In the new model,
local information cannot be relied on, since cloud-controller-manager will not
run on every node; only one active instance of it will run in the cluster.
Today, all calls to the cloud provider are keyed by node name. Node names are
unique within the Kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by node name will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloud provider calls based on the ProviderID. This ID is
a unique identifier for a node in an external database (such as
the instance ID in AWS).
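To illustrate the shape of this (not part of the proposal text), a hedged sketch of a cloud provider interface keyed by ProviderID; the method names are assumptions for this sketch:
```go
package cloudprovider

// Instances is a hedged, illustrative subset of a cloud provider interface
// keyed by ProviderID instead of node name.
type Instances interface {
	// InstanceExistsByProviderID returns whether the instance identified by
	// the given ProviderID (e.g. "aws:///us-east-1a/i-0123456789abcdef0";
	// the format is cloud specific) still exists in the cloud.
	InstanceExistsByProviderID(providerID string) (bool, error)

	// InstanceTypeByProviderID returns the instance type of the node with
	// the given ProviderID.
	InstanceTypeByProviderID(providerID string) (string, error)
}
```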
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
VSAN policy support for storage volume provisioning inside kubernetes
vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
For example, a user creates a storage class with Virtual SAN storage capabilities:
> kind: StorageClass
> apiVersion: storage.k8s.io/v1beta1
> metadata:
>   name: slow
> provisioner: kubernetes.io/vsphere-volume
> parameters:
>   hostFailuresToTolerate: "2"
>   diskStripes: "1"
>   cacheReservation: "20"
>   datastore: VSANDatastore
The vSphere Cloud provider provisions a virtual disk (VMDK) on VSAN with the policy configured to the disk.
When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.
@pdhamdhere @tthole @abrarshivani @divyenpatel
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)
Enable storage class support in Azure File volume
**What this PR does / why we need it**:
Support StorageClass in Azure file volume
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support StorageClass in Azure file volume
```
Automatic merge from submit-queue (batch tested with PRs 42522, 42545, 42556, 42006, 42631)
Fixes MountVolume.NewMounter errors not displayed to users via describe events
Fixes #42004
This fixes the problem of mount errors being eaten and not displayed to users again, specifically errors caught in MountVolume.NewMounter (like missing endpoints, etc.).
Current behavior for any mount failure:
```
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12m 12m 1 default-scheduler Normal Scheduled Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
10m 1m 5 kubelet, 127.0.0.1 Warning FailedMount Unable to mount volumes for pod "glusterfs-bb-pod1_default(67c9dfa7-f9f5-11e6-aee2-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
10m 1m 5 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
```
New Behavior:
For example on glusterfs - deliberately didn't create endpoints, now correct message is displayed:
```
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 default-scheduler Normal Scheduled Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
54s 54s 1 kubelet, 127.0.0.1 Warning FailedMount Unable to mount volumes for pod "glusterfs-bb-pod1_default(8edd2c25-fa09-11e6-92ae-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
54s 54s 1 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
2m 6s 814 kubelet, 127.0.0.1 Warning FailedMount MountVolume.NewMounter failed for volume "kubernetes.io/glusterfs/8edd2c25-fa09-11e6-92ae-5254003a59cf-glusterfsvol" (spec.Name: "glusterfsvol") pod "8edd2c25-fa09-11e6-92ae-5254003a59cf" (UID: "8edd2c25-fa09-11e6-92ae-5254003a59cf") with: endpoints "glusterfs-cluster" not found
```
Automatic merge from submit-queue (batch tested with PRs 42237, 42297, 42279, 42436, 42551)
should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
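For illustration, the substitution in a minimal before/after form:
```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	name := "pv-1"

	// Before: build a formatted message, then wrap it in errors.New.
	errBefore := errors.New(fmt.Sprintf("failed to provision volume %s", name))

	// After: fmt.Errorf formats and constructs the error in one call.
	errAfter := fmt.Errorf("failed to provision volume %s", name)

	fmt.Println(errBefore, errAfter)
}
```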
Automatic merge from submit-queue
Add gnufied as reviewer for pkg/volume
I have helped review and contributed code to this
area already.
cc @saad-ali @jsafrane @childsb
Automatic merge from submit-queue
fc: Drop multipath.conf snippet
**What this PR does / why we need it**:
Removes the multipath.conf snippet. The code does not make use of it, or ensure that it's getting used, and it should in any case be handled elsewhere.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
A minimalistic multipath.conf got written, but it was useless, as
it is unclear if multipathd is running and there was also no
config reload triggered.
This patch drops this snippet. In general it's probably a better idea
to leave the multipath.conf to the component managing the host.
Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
Auto-generated via:
git grep -l [Ss]uccesfully | xargs sed -ri 's/([sS])uccesfully/\1uccessfully/g'
I noticed this when running kube-scheduler with --v4 and it is annoying.
Then manually reverted changes to the vendored bits.
Automatic merge from submit-queue
Recycle pod can't get events because the channel is closed
What this PR does / why we need it:
We create a hostPath PV with the "Recycle" persistentVolumeReclaimPolicy and bind a PVC to it, but after deleting the PVC the PV cannot return to the Available status. This started happening after we upgraded etcd to 3.0. The reason is:
If the channel used to receive the pod's messages and events is closed abnormally (for example, the event channel may be closed because of a "required revision has been compacted" error), the function internalRecycleVolumeByWatchingPodUntilCompletion gets stuck in a loop, the recycler pod is never deleted, and the PV cannot return to the Available status.
Special notes for your reviewer:
None
Release note:
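A hedged sketch (not the actual recycler code) of the pattern involved: a receive that checks whether the event channel is closed lets the caller fail and re-establish the watch instead of looping forever.
```go
package recycler

import "fmt"

type watchEvent struct {
	// fields elided for the sketch
}

// waitForPodCompletion drains pod events from ch; it returns an error when
// the channel is closed abnormally (e.g. the watch was dropped because the
// requested etcd revision was compacted) instead of looping forever.
func waitForPodCompletion(ch <-chan watchEvent, done func(watchEvent) bool) error {
	for {
		ev, ok := <-ch
		if !ok {
			return fmt.Errorf("pod watch channel was closed before the recycler pod finished")
		}
		if done(ev) {
			return nil
		}
	}
}
```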
This commit updates the code to set the default value of the readOnly attribute to false.
It also updates the example docs to add the full list of supported plugin attributes and documentation.
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)
[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats
Continuation of #33121. Credit for most of this goes to @sjenning. I added volume fs timestamps.
**why is this a bug**
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads.
This PR will also help deflake a node e2e test suite.
The eviction manager currently avoids evicting pods if metrics are old. However, timestamp data is not available for filesystem data, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.
cc: @kubernetes/sig-storage-pr-reviews @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
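A hedged sketch of the idea (the names are illustrative, not the eviction manager's actual API): attach a timestamp to filesystem stats and skip observations older than a staleness threshold before making an eviction decision.
```go
package eviction

import "time"

// fsStats is an illustrative stand-in for a filesystem usage observation.
type fsStats struct {
	AvailableBytes uint64
	Timestamp      time.Time
}

// freshStats filters out observations older than maxAge so the eviction
// logic does not react to stale data and evict more pods than necessary.
func freshStats(all []fsStats, now time.Time, maxAge time.Duration) []fsStats {
	var fresh []fsStats
	for _, s := range all {
		if now.Sub(s.Timestamp) <= maxAge {
			fresh = append(fresh, s)
		}
	}
	return fresh
}
```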
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)
Implement bulk polling of volumes
This implements Bulk volume polling using ideas presented by
justin in https://github.com/kubernetes/kubernetes/pull/39564
But it changes the implementation to use an interface
and doesn't affect other implementations.
cc @justinsb
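A hedged sketch of what such an optional interface might look like; the names here are assumptions for this sketch, not the exact code from the PR:
```go
package volume

// NodeName identifies a node in the cluster.
type NodeName string

// Spec is a placeholder for a volume specification in this sketch.
type Spec struct {
	Name string
}

// BulkVolumeVerifier is an optional interface a volume plugin can implement
// to verify the attachment state of many volumes across many nodes in a
// single cloud API call, instead of polling each volume individually.
// Plugins that do not implement it keep the existing per-volume behavior.
type BulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[NodeName][]*Spec) (map[NodeName]map[*Spec]bool, error)
}
```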
Automatic merge from submit-queue (batch tested with PRs 41597, 42185, 42075, 42178, 41705)
force rbd image unlock if the image is not used
**What this PR does / why we need it**:
Ceph RBD image could be locked if the host that holds the lock is down. In such case, the image cannot be used by other Pods.
The fix is to detect the orphaned locks and force unlock.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #31790
**Special notes for your reviewer**:
Note: previously, the RBD volume plugin mapped the image, mounted it, and created a lock on the image. Since the proposed fix uses the `rbd status` output to determine whether the image is being used, the sequence has to change to: rbd lock checking (through `rbd lock list`), mapping check (through `rbd status`), forced unlock if necessary (through `rbd lock rm`), image lock, image mapping, and mount.
**Release note**:
```release-note
force unlock rbd image if the image is not used
```
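A hedged sketch of the sequence described above, shelling out to the rbd CLI; the helper is illustrative, and the real plugin parses each command's output before deciding whether a lock is orphaned:
```go
package rbd

import (
	"fmt"
	"os/exec"
)

// forceUnlockIfOrphaned outlines the sequence: list locks, check whether the
// image is still in use via `rbd status`, then remove the stale lock. The
// output parsing that drives the actual decision is elided in this sketch.
func forceUnlockIfOrphaned(pool, image, lockID, locker string) error {
	// 1. rbd lock list: discover existing locks on the image.
	if out, err := exec.Command("rbd", "lock", "list", "--pool", pool, image).CombinedOutput(); err != nil {
		return fmt.Errorf("rbd lock list failed: %v, output: %s", err, out)
	}
	// 2. rbd status: check whether any client still has the image open.
	if out, err := exec.Command("rbd", "status", "--pool", pool, image).CombinedOutput(); err != nil {
		return fmt.Errorf("rbd status failed: %v, output: %s", err, out)
	}
	// 3. rbd lock rm: remove the orphaned lock (in the real code, only after
	//    the parsed status shows no live watcher holds it).
	if out, err := exec.Command("rbd", "lock", "rm", "--pool", pool, image, lockID, locker).CombinedOutput(); err != nil {
		return fmt.Errorf("rbd lock rm failed: %v, output: %s", err, out)
	}
	return nil
}
```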
- Add a new type PortworxVolumeSource
- Implement the kubernetes volume plugin for Portworx Volumes under pkg/volume/portworx
- The Portworx Volume Driver uses the libopenstorage/openstorage specifications and APIs for volume operations.
Changes for k8s configuration and examples for portworx volumes.
- Add PortworxVolume hooks in kubectl, kube-controller-manager and validation.
- Add a README for PortworxVolume usage as PVs, PVCs and StorageClass.
- Add example spec files
Handle code review comments.
- Modified READMEs to incorporate suggestions.
- Add a test for ReadWriteMany access mode.
- Use util.UnmountPath in TearDown.
- Add ReadOnly flag to PortworxVolumeSource
- Use hostname:port instead of unix sockets
- Delete the mount dir in TearDown.
- Fix link issue in persistentvolumes README
- In unit test check for mountpath after Setup is done.
- Add PVC Claim Name as a Portworx Volume Label
Generated code and documentation.
- Updated swagger spec
- Updated api-reference docs
- Updated generated code under pkg/api/v1
Godeps update for Portworx Volume Driver
- Adds github.com/libopenstorage/openstorage
- Adds go.pedge.io/pb/go/google/protobuf
- Updates Godep Licenses
Automatic merge from submit-queue (batch tested with PRs 41116, 41804, 42104, 42111, 42120)
Add support for attacher/detacher interface in Flex volume
Add support for attacher/detacher interface in Flex volume
This change breaks backward compatibility and needs to be release-noted.
```release-note
Flex volume plugin is updated to support attach/detach interfaces. It broke backward compatibility. Please update your drivers and implement the new callouts.
```
Add some lines about how to enable multipath for block storage.
A new README was added, because multipath is relevant for at least
FC and iSCSI.
Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
Automatic merge from submit-queue
Fix for Support selection of datastore for dynamic provisioning in vS…
Fixes #40558
The current vSphere Cloud Provider doesn't allow a user to select a datastore for dynamic provisioning. All the volumes are created in the default datastore provided by the user in the global vSphere configuration file.
With this fix, the user will be able to provide the datastore in the storage class definition. This will allow the volumes to be created in the datastore specified by the user in the storage class definition. This field is optional. If no datastore is specified, the volume will be created in the default datastore specified in the global config file.
For example:
The user creates a storage class with the datastore:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore
Now the volume will be created in the datastore specified by the user: "VMFSDatastore".
If the user creates a storage class without any datastore:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
Now the volume will be created in the datastore specified in the global configuration file (vsphere.conf).
@pdhamdhere @kerneltime
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)
Allow multiple mounts in StatefulSet volume zone placement
We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones. Sadly they forgot to account for
multiple mounts. This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.
Fix #35695
```release-note
Fix zone placement heuristics so that multiple mounts in a StatefulSet pod are created in the same zone
```
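A hedged, hypothetical sketch of the idea: StatefulSet claims are named `<claimTemplate>-<setName>-<ordinal>`, so dropping the leading claim-template (mount) segment makes all claims of the same pod hash to the same zone key:
```go
package persistentvolume

import "strings"

// zoneKeyForPVC is a hypothetical illustration of the heuristic change:
// StatefulSet claims are named "<claimTemplate>-<setName>-<ordinal>"
// (e.g. "www-web-0" and "data-web-0" for pod "web-0"). Dropping the leading
// claim-template segment means both claims of the same pod produce the same
// key and therefore land in the same zone. Claim templates whose names
// themselves contain "-" remain ambiguous in this simplified sketch.
func zoneKeyForPVC(pvcName string) string {
	parts := strings.SplitN(pvcName, "-", 2)
	if len(parts) == 2 {
		return parts[1] // e.g. "web-0"
	}
	return pvcName
}
```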
Automatic merge from submit-queue (batch tested with PRs 41364, 40317, 41326, 41783, 41782)
changes to cleanup the volume plugin for recycle
**What this PR does / why we need it**:
Code cleanup. Instead of creating a new interface from the plugin, which then calls a function to recycle a volume, the function is added to the plugin itself.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26230
**Special notes for your reviewer**:
Took the same approach as the closed PR #28432.
Do you want the same approach for NewDeleter(), NewMounter() and NewUnMounter(), and should they be in this same PR or in separate PRs?
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)
Fix ConfigMaps for Windows
**What this PR does / why we need it**: ConfigMaps were broken for Windows as the existing code used Linux-specific file paths. Updated the code in `kubelet_getters.go` to use `path/filepath` to get the directories. Also reverted the code in `secret.go`, since updating `kubelet_getters.go` to use `path/filepath` fixes `secrets` as well.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/39372
```release-note
Fix ConfigMap for Windows Containers.
```
cc: @pires
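A small illustration of the underlying fix (the paths are illustrative): `path.Join` always joins with `/`, while `path/filepath` uses the OS-specific separator, which is what Windows nodes need.
```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join always produces slash-separated paths, which breaks on Windows.
	fmt.Println(path.Join("C:\\var\\lib\\kubelet", "pods", "uid", "volumes"))

	// filepath.Join uses the platform separator ('\' on Windows, '/' on Linux),
	// so the same getter code works for both node operating systems.
	fmt.Println(filepath.Join("C:\\var\\lib\\kubelet", "pods", "uid", "volumes"))
}
```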
Automatic merge from submit-queue (batch tested with PRs 41531, 40417, 41434)
Always detach volumes in operation executor
**What this PR does / why we need it**:
Instead of marking a volume as detached immediately in Kubelet's
reconciler, delegate the marking asynchronously to the operation
executor. This is necessary to prevent race conditions with other
operations mutating the same volume state.
An example of one such problem:
1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler reaches detach volume section, detects volume is no longer in desired state of world,
removes it from volumes in use
7. MountVolume tries to mark the volume as mounted, but throws an error because
the volume is no longer in the actual state of the world. After this, kubelet isn't aware of the mount,
so it doesn't try to unmount again.
8. controller-manager tries to detach the volume, this fails because it
is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #32881, fixes #37854 (maybe)
**Special notes for your reviewer**:
**Release note**:
```release-note
```
To safely mark a volume detached when the volume controller manager is used.
An example of one such problem:
1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler detects volume is no longer in desired state of world,
removes it from volumes in use
7. MountVolume tries to mark volume in use, throws an error because
volume is no longer in actual state of world list.
8. controller-manager tries to detach the volume, this fails because it
is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
Automatic merge from submit-queue (batch tested with PRs 41246, 39998)
Cinder volume attacher: use instanceID instead of NodeID when verifying attachment
**What this PR does / why we need it**: The Cinder volume attacher incorrectly uses the NodeID instead of the OpenStack instance ID when verifying attachment, so reconciliation fails.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39978
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Add gnufied as reviewer for aws and gce volumes
Adding myself as reviewer for aws and gce volume plugins. I understand the code well enough and have helped with review in those areas already.
cc @childsb @justinsb @saad-ali