Automatic merge from submit-queue
Curating Owners: pkg/volume
cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who shouldn't be
reviewing code in the future. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use an ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example).
Automatic merge from submit-queue (batch tested with PRs 39807, 37505, 39844, 39525, 39109)
fix bug not using volumetype config in create volume
fixes #39843
@humblec
We are building the volumetype config, but I don't see where we are using it in CreateVolume for dynamic provisioning. This is why the volumetype parameter from the Storage Class was being overlooked: we are hard-coding constants like replicaCount, which is always 3.
Unless I'm missing something?
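For illustration, here is a minimal Go sketch of the intended behaviour, assuming a `volumetype` parameter of the form `replicate:<count>` and a hypothetical `parseVolumeType` helper; the real plugin's parsing and defaults may differ:
```go
// Sketch only: honor a parsed volume-type parameter instead of a hard-coded
// replica count. Names and defaults below are assumptions.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type volumeTypeConfig struct {
	Kind     string // "replicate" or "none"
	Replicas int
}

// parseVolumeType interprets the StorageClass "volumetype" parameter,
// e.g. "replicate:3". An empty value falls back to an assumed default.
func parseVolumeType(param string) (volumeTypeConfig, error) {
	if param == "" {
		return volumeTypeConfig{Kind: "replicate", Replicas: 3}, nil // assumed default
	}
	parts := strings.Split(param, ":")
	switch parts[0] {
	case "replicate":
		if len(parts) != 2 {
			return volumeTypeConfig{}, fmt.Errorf("volumetype %q: missing replica count", param)
		}
		n, err := strconv.Atoi(parts[1])
		if err != nil {
			return volumeTypeConfig{}, fmt.Errorf("volumetype %q: %v", param, err)
		}
		return volumeTypeConfig{Kind: "replicate", Replicas: n}, nil
	case "none":
		return volumeTypeConfig{Kind: "none"}, nil
	default:
		return volumeTypeConfig{}, fmt.Errorf("unknown volumetype %q", param)
	}
}

func main() {
	cfg, err := parseVolumeType("replicate:2")
	if err != nil {
		panic(err)
	}
	// CreateVolume would pass cfg to the heketi request instead of a constant 3.
	fmt.Printf("creating volume: %s, replicas=%d\n", cfg.Kind, cfg.Replicas)
}
```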
Automatic merge from submit-queue (batch tested with PRs 38284, 38403, 38265, 38378)
glusterfs: properly check gidMin and gidMax values from SC individually
**What this PR does / why we need it**:
This fixes a misleading debug message, and also prevents the glusterfs provisioner from adapting a misconfigured gid range in the storage class; instead it fails with proper error messages.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://bugzilla.redhat.com/show_bug.cgi?id=1402286
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Don't override an explicit out-of-max-range configuration, but
fail with an error message instead.
Signed-off-by: Michael Adam <obnox@redhat.com>
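A rough sketch of the individual-check idea, with assumed bounds and a hypothetical `parseGid` helper (not the plugin's actual code):
```go
// Validate gidMin/gidMax from StorageClass parameters individually and fail
// with an error rather than silently clamping out-of-range values.
package main

import (
	"fmt"
	"strconv"
)

const (
	absoluteGidMin = 2000       // assumed plugin default lower bound
	absoluteGidMax = 2147483647 // math.MaxInt32
)

func parseGid(name, value string, def int) (int, error) {
	if value == "" {
		return def, nil
	}
	gid, err := strconv.Atoi(value)
	if err != nil {
		return 0, fmt.Errorf("invalid value %q for %s: %v", value, name, err)
	}
	if gid < absoluteGidMin || gid > absoluteGidMax {
		return 0, fmt.Errorf("%s %d is out of the allowed range [%d, %d]",
			name, gid, absoluteGidMin, absoluteGidMax)
	}
	return gid, nil
}

func main() {
	// 1000 is below the assumed lower bound, so this fails with an error.
	if _, err := parseGid("gidMin", "1000", absoluteGidMin); err != nil {
		fmt.Println("error:", err)
	}
}
```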
Automatic merge from submit-queue (batch tested with PRs 38076, 38137, 36882, 37634, 37558)
glusterfs: Fix all gid types to int to prevent failures on 32bit systems
**What this PR does / why we need it**:
The glusterfs dynamic provisioner with GID security has an issue on 32-bit systems.
This fixes that issue by forcing all GID types to int internally.
**Release note**:
```release-note
Fix the glusterfs dynamic provisioner for 32-bit systems by limiting the GIDs to type int internally, and allowing 2147483647 as the highest GID.
```
This makes all types int until we hand the GID to heketi/gluster,
at which point it's converted to int64.
It also limits the maximum usable GID to math.MaxInt32 = 2147483647.
Signed-off-by: Michael Adam <obnox@redhat.com>
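A minimal sketch of that boundary conversion, using a hypothetical `toHeketiGid` helper:
```go
// Keep GIDs as plain int internally and widen to int64 only at the
// heketi/gluster boundary, capping the usable maximum at math.MaxInt32 so
// 32-bit builds behave the same as 64-bit ones.
package main

import (
	"fmt"
	"math"
)

// toHeketiGid converts an internal int GID to the int64 expected at the
// heketi boundary, rejecting values above math.MaxInt32.
func toHeketiGid(gid int) (int64, error) {
	if gid < 0 || gid > math.MaxInt32 {
		return 0, fmt.Errorf("gid %d outside supported range [0, %d]", gid, math.MaxInt32)
	}
	return int64(gid), nil
}

func main() {
	g, err := toHeketiGid(2147483647) // the highest allowed GID
	if err != nil {
		panic(err)
	}
	fmt.Println("gid for heketi:", g)
}
```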
An allocator of integers that allows for changing the range.
Previously allocated numbers are not lost, and can be
released later even if they have fallen outside of the range.
Signed-off-by: Michael Adam <obnox@redhat.com>
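The allocator described above could look roughly like this set-backed sketch (illustrative only, not the actual allocator code):
```go
// A toy allocator whose range can be changed later: previously allocated
// integers survive the change and can still be released after falling
// outside the new range.
package main

import "fmt"

type rangeAllocator struct {
	min, max  int
	allocated map[int]bool
}

func newRangeAllocator(min, max int) *rangeAllocator {
	return &rangeAllocator{min: min, max: max, allocated: map[int]bool{}}
}

// SetRange changes the usable range without touching existing allocations.
func (a *rangeAllocator) SetRange(min, max int) { a.min, a.max = min, max }

// AllocateNext returns the smallest free integer inside the current range.
func (a *rangeAllocator) AllocateNext() (int, bool) {
	for i := a.min; i <= a.max; i++ {
		if !a.allocated[i] {
			a.allocated[i] = true
			return i, true
		}
	}
	return 0, false
}

// Release frees an integer even if it is now outside the range.
func (a *rangeAllocator) Release(i int) { delete(a.allocated, i) }

func main() {
	a := newRangeAllocator(2000, 2005)
	gid, _ := a.AllocateNext() // 2000
	a.SetRange(3000, 4000)     // move the range
	a.Release(gid)             // still releasable
	fmt.Println("released", gid)
}
```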
Automatic merge from submit-queue
Add `clusterid`, an optional parameter to storageclass.
At present, the admin doesn't have the privilege to choose the
trusted storage pool from which the persistent gluster volume
has to be provided.
This patch introduces a new storage class parameter which allows
the admin to specify a storage pool/cluster if required.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
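A hedged sketch of how the optional `clusterid` parameter might be read from the StorageClass parameters (field and function names are illustrative):
```go
// Read an optional "clusterid" StorageClass parameter (possibly a
// comma-separated list) and attach it to the provisioning request.
package main

import (
	"fmt"
	"strings"
)

type provisionConfig struct {
	ClusterIDs []string // empty means "let the backend choose"
}

func parseClusterID(params map[string]string) provisionConfig {
	cfg := provisionConfig{}
	if raw, ok := params["clusterid"]; ok && raw != "" {
		cfg.ClusterIDs = strings.Split(raw, ",")
	}
	return cfg
}

func main() {
	sc := map[string]string{"clusterid": "630372ccdc720a92c681fb928f27b53f"}
	fmt.Println("restrict provisioning to clusters:", parseClusterID(sc).ClusterIDs)
}
```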
Automatic merge from submit-queue
Make a consistent name (GlusterFS instead of Gluster) in variables a…
Signed-off-by: Humble Chirammal hchiramm@redhat.com
Automatic merge from submit-queue
Remove stale volumes if endpoint/svc creation fails.
Signed-off-by: Humble Chirammal hchiramm@redhat.com
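A sketch of the cleanup-on-failure idea with hypothetical helper names; the real provisioner calls differ:
```go
// If creating the endpoints/service for a freshly provisioned volume fails,
// delete the volume again instead of leaving it stale on the Gluster side.
package main

import (
	"errors"
	"fmt"
)

func provision(createVolume func() (string, error),
	createEndpointAndService func(volID string) error,
	deleteVolume func(volID string) error) (string, error) {

	volID, err := createVolume()
	if err != nil {
		return "", err
	}
	if err := createEndpointAndService(volID); err != nil {
		// Roll back so no orphaned volume is left behind.
		if delErr := deleteVolume(volID); delErr != nil {
			return "", fmt.Errorf("endpoint/svc creation failed (%v); cleanup also failed: %v", err, delErr)
		}
		return "", fmt.Errorf("endpoint/svc creation failed, volume deleted: %v", err)
	}
	return volID, nil
}

func main() {
	_, err := provision(
		func() (string, error) { return "vol-1", nil },
		func(string) error { return errors.New("service already exists") },
		func(id string) error { fmt.Println("deleted", id); return nil },
	)
	fmt.Println(err)
}
```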
Don't store Gluster StorageClass parameters in annotations; it's insecure.
Instead, expect that there is the StorageClass available at the time
when it's needed by Gluster deleter.
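A simplified sketch of that lookup, with stand-in types rather than the real API: the deleter resolves the live StorageClass by name when it runs, instead of trusting parameters copied into PV annotations at provision time.
```go
package main

import "fmt"

type storageClass struct {
	Name       string
	Parameters map[string]string
}

type classGetter interface {
	GetStorageClass(name string) (*storageClass, error)
}

type mapGetter map[string]*storageClass

func (m mapGetter) GetStorageClass(name string) (*storageClass, error) {
	if c, ok := m[name]; ok {
		return c, nil
	}
	return nil, fmt.Errorf("storageclass %q not found", name)
}

func deleteVolume(getter classGetter, pvName, className string) error {
	class, err := getter.GetStorageClass(className)
	if err != nil {
		return fmt.Errorf("cannot delete %s: %v", pvName, err)
	}
	// Credentials, REST URL, etc. come from the StorageClass, never from annotations.
	fmt.Printf("deleting %s via %s\n", pvName, class.Parameters["resturl"])
	return nil
}

func main() {
	getter := mapGetter{"glusterfs": {Name: "glusterfs",
		Parameters: map[string]string{"resturl": "http://heketi.example.com:8080"}}}
	_ = deleteVolume(getter, "pvc-123", "glusterfs")
}
```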
The Gluster provisioner is interested in pvc.Namespace, and I don't want to
add it as a new field in VolumeOptions - it would contain almost the whole PVC.
Let's pass a direct reference to the PVC instead and let the provisioner pick
the information it is interested in.
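A sketch of that design choice with illustrative field names: VolumeOptions carries a reference to the claim, and each provisioner reads exactly what it needs.
```go
package main

import "fmt"

type PersistentVolumeClaim struct {
	Name      string
	Namespace string
}

type VolumeOptions struct {
	CapacityGiB int
	Parameters  map[string]string
	PVC         *PersistentVolumeClaim // provisioners read pvc.Namespace etc. from here
}

func provisionGluster(opts VolumeOptions) {
	// The Gluster provisioner only needs the namespace today, but other PVC
	// fields stay reachable without growing VolumeOptions further.
	fmt.Println("provisioning endpoints in namespace", opts.PVC.Namespace)
}

func main() {
	provisionGluster(VolumeOptions{
		CapacityGiB: 5,
		PVC:         &PersistentVolumeClaim{Name: "data", Namespace: "apps"},
	})
}
```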
At present, the provisioner creates a Distribute volume, and this patch
changes the default volume type to a Distribute-Replica(3) volume.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Automatic merge from submit-queue
Add volume reconstruct/cleanup logic in kubelet volume manager
Currently kubelet volume management works on the concept of desired
and actual worlds of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the caches of those two worlds are
gone. Although the desired world can be recovered through the apiserver,
the actual world cannot, which may leave some volumes uncleaned
if their information has been deleted from the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either world. The rest
of the logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and then the volume
manager should invoke unmount to clean it up.
Fixes https://github.com/kubernetes/kubernetes/issues/27653
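A very condensed sketch of the reconstruction idea; the directory layout and names below are assumptions, not the kubelet's actual code:
```go
// On kubelet restart, walk the per-pod volume directories on disk and re-add
// anything missing from both the desired and actual worlds so the normal
// reconcile loop can unmount it later if the apiserver no longer wants it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

type volumeID struct{ PodUID, VolumeName string }

type world map[volumeID]bool

// reconstruct scans <podsDir>/<podUID>/volumes/<plugin>/<volumeName>.
func reconstruct(podsDir string, desired, actual world) ([]volumeID, error) {
	var recovered []volumeID
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volRoot := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod has no volumes on disk
		}
		for _, plugin := range plugins {
			vols, _ := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			for _, v := range vols {
				id := volumeID{PodUID: pod.Name(), VolumeName: v.Name()}
				if !desired[id] && !actual[id] {
					desired[id], actual[id] = true, true
					recovered = append(recovered, id)
				}
			}
		}
	}
	return recovered, nil
}

func main() {
	got, err := reconstruct("/var/lib/kubelet/pods", world{}, world{})
	fmt.Println(got, err)
}
```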
The fake clientset no longer needs to be prepopulated with records: keeping
them in leads to name conflicts on creates. Also, since the fake
clientset now respects namespaces, we need to populate them correctly.
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if attach/detach controller
is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path so volume cleanup never blocks the
`syncPod` loop.
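A toy sketch of the desired/actual reconciliation this volume manager performs outside of `syncPod()` (types and names are illustrative):
```go
// Periodically diff the desired and actual worlds and issue mount/unmount,
// so pod sync never has to do this work itself.
package main

import (
	"fmt"
	"time"
)

type volumeID struct{ PodUID, VolumeName string }
type world map[volumeID]bool

func reconcile(desired, actual world, mount, unmount func(volumeID)) {
	for v := range desired {
		if !actual[v] {
			mount(v) // mount (and attach, if no attach/detach controller)
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount(v) // clean up without blocking the syncPod loop
			delete(actual, v)
		}
	}
}

func main() {
	desired := world{{"pod-1", "vol-a"}: true}
	actual := world{}
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for i := 0; i < 2; i++ {
		<-tick.C
		reconcile(desired, actual,
			func(v volumeID) { fmt.Println("mount", v) },
			func(v volumeID) { fmt.Println("unmount", v) })
	}
}
```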
- Add volume.MetricsProvider function to Volume interface.
- Add volume.MetricsDu for providing metrics via executing "du".
- Add volume.MetricsNil for unsupported Volumes.
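A minimal sketch of these interfaces as described; the exact signatures in pkg/volume may differ:
```go
package main

import "fmt"

// Metrics carries capacity/usage information for a volume.
type Metrics struct {
	Used     int64
	Capacity int64
}

// MetricsProvider is implemented by volumes that can report metrics.
type MetricsProvider interface {
	GetMetrics() (*Metrics, error)
}

// metricsDu reports usage by running "du" against the volume path
// (the exec call is elided in this sketch).
type metricsDu struct{ path string }

func (m *metricsDu) GetMetrics() (*Metrics, error) {
	// Real code would run `du` on m.path and parse the output.
	return &Metrics{}, nil
}

// metricsNil is used by volume types that do not support metrics.
type metricsNil struct{}

func (*metricsNil) GetMetrics() (*Metrics, error) {
	return nil, fmt.Errorf("metrics are not supported for this volume type")
}

func main() {
	var p MetricsProvider = &metricsNil{}
	_, err := p.GetMetrics()
	fmt.Println(err)
}
```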
GlusterFS by default uses a log file based on the mountpoint path munged into a
file, i.e. `/mnt/foo/bar` becomes `/var/log/glusterfs/mnt-foo-bar.log`.
On certain Kubernetes environments this can result in a log file that exceeds
the 255 character length most filesystems impose on filenames causing the mount
to fail. Instead, use the `log-file` mount option to place the log file under
the kubelet plugin directory with a filename of our choosing, keeping it fairly
persistent for troubleshooting.
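A sketch of building such mount options (the plugin directory path and option format below are assumptions):
```go
// Build GlusterFS mount options with an explicit log-file under the kubelet
// plugin directory, so the log name no longer depends on the munged
// mountpoint path and stays under the 255-character filename limit.
package main

import (
	"fmt"
	"path/filepath"
)

func glusterMountOptions(pluginDir, volumeName string) []string {
	logFile := filepath.Join(pluginDir, "glusterfs", volumeName, "glusterfs.log")
	return []string{"log-level=ERROR", fmt.Sprintf("log-file=%s", logFile)}
}

func main() {
	opts := glusterMountOptions("/var/lib/kubelet/plugins/kubernetes.io", "pvc-123")
	fmt.Println(opts)
}
```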
IsLikelyNotMountPoint determines if a directory is not a mountpoint.
It is fast but not necessarily ALWAYS correct. If the path is in fact
a bind mount from one part of a mount to another it will not be detected.
mkdir /tmp/a /tmp/b; mount --bind /tmp/a /tmp/b; IsLikelyNotMountPoint("/tmp/b")
will return true, when in fact /tmp/b is a mount point. So this patch
renames the function and switches it from a positive to a negative (I
could not think of a good positive name). This should make future users of
this function aware that it isn't quite perfect, but probably good
enough.
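For context, a simplified Linux-only sketch of the heuristic behind IsLikelyNotMountPoint: compare the device ID of the path with its parent's. A bind mount within the same filesystem keeps the same device ID, which is exactly why the result is only "likely":
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func isLikelyNotMountPoint(path string) (bool, error) {
	stat, err := os.Stat(path)
	if err != nil {
		return true, err
	}
	parentStat, err := os.Stat(filepath.Dir(filepath.Clean(path)))
	if err != nil {
		return true, err
	}
	// Same device as the parent => likely NOT a mount point.
	return stat.Sys().(*syscall.Stat_t).Dev == parentStat.Sys().(*syscall.Stat_t).Dev, nil
}

func main() {
	notMnt, err := isLikelyNotMountPoint("/proc")
	fmt.Println(notMnt, err) // /proc is a mount point, so expect "false <nil>"
}
```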
I have a pod which exports a Gluster filesystem in a non-default namespace.
When I try to use this FS as a GlusterfsVolumeSource in a 'client' pod
definition, Kubernetes looks for the appropriate endpoint in the 'default'
namespace instead of the namespace where the client pod is defined.
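A sketch of the fix this report implies, with a stand-in client interface rather than the real API: resolve the endpoints in the pod's namespace instead of 'default':
```go
package main

import "fmt"

type endpointsGetter interface {
	// GetEndpoints returns an address for the named endpoints object (simplified).
	GetEndpoints(namespace, name string) (string, error)
}

type fakeClient map[string]string // key: "namespace/name"

func (f fakeClient) GetEndpoints(namespace, name string) (string, error) {
	if addr, ok := f[namespace+"/"+name]; ok {
		return addr, nil
	}
	return "", fmt.Errorf("endpoints %s/%s not found", namespace, name)
}

func resolveGlusterEndpoints(c endpointsGetter, podNamespace, endpointsName string) (string, error) {
	// Before the fix, the lookup effectively used "default" here.
	return c.GetEndpoints(podNamespace, endpointsName)
}

func main() {
	client := fakeClient{"apps/glusterfs-cluster": "10.0.0.5"}
	addr, err := resolveGlusterEndpoints(client, "apps", "glusterfs-cluster")
	fmt.Println(addr, err)
}
```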