Commit Graph

155 Commits (9e223539290b5401a3b912ea429147a01c3fda75)

Author SHA1 Message Date
Chao Xu 60604f8818 run hack/update-all 2017-06-22 11:31:03 -07:00
Chao Xu f4989a45a5 run root-rewrite-v1-..., compile 2017-06-22 10:25:57 -07:00
mbohlool c91a12d205 Remove all references to types.UnixUserID and types.UnixGroupID 2017-06-21 04:09:07 -07:00
Matthew Wong 5e788a6a67 Don't provision for PVCs with AccessModes unsupported by plugin 2017-06-12 12:56:41 -04:00
NickrenREN a02d6cd5d8 Add createdby annotation for rbd and quobyte and make the dynamic createdby key a const
2017-05-26 16:55:11 +08:00
pospispa d73c0d649d Admin Can Specify in Which GCE Availability Zone(s) a PV Shall Be Created
An admin wants to specify in which GCE availability zone(s) users may create persistent volumes using dynamic provisioning.

That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
2017-05-24 10:48:10 +02:00
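
A rough sketch of how a provisioner could consume such a zones list; the parameter name, helper name, and zone-selection strategy below are illustrative assumptions, not the exact upstream implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// pickZone chooses one zone from a comma-separated StorageClass parameter
// (the "zones" parameter name is assumed here), spreading volumes across
// zones by hashing the PVC name.
func pickZone(zonesParam, pvcName string) (string, error) {
	zones := strings.Split(zonesParam, ",")
	for i := range zones {
		zones[i] = strings.TrimSpace(zones[i])
	}
	if len(zones) == 0 || zones[0] == "" {
		return "", fmt.Errorf("no zones configured")
	}
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)], nil
}

func main() {
	zone, _ := pickZone("us-central1-a, us-central1-b", "my-claim")
	fmt.Println("provisioning PV in zone:", zone)
}
```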
Jamie Hannaford 9440a68744 Use dedicated Unix User and Group ID types 2017-05-05 14:07:38 +02:00
Mike Danese a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
Hemant Kumar 786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
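
A minimal sketch of the interface-based approach described above; the interface name, method signature, and plugin type are hypothetical stand-ins rather than the exact code merged in the PR:

```go
package main

import "fmt"

// NodeName mirrors the strongly typed node name used elsewhere in the tree.
type NodeName string

// BulkVolumeVerifier is a hypothetical optional interface: plugins that can
// answer "are these volumes still attached?" in one cloud call implement it;
// all other plugins keep the existing one-volume-at-a-time polling.
type BulkVolumeVerifier interface {
	VolumesAreAttached(volumeIDs []string, node NodeName) (map[string]bool, error)
}

// fakeAWSPlugin pretends to answer with a single batched cloud call.
type fakeAWSPlugin struct{}

func (fakeAWSPlugin) VolumesAreAttached(ids []string, node NodeName) (map[string]bool, error) {
	out := make(map[string]bool, len(ids))
	for _, id := range ids {
		out[id] = true // a single DescribeVolumes-style call would decide this
	}
	return out, nil
}

func main() {
	var plugin interface{} = fakeAWSPlugin{}
	// Only use bulk polling when the plugin opts in via the interface.
	if bv, ok := plugin.(BulkVolumeVerifier); ok {
		attached, _ := bv.VolumesAreAttached([]string{"vol-1", "vol-2"}, "node-a")
		fmt.Println(attached)
	}
}
```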
Hemant Kumar 2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
2017-03-01 11:50:40 -05:00
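
A hedged sketch of reading mount options from a PV annotation; the annotation key and helper below are assumptions for illustration, not necessarily the exact beta annotation used:

```go
package main

import (
	"fmt"
	"strings"
)

// mountOptionAnnotation is assumed here; the exact beta annotation key may differ.
const mountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"

// mountOptionsFromAnnotations splits a comma-separated annotation value into
// the option list handed to the mounter (nil when unset).
func mountOptionsFromAnnotations(annotations map[string]string) []string {
	v, ok := annotations[mountOptionAnnotation]
	if !ok || v == "" {
		return nil
	}
	opts := strings.Split(v, ",")
	for i := range opts {
		opts[i] = strings.TrimSpace(opts[i])
	}
	return opts
}

func main() {
	pvAnnotations := map[string]string{mountOptionAnnotation: "ro,soft"}
	fmt.Println(mountOptionsFromAnnotations(pvAnnotations)) // [ro soft]
}
```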
Hemant Kumar 8fad1a6aec Add gnufied as reviewer for aws and gce volumes 2017-02-06 16:38:13 -05:00
Dr. Stefan Schimanski 44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
deads2k 5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
deads2k ee6752ef20 find and replace 2017-01-20 08:04:53 -05:00
Kubernetes Submit Queue f56b606985 Merge pull request #36520 from apelisse/owners-pkg-volume
Automatic merge from submit-queue

Curating Owners: pkg/volume

cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull request has been made editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who shouldn't be
reviewing code in the future. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use an ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull request
above as an example).
2017-01-17 19:56:39 -08:00
Clayton Coleman 9a2a50cda7 refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Jeff Grafton 20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
Mike Danese 161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
rkouj c14d47dffe Use common unmount util func for TearDownAt() 2016-12-19 16:40:55 -08:00
Chao Xu 03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Mike Danese c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Chao Xu bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu bb675d395f dependencies: pkg/volume 2016-11-23 15:53:09 -08:00
Jing Xu 3d3e44e77e fix issue in converting aws volume id from mount paths
This PR fixes the issue of converting an aws volume id from its mount
path. Currently three aws volume id formats are supported. The
following lists examples of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)

For the first two cases, we need to check the mount path and convert
them back to the original format.
2016-11-16 22:35:48 -08:00
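
A hedged sketch of the mount-path-to-volume-id conversion described above; the parsing is reconstructed from the three example paths and is illustrative, not the actual upstream function:

```go
package main

import (
	"fmt"
	"strings"
)

// volumeIDFromMountPath recovers an EBS volume id from a global mount path,
// following the three layouts listed above: a bare "vol-..." directory, an
// "aws/vol-..." suffix (aws:///vol-...), and an "aws/<zone>/vol-..." suffix
// (aws://<zone>/vol-...). Reconstructed for illustration only.
func volumeIDFromMountPath(mountPath string) (string, error) {
	parts := strings.Split(strings.Trim(mountPath, "/"), "/")
	if len(parts) == 0 {
		return "", fmt.Errorf("unexpected mount path %q", mountPath)
	}
	last := parts[len(parts)-1]
	if !strings.HasPrefix(last, "vol-") {
		return "", fmt.Errorf("no volume id in %q", mountPath)
	}
	if len(parts) >= 2 && parts[len(parts)-2] == "aws" {
		return "aws:///" + last, nil // no zone encoded in the path
	}
	if len(parts) >= 3 && parts[len(parts)-3] == "aws" {
		return "aws://" + parts[len(parts)-2] + "/" + last, nil
	}
	return last, nil // plain vol-... format
}

func main() {
	p := "/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456"
	id, _ := volumeIDFromMountPath(p)
	fmt.Println(id) // aws://us-east-1/vol-123456
}
```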
Saad Ali 4259853f4a Update OWNERS for pkg/volume/gce_pd/ 2016-11-13 18:05:12 -08:00
Rajat Ramesh Koujalagi d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
Antoine Pelisse fd510b1207 Update OWNERS approvers and reviewers: pkg/volume 2016-11-09 10:17:36 -08:00
Kubernetes Submit Queue 33dab1d555 Merge pull request #35629 from hpcloud/bug/33128-unused-waitfordetach
Automatic merge from submit-queue

Remove unused WaitForDetach from Detacher interface and plugins

See issue #33128 and PR #33270

We can't rely on the device name provided by OpenStack Cinder, and thus
must perform detection based on the drive serial number (a.k.a. its Cinder ID)
on the kubelet itself.

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until that other, unrelated, volume has been
detached).
2016-11-06 04:52:23 -08:00
Kubernetes Submit Queue 43a915e628 Merge pull request #35491 from pmorie/byebye-getrootcontext
Automatic merge from submit-queue

Remove GetRootContext method from VolumeHost interface

Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.

Per #33951 and #35127.

Depends on #33663; only the last commit is relevant to this PR.
2016-11-06 01:09:19 -08:00
Paul Morie 4722cb299b Remove GetRootContext from VolumeHost 2016-11-03 12:21:19 -04:00
Justin Santa Barbara 3cdbfc98af AWS: strong-typing for k8s vs aws volume ids
We are more liberal in what we accept as a volume id in k8s, and indeed
we ourselves generate names that look like `aws://<zone>/<id>` for
dynamic volumes.

This volume id (hereafter a KubernetesVolumeID) cannot directly be
compared to an AWS volume ID (hereafter an awsVolumeID).

We introduce types for each, to prevent accidental comparison or
confusion.

Issue #35746
2016-11-02 09:42:55 -04:00
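
A minimal sketch of the two-type approach; the type names follow the commit description, but the parsing in MapToAWSVolumeID is an illustrative assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// KubernetesVolumeID is the liberal, user-facing form, e.g. "aws://us-east-1a/vol-123456".
type KubernetesVolumeID string

// awsVolumeID is the bare EC2 volume id, e.g. "vol-123456"; keeping it a
// distinct type prevents accidental comparison with KubernetesVolumeID.
type awsVolumeID string

// MapToAWSVolumeID strips the optional aws://<zone>/ prefix (illustrative parsing).
func (k KubernetesVolumeID) MapToAWSVolumeID() (awsVolumeID, error) {
	s := string(k)
	if strings.HasPrefix(s, "aws://") {
		s = s[strings.LastIndex(s, "/")+1:]
	}
	if !strings.HasPrefix(s, "vol-") {
		return "", fmt.Errorf("invalid volume id %q", k)
	}
	return awsVolumeID(s), nil
}

func main() {
	id, _ := KubernetesVolumeID("aws://us-east-1a/vol-123456").MapToAWSVolumeID()
	fmt.Println(id) // vol-123456
}
```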
Kiall Mac Innes ccb8d53a39 Remove unused WaitForDetach from Detacher interface and plugins
This has been unused since 542f2dc7, and relies on deviceName, which
can no longer be relied upon (see issue #33128).

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until that other, unrelated, volume has been
detached).
2016-11-02 11:59:13 +01:00
Jing Xu abbde43374 Add sync state loop in master's volume reconciler
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date when a node is terminated (its volumes
are detached automatically). In that situation, the reconciler assumes the
volumes are still attached and will not issue attach operations when the node
comes back, so pods created on those nodes will fail to mount.

This PR adds logic to periodically sync the truth about attached volumes into the actual state cache. If a volume is no longer attached to a node, the actual state is updated to reflect that, and the reconciler will then take action if needed.

To avoid issuing many concurrent operations against the cloud provider, this PR
adds a batch operation that checks whether a list of volumes is attached to a
node, instead of one request per volume.

More details are explained in PR #33760
2016-10-28 09:24:53 -07:00
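
A hedged sketch of such a periodic sync using a per-node batch check; the interface and function names are hypothetical:

```go
package main

import "fmt"

// AttachedVolumeChecker is a hypothetical batch check: one call per node
// instead of one call per volume, to limit traffic to the cloud provider.
type AttachedVolumeChecker interface {
	VolumesAreAttached(node string, volumeIDs []string) (map[string]bool, error)
}

// syncStates runs periodically and reconciles the cached "actual state" with
// the cloud's answer, so a node that was terminated (and therefore had its
// volumes auto-detached) stops being treated as attached.
func syncStates(checker AttachedVolumeChecker, actual map[string][]string, markDetached func(node, vol string)) {
	for node, vols := range actual {
		attached, err := checker.VolumesAreAttached(node, vols)
		if err != nil {
			continue // keep the cache on error; retry on the next sync
		}
		for _, vol := range vols {
			if !attached[vol] {
				markDetached(node, vol)
			}
		}
	}
}

// terminatedNodeCloud simulates a cloud where nothing is attached any more.
type terminatedNodeCloud struct{}

func (terminatedNodeCloud) VolumesAreAttached(node string, ids []string) (map[string]bool, error) {
	return map[string]bool{}, nil
}

func main() {
	actual := map[string][]string{"node-a": {"vol-1"}}
	syncStates(terminatedNodeCloud{}, actual, func(node, vol string) {
		fmt.Printf("marking %s detached from %s\n", vol, node)
	})
}
```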
Jing Xu b02481708a Fix volume states out of sync problem after kubelet restarts
When kubelet restarts, all the information about the volumes is
gone from the actual/desired states. When the node status is updated with
mounted volumes, the volume list might be empty even though volumes are
still mounted, which in turn causes the master to detach those volumes since
they are not in the mounted volumes list. This fix makes sure the mounted
volumes list is only updated after the reconciler starts its sync-states
process. The sync-states process scans the existing volume directories and
reconstructs actual states if they are missing.

This PR also fixes a problem with orphaned pods' directories: when
a pod directory is unmounted but has not yet been deleted (e.g.,
interrupted by a kubelet restart), the clean-up routine will delete the
directory so that the pod directory can be cleaned up (it is safe to
delete the directory since it is no longer mounted).

The third issue this PR fixes is that, during volume reconstruction in
actual state, the mounter cannot be nil since it is required for creating
container.VolumeMap. If it is nil, it might cause a nil pointer exception
in kubelet.

Details are in proposal PR #33203
2016-10-25 12:29:12 -07:00
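
A hedged sketch of the first fix (only report mounted volumes once states have been synced); the function and names are illustrative:

```go
package main

import "fmt"

// updateNodeVolumesInUse only overwrites the node's reported mounted-volumes
// list once the reconciler has reconstructed actual state from disk, so a
// freshly restarted kubelet does not report an empty list and trigger
// spurious detaches by the attach/detach controller. Illustrative only.
func updateNodeVolumesInUse(statesSynced bool, previouslyReported []string, mounted func() []string) []string {
	if !statesSynced {
		return previouslyReported // keep the old report until the sync completes
	}
	return mounted()
}

func main() {
	prev := []string{"kubernetes.io/aws-ebs/vol-123456"}
	mounted := func() []string { return []string{"kubernetes.io/aws-ebs/vol-123456"} }
	fmt.Println(updateNodeVolumesInUse(false, prev, mounted)) // unchanged: still syncing
	fmt.Println(updateNodeVolumesInUse(true, prev, mounted))  // refreshed after sync
}
```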
Mike Danese 3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Jan Safranek 101602ab11 Pass whole PVC to provisioner plugin
The Gluster provisioner is interested in pvc.Namespace and I don't want to add
it as a new field in VolumeOptions - it would end up containing almost the whole PVC.

Let's pass a direct reference to the PVC instead and let the provisioner pick
the information it is interested in.
2016-10-12 12:22:01 +02:00
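
A minimal sketch of the options-struct change described above, with pared-down stand-in types rather than the real API definitions:

```go
package main

import "fmt"

// PersistentVolumeClaim is a pared-down stand-in for the API type.
type PersistentVolumeClaim struct {
	Name      string
	Namespace string
}

// VolumeOptions carries a reference to the whole claim instead of copying
// individual PVC fields into the struct one by one.
type VolumeOptions struct {
	PVC *PersistentVolumeClaim
	// ... capacity, access modes, parameters, etc.
}

// provision shows a provisioner (e.g. Gluster) picking just what it needs.
func provision(opts VolumeOptions) {
	fmt.Printf("provisioning for claim %s in namespace %s\n", opts.PVC.Name, opts.PVC.Namespace)
}

func main() {
	provision(VolumeOptions{PVC: &PersistentVolumeClaim{Name: "data", Namespace: "apps"}})
}
```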
Jedrzej Nowak f0988b95e7 Typos and englishify pkg/volume 2016-10-03 22:39:33 +02:00
Justin Santa Barbara 54195d590f Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.

To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.

A tedious but mechanical commit, therefore, changing all uses of the
node name to types.NodeName.

Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
2016-09-27 10:47:31 -04:00
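
A minimal sketch of the typedef and how it prevents hostname/node-name mix-ups; the attachDisk signature is illustrative:

```go
package main

import "fmt"

// NodeName is a typedef alias for string: it documents intent and makes
// accidental mixing of hostnames, instance IDs, and Node.Name values require
// an explicit conversion.
type NodeName string

// attachDisk takes a NodeName rather than a bare string (illustrative signature).
func attachDisk(volumeID string, node NodeName) {
	fmt.Printf("attaching %s to node %q\n", volumeID, node)
}

func main() {
	hostname := "ip-10-0-0-1.ec2.internal" // on AWS this is not necessarily the Node.Name
	node := NodeName("node-1")
	attachDisk("vol-123456", node)
	// attachDisk("vol-123456", hostname) // compile error without an explicit NodeName(...) conversion
	_ = hostname
}
```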
Jan Safranek 9903b389b3 Update cloud providers 2016-09-15 10:33:57 +02:00
Jan Safranek d94220810e GCE changes for the new provisioning model 2016-08-18 10:36:50 +02:00
saadali e73c516366 Prevent device unmount from deleting dir on err
Prevent device unmount from deleting dir unless volume is successfully
unmounted first.
2016-08-15 16:58:31 -07:00
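
A hedged sketch of the ordering fix, removing the global mount directory only after a successful unmount; the helper names are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// unmountDevice is a stand-in for the real device-unmount call.
func unmountDevice(globalMountPath string) error {
	// e.g. mounter.Unmount(globalMountPath)
	return nil
}

// tearDownDevice removes the mount directory only when the unmount succeeded,
// so a still-mounted device never loses its directory.
func tearDownDevice(globalMountPath string) error {
	if err := unmountDevice(globalMountPath); err != nil {
		return fmt.Errorf("unmount failed, leaving %s in place: %v", globalMountPath, err)
	}
	return os.Remove(globalMountPath)
}

func main() {
	if err := tearDownDevice("/tmp/example-global-mount"); err != nil {
		fmt.Println(err)
	}
}
```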
Jing Xu f19a1148db This change supports robust kubelet volume cleanup
Currently kubelet volume management works on the concept of desired
and actual worlds of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the caches of those two worlds are
gone. Although the desired world can be recovered through the apiserver, the
actual world cannot, which may mean some volumes cannot be cleaned
up if their information has been deleted by the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either. The rest of the
logic is the same as before: the desired world populator may clean up
a volume entry if it is no longer in the apiserver, and then the volume
manager should invoke unmount to clean it up.
2016-08-15 11:29:15 -07:00
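
A hedged sketch of the reconstruction pass over the on-disk pod directories; the directory layout and function name are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstructVolumes walks <podsDir>/<podUID>/volumes/<plugin>/<volumeName>
// (assumed layout) and returns the volumes found on disk, so a restarted
// kubelet can repopulate its desired/actual state caches before deciding
// what to unmount.
func reconstructVolumes(podsDir string) ([]string, error) {
	var found []string
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volumesDir := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volumesDir)
		if err != nil {
			continue // pod without volumes
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volumesDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				found = append(found, filepath.Join(pod.Name(), plugin.Name(), v.Name()))
			}
		}
	}
	return found, nil
}

func main() {
	vols, err := reconstructVolumes("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println("skipping reconstruction:", err)
		return
	}
	fmt.Println("reconstructed volumes:", vols)
}
```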
Scott Creeley 11d1289afa Add volume and mount logging 2016-07-21 09:10:00 -04:00
Cindy Wang e13c678e3b Make volume unmount more robust using exclusive mount w/ O_EXCL 2016-07-18 16:20:08 -07:00
Davanum Srinivas 2b0ed014b7 Use Go canonical import paths
Add canonical imports only in existing doc.go files.
https://golang.org/doc/go1.4#canonicalimports

Fixes #29014
2016-07-16 13:48:21 -04:00
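
The referenced Go 1.4 feature is the import comment on the package clause; a doc.go for this package would look roughly like:

```go
// Package volume contains the volume plugin interfaces and helpers.
package volume // import "k8s.io/kubernetes/pkg/volume"
```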
Michael Rubin 8028e953b6 Revert "Mount r/w GCE PD disks with -o discard" 2016-07-07 16:47:35 -07:00
Tim Hockin 8efefab9a3 Mount r/w GCE PD disks with -o discard
As per
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting.
2016-07-03 21:30:18 -07:00
David McMahon ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00