Commit Graph

70 Commits (65b2f71ee7faa0f85aea4a02713380a7bb8334d4)

Author SHA1 Message Date
Shyam Jeedigunta 4425864707 Migrate kubelet configmap management logic to an interface 2017-05-31 10:39:36 +02:00
Kubernetes Submit Queue 0aad9d30e3 Merge pull request #44897 from msau42/local-storage-plugin
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)

Local storage plugin

**What this PR does / why we need it**:
Volume plugin implementation for local persistent volumes. A scheduler predicate directs pods whose PVCs are already bound to a local PV to the node where that PV resides. PVC binding still happens independently.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 
Part of #43640

**Release note**:

```
Alpha feature: the local volume plugin allows local directories to be created and consumed as a Persistent Volume. These volumes have node affinity, and pods will only be scheduled to the node where the volume is located.
```
2017-05-30 23:20:02 -07:00
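
A minimal sketch of the scheduling constraint described above, using a hypothetical localPV type (the real PersistentVolume carries node affinity in its spec/annotations):

```
// Hypothetical model: a pod whose PVC is bound to a local PV may only run
// on the node that hosts the PV's directory.
package main

import "fmt"

type localPV struct {
	Name     string
	NodeName string // node the local directory lives on
}

// fitsNode is a simplified stand-in for the scheduler predicate: it rejects
// every candidate node except the one the local volume is at.
func fitsNode(pv localPV, candidateNode string) bool {
	return pv.NodeName == candidateNode
}

func main() {
	pv := localPV{Name: "local-pv-1", NodeName: "node-a"}
	for _, n := range []string{"node-a", "node-b"} {
		fmt.Printf("node %s fits: %v\n", n, fitsNode(pv, n))
	}
}
```
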
NickrenREN a02d6cd5d8 Add createdby annotation for rbd and quobyte and make the dynamic createdby key a const
2017-05-26 16:55:11 +08:00
Rohit Agarwal 0f5cc4027f Implement FakeVolumePlugin's ConstructVolumeSpec method according to interface expectation.
This fixes #45803 and #46204.
2017-05-24 17:26:34 -07:00
Michelle Au 6ade5461ad Add GetNodeLabels to VolumeHost interface 2017-05-22 14:44:06 -07:00
FengyunPan d86bf8a0b8 Fix reconciler test of attaching volume
The Attach() method of FakeVolume should return the device path
2017-05-13 15:19:34 +08:00
Jamie Hannaford 9440a68744 Use dedicated Unix User and Group ID types 2017-05-05 14:07:38 +02:00
saadali eacc48373b Remove rkouj from owners files. 2017-04-28 17:14:38 -07:00
Mike Danese a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
Kubernetes Submit Queue f9ccee7714 Merge pull request #42435 from dashpole/timestamps_for_fsstats
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats

Continuation of #33121.  Credit for most of this goes to @sjenning.  I added volume fs timestamps.

**why is this a bug** 
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads. 
This PR will also help deflake a node e2e test suite.

The eviction manager currently avoids evicting pods if metrics are old. However, timestamps are not available for filesystem stats, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.

cc: @kubernetes/sig-storage-pr-reviews  @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
2017-03-03 23:21:48 -08:00
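
A minimal sketch of the staleness guard this PR describes, with a hypothetical fsStats type (the real kubelet stats types differ):

```
package main

import (
	"fmt"
	"time"
)

type fsStats struct {
	AvailableBytes uint64
	Timestamp      time.Time // the addition: when the stat was sampled
}

// isStale reports whether stats are too old to act on; the eviction manager
// can then skip eviction decisions based on metrics older than maxAge.
func isStale(s fsStats, now time.Time, maxAge time.Duration) bool {
	return now.Sub(s.Timestamp) > maxAge
}

func main() {
	s := fsStats{AvailableBytes: 1 << 30, Timestamp: time.Now().Add(-2 * time.Minute)}
	fmt.Println("skip this round of evictions:", isStale(s, time.Now(), time.Minute))
}
```
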
David Ashpole a90c7951d4 add volume timestamps 2017-03-02 15:01:59 -08:00
Hemant Kumar 786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
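
A sketch of the interface-based approach, with simplified, hypothetical types (the real interface in pkg/volume is richer); plugins that don't implement it keep their existing per-volume polling:

```
package main

import "fmt"

type nodeName string
type volumeID string

// bulkVolumeVerifier lets a plugin verify attachment for many volumes across
// many nodes in a single call, instead of one request per volume.
type bulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[nodeName][]volumeID) (map[nodeName]map[volumeID]bool, error)
}

// fakeVerifier reports every requested volume as attached.
type fakeVerifier struct{}

func (fakeVerifier) BulkVerifyVolumes(req map[nodeName][]volumeID) (map[nodeName]map[volumeID]bool, error) {
	out := make(map[nodeName]map[volumeID]bool, len(req))
	for node, vols := range req {
		out[node] = make(map[volumeID]bool, len(vols))
		for _, v := range vols {
			out[node][v] = true
		}
	}
	return out, nil
}

func main() {
	var v bulkVolumeVerifier = fakeVerifier{}
	res, _ := v.BulkVerifyVolumes(map[nodeName][]volumeID{"node-a": {"vol-1", "vol-2"}})
	fmt.Println(res)
}
```
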
Hemant Kumar 2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
2017-03-01 11:50:40 -05:00
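
A minimal sketch of reading mount options from a PV annotation; the annotation key and the comma-separated format here are assumptions for illustration:

```
package main

import (
	"fmt"
	"strings"
)

// assumed key; the real feature used a beta annotation on the PV
const mountOptionsAnnotation = "volume.beta.kubernetes.io/mount-options"

func mountOptionsFromAnnotations(annotations map[string]string) []string {
	raw, ok := annotations[mountOptionsAnnotation]
	if !ok || raw == "" {
		return nil
	}
	opts := strings.Split(raw, ",")
	for i := range opts {
		opts[i] = strings.TrimSpace(opts[i])
	}
	return opts
}

func main() {
	ann := map[string]string{mountOptionsAnnotation: "ro, soft"}
	fmt.Println(mountOptionsFromAnnotations(ann)) // [ro soft]
}
```
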
Kevin Griffith 9448aa66ff Clean up the volume plugin for recycle
2017-02-06 10:38:49 -06:00
Dr. Stefan Schimanski 44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
Dr. Stefan Schimanski a0137e9b28 Update generated files 2017-01-25 19:49:45 +01:00
Dr. Stefan Schimanski d7eb3b6870 pkg/util: move uuid and strategicpatch into k8s.io/apimachinery 2017-01-25 19:45:09 +01:00
deads2k 5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
Wojciech Tyczynski bf7138652f SecretVolume using secret manager 2017-01-23 16:10:01 +01:00
Kubernetes Submit Queue b5929bfb2b Merge pull request #38789 from jessfraz/cleanup-temp-dirs
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)

Cleanup temp dirs

So, funny story: my /tmp ran out of space running the unit tests, so I am cleaning up all the temp dirs we create.
2017-01-20 12:34:58 -08:00
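
The usual Go pattern for this kind of cleanup, as a sketch (hypothetical test name):

```
package volume_test

import (
	"io/ioutil"
	"os"
	"testing"
)

func TestSomethingWithTempDir(t *testing.T) {
	tmpDir, err := ioutil.TempDir("", "metrics_du_test")
	if err != nil {
		t.Fatalf("can't make a temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir) // cleanup so /tmp doesn't fill up

	// ... test body using tmpDir ...
}
```
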
deads2k ee6752ef20 find and replace 2017-01-20 08:04:53 -05:00
Kubernetes Submit Queue f56b606985 Merge pull request #36520 from apelisse/owners-pkg-volume
Automatic merge from submit-queue

Curating Owners: pkg/volume

cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull request is editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who
shouldn't be reviewing code in the future. You probably do NOT need to
modify the **approvers** section. Names are sorted by relevance, using
some secret statistics.

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
2017-01-17 19:56:39 -08:00
Saad Ali 526578679b Update OWNERS 2017-01-17 17:36:34 -08:00
Clayton Coleman 9a2a50cda7 refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
NickrenREN a12dea14e0 fix redundant alias clientset 2017-01-12 10:21:05 +08:00
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Jeff Grafton 20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
Jess Frazelle ba617fdd1b cleanup metrics_du_test in /tmp
Signed-off-by: Jess Frazelle <acidburn@google.com>
2017-01-04 10:27:00 -08:00
Chao Xu 03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Mike Danese c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Chao Xu bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu bb675d395f dependencies: pkg/volume 2016-11-23 15:53:09 -08:00
Rajat Ramesh Koujalagi d81e216fc6 Better messaging when volume components needed to perform a mount are missing on the host 2016-11-09 15:16:11 -08:00
Antoine Pelisse fd510b1207 Update OWNERS approvers and reviewers: pkg/volume 2016-11-09 10:17:36 -08:00
pospispa dc9bb87ac7 Simplify NFS and Host Path plugins: remove newRecyclerFunc, newDeleterFunc and newProvisionerFunc
The hostPathPlugin struct contains newRecyclerFunc, newDeleterFunc and newProvisionerFunc fields that each have only one possible value: the newRecycler, newDeleter and newProvisioner functions.

The fields are therefore removed and the newRecycler, newDeleter and newProvisioner functions are called directly.

TestRecycler also checked that the NewFakeRecycler function is called and returns nil. This is no longer needed, so that part of the test is removed, along with the now-unused NewFakeRecycler function.

Similarly for the NFS plugin: the nfsPlugin struct contains a newRecyclerFunc field whose only value is the newRecycler function, so the field is removed and newRecycler is called directly. The corresponding part of TestRecycler and the unused newMockRecycler function are removed as well.
2016-11-07 10:39:04 +01:00
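
A before/after sketch of the simplification, with hypothetical names (the real structs live in the Host Path and NFS plugins):

```
package main

type recycler interface{ Recycle() error }

type noopRecycler struct{}

func (noopRecycler) Recycle() error { return nil }

func newRecycler(pvName string) (recycler, error) { return noopRecycler{}, nil }

// Before: the plugin carried a function field that only ever held one value.
type pluginBefore struct {
	newRecyclerFunc func(pvName string) (recycler, error)
}

// After: the field is gone and the single implementation is called directly.
type pluginAfter struct{}

func (p *pluginAfter) NewRecycler(pvName string) (recycler, error) {
	return newRecycler(pvName)
}

func main() {
	p := &pluginAfter{}
	r, _ := p.NewRecycler("pv-1")
	_ = r.Recycle()
}
```
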
Kubernetes Submit Queue 33dab1d555 Merge pull request #35629 from hpcloud/bug/33128-unused-waitfordetach
Automatic merge from submit-queue

Remove unused WaitForDetach from Detacher interface and plugins

See issue #33128 and PR #33270

We can't rely on the device name provided by OpenStack Cinder, and thus
must perform detection based on the drive serial number (a.k.a. its
Cinder ID) on the kubelet itself.

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until the other, unrelated, volume has been
detached)
2016-11-06 04:52:23 -08:00
Paul Morie 4722cb299b Remove GetRootContext from VolumeHost 2016-11-03 12:21:19 -04:00
Kiall Mac Innes ccb8d53a39 Remove unused WaitForDetach from Detacher interface and plugins
This has been unused since 542f2dc7, and relies on deviceName, which
can no longer be relied upon (see issue #33128).

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume has in reality used, then this
will block forever (or until the other, unrelated, volume has been
detached)
2016-11-02 11:59:13 +01:00
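
The resulting Detacher shape, sketched with simplified signatures (the real interface takes a types.NodeName): WaitForDetach is gone, so nothing can block on an untrustworthy devicePath anymore.

```
package main

// Detacher detaches a volume from a node and unmounts its device.
type Detacher interface {
	Detach(deviceName string, nodeName string) error
	UnmountDevice(deviceMountPath string) error
}

func main() {}
```
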
Jing Xu abbde43374 Add sync state loop in master's volume reconciler
In the master's volume reconciler, the information about which volumes
are attached to nodes is cached in the actual state of the world.
However, this information might be out of date if a node is terminated
(its volumes are detached automatically). In this situation, the
reconciler assumes the volume is still attached and will not issue an
attach operation when the node comes back, so pods created on that node
will fail to mount.

This PR adds logic to periodically sync up the truth for attached volumes kept in the actual state cache. If a volume is no longer attached to the node, the actual state is updated to reflect the truth, and the reconciler takes action as needed.

To avoid issuing many concurrent operations against the cloud provider,
this PR adds a batch operation that checks whether a list of volumes is
attached to a node, instead of one request per volume.

More details are explained in PR #33760
2016-10-28 09:24:53 -07:00
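
A minimal sketch of that sync loop, assuming a hypothetical batch checker (the real reconciler works against the actual state of world cache):

```
package main

import "fmt"

// attachedChecker stands in for a plugin's batch attachment check: one
// cloud-provider call covers a whole list of volumes on a node.
type attachedChecker interface {
	VolumesAreAttached(volumeIDs []string, node string) (map[string]bool, error)
}

type fakeChecker struct{}

func (fakeChecker) VolumesAreAttached(vols []string, node string) (map[string]bool, error) {
	out := make(map[string]bool, len(vols))
	for _, v := range vols {
		out[v] = v != "vol-gone" // pretend this one was detached out-of-band
	}
	return out, nil
}

// syncStates refreshes the cached attachment truth per node; entries that
// are no longer attached are dropped so the reconciler can re-attach them.
func syncStates(cache map[string][]string, checker attachedChecker) {
	for node, vols := range cache {
		attached, err := checker.VolumesAreAttached(vols, node)
		if err != nil {
			continue // keep the cache as-is and retry next period
		}
		kept := vols[:0]
		for _, v := range vols {
			if attached[v] {
				kept = append(kept, v)
			}
		}
		cache[node] = kept
	}
}

func main() {
	cache := map[string][]string{"node-a": {"vol-1", "vol-gone"}}
	syncStates(cache, fakeChecker{})
	fmt.Println(cache) // map[node-a:[vol-1]]
}
```
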
Mike Danese 3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Jan Safranek 101602ab11 Pass whole PVC to provisioner plugin
The Gluster provisioner is interested in pvc.Namespace, and I don't want
to add it as a new field in VolumeOptions - it would end up containing
almost the whole PVC.

Let's pass a direct reference to the PVC instead and let the provisioner
pick the information it is interested in.
2016-10-12 12:22:01 +02:00
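
A sketch of the shape of the change, with trimmed stand-in types (not the real pkg/volume definitions):

```
package main

import "fmt"

type persistentVolumeClaim struct {
	Name      string
	Namespace string
}

// volumeOptions is a cut-down stand-in for VolumeOptions: instead of copying
// individual claim fields, it carries a reference to the claim itself.
type volumeOptions struct {
	PVC *persistentVolumeClaim
}

func provision(opts volumeOptions) {
	// a Gluster-style provisioner can now read the namespace directly
	fmt.Printf("provisioning for claim %s/%s\n", opts.PVC.Namespace, opts.PVC.Name)
}

func main() {
	provision(volumeOptions{PVC: &persistentVolumeClaim{Name: "data", Namespace: "web"}})
}
```
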
Justin Santa Barbara 54195d590f Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.

To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.

A tedious but mechanical commit, therefore, changing all uses of the
node name to types.NodeName.

Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
2016-09-27 10:47:31 -04:00
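
The typedef pattern the commit describes, in a toy example: a distinct string type makes accidental mixing of hostnames and node names a compile error.

```
package main

import "fmt"

// NodeName is a strongly-typed name of a Node object (not necessarily the
// machine's hostname on AWS, nor its instance ID on GCE).
type NodeName string

func attachDisk(diskID string, node NodeName) {
	fmt.Printf("attach %s to node %q\n", diskID, node)
}

func main() {
	hostname := "ip-10-0-0-1.ec2.internal" // a hostname, not a NodeName
	// attachDisk("disk-1", hostname)      // compile error: string vs NodeName
	attachDisk("disk-1", NodeName(hostname)) // the conversion must be explicit
}
```
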
Jan Safranek d7111b282f Send recycle events from pod to pv.
This allows users to diagnose what's wrong with the recycler. Recycler
pods are started automatically with a cryptic name, and they are deleted
immediately when they finish.

kubectl describe pods will show:

  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  59m           59m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  53m           53m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  46m           46m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  40m           40m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  33m           33m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  27m           27m             1       {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  1h            3m              75      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
  1h            3m              76      {persistentvolume-controller }                  Normal          RecyclerPod     Recycler pod: Pod was active on the node longer than specified deadline
  1h            1m              12      {persistentvolume-controller }                  Warning         RecyclerPod     Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
  20m           1m              4       {persistentvolume-controller }                  Warning         RecyclerPod     (events with common reason combined)


These steps were necessary:

- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion

- pass all these events through volume plugins to volume controller

- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests into
  a table (there was too much copy-paste)

- fix all unit tests along the way
2016-09-08 12:57:57 +02:00
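
A minimal sketch of the event plumbing, with hypothetical types (the real code watches pod events while RecycleVolumeByWatchingPodUntilCompletion waits for the recycler pod to finish):

```
package main

import "fmt"

type event struct {
	Reason, Message string
}

// recycleAndForwardEvents drains recycler-pod events and re-records them on
// the PV, so users can see why recycling fails.
func recycleAndForwardEvents(podEvents <-chan event, recordOnPV func(event)) {
	for ev := range podEvents {
		recordOnPV(ev) // surfaces e.g. "Unable to mount volumes for pod ..."
	}
}

func main() {
	ch := make(chan event, 1)
	ch <- event{Reason: "FailedMount", Message: "timeout expired waiting for volumes"}
	close(ch)
	recycleAndForwardEvents(ch, func(ev event) {
		fmt.Printf("Recycler pod: %s: %s\n", ev.Reason, ev.Message)
	})
}
```
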
Avesh Agarwal 52a60fe3be Fix default resource limits (node capacities) for downward api volumes 2016-08-16 14:41:17 -04:00
Kubernetes Submit Queue 79ed7064ca Merge pull request #27970 from jingxu97/restartKubelet-6-22
Automatic merge from submit-queue

Add volume reconstruct/cleanup logic in kubelet volume manager

Currently kubelet volume management works on the concept of a desired
world and an actual world of state. The volume manager periodically
compares the two worlds and performs volume mount/unmount and/or
attach/detach operations. When the kubelet restarts, the caches of those
two worlds are gone. Although the desired world can be recovered through
the apiserver, the actual world cannot, which may mean some volumes
cannot be cleaned up if their information is deleted by the apiserver.
This change adds reconstruction of the actual world by reading the pod
directories from disk. The reconstructed volume information is added to
both the desired world and the actual world if it cannot be found in
either. The rest of the logic is the same as before: the desired world
populator may clean up a volume entry if it is no longer in the
apiserver, and then the volume manager will invoke unmount to clean it
up.

Fixes https://github.com/kubernetes/kubernetes/issues/27653
2016-08-15 13:48:43 -07:00
Jing Xu f19a1148db This change supports robust kubelet volume cleanup
Currently kubelet volume management works on the concept of a desired
world and an actual world of state. The volume manager periodically
compares the two worlds and performs volume mount/unmount and/or
attach/detach operations. When the kubelet restarts, the caches of those
two worlds are gone. Although the desired world can be recovered through
the apiserver, the actual world cannot, which may mean some volumes
cannot be cleaned up if their information is deleted by the apiserver.
This change adds reconstruction of the actual world by reading the pod
directories from disk. The reconstructed volume information is added to
both the desired world and the actual world if it cannot be found in
either. The rest of the logic is the same as before: the desired world
populator may clean up a volume entry if it is no longer in the
apiserver, and then the volume manager will invoke unmount to clean it
up.
2016-08-15 11:29:15 -07:00
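
A simplified sketch of the reconstruction scan, assuming the kubelet's per-pod directory layout under /var/lib/kubelet/pods (the real code parses plugin-specific paths and feeds both worlds of state):

```
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// reconstructVolumes scans <podsDir>/<podUID>/volumes/<plugin>/<volumeName>
// and returns the volumes found on disk after a kubelet restart.
func reconstructVolumes(podsDir string) ([]string, error) {
	var found []string
	pods, err := ioutil.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volumesDir := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := ioutil.ReadDir(volumesDir)
		if err != nil {
			continue // pod dir without volumes; skip it
		}
		for _, plugin := range plugins {
			vols, err := ioutil.ReadDir(filepath.Join(volumesDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				found = append(found, fmt.Sprintf("%s/%s/%s", pod.Name(), plugin.Name(), v.Name()))
			}
		}
	}
	return found, nil
}

func main() {
	vols, err := reconstructVolumes("/var/lib/kubelet/pods")
	fmt.Println(vols, err)
}
```
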
Jess Frazelle 7e9d82129e fix go vet errors
Signed-off-by: Jess Frazelle <jessfraz@google.com>

fix composites

Signed-off-by: Jess Frazelle <me@jessfraz.com>
2016-08-10 16:45:41 -07:00
Harry Zhang c495397cae Refactor uuid into its own pkg 2016-07-30 00:07:02 -04:00
David McMahon ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00