Commit Graph

14 Commits (1b2a4b55bb87aa60e3a09e60aa6cc679b4feaa01)

Author SHA1 Message Date
Chao Xu bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu bb675d395f dependencies: pkg/volume 2016-11-23 15:53:09 -08:00
Jing Xu 3d3e44e77e Fix issue in converting AWS volume ID from mount paths
This PR fixes the conversion of AWS volume IDs from mount paths. Three AWS
volume ID formats are currently supported. The following lists an example of
each format and its corresponding global mount path:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)

For the first two cases, we need to check the mount path and convert it back
to the original format (a conversion sketch follows this entry).
2016-11-16 22:35:48 -08:00
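A minimal Go sketch of the kind of reverse mapping the commit above describes. The function name, the exact path layout, and the format handling are illustrative assumptions, not the AWS EBS plugin's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// volumeIDFromMountPath is a hypothetical helper that rebuilds the original
// AWS volume ID format from the path segments under the plugin's mounts/
// directory. Names and layout are assumptions for illustration only.
func volumeIDFromMountPath(mountPath string) (string, error) {
	parts := strings.SplitN(mountPath, "/mounts/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("unexpected mount path: %s", mountPath)
	}
	segs := strings.Split(strings.Trim(parts[1], "/"), "/")
	switch {
	case len(segs) == 2 && segs[0] == "aws":
		// Format 1: aws:///vol-123456 (no region segment in the path)
		return "aws:///" + segs[1], nil
	case len(segs) == 3 && segs[0] == "aws":
		// Format 2: aws://<region>/vol-123456
		return "aws://" + segs[1] + "/" + segs[2], nil
	case len(segs) == 1:
		// Format 3: a bare volume ID needs no conversion
		return segs[0], nil
	}
	return "", fmt.Errorf("cannot derive a volume ID from %s", mountPath)
}

func main() {
	id, err := volumeIDFromMountPath(
		"/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(id) // aws:///vol-123456
}
```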
Rajat Ramesh Koujalagi d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
Kubernetes Submit Queue 33dab1d555 Merge pull request #35629 from hpcloud/bug/33128-unused-waitfordetach
Automatic merge from submit-queue

Remove unused WaitForDetach from Detacher interface and plugins

See issue #33128 and PR #33270

We can't rely on the device name provided by OpenStack Cinder, and thus
must perform detection based on the drive serial number (i.e. its Cinder ID)
on the kubelet itself.

This needs to be removed now, as part of #33128, because the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume is actually using, this would block
forever (or until that other, unrelated volume is detached).
2016-11-06 04:52:23 -08:00
Paul Morie 4722cb299b Remove GetRootContext from VolumeHost 2016-11-03 12:21:19 -04:00
Kiall Mac Innes ccb8d53a39 Remove unused WaitForDetach from Detacher interface and plugins
This has been unused since 542f2dc7 and depends on deviceName, which can no
longer be relied upon (see issue #33128).

This needs to be removed now, as part of #33128, because the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume is actually using, this would block
forever (or until that other, unrelated volume is detached). A sketch of
serial-number based detection follows this entry.
2016-11-02 11:59:13 +01:00
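The two entries above describe detecting the device by its serial number rather than trusting the Cinder-reported device name. Below is a minimal, hedged Go sketch of a serial-based lookup via /dev/disk/by-id; the helper name, the by-id entry naming, and the 20-character truncation are assumptions for illustration, not the Cinder plugin's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// findDevicePathBySerial is a hypothetical sketch of serial-number based
// detection: scan /dev/disk/by-id for an entry that embeds the volume ID,
// then resolve the symlink to the real device node.
func findDevicePathBySerial(volumeID string) (string, error) {
	// Many hypervisors expose only a truncated serial (often the first 20 chars).
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20]
	}
	entries, err := os.ReadDir("/dev/disk/by-id")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if strings.Contains(e.Name(), serial) {
			// Resolve the symlink to the real device node, e.g. /dev/vdb.
			return filepath.EvalSymlinks(filepath.Join("/dev/disk/by-id", e.Name()))
		}
	}
	return "", fmt.Errorf("no device found for volume %s", volumeID)
}

func main() {
	dev, err := findDevicePathBySerial("0df705e6-98d5-45f3-b1a4-1d0b6a715e1d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(dev)
}
```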
Kubernetes Submit Queue 3d33b45e43 Merge pull request #30091 from rootfs/azure-storage
Automatic merge from submit-queue

support Azure disk dynamic provisioning

A sample console session:

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/azure-dd.yaml
storageclass "slow" created
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created
$ kubectl describe pvc
Name:       claim1
Namespace:  default
Status:     Bound
Volume:     pvc-de7150d1-6a37-11e6-aec9-000d3a12e034
Labels:     <none>
Capacity:   3Gi
Access Modes:   RWO
$ kubectl create -f pod.yaml
replicationcontroller "nfs-server" created
$ kubectl describe pod
Name:       nfs-server-b9w6x
Namespace:  default
Node:       rootfs-dev/172.24.0.4
Start Time: Wed, 24 Aug 2016 19:46:21 +0000
Labels:     role=nfs-server
Status:     Running
IP:     172.17.0.2
Controllers:    ReplicationController/nfs-server
Containers:
  nfs-server:
    Container ID:   docker://be6f8c0e26dc896d4c53ef0d21c9414982f0b39a10facd6b93a255f9e1c3806c
    Image:      nginx
    Image ID:       docker://bfdd4ced794ed276a28cf56b233ea58dec544e9ca329d796cf30b8bcf6d39b3f
    Port:       
    State:      Running
      Started:      Wed, 24 Aug 2016 19:49:19 +0000
    Ready:      True
    Restart Count:  0
    Volume Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9o0fj (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  mypvc:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  claim1
    ReadOnly:   false
  default-token-9o0fj:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-9o0fj
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  11m       11m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nfs-server-b9w6x to rootfs-dev
  9m        9m      1   {kubelet rootfs-dev}                    Warning     FailedMount Unable to mount volumes for pod "nfs-server-b9w6x_default(6eb7fd98-6a33-11e6-aec9-000d3a12e034)": timeout expired waiting for volumes to attach/mount for pod "nfs-server-b9w6x"/"default". list of unattached/unmounted volumes=[mypvc]
  9m        9m      1   {kubelet rootfs-dev}                    Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-server-b9w6x"/"default". list of unattached/unmounted volumes=[mypvc]
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Pulling     pulling image "nginx"
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Pulled      Successfully pulled image "nginx"
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Created     Created container with docker id be6f8c0e26dc
  8m        8m      1   {kubelet rootfs-dev}    spec.containers{nfs-server} Normal      Started     Started container with docker id be6f8c0e26dc

```

@colemickens @brendandburns
2016-11-01 17:27:14 -07:00
Jing Xu abbde43374 Add sync state loop in master's volume reconciler
In the master's volume reconciler, the information about which volumes are
attached to which nodes is cached in the actual state of the world. However,
this information can become stale when a node is terminated (its volumes are
detached automatically). In that situation the reconciler assumes the volumes
are still attached and will not issue attach operations when the node comes
back, so pods created on that node fail to mount their volumes.

This PR adds logic to periodically sync the actual state cache with the true attachment status. If a volume is no longer attached to the node, the actual state is updated to reflect that, and the reconciler then takes action as needed.

To avoid issuing many concurrent requests to the cloud provider, the check is
batched: a single call verifies whether a list of volumes is attached to a
node, instead of one request per volume (a sketch of the sync loop follows
this entry).

More details are explained in PR #33760
2016-10-28 09:24:53 -07:00
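A minimal Go sketch of the periodic sync loop and batched attachment check described above; the function names, signatures, and cache shape are illustrative assumptions, not the reconciler's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// VolumesAreAttachedFunc stands in for a cloud provider's batched check:
// one call answers "which of these volumes are still attached to this node?".
// The name and signature are assumptions for illustration only.
type VolumesAreAttachedFunc func(node string, volumeIDs []string) (map[string]bool, error)

// syncAttachedVolumes re-verifies the actual-state cache against the cloud
// provider and drops volumes that are no longer attached, so the reconciler
// can re-attach them.
func syncAttachedVolumes(node string, cache map[string]bool, check VolumesAreAttachedFunc, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			ids := make([]string, 0, len(cache))
			for id := range cache {
				ids = append(ids, id)
			}
			attached, err := check(node, ids) // one batched call, not one request per volume
			if err != nil {
				continue // keep the cache unchanged on error and retry next tick
			}
			for id, isAttached := range attached {
				if !isAttached {
					delete(cache, id) // reflect the truth; the reconciler re-attaches if needed
					fmt.Printf("volume %s is no longer attached to node %s\n", id, node)
				}
			}
		}
	}
}

func main() {
	cache := map[string]bool{"vol-1": true, "vol-2": true}
	// Fake provider: reports vol-2 as detached (e.g. its node was terminated).
	fake := func(node string, ids []string) (map[string]bool, error) {
		return map[string]bool{"vol-1": true, "vol-2": false}, nil
	}
	stop := make(chan struct{})
	go syncAttachedVolumes("node-a", cache, fake, 100*time.Millisecond, stop)
	time.Sleep(250 * time.Millisecond)
	close(stop)
}
```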
Huamin Chen 1d52719465 azure disk volume: support storage class and dynamic provisioning
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-10-28 13:31:47 +00:00
Mike Danese 3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Jedrzej Nowak f0988b95e7 Typos and englishify pkg/volume 2016-10-03 22:39:33 +02:00
Justin Santa Barbara 54195d590f Use strongly-typed types.NodeName for a node name
We had another bug where we confused the hostname with the NodeName.

To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.

A tedious but mechanical commit, therefore, changing all uses of the node
name to types.NodeName (a minimal illustration follows this entry).

Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
2016-09-27 10:47:31 -04:00
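A minimal illustration of the strong typing described above; only the NodeName idea comes from the commit, while the surrounding function and values are hypothetical:

```go
package main

import "fmt"

// NodeName mirrors the idea in the commit above: a defined string type for
// Node.Name, so a hostname or instance ID cannot be passed where a node name
// is expected without an explicit conversion.
type NodeName string

// detachDisk is a hypothetical caller that requires a NodeName; passing a
// bare string would be a compile-time error.
func detachDisk(diskName string, node NodeName) {
	fmt.Printf("detaching %s from node %q\n", diskName, node)
}

func main() {
	hostname := "ip-10-0-0-12.ec2.internal" // on AWS, the hostname is not the node name
	node := NodeName("node-1")

	detachDisk("vol-123456", node)
	// detachDisk("vol-123456", hostname)        // would not compile
	detachDisk("vol-123456", NodeName(hostname)) // conversion must be explicit
}
```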
Huamin Chen dea4b0226d support Azure data disk volume
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-08-23 13:23:07 +00:00