PLEASE NOTE: This document applies to the HEAD of the source tree. If you are using a released version of Kubernetes, you should refer to the docs that go with that version. The latest release of this document can be found [here](http://releases.k8s.io/release-1.4/examples/experimental/persistent-volume-provisioning/README.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io).
# Persistent Volume Provisioning
This example shows how to use experimental persistent volume provisioning.
## Pre-requisites
This example assumes that you have an understanding of Kubernetes administration and can modify the scripts that launch kube-controller-manager.
## Admin Configuration
The admin must define `StorageClass` objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a `StorageClass` object for persistent volume provisioning, the admin will need to describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a `PersistentVolume` belonging to the class.
The name of a `StorageClass` object is significant: it is how users request a particular class, by specifying the name in their `PersistentVolumeClaim`. The `provisioner` field must be specified, as it determines what volume plugin is used for provisioning PVs. Two cloud providers will be provided in the beta version of this feature: EBS and GCE. The `parameters` field contains the parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the `provisioner`. For example, the value `io1` for the parameter `type`, and the parameter `iopsPerGB`, are specific to EBS. When a parameter is omitted, some default is used.
### AWS
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: "10"
```
* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details. Default: `gp2`.
* `zone`: AWS zone. If not specified, a random zone from those where the Kubernetes cluster has a node is chosen.
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute the IOPS of the volume and caps it at 20,000 IOPS (the maximum supported by AWS, see AWS docs). For example, a 100 GiB volume with `iopsPerGB: "10"` is provisioned with 1,000 IOPS.
* `encrypted`: denotes whether the EBS volume should be encrypted or not. Valid values are `true` or `false`. A class using encryption is sketched after this list.
* `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but `encrypted` is true, a key is generated by AWS. See AWS docs for valid ARN values.
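The encryption parameters are not exercised by the example above. A minimal sketch of a class that provisions encrypted `gp2` volumes might look like this (the `kmsKeyId` ARN is a placeholder, not a real key):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: encrypted
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true"
  # Placeholder ARN; omit this parameter to have AWS generate a key.
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
```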
### GCE
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
```
* `type`: `pd-standard` or `pd-ssd`. Default: `pd-ssd` (an SSD-backed class is sketched after this list).
* `zone`: GCE zone. If not specified, a random zone in the same region as the controller-manager will be chosen.
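For comparison, a minimal sketch of an SSD-backed class (the name `fast` is an arbitrary choice, and `zone` is omitted to show the default behavior):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  # zone omitted: a random zone in the controller-manager's region is chosen.
  type: pd-ssd
```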
### GLUSTERFS
```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  endpoint: "glusterfs-cluster"
  resturl: "http://127.0.0.1:8081"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "password"
```
* `endpoint`: `glusterfs-cluster` is the endpoint/service name which includes the GlusterFS trusted pool IP addresses. This parameter is mandatory (a sketch of the Endpoints object follows this list).
* `resturl`: Gluster REST service URL which provisions gluster volumes on demand. The format should be `IPaddress:Port`, and this is a mandatory parameter for the GlusterFS dynamic provisioner.
* `restauthenabled`: Gluster REST service authentication boolean; required if authentication is enabled on the REST server. If this value is 'true', `restuser` and `restuserkey` have to be filled.
* `restuser`: Gluster REST service user who has access to create volumes in the Gluster Trusted Pool.
* `restuserkey`: Gluster REST service user's password, which will be used for authentication to the REST server.
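The `endpoint` value above names an Endpoints object that must already exist in the cluster. A minimal sketch of such an object, with placeholder IP addresses for the trusted pool members, might look like this:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      # Placeholder IPs: replace with the GlusterFS trusted pool members.
      - ip: 10.240.106.152
      - ip: 10.240.79.157
    ports:
      # The Endpoints API requires a port; any legal port number works here.
      - port: 1
```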
### OpenStack Cinder
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
```
* `type`: VolumeType created in Cinder. Default is empty.
* `availability`: Availability Zone. Default is empty.
### Ceph RBD
```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
```
* `monitors`: Ceph monitors, comma delimited. This parameter is required.
* `adminId`: Ceph client ID that is capable of creating images in the pool. Default is "admin".
* `adminSecretName`: Secret name for `adminId`. This parameter is required.
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
* `pool`: Ceph RBD pool. Default is "rbd".
* `userId`: Ceph client ID that is used to map the RBD image. Default is the same as `adminId`.
* `userSecretName`: The name of the Ceph Secret for `userId` to map the RBD image. It must exist in the same namespace as the PVCs. This parameter is required (a sketch of such a Secret follows this list).
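Both `adminSecretName` and `userSecretName` refer to ordinary Kubernetes Secrets that hold a Ceph client key. A minimal sketch of such a Secret, assuming the key is stored in the Secret's `key` field (the base64 value here is a placeholder, not a real key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # Placeholder: replace with the base64 encoding of the Ceph client key,
  # e.g. the output of: ceph auth get-key client.kube | base64
  key: QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==
```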
## User provisioning requests
Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim`.

The annotation `volume.beta.kubernetes.io/storage-class` is used to access this experimental feature. It is required that this value matches the name of a `StorageClass` configured by the administrator.

In the future, the storage class may remain in an annotation or become a field on the claim itself.
```json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "slow"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}
```
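For reference, the same claim expressed as YAML:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```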
## Sample output
### GCE
This example uses GCE, but any provisioner would follow the same flow.
First we note there are no Persistent Volumes in the cluster. After creating a storage class and a claim including that storage class, we see a new PV is created and automatically bound to the claim requesting storage.
```console
$ kubectl get pv

$ kubectl create -f examples/experimental/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created

$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           Bound     default/claim1             4s

$ kubectl get pvc
NAME      LABELS    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           7s

# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted

# the volume is deleted in response to the release of its claim
$ kubectl get pv
```
### Ceph RBD
First create the Ceph admin's Secret in the system namespace. Here the Secret is created in `kube-system`:

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/ceph-secret-admin.yaml --namespace=kube-system
```
Then create the RBD storage class:

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
```

Before creating a PVC in the user's namespace (e.g. `myns`), make sure the Ceph user's Secret exists; if not, create the Secret:

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/ceph-secret-user.yaml --namespace=myns
```

Now create a PVC in the user's namespace (e.g. `myns`):

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json --namespace=myns
```
Check that the PV and PVC have been created:

```console
$ kubectl describe pvc --namespace=myns
Name:           claim1
Namespace:      myns
Status:         Bound
Volume:         pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:         <none>
Capacity:       3Gi
Access Modes:   RWO
No events.

$ kubectl describe pv
Name:            pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:          <none>
Status:          Bound
Claim:           myns/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        3Gi
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.16.153.105:6789]
    RBDImage:      kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
    FSType:
    RBDPool:       kube
    RadosUser:     kube
    Keyring:       /etc/ceph/keyring
    SecretRef:     &{ceph-secret-user}
    ReadOnly:      false
No events.
```
Create a Pod to use the PVC:

```console
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
```
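The contents of `pod.yaml` are not reproduced in this document. A minimal sketch of a pod that mounts the provisioned volume through the claim might look like this (the image, command, and mount path are arbitrary choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
    - name: ceph-busybox
      image: busybox
      # Keep the container running so the mount can be inspected.
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol1
          mountPath: /usr/share/busybox
  volumes:
    - name: ceph-vol1
      persistentVolumeClaim:
        # Must match the PVC created above in the same namespace.
        claimName: claim1
```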