PLEASE NOTE: This document applies to the HEAD of the source tree
If you are using a released version of Kubernetes, you should refer to the docs that go with that version.
The latest release of this document can be found [here](http://releases.k8s.io/release-1.4/examples/experimental/persistent-volume-provisioning/README.md). Documentation for other releases can be found at releases.k8s.io.
Persistent Volume Provisioning
This example shows how to use experimental persistent volume provisioning.
Pre-requisites
This example assumes that you have an understanding of Kubernetes administration and can modify the scripts that launch kube-controller-manager.
Admin Configuration
The admin must define StorageClass objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a StorageClass object for persistent volume provisioning, the admin will need to describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
The name of a StorageClass object is significant, and is how users can request a particular class, by specifying the name in their PersistentVolumeClaim. The provisioner field must be specified as it determines what volume plugin is used for provisioning PVs. Two cloud providers will be provided in the beta version of this feature: EBS and GCE. The parameters field contains the parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. For example, the value io1, for the parameter type, and the parameter iopsPerGB are specific to EBS. When a parameter is omitted, some default is used.
AWS
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1
zone: us-east-1d
iopsPerGB: "10"
- type: io1, gp2, sc1, st1. See AWS docs for details. Default: gp2.
- zone: AWS zone. If not specified, a random zone from those where the Kubernetes cluster has a node is chosen.
- iopsPerGB: only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute the IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see AWS docs).
- encrypted: denotes whether the EBS volume should be encrypted or not. Valid values are true or false.
- kmsKeyId: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but encrypted is true, a key is generated by AWS. See AWS docs for valid ARN values.
GCE
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zone: us-central1-a
- type: pd-standard or pd-ssd. Default: pd-ssd.
- zone: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen.
vSphere
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: thin
- diskformat: thin, zeroedthick and eagerzeroedthick. See vSphere docs for details. Default: "thin".
GLUSTERFS
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/glusterfs
parameters:
endpoint: "glusterfs-cluster"
resturl: "http://127.0.0.1:8081"
restuser: "admin"
secretNamespace: "default"
secretName: "heketi-secret"
- endpoint: glusterfs-cluster is the endpoint name which includes GlusterFS trusted pool IP addresses. This parameter is mandatory. We also need to create a service for this endpoint, so that the endpoint will be persisted. This service can be without a selector to tell Kubernetes we want to add its endpoints manually. Please note that the glusterfs plugin looks for the endpoint in the pod namespace, so the endpoint and service must be created in the Pod's namespace for gluster volumes to mount successfully in the pod.
- resturl: Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in an OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com where the fqdn is a resolvable Heketi service URL.
- restauthenabled: Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', restuser and restuserkey or secretNamespace + secretName have to be filled. This option is deprecated; authentication is enabled when any of restuser, restuserkey, secretName or secretNamespace is specified.
- restuser: Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
- restuserkey: Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of secretNamespace + secretName.
- secretNamespace + secretName: identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password will be used when both secretNamespace and secretName are omitted.
When both restuserkey and secretNamespace + secretName are specified, the secret will be used.
Example of a secret can be found in glusterfs-provisioning-secret.yaml.
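A minimal sketch of such a secret, matching the secretNamespace and secretName used in the class above and assuming the data key key and Secret type kubernetes.io/glusterfs expected by the GlusterFS plugin; the base64 value is a placeholder for your own Heketi password:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  # base64-encoded REST/Heketi password, e.g. the output of: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==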
Reference: (How to configure Heketi)
Create endpoints
As in the example glusterfs-endpoints.json file, the "IP" field should be filled with the address of a node in the GlusterFS server cluster. It is fine to give any valid value (from 1 to 65535) to the "port" field.
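A sketch of what that endpoints file may look like, using the two trusted-pool addresses that appear in the sample output further down:
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "10.240.106.152" }],
      "ports": [{ "port": 1 }]
    },
    {
      "addresses": [{ "ip": "10.240.79.157" }],
      "ports": [{ "port": 1 }]
    }
  ]
}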
Create the endpoints,
$ kubectl create -f examples/volumes/glusterfs/glusterfs-endpoints.json
You can verify that the endpoints are successfully created by running
$ kubectl get endpoints
NAME ENDPOINTS
glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
We also need to create a service for these endpoints, so that the endpoints will be persisted. It is possible to create a service without a selector to tell Kubernetes we want to add its endpoints manually. For an example service file, refer to glusterfs-service.json.
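A sketch of such a selector-less service, assuming it reuses the glusterfs-cluster name and the placeholder port from the endpoints above:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [{ "port": 1 }]
  }
}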
Use this command to create the service:
$ kubectl create -f examples/volumes/glusterfs/glusterfs-service.json
OpenStack Cinder
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast
availability: nova
- type: VolumeType created in Cinder. Default is empty.
- availability: Availability Zone. Default is empty.
Ceph RBD
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.16.153.105:6789
adminId: kube
adminSecretName: ceph-secret
adminSecretNamespace: kube-system
pool: kube
userId: kube
userSecretName: ceph-secret-user
- monitors: Ceph monitors, comma delimited. It is required.
- adminId: Ceph client ID that is capable of creating images in the pool. Default is "admin".
- adminSecretName: Secret Name for adminId. It is required.
- adminSecretNamespace: The namespace for adminSecretName. Default is "default".
- pool: Ceph RBD pool. Default is "rbd".
- userId: Ceph client ID that is used to map the RBD image. Default is the same as adminId.
- userSecretName: The name of the Ceph Secret for userId to map the RBD image. It must exist in the same namespace as PVCs. It is required.
Quobyte
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/quobyte
parameters:
quobyteAPIServer: "http://138.68.74.142:7860"
registry: "138.68.74.142:7861"
adminSecretName: "quobyte-admin-secret"
adminSecretNamespace: "kube-system"
user: "root"
group: "root"
quobyteConfig: "BASE"
quobyteTenant: "DEFAULT"
- quobyteAPIServer: API server of Quobyte in the format http(s)://api-server:7860
- registry: Quobyte registry to use to mount the volume. You can specify the registry as a <host>:<port> pair or, if you want to specify multiple registries, just put a comma between them, e.g. <host1>:<port>,<host2>:<port>,<host3>:<port>. The host can be an IP address or, if you have a working DNS, you can also provide the DNS names.
- adminSecretName: secret that holds information about the Quobyte user and the password to authenticate against the API server.
- adminSecretNamespace: The namespace for adminSecretName. Default is default.
- user: maps all access to this user. Default is root.
- group: maps all access to this group. Default is nfsnobody.
- quobyteConfig: use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is BASE.
- quobyteTenant: use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. Default is DEFAULT.
First create the Quobyte admin's Secret in the system namespace. Here the Secret is created in kube-system:
$ kubectl create -f examples/experimental/persistent-volume-provisioning/quobyte/quobyte-admin-secret.yaml --namespace=kube-system
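For reference, that Secret might look roughly like the sketch below; the data keys user and password, the Secret type and the base64-encoded values ("admin" and "quobyte") are assumptions to adapt to your Quobyte setup:
apiVersion: v1
kind: Secret
metadata:
  name: quobyte-admin-secret
  namespace: kube-system
type: "kubernetes.io/quobyte"
data:
  # base64-encoded Quobyte API user and password, e.g. echo -n "admin" | base64
  user: YWRtaW4=
  password: cXVvYnl0ZQ==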
Then create the Quobyte storage class:
$ kubectl create -f examples/experimental/persistent-volume-provisioning/quobyte/quobyte-storage-class.yaml
Now create a PVC
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json
Check the created PVC:
$ kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-bdb82652-694a-11e6-b811-080027242396
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.
$ kubectl describe pv
Name: pvc-bdb82652-694a-11e6-b811-080027242396
Labels: <none>
Status: Bound
Claim: default/claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 3Gi
Message:
Source:
Type: Quobyte (a Quobyte mount on the host that shares a pod's lifetime)
Registry: 138.68.79.14:7861
Volume: kubernetes-dynamic-pvc-bdb97c58-694a-11e6-91b6-080027242396
ReadOnly: false
No events.
Create a Pod to use the PVC:
$ kubectl create -f examples/experimental/persistent-volume-provisioning/quobyte/example-pod.yaml
User provisioning requests
Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. The annotation volume.beta.kubernetes.io/storage-class is used to access this experimental feature. It is required that this value matches the name of a StorageClass configured by the administrator.
In the future, the storage class may remain in an annotation or become a field on the claim itself.
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "claim1",
"annotations": {
"volume.beta.kubernetes.io/storage-class": "slow"
}
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "3Gi"
}
}
}
}
Sample output
GCE
This example uses GCE but any provisioner would follow the same flow.
First we note there are no Persistent Volumes in the cluster. After creating a storage class and a claim including that storage class, we see a new PV is created and automatically bound to the claim requesting storage.
$ kubectl get pv
$ kubectl create -f examples/experimental/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created
$ kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002 3Gi RWO Bound default/claim1 4s
$ kubectl get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
claim1 <none> Bound pvc-bb6d2f0c-534c-11e6-9348-42010af00002 3Gi RWO 7s
# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted
# the volume is deleted in response to the release of its claim
$ kubectl get pv
Ceph RBD
This section will guide you on how to configure and use the Ceph RBD provisioner.
Pre-requisites
For this to work you must have a functional Ceph cluster, and the rbd command line utility must be installed on any host/container that kube-controller-manager or kubelet is running on.
Configuration
First we must identify the Ceph client admin key. This is usually found in /etc/ceph/ceph.client.admin.keyring on your Ceph cluster nodes. The file will look something like this:
[client.admin]
key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
auid = 0
caps mds = "allow"
caps mon = "allow *"
caps osd = "allow *"
From the key value, we will create a secret. We must create the Ceph admin Secret in the namespace defined in our StorageClass. In this example we set the namespace to kube-system.
$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system
Now modify examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml to reflect your environment, particularly the monitors field. We are now ready to create our RBD Storage Class:
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
The kube-controller-manager is now able to provision storage; however, we still need to be able to map it. Mapping should be done with a non-privileged key; if you have existing users you can get all keys by running ceph auth list on your Ceph cluster with the admin key. For this example we will create a new user and pool.
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
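The new user's key then needs to live in a Secret named ceph-secret-user (matching userSecretName in the storage class) in every namespace where PVCs of this class will be created. A sketch mirroring the admin Secret command above, using the myns namespace from the Usage section below; depending on your release the plugin may also expect the Secret to carry the type kubernetes.io/rbd:
$ kubectl create secret generic ceph-secret-user --from-literal=key='AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==' --namespace=myns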
Usage
Once configured, create a PVC in a user's namespace (e.g. myns):
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json --namespace=myns
Eventually the PVC creation will result in a matching PV and RBD volume:
$ kubectl describe pvc --namespace=myns
Name: claim1
Namespace: myns
Status: Bound
Volume: pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.
$ kubectl describe pv
Name: pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels: <none>
Status: Bound
Claim: myns/claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 3Gi
Message:
Source:
Type: RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
CephMonitors: [127.0.0.1:6789]
RBDImage: kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
FSType:
RBDPool: kube
RadosUser: kube
Keyring: /etc/ceph/keyring
SecretRef: &{ceph-secret-user}
ReadOnly: false
No events.
With our storage provisioned, we can now create a Pod to use the PVC:
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
Now our pod has an RBD mount!
$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1 2.9G 4.5M 2.8G 1% /var/lib/www/html