# vSphere Volume

- [Prerequisites](#prerequisites)
- [Examples](#examples)
  - [Volumes](#volumes)
  - [Persistent Volumes](#persistent-volumes)
  - [Storage Class](#storage-class)
  - [Storage Policy Management inside kubernetes](#storage-policy-management-inside-kubernetes)
    - [Using existing vCenter SPBM policy](#using-existing-vcenter-spbm-policy)
    - [Virtual SAN policy support](#virtual-san-policy-support)
  - [Stateful Set](#stateful-set)
## Prerequisites

- Kubernetes with vSphere Cloud Provider configured. For cloud provider configuration, please refer to the vSphere getting started guide.
## Examples

### Volumes
- Create a VMDK.

First ssh into an ESX host, then use the following command to create a VMDK:

```
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
```
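Note that `vmkfstools -c` does not create missing parent folders on the datastore, so if the `volumes` folder does not exist yet the command may fail. A minimal sketch, assuming the same datastore layout as the example above:

```
# On the ESX host: make sure the parent folder exists, then create the 2G disk.
mkdir -p /vmfs/volumes/datastore1/volumes
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
```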
- Create a Pod which uses `myDisk.vmdk`.

See example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    # This VMDK volume must already exist.
    vsphereVolume:
      volumePath: "[datastore1] volumes/myDisk"
      fsType: ext4
```
Creating the pod:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pod.yaml
```
Verify that the pod is running:

```
$ kubectl get pods test-vmdk
NAME        READY     STATUS    RESTARTS   AGE
test-vmdk   1/1       Running   0          48m
```
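To additionally confirm that the disk is mounted where the pod spec expects it, you can look inside the container. A quick check, assuming the pod definition above:

```
$ kubectl exec test-vmdk -- df -h /test-vmdk
```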
### Persistent Volumes
- Create a VMDK.

First ssh into an ESX host, then use the following command to create a VMDK:

```
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
```
- Create a Persistent Volume.

See example:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4
```
In the above example datastore1 is located in the root folder. If the datastore is a member of a Datastore Cluster or located in a sub-folder, the folder path needs to be provided in the volumePath as below:

```yaml
  vsphereVolume:
    volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"
```
Creating the persistent volume:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pv.yaml
```
Verifying persistent volume is created:
```
$ kubectl describe pv pv0001
Name:           pv0001
Labels:         <none>
Status:         Available
Claim:
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:       2Gi
Message:
Source:
    Type:       vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath: [datastore1] volumes/myDisk
    FSType:     ext4
No events.
```
- Create a Persistent Volume Claim.

See example:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Creating the persistent volume claim:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvc.yaml
```
Verifying persistent volume claim is created:
```
$ kubectl describe pvc pvc0001
Name:         pvc0001
Namespace:    default
Status:       Bound
Volume:       pv0001
Labels:       <none>
Capacity:     2Gi
Access Modes: RWO
No events.
```
- Create a Pod which uses the Persistent Volume Claim.

See example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc0001
```
Creating the pod:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcpod.yaml
```
Verifying pod is created:
```
$ kubectl get pod pvpod
NAME      READY     STATUS    RESTARTS   AGE
pvpod     1/1       Running   0          48m
```
### Storage Class
Note: Here you don't need to create a VMDK; it is created for you.
- Create a Storage Class.

Example 1:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  fstype: ext3
```
You can also specify the datastore in the StorageClass as shown in example 2. The volume will be created on the datastore specified in the storage class. This field is optional. If not specified, as shown in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
```
If the datastore is a member of a Datastore Cluster or within a sub-folder, the folder path needs to be provided in the datastore parameter as below:

```yaml
parameters:
  datastore: DatastoreCluster/VSANDatastore
```
Creating the storage class:

```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-fast.yaml
```
Verifying storage class is created:
```
$ kubectl describe storageclass fast
Name:           fast
IsDefaultClass: No
Annotations:    <none>
Provisioner:    kubernetes.io/vsphere-volume
Parameters:     diskformat=zeroedthick,fstype=ext3
No events.
```
- Create a Persistent Volume Claim.

See example:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Creating the persistent volume claim:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
```
$ kubectl describe pvc pvcsc001
Name:          pvcsc001
Namespace:     default
StorageClass:  fast
Status:        Bound
Volume:        pvc-83295256-f8e0-11e6-8263-005056b2349c
Labels:        <none>
Capacity:      2Gi
Access Modes:  RWO
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                         -------------  ------  ------                 -------
  1m         1m        1      persistentvolume-controller                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-83295256-f8e0-11e6-8263-005056b2349c using kubernetes.io/vsphere-volume
```
A Persistent Volume is automatically created and bound to this PVC.

Verifying the persistent volume is created:

```
$ kubectl describe pv pvc-83295256-f8e0-11e6-8263-005056b2349c
Name:            pvc-83295256-f8e0-11e6-8263-005056b2349c
Labels:          <none>
StorageClass:    fast
Status:          Bound
Claim:           default/pvcsc001
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        2Gi
Message:
Source:
    Type:       vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath: [datastore1] kubevols/kubernetes-dynamic-pvc-83295256-f8e0-11e6-8263-005056b2349c.vmdk
    FSType:     ext3
No events.
```
Note: The VMDK is created inside the `kubevols` folder in the datastore which is mentioned in the 'vsphere' cloud provider configuration. The cloud provider config is created during setup of the Kubernetes cluster on vSphere.

- Create a Pod which uses the Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001
```
Creating the pod:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
```
$ kubectl get pod pvpod
NAME      READY     STATUS    RESTARTS   AGE
pvpod     1/1       Running   0          48m
```
## Storage Policy Management inside kubernetes

### Using existing vCenter SPBM policy
Admins can use the existing vCenter Storage Policy Based Management (SPBM) policy to configure a persistent volume with the SPBM policy.
Note: Here you don't need to create a persistent volume; it is created for you.
- Create a Storage Class.

Example 1:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagePolicyName: gold
```
The admin specifies the SPBM policy - "gold" as part of storage class definition for dynamic volume provisioning. When a PVC is created, the persistent volume will be provisioned on a compatible datastore with maximum free space that satisfies the "gold" storage policy requirements.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagePolicyName: gold
  datastore: VSANDatastore
```
The admin can also specify a custom datastore, along with the SPBM policy name, where the volume should be provisioned. When a PVC is created, the vSphere Cloud Provider checks whether the specified datastore satisfies the "gold" storage policy requirements. If it does, the persistent volume is provisioned on that datastore; if not, an error is returned stating that the specified datastore is not compatible with the "gold" storage policy requirements.
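A claim against an SPBM-backed class looks like any other PVC. A minimal sketch (the claim name `pvcsc-policy` is illustrative and not one of the files in this directory):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc-policy
  annotations:
    # Refers to the "fast" storage class defined above, which carries the
    # "gold" SPBM policy; the provisioner picks a compatible datastore.
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```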
### Virtual SAN policy support
vSphere Infrastructure (VI) admins have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
The official VSAN policy documentation describes in detail each of the individual storage capabilities that are supported by VSAN. Users can specify these storage capabilities as part of the storage class definition based on their application needs.
The policy settings can be one or more of the following:
- hostFailuresToTolerate: represents NumberOfFailuresToTolerate
- diskStripes: represents NumberofDiskStripesPerObject
- objectSpaceReservation: represents ObjectSpaceReservation
- cacheReservation: represents FlashReadCacheReservation
- iopsLimit: represents IOPSLimitForObject
- forceProvisioning: represents if volume must be Force Provisioned
Note: Here you don't need to create a persistent volume; it is created for you.
- Create a Storage Class.

Example 1:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
Here a persistent volume will be created with the Virtual SAN capabilities hostFailuresToTolerate set to 2 and cachereservation set to 20% read cache reserved for the storage object. The persistent volume will also be a zeroedthick disk. The official VSAN policy documentation describes in detail each of the individual storage capabilities that are supported by VSAN and can be configured on the virtual disk.
You can also specify the datastore in the StorageClass as shown in example 2. The volume will be created on the datastore specified in the storage class. This field is optional. If not specified, as shown in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
Note: If you do not apply a storage policy during dynamic provisioning on a VSAN datastore, it will use a default Virtual SAN policy.
Creating the storageclass:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-vsancapabilities.yaml
```
Verifying storage class is created:
```
$ kubectl describe storageclass fast
Name:        fast
Annotations: <none>
Provisioner: kubernetes.io/vsphere-volume
Parameters:  diskformat=zeroedthick, hostFailuresToTolerate="2", cachereservation="20"
No events.
```
- Create a Persistent Volume Claim.

See example:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc-vsan
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Creating the persistent volume claim:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
```
$ kubectl describe pvc pvcsc-vsan
Name:         pvcsc-vsan
Namespace:    default
Status:       Bound
Volume:       pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels:       <none>
Capacity:     2Gi
Access Modes: RWO
No events.
```
A Persistent Volume is automatically created and bound to this PVC.

Verifying the persistent volume is created:

```
$ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Name:            pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels:          <none>
Status:          Bound
Claim:           default/pvcsc-vsan
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        2Gi
Message:
Source:
    Type:       vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath: [VSANDatastore] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
    FSType:     ext4
No events.
```
Note: The VMDK is created inside the `kubevols` folder in the datastore which is mentioned in the 'vsphere' cloud provider configuration. The cloud provider config is created during setup of the Kubernetes cluster on vSphere.

- Create a Pod which uses the Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc-vsan
```
Creating the pod:
```
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
```
$ kubectl get pod pvpod
NAME      READY     STATUS    RESTARTS   AGE
pvpod     1/1       Running   0          48m
```
## Stateful Set
vSphere volumes can be consumed by Stateful Sets.
- Create a storage class that will be used by the `volumeClaimTemplates` of a Stateful Set.

See example:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```
- Create a Stateful Set that consumes storage from the Storage Class created.

See example:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 14
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: thin-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
This will create a Persistent Volume Claim for each replica and provision a volume for each claim if an existing volume cannot be bound to the claim.
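Stateful Set claims follow the `<template>-<set>-<ordinal>` naming convention (e.g. `www-web-0`, `www-web-1`, ...), so a quick way to watch the claims and replicas come up, assuming the definitions above, is:

```
$ kubectl get pvc
$ kubectl get pods -l app=nginx
```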