Merge pull request #42974 from vmware/VSANPolicyProvisioningForKubernetesOnKubernetesRepo

Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

VSAN policy support for storage volume provisioning inside Kubernetes

vSphere users now have the ability to specify custom Virtual SAN (VSAN) storage capabilities during dynamic volume provisioning. Storage requirements, such as performance and availability, are defined in the form of storage capabilities in the storage class. These capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

For example, a user creates a storage class with VSAN storage capabilities:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```

The vSphere Cloud Provider provisions a virtual disk (VMDK) on VSAN with the policy applied to the disk.

When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.
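
For illustration, a persistent volume claim that consumes the `slow` class above might look like the following sketch (the claim name and requested size are placeholders):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: slow-claim                                # placeholder name
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi                                # placeholder size
```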

@pdhamdhere @tthole @abrarshivani @divyenpatel 

**Release note**:

```release-note
None
```
Kubernetes Submit Queue 2017-03-27 17:00:23 -07:00 committed by GitHub
commit 3843108081
5 changed files with 669 additions and 74 deletions


@@ -5,6 +5,7 @@
- [Volumes](#volumes)
- [Persistent Volumes](#persistent-volumes)
- [Storage Class](#storage-class)
- [Virtual SAN policy support inside Kubernetes](#virtual-san-policy-support-inside-kubernetes)
- [Stateful Set](#stateful-set)
## Prerequisites
@@ -353,6 +354,185 @@
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Virtual SAN policy support inside Kubernetes
vSphere Infrastructure (VI) admins have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. Storage requirements, such as performance and availability, are defined in the form of storage capabilities in the storage class. These capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
The official [VSAN policy documentation](https://pubs.vmware.com/vsphere-65/index.jsp?topic=%2Fcom.vmware.vsphere.virtualsan.doc%2FGUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html) describes each of the individual storage capabilities supported by VSAN in detail. Users can specify these storage capabilities as part of the storage class definition, based on their application needs.
The policy settings can be one or more of the following (see the combined sketch after this list):
* *hostFailuresToTolerate*: represents NumberOfFailuresToTolerate
* *diskStripes*: represents NumberOfDiskStripesPerObject
* *objectSpaceReservation*: represents ObjectSpaceReservation
* *cacheReservation*: represents FlashReadCacheReservation
* *iopsLimit*: represents IOPSLimitForObject
* *forceProvisioning*: represents whether the volume must be force provisioned
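
For example, a storage class that combines several of these settings might look like the following sketch; the values shown are illustrative and must fall within the ranges that VSAN accepts:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: vsan-policy-demo              # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "1"
  diskStripes: "2"
  objectSpaceReservation: "30"
  cacheReservation: "20"
  iopsLimit: "500"
  forceProvisioning: "0"
```
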
__Note: Here you don't need to create a persistent volume; it is created for you.__
1. Create Storage Class.
Example 1:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities.yaml?raw=true)
Here a persistent volume will be created with the Virtual SAN capabilities hostFailuresToTolerate set to 2 and cachereservation set to 20% read cache reserved for the storage object. The persistent volume will also be a *zeroedthick* disk.
The official [VSAN policy documentation](https://pubs.vmware.com/vsphere-65/index.jsp?topic=%2Fcom.vmware.vsphere.virtualsan.doc%2FGUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html) describes each of the individual storage capabilities that are supported by VSAN and can be configured on the virtual disk.
You can also specify the datastore in the StorageClass, as shown in example 2. The volume will be created on the datastore specified in the storage class.
This field is optional. If not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities-with-datastore.yaml?raw=true)
__Note: If you do not apply a storage policy during dynamic provisioning on a VSAN datastore, it will use a default Virtual SAN policy.__
Creating the storageclass:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-vsancapabilities.yaml
```
Verifying storage class is created:
``` bash
$ kubectl describe storageclass fast
Name: fast
Annotations: <none>
Provisioner: kubernetes.io/vsphere-volume
Parameters: diskformat=zeroedthick, hostFailuresToTolerate="2", cachereservation="20"
No events.
```
2. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc-vsan
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
[Download example](vsphere-volume-pvcsc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvcsc-vsan
Name: pvcsc-vsan
Namespace: default
Status: Bound
Volume: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No events.
```
A persistent volume is automatically created and bound to this PVC.
Verifying the persistent volume is created:
``` bash
$ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Name: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Status: Bound
Claim: default/pvcsc-vsan
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [VSANDatastore] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
FSType: ext4
No events.
```
__Note: The VMDK is created inside the ```kubevols``` folder in the datastore which is mentioned in the 'vsphere' cloud provider configuration.
The cloud provider config is created during setup of the Kubernetes cluster on vSphere.__
3. Create Pod which uses Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc-vsan
```
[Download example](vsphere-volume-pvcscpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Stateful Set
vSphere volumes can be consumed by Stateful Sets.
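
For example, a minimal sketch of a StatefulSet whose volume claim template uses the `fast` storage class defined above (the service name, image, and sizes are placeholders, and a matching headless service is assumed to exist):

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"                # assumed pre-existing headless service
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8   # placeholder image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi                # placeholder size
```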


@@ -0,0 +1,10 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: vsanDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"


@@ -0,0 +1,9 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"


@@ -66,6 +66,8 @@ const (
ZeroedThickDiskType = "zeroedThick"
VolDir = "kubevols"
RoundTripperDefaultCount = 3
DummyVMName = "kubernetes-helper-vm"
VSANDatastoreType = "vsan"
)
// Controller types that are currently supported for hot attach of disks
@@ -166,11 +168,12 @@ type Volumes interface {
// VolumeOptions specifies capacity, tags, name and diskFormat for a volume.
type VolumeOptions struct {
CapacityKB int
Tags map[string]string
Name string
DiskFormat string
Datastore string
CapacityKB int
Tags map[string]string
Name string
DiskFormat string
Datastore string
StorageProfileData string
}
// Generates Valid Options for Diskformat
@@ -687,6 +690,8 @@ func cleanUpController(ctx context.Context, newSCSIController types.BaseVirtualD
// Attaches given virtual disk volume to the compute running kubelet.
func (vs *VSphere) AttachDisk(vmDiskPath string, nodeName k8stypes.NodeName) (diskID string, diskUUID string, err error) {
var newSCSIController types.BaseVirtualDevice
// Create context
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -722,50 +727,24 @@ func (vs *VSphere) AttachDisk(vmDiskPath string, nodeName k8stypes.NodeName) (di
var diskControllerType = vs.cfg.Disk.SCSIControllerType
// find SCSI controller of particular type from VM devices
allSCSIControllers := getSCSIControllers(vmDevices)
scsiControllersOfRequiredType := getSCSIControllersOfType(vmDevices, diskControllerType)
scsiController := getAvailableSCSIController(scsiControllersOfRequiredType)
var newSCSICreated = false
var newSCSIController types.BaseVirtualDevice
// creating a scsi controller as there is none found of controller type defined
newSCSICreated := false
if scsiController == nil {
if len(allSCSIControllers) >= SCSIControllerLimit {
// we reached the maximum number of controllers we can attach
return "", "", fmt.Errorf("SCSI Controller Limit of %d has been reached, cannot create another SCSI controller", SCSIControllerLimit)
}
glog.V(1).Infof("Creating a SCSI controller of %v type", diskControllerType)
newSCSIController, err := vmDevices.CreateSCSIController(diskControllerType)
newSCSIController, err = createAndAttachSCSIControllerToVM(ctx, vm, diskControllerType)
if err != nil {
k8runtime.HandleError(fmt.Errorf("error creating new SCSI controller: %v", err))
return "", "", err
}
configNewSCSIController := newSCSIController.(types.BaseVirtualSCSIController).GetVirtualSCSIController()
hotAndRemove := true
configNewSCSIController.HotAddRemove = &hotAndRemove
configNewSCSIController.SharedBus = types.VirtualSCSISharing(types.VirtualSCSISharingNoSharing)
// add the scsi controller to virtual machine
err = vm.AddDevice(context.TODO(), newSCSIController)
if err != nil {
glog.V(1).Infof("cannot add SCSI controller to vm - %v", err)
// attempt clean up of scsi controller
if vmDevices, err := vm.Device(ctx); err == nil {
cleanUpController(ctx, newSCSIController, vmDevices, vm)
}
glog.Errorf("Failed to create SCSI controller for VM :%q with err: %+v", vm.Name(), err)
return "", "", err
}
// verify scsi controller in virtual machine
vmDevices, err = vm.Device(ctx)
vmDevices, err := vm.Device(ctx)
if err != nil {
// cannot cleanup if there is no device list
return "", "", err
}
// Get VM device list
_, vmDevices, _, err := getVirtualMachineDevices(ctx, vs.cfg, vs.client, vSphereInstance)
_, vmDevices, _, err = getVirtualMachineDevices(ctx, vs.cfg, vs.client, vSphereInstance)
if err != nil {
glog.Errorf("cannot get vmDevices for VM err=%s", err)
return "", "", fmt.Errorf("cannot get vmDevices for VM err=%s", err)
@@ -1200,8 +1179,8 @@ func (vs *VSphere) DetachDisk(volPath string, nodeName k8stypes.NodeName) error
// CreateVolume creates a volume of given size (in KiB).
func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions) (volumePath string, err error) {
var diskFormat string
var datastore string
var destVolPath string
// Default datastore is the datastore in the vSphere config file that is used initialize vSphere cloud provider.
if volumeOptions.Datastore == "" {
@@ -1220,8 +1199,6 @@ func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions) (volumePath string
" Valid options are %s.", volumeOptions.DiskFormat, DiskformatValidOptions)
}
diskFormat = diskFormatValidType[volumeOptions.DiskFormat]
// Create context
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -1246,43 +1223,65 @@ func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions) (volumePath string
return "", err
}
// vmdks will be created inside kubevols directory
kubeVolsPath := filepath.Clean(ds.Path(VolDir)) + "/"
err = makeDirectoryInDatastore(vs.client, dc, kubeVolsPath, false)
if err != nil && err != ErrFileAlreadyExist {
glog.Errorf("Cannot create dir %#v. err %s", kubeVolsPath, err)
return "", err
// Create a disk with the VSAN storage capabilities specified in the volumeOptions.StorageProfileData.
// This is achieved by following steps:
// 1. Create dummy VM if not already present.
// 2. Add a new disk to the VM by performing VM reconfigure.
// 3. Detach the new disk from the dummy VM.
if volumeOptions.StorageProfileData != "" {
// Check if the datastore is VSAN if any capability requirements are specified.
// VSphere cloud provider now only supports VSAN capabilities requirements
ok, err := checkIfDatastoreTypeIsVSAN(vs.client, ds)
if err != nil {
return "", fmt.Errorf("Failed while determining whether the datastore: %q"+
" is VSAN or not.", datastore)
}
if !ok {
return "", fmt.Errorf("The specified datastore: %q is not a VSAN datastore."+
" The policy parameters will work only with VSAN Datastore."+
" So, please specify a valid VSAN datastore in Storage class definition.", datastore)
}
// Check if the DummyVM exists in kubernetes cluster folder.
// The kubernetes cluster folder - vs.cfg.Global.WorkingDir is where all the nodes in the kubernetes cluster are created.
vmRegex := vs.cfg.Global.WorkingDir + DummyVMName
dummyVM, err := f.VirtualMachine(ctx, vmRegex)
if err != nil {
// 1. Create dummy VM and return the VM reference.
dummyVM, err = vs.createDummyVM(ctx, dc, ds)
if err != nil {
return "", err
}
}
// 2. Reconfigure the VM to attach the disk with the VSAN policy configured.
vmDiskPath, err := vs.createVirtualDiskWithPolicy(ctx, dc, ds, dummyVM, volumeOptions)
if err != nil {
glog.Errorf("Failed to attach the disk to VM: %q with err: %+v", DummyVMName, err)
return "", err
}
dummyVMNodeName := vmNameToNodeName(DummyVMName)
// 3. Detach the disk from the dummy VM.
err = vs.DetachDisk(vmDiskPath, dummyVMNodeName)
if err != nil {
glog.Errorf("Failed to detach the disk: %q from VM: %q with err: %+v", vmDiskPath, DummyVMName, err)
return "", fmt.Errorf("Failed to create the volume: %q with err: %+v", volumeOptions.Name, err)
}
destVolPath = vmDiskPath
} else {
// Create a virtual disk directly if no VSAN storage capabilities are specified by the user.
destVolPath, err = createVirtualDisk(ctx, vs.client, dc, ds, volumeOptions)
if err != nil {
return "", fmt.Errorf("Failed to create the virtual disk having name: %+q with err: %+v", destVolPath, err)
}
}
glog.V(4).Infof("Created dir with path as %+q", kubeVolsPath)
vmDiskPath := kubeVolsPath + volumeOptions.Name + ".vmdk"
// Create a virtual disk manager
virtualDiskManager := object.NewVirtualDiskManager(vs.client.Client)
// Create specification for new virtual disk
vmDiskSpec := &types.FileBackedVirtualDiskSpec{
VirtualDiskSpec: types.VirtualDiskSpec{
AdapterType: LSILogicControllerType,
DiskType: diskFormat,
},
CapacityKb: int64(volumeOptions.CapacityKB),
}
// Create virtual disk
task, err := virtualDiskManager.CreateVirtualDisk(ctx, vmDiskPath, dc, vmDiskSpec)
if err != nil {
return "", err
}
err = task.Wait(ctx)
if err != nil {
return "", err
}
return vmDiskPath, nil
glog.V(1).Infof("VM Disk path is %+q", destVolPath)
return destVolPath, nil
}
// DeleteVolume deletes a volume given volume name.
// Also, deletes the folder where the volume resides.
func (vs *VSphere) DeleteVolume(vmDiskPath string) error {
// Create context
ctx, cancel := context.WithCancel(context.Background())
@@ -1356,6 +1355,255 @@ func (vs *VSphere) NodeExists(c *govmomi.Client, nodeName k8stypes.NodeName) (bo
return false, nil
}
func (vs *VSphere) createDummyVM(ctx context.Context, datacenter *object.Datacenter, datastore *object.Datastore) (*object.VirtualMachine, error) {
virtualMachineConfigSpec := types.VirtualMachineConfigSpec{
Name: DummyVMName,
Files: &types.VirtualMachineFileInfo{
VmPathName: "[" + datastore.Name() + "]",
},
NumCPUs: 1,
MemoryMB: 4,
}
// Create a new finder
f := find.NewFinder(vs.client.Client, true)
f.SetDatacenter(datacenter)
// Get the folder reference for global working directory where the dummy VM needs to be created.
vmFolder, err := getFolder(ctx, vs.client, vs.cfg.Global.Datacenter, vs.cfg.Global.WorkingDir)
if err != nil {
return nil, fmt.Errorf("Failed to get the folder reference for %q", vs.cfg.Global.WorkingDir)
}
vmRegex := vs.cfg.Global.WorkingDir + vs.localInstanceID
currentVM, err := f.VirtualMachine(ctx, vmRegex)
if err != nil {
return nil, err
}
currentVMHost, err := currentVM.HostSystem(ctx)
if err != nil {
return nil, err
}
// Get the resource pool for the current node.
// We create the dummy VM in the same resource pool as current node.
resourcePool, err := currentVMHost.ResourcePool(ctx)
if err != nil {
return nil, err
}
task, err := vmFolder.CreateVM(ctx, virtualMachineConfigSpec, resourcePool, nil)
if err != nil {
return nil, err
}
dummyVMTaskInfo, err := task.WaitForResult(ctx, nil)
if err != nil {
return nil, err
}
dummyVM := dummyVMTaskInfo.Result.(*object.VirtualMachine)
return dummyVM, nil
}
// Creates a virtual disk with the policy configured to the disk.
// A call to this function is made only when a user specifies VSAN storage capabilities in the storage class definition.
func (vs *VSphere) createVirtualDiskWithPolicy(ctx context.Context, datacenter *object.Datacenter, datastore *object.Datastore, virtualMachine *object.VirtualMachine, volumeOptions *VolumeOptions) (string, error) {
var diskFormat string
diskFormat = diskFormatValidType[volumeOptions.DiskFormat]
vmDevices, err := virtualMachine.Device(ctx)
if err != nil {
return "", err
}
var diskControllerType = vs.cfg.Disk.SCSIControllerType
// find SCSI controller of particular type from VM devices
scsiControllersOfRequiredType := getSCSIControllersOfType(vmDevices, diskControllerType)
scsiController := getAvailableSCSIController(scsiControllersOfRequiredType)
var newSCSIController types.BaseVirtualDevice
if scsiController == nil {
newSCSIController, err = createAndAttachSCSIControllerToVM(ctx, virtualMachine, diskControllerType)
if err != nil {
glog.Errorf("Failed to create SCSI controller for VM :%q with err: %+v", virtualMachine.Name(), err)
return "", err
}
// verify scsi controller in virtual machine
vmDevices, err := virtualMachine.Device(ctx)
if err != nil {
return "", err
}
scsiController = getSCSIController(vmDevices, diskControllerType)
if scsiController == nil {
glog.Errorf("cannot find SCSI controller in VM")
// attempt clean up of scsi controller
cleanUpController(ctx, newSCSIController, vmDevices, virtualMachine)
return "", fmt.Errorf("cannot find SCSI controller in VM")
}
}
kubeVolsPath := filepath.Clean(datastore.Path(VolDir)) + "/"
// Create a kubevols directory in the datastore if one doesn't exist.
err = makeDirectoryInDatastore(vs.client, datacenter, kubeVolsPath, false)
if err != nil && err != ErrFileAlreadyExist {
glog.Errorf("Cannot create dir %#v. err %s", kubeVolsPath, err)
return "", err
}
glog.V(4).Infof("Created dir with path as %+q", kubeVolsPath)
vmDiskPath := kubeVolsPath + volumeOptions.Name + ".vmdk"
disk := vmDevices.CreateDisk(scsiController, datastore.Reference(), vmDiskPath)
unitNumber, err := getNextUnitNumber(vmDevices, scsiController)
if err != nil {
glog.Errorf("cannot attach disk to VM, limit reached - %v.", err)
return "", err
}
*disk.UnitNumber = unitNumber
disk.CapacityInKB = int64(volumeOptions.CapacityKB)
backing := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo)
backing.DiskMode = string(types.VirtualDiskModeIndependent_persistent)
switch diskFormat {
case ThinDiskType:
backing.ThinProvisioned = types.NewBool(true)
case EagerZeroedThickDiskType:
backing.EagerlyScrub = types.NewBool(true)
default:
backing.ThinProvisioned = types.NewBool(false)
}
// Reconfigure VM
virtualMachineConfigSpec := types.VirtualMachineConfigSpec{}
deviceConfigSpec := &types.VirtualDeviceConfigSpec{
Device: disk,
Operation: types.VirtualDeviceConfigSpecOperationAdd,
FileOperation: types.VirtualDeviceConfigSpecFileOperationCreate,
}
storageProfileSpec := &types.VirtualMachineDefinedProfileSpec{
ProfileId: "",
ProfileData: &types.VirtualMachineProfileRawData{
ExtensionKey: "com.vmware.vim.sps",
ObjectData: volumeOptions.StorageProfileData,
},
}
deviceConfigSpec.Profile = append(deviceConfigSpec.Profile, storageProfileSpec)
virtualMachineConfigSpec.DeviceChange = append(virtualMachineConfigSpec.DeviceChange, deviceConfigSpec)
task, err := virtualMachine.Reconfigure(ctx, virtualMachineConfigSpec)
if err != nil {
glog.Errorf("Failed to reconfigure the VM with the disk with err - %v.", err)
return "", err
}
err = task.Wait(ctx)
if err != nil {
glog.Errorf("Failed to reconfigure the VM with the disk with err - %v.", err)
return "", err
}
return vmDiskPath, nil
}
// creating a scsi controller as there is none found.
func createAndAttachSCSIControllerToVM(ctx context.Context, vm *object.VirtualMachine, diskControllerType string) (types.BaseVirtualDevice, error) {
// Get VM device list
vmDevices, err := vm.Device(ctx)
if err != nil {
return nil, err
}
allSCSIControllers := getSCSIControllers(vmDevices)
if len(allSCSIControllers) >= SCSIControllerLimit {
// we reached the maximum number of controllers we can attach
return nil, fmt.Errorf("SCSI Controller Limit of %d has been reached, cannot create another SCSI controller", SCSIControllerLimit)
}
newSCSIController, err := vmDevices.CreateSCSIController(diskControllerType)
if err != nil {
k8runtime.HandleError(fmt.Errorf("error creating new SCSI controller: %v", err))
return nil, err
}
configNewSCSIController := newSCSIController.(types.BaseVirtualSCSIController).GetVirtualSCSIController()
hotAndRemove := true
configNewSCSIController.HotAddRemove = &hotAndRemove
configNewSCSIController.SharedBus = types.VirtualSCSISharing(types.VirtualSCSISharingNoSharing)
// add the scsi controller to virtual machine
err = vm.AddDevice(context.TODO(), newSCSIController)
if err != nil {
glog.V(1).Infof("cannot add SCSI controller to vm - %v", err)
// attempt clean up of scsi controller
if vmDevices, err := vm.Device(ctx); err == nil {
cleanUpController(ctx, newSCSIController, vmDevices, vm)
}
return nil, err
}
return newSCSIController, nil
}
// Create a virtual disk.
func createVirtualDisk(ctx context.Context, c *govmomi.Client, dc *object.Datacenter, ds *object.Datastore, volumeOptions *VolumeOptions) (string, error) {
kubeVolsPath := filepath.Clean(ds.Path(VolDir)) + "/"
// Create a kubevols directory in the datastore if one doesn't exist.
err := makeDirectoryInDatastore(c, dc, kubeVolsPath, false)
if err != nil && err != ErrFileAlreadyExist {
glog.Errorf("Cannot create dir %#v. err %s", kubeVolsPath, err)
return "", err
}
glog.V(4).Infof("Created dir with path as %+q", kubeVolsPath)
vmDiskPath := kubeVolsPath + volumeOptions.Name + ".vmdk"
diskFormat := diskFormatValidType[volumeOptions.DiskFormat]
// Create a virtual disk manager
virtualDiskManager := object.NewVirtualDiskManager(c.Client)
// Create specification for new virtual disk
vmDiskSpec := &types.FileBackedVirtualDiskSpec{
VirtualDiskSpec: types.VirtualDiskSpec{
AdapterType: LSILogicControllerType,
DiskType: diskFormat,
},
CapacityKb: int64(volumeOptions.CapacityKB),
}
// Create virtual disk
task, err := virtualDiskManager.CreateVirtualDisk(ctx, vmDiskPath, dc, vmDiskSpec)
if err != nil {
return "", err
}
return vmDiskPath, task.Wait(ctx)
}
// Check if the provided datastore is VSAN
func checkIfDatastoreTypeIsVSAN(c *govmomi.Client, datastore *object.Datastore) (bool, error) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
pc := property.DefaultCollector(c.Client)
// Convert datastores into list of references
var dsRefs []types.ManagedObjectReference
dsRefs = append(dsRefs, datastore.Reference())
// Retrieve summary property for the given datastore
var dsMorefs []mo.Datastore
err := pc.Retrieve(ctx, dsRefs, []string{"summary"}, &dsMorefs)
if err != nil {
return false, err
}
for _, ds := range dsMorefs {
if ds.Summary.Type == VSANDatastoreType {
return true, nil
}
}
return false, nil
}
// Creates a folder using the specified name.
// If the intermediate level folders do not exist,
// and the parameter createParents is true,
@@ -1378,3 +1626,49 @@ func makeDirectoryInDatastore(c *govmomi.Client, dc *object.Datacenter, path str
return err
}
// Get the folder for a given VM
func getFolder(ctx context.Context, c *govmomi.Client, datacenterName string, folderName string) (*object.Folder, error) {
f := find.NewFinder(c.Client, true)
// Fetch and set data center
dc, err := f.Datacenter(ctx, datacenterName)
if err != nil {
return nil, err
}
f.SetDatacenter(dc)
folderName = strings.TrimSuffix(folderName, "/")
dcFolders, err := dc.Folders(ctx)
vmFolders, _ := dcFolders.VmFolder.Children(ctx)
var vmFolderRefs []types.ManagedObjectReference
for _, vmFolder := range vmFolders {
vmFolderRefs = append(vmFolderRefs, vmFolder.Reference())
}
// Get only references of type folder.
var folderRefs []types.ManagedObjectReference
for _, vmFolder := range vmFolderRefs {
if vmFolder.Type == "Folder" {
folderRefs = append(folderRefs, vmFolder)
}
}
// Find the specific folder reference matching the folder name.
var resultFolder *object.Folder
pc := property.DefaultCollector(c.Client)
for _, folderRef := range folderRefs {
var refs []types.ManagedObjectReference
var folderMorefs []mo.Folder
refs = append(refs, folderRef)
err = pc.Retrieve(ctx, refs, []string{"name"}, &folderMorefs)
for _, fref := range folderMorefs {
if fref.Name == folderName {
resultFolder = object.NewFolder(c.Client, folderRef)
}
}
}
return resultFolder, nil
}


@@ -19,6 +19,7 @@ package vsphere_volume
import (
"errors"
"fmt"
"strconv"
"strings"
"time"
@@ -35,6 +36,26 @@ const (
checkSleepDuration = time.Second
diskByIDPath = "/dev/disk/by-id/"
diskSCSIPrefix = "wwn-0x"
diskformat = "diskformat"
datastore = "datastore"
HostFailuresToTolerateCapability = "hostfailurestotolerate"
ForceProvisioningCapability = "forceprovisioning"
CacheReservationCapability = "cachereservation"
DiskStripesCapability = "diskstripes"
ObjectSpaceReservationCapability = "objectspacereservation"
IopsLimitCapability = "iopslimit"
HostFailuresToTolerateCapabilityMin = 0
HostFailuresToTolerateCapabilityMax = 3
ForceProvisioningCapabilityMin = 0
ForceProvisioningCapabilityMax = 1
CacheReservationCapabilityMin = 0
CacheReservationCapabilityMax = 100
DiskStripesCapabilityMin = 1
DiskStripesCapabilityMax = 12
ObjectSpaceReservationCapabilityMin = 0
ObjectSpaceReservationCapabilityMax = 100
IopsLimitCapabilityMin = 0
)
var ErrProbeVolume = errors.New("Error scanning attached volumes")
@@ -73,15 +94,28 @@ func (util *VsphereDiskUtil) CreateVolume(v *vsphereVolumeProvisioner) (vmDiskPa
// the values to the cloud provider.
for parameter, value := range v.options.Parameters {
switch strings.ToLower(parameter) {
case "diskformat":
case diskformat:
volumeOptions.DiskFormat = value
case "datastore":
case datastore:
volumeOptions.Datastore = value
case HostFailuresToTolerateCapability, ForceProvisioningCapability,
CacheReservationCapability, DiskStripesCapability,
ObjectSpaceReservationCapability, IopsLimitCapability:
capabilityData, err := validateVSANCapability(strings.ToLower(parameter), value)
if err != nil {
return "", 0, err
} else {
volumeOptions.StorageProfileData += capabilityData
}
default:
return "", 0, fmt.Errorf("invalid option %q for volume plugin %s", parameter, v.plugin.GetPluginName())
}
}
if volumeOptions.StorageProfileData != "" {
volumeOptions.StorageProfileData = "(" + volumeOptions.StorageProfileData + ")"
}
glog.V(1).Infof("StorageProfileData in vsphere volume %q", volumeOptions.StorageProfileData)
// TODO: implement PVC.Selector parsing
if v.options.PVC.Spec.Selector != nil {
return "", 0, fmt.Errorf("claim.Spec.Selector is not supported for dynamic provisioning on vSphere")
@@ -132,3 +166,71 @@ func getCloudProvider(cloud cloudprovider.Interface) (*vsphere.VSphere, error) {
}
return vs, nil
}
// Validate the capability requirement for the user specified policy attributes.
func validateVSANCapability(capabilityName string, capabilityValue string) (string, error) {
var capabilityData string
capabilityIntVal, ok := verifyCapabilityValueIsInteger(capabilityValue)
if !ok {
return "", fmt.Errorf("Invalid value for %s. The capabilityValue: %s must be a valid integer value", capabilityName, capabilityValue)
}
switch strings.ToLower(capabilityName) {
case HostFailuresToTolerateCapability:
if capabilityIntVal >= HostFailuresToTolerateCapabilityMin && capabilityIntVal <= HostFailuresToTolerateCapabilityMax {
capabilityData = " (\"hostFailuresToTolerate\" i" + capabilityValue + ")"
} else {
return "", fmt.Errorf(`Invalid value for hostFailuresToTolerate.
The default value is %d, minimum value is %d and maximum value is %d.`,
1, HostFailuresToTolerateCapabilityMin, HostFailuresToTolerateCapabilityMax)
}
case ForceProvisioningCapability:
if capabilityIntVal >= ForceProvisioningCapabilityMin && capabilityIntVal <= ForceProvisioningCapabilityMax {
capabilityData = " (\"forceProvisioning\" i" + capabilityValue + ")"
} else {
return "", fmt.Errorf(`Invalid value for forceProvisioning.
The value can be either %d or %d.`,
ForceProvisioningCapabilityMin, ForceProvisioningCapabilityMax)
}
case CacheReservationCapability:
if capabilityIntVal >= CacheReservationCapabilityMin && capabilityIntVal <= CacheReservationCapabilityMax {
capabilityData = " (\"cacheReservation\" i" + strconv.Itoa(capabilityIntVal*10000) + ")"
} else {
return "", fmt.Errorf(`Invalid value for cacheReservation.
The minimum percentage is %d and maximum percentage is %d.`,
CacheReservationCapabilityMin, CacheReservationCapabilityMax)
}
case DiskStripesCapability:
if capabilityIntVal >= DiskStripesCapabilityMin && capabilityIntVal <= DiskStripesCapabilityMax {
capabilityData = " (\"stripeWidth\" i" + capabilityValue + ")"
} else {
return "", fmt.Errorf(`Invalid value for diskStripes.
The minimum value is %d and maximum value is %d.`,
DiskStripesCapabilityMin, DiskStripesCapabilityMax)
}
case ObjectSpaceReservationCapability:
if capabilityIntVal >= ObjectSpaceReservationCapabilityMin && capabilityIntVal <= ObjectSpaceReservationCapabilityMax {
capabilityData = " (\"proportionalCapacity\" i" + capabilityValue + ")"
} else {
return "", fmt.Errorf(`Invalid value for ObjectSpaceReservation.
The minimum percentage is %d and maximum percentage is %d.`,
ObjectSpaceReservationCapabilityMin, ObjectSpaceReservationCapabilityMax)
}
case IopsLimitCapability:
if capabilityIntVal >= IopsLimitCapabilityMin {
capabilityData = " (\"iopsLimit\" i" + capabilityValue + ")"
} else {
return "", fmt.Errorf(`Invalid value for iopsLimit.
The value should be greater than %d.`, IopsLimitCapabilityMin)
}
}
return capabilityData, nil
}
// Verify if the capability value is of type integer.
func verifyCapabilityValueIsInteger(capabilityValue string) (int, bool) {
i, err := strconv.Atoi(capabilityValue)
if err != nil {
return -1, false
}
return i, true
}