Commit Graph

132 Commits (a77c8a1ecda8af9639fdf13707ea817d954ea1f6)

Author SHA1 Message Date
NickrenREN 39c48d3605 remove rackspace related code 2017-09-22 18:06:50 +08:00
Matthew Wong 5e772b8e4b Add storageClass.mountOptions and use it in all applicable plugins 2017-08-29 11:37:36 -04:00
mtanino 5ff9dc0b3b WaitForAttach refactoring for iSCSI attacher/detacher
This change is a prerequisite for implementing the iSCSI attacher
and detacher.

In order to use CHAP authentication in the iSCSI plugin after
implementing the attacher and detacher, a secret is needed in
AttachDisk(), which is called from WaitForAttach().
Obtaining the secret requires pod information, but
WaitForAttach() doesn't receive the pod.

This patch adds 'pod' as an argument of WaitForAttach()
and updates the drivers that implement WaitForAttach().
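A minimal sketch of the resulting interface shape (the real `Attacher` interface in `pkg/volume` carries more methods; `Spec` is stubbed out here for illustration):

```go
package volume

import (
	"time"

	"k8s.io/kubernetes/pkg/api/v1"
)

// Spec is stubbed for illustration; the real type wraps the volume to attach.
type Spec struct{}

// Attacher (abbreviated): WaitForAttach now receives the pod, so plugins
// like iSCSI can resolve per-pod secrets (e.g. CHAP credentials) when
// AttachDisk() is called from inside WaitForAttach().
type Attacher interface {
	WaitForAttach(spec *Spec, devicePath string, pod *v1.Pod, timeout time.Duration) (string, error)
}
```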

Fixes #48953
2017-08-26 17:21:34 -04:00
Kubernetes Submit Queue 27fbb68f18 Merge pull request #51087 from oracle/for/upstream/master/ccm-instance-exists
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)

Add InstanceExistsByProviderID to cloud provider interface for CCM

**What this PR does / why we need it**:

Currently, [`MonitorNode()`](02b520f0a4/pkg/controller/cloud/nodecontroller.go (L240)) in the node controller checks with the CCM whether a node still exists by calling `ExternalID(nodeName)`. `ExternalID` is supposed to return the provider ID of a node, which is not supported by every cloud. This means that any cloud that cannot infer the provider ID from the node name remotely will never remove nodes that no longer exist.
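Sketched against the `Instances` interface, assuming the shape implied by the release note below (the exact upstream signatures may differ):

```go
package cloudprovider

import "k8s.io/apimachinery/pkg/types"

// Instances (abbreviated): the new existence checks let the node controller
// ask "does this node still exist?" without relying on ExternalID(nodeName),
// which not every cloud can answer remotely.
type Instances interface {
	// InstanceExists returns true if the instance backing the named node exists.
	InstanceExists(nodeName types.NodeName) (bool, error)
	// InstanceExistsByProviderID does the same, keyed by the cloud's own ID.
	InstanceExistsByProviderID(providerID string) (bool, error)
}
```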


**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50985

**Special notes for your reviewer**:

We'll want to create a subsequent issue to track the implementation of these two new methods in the cloud providers.

**Release note**:

```release-note
Adds `InstanceExists` and `InstanceExistsByProviderID` to cloud provider interface for the cloud controller manager
```

/cc @wlan0 @thockin @andrewsykim @luxas @jhorwit2

/area cloudprovider
/sig cluster-lifecycle
2017-08-26 06:43:30 -07:00
Kubernetes Submit Queue 932e07af53 Merge pull request #50031 from verult/ConnectedProbe
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Dynamic Flexvolume plugin discovery, probing with filesystem watch.

**What this PR does / why we need it**: Enables dynamic Flexvolume plugin discovery. This model uses a filesystem watch (fsnotify library), which notifies the system that a probe is necessary only if something changes in the Flexvolume plugin directory.

This PR uses the dependency injection model in https://github.com/kubernetes/kubernetes/pull/49668.
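A minimal sketch of the watch-then-probe model, assuming the default Flexvolume plugin directory (upstream wires the watcher through a plugin prober rather than a bare loop):

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Any create/rename/delete under the plugin directory marks the plugin
	// list dirty, so the next volume operation re-probes instead of probing
	// on every call.
	const pluginDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
	if err := watcher.Add(pluginDir); err != nil {
		log.Fatal(err)
	}
	for event := range watcher.Events {
		log.Printf("flexvolume dir changed (%v); re-probe on next operation", event)
	}
}
```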

**Release Note**:
```release-note
Dynamic Flexvolume plugin discovery. Flexvolume plugins can now be discovered on the fly rather than only at system initialization time.
```

/sig-storage

/assign @jsafrane @saad-ali 
/cc @bassam @chakri-nelluri @kokhang @liggitt @thockin
2017-08-26 02:05:34 -07:00
Kubernetes Submit Queue 9d7bdb6a5f Merge pull request #51274 from yastij/clean-cinder-detachLogError
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Clean cinder detachlogerror

**What this PR does / why we need it**:

**Which issue this PR fixes** : fixes #50441

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-08-25 19:40:32 -07:00
Cheng Xing 396c3c7c6f Adding dynamic Flexvolume plugin discovery capability, using filesystem watch. 2017-08-25 11:42:32 -07:00
Josh Horwitz 2f1ea47c83 Add InstanceExists* methods to cloud provider interface for CCM 2017-08-24 20:41:28 -04:00
Yassine TIJANI d433ecd6cb cleaning detach logic since it's not needed 2017-08-24 22:22:58 +02:00
Kubernetes Submit Queue 012e94b6be Merge pull request #50239 from FengyunPan/fix-no-exist-node
Automatic merge from submit-queue (batch tested with PRs 38947, 50239, 51115, 51094, 51116)

Mark the volumes as detached when node does not exist

If the node does not exist, its volumes will be detached
automatically and become available, so mark them detached and do not return an error.
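Roughly, the provider-side check becomes the following (helper names are hypothetical; the real change lives in the OpenStack provider's attachment check):

```go
package openstack

import "errors"

// errNotFound stands in for the cloud SDK's "instance not found" error.
var errNotFound = errors.New("instance not found")

// diskIsAttached sketches the fixed behaviour: a missing node means the cloud
// has already released its volumes, so report "detached" instead of an error.
func diskIsAttached(instanceID, volumeID string) (bool, error) {
	attached, err := queryAttachment(instanceID, volumeID) // hypothetical SDK call
	if errors.Is(err, errNotFound) {
		return false, nil
	}
	return attached, err
}

// queryAttachment is a stub for illustration.
func queryAttachment(instanceID, volumeID string) (bool, error) {
	return false, errNotFound
}
```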

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50200

**Release note**:
```release-note
NONE
```
2017-08-23 08:41:04 -07:00
FengyunPan 63725e3e3c Mark the volumes as detached when node does not exist
If the node doesn't exist, OpenStack Nova will assume the volumes
are not attached to it, so mark the volumes as detached and
return false without an error.
Fix: #50200
2017-08-15 16:42:11 +08:00
Jan Safranek 0e547bae22 SafeFormatAndMount should use volume.Exec provided by VolumeHost
We need to execute mkfs / fsck where the utilities are.
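The idea, sketched with an illustrative Exec interface (type and field names are assumed here, not the exact upstream shapes):

```go
package mountutil

// Exec abstracts command execution so mkfs/fsck can run wherever the
// utilities actually live (on the host, in a container, ...).
type Exec interface {
	Run(cmd string, args ...string) ([]byte, error)
}

// SafeFormatAndMount formats a disk if needed before mounting it, routing
// formatting through the injected Exec instead of plain os/exec.
type SafeFormatAndMount struct {
	Exec Exec
}

func (m *SafeFormatAndMount) formatDisk(device, fstype string) error {
	_, err := m.Exec.Run("mkfs."+fstype, device)
	return err
}
```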
2017-08-14 12:16:27 +02:00
Jan Safranek bc0e170d9c Add pluginName to VolumeHost.GetMounter
Different plugins can get different mounters, depending on where the mount
utilities are.
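Sketch of the signature change on VolumeHost (abbreviated):

```go
package volume

import "k8s.io/kubernetes/pkg/util/mount"

// VolumeHost (abbreviated): GetMounter now takes the plugin name, so the host
// can hand each plugin a mounter that knows where that plugin's mount
// utilities live. Previously the method took no arguments.
type VolumeHost interface {
	GetMounter(pluginName string) mount.Interface
}
```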
2017-08-14 12:16:26 +02:00
Kubernetes Submit Queue 1bdf691f6c Merge pull request #50429 from houjun41544/20170810
Automatic merge from submit-queue

Remove repeated reviewers' names

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-08-11 15:52:47 -07:00
Jeff Grafton a7f49c906d Use buildozer to delete licenses() rules except under third_party/ 2017-08-11 09:32:39 -07:00
Jeff Grafton 33276f06be Use buildozer to remove deprecated automanaged tags 2017-08-11 09:31:50 -07:00
houjun dfd38ae947 Remove repeated reviewers' names 2017-08-10 12:51:06 +08:00
Davanum Srinivas ad98f109ef Volunteer to review Cinder-related code
Since I am currently helping with the OpenStack cloud provider, I am happy
to do the same with the cinder package, as they are related.
2017-07-27 16:01:12 -04:00
Kubernetes Submit Queue 3a0d8f8fea Merge pull request #45532 from jsafrane/cinder-approver
Automatic merge from submit-queue

Tune Cinder approvers

I don't want to be the single approver for cinder PRs; @anguslees is an OpenStack maintainer and should be able to help with Cinder.

Any other volunteers from @kubernetes/sig-storage-pr-reviews or @k8s-sig-openstack-pr-reviews?

Note: @justinsb **is** still a reviewer; he was just listed twice.

```release-note
NONE
```
2017-07-27 03:14:42 -07:00
deads2k 94e9993900 remove deads2k from volume reviewer 2017-07-25 08:52:25 -04:00
Kubernetes Submit Queue d286f56221 Merge pull request #45345 from codablock/storageclass_fstype
Automatic merge from submit-queue (batch tested with PRs 45345, 49470, 49407, 49448, 49486)

Support "fstype" parameter in dynamically provisioned PVs

This PR is a replacement for https://github.com/kubernetes/kubernetes/pull/40805. I was not able to push fixes and rebases to the original branch as I don't have access to the Github organization anymore.

I assume the PR will need a new "ok to test" 

**ORIGINAL PR DESCRIPTION**

**What this PR does / why we need it**: This PR allows specifying the desired FSType when dynamically provisioning volumes with storage classes. The FSType can now be set as a parameter:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: test
provisioner: kubernetes.io/azure-disk
parameters:
  fstype: xfs
```

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #37801

**Special notes for your reviewer**:
The PR also implicitly adds checks for unsupported parameters.

**Release note**:

```release-note
Support specifying FSType in StorageClass
```
2017-07-24 07:40:47 -07:00
ymqytw 9b393a83d4 update godep 2017-07-20 11:03:49 -07:00
ymqytw 3dfc8bf7f3 update import 2017-07-20 11:03:49 -07:00
Alexander Block 8057056d1c Support "fstype" parameter in dynamically provisioned PVs 2017-07-19 10:34:13 +02:00
Jacob Simpson 29c1b81d4c Scripted migration from clientset_generated to client-go. 2017-07-17 15:05:37 -07:00
Mike Danese c201553f27 remove some people from OWNERS so they don't get reviews anymore
These are googlers who don't work on the project anymore but are still
getting reviews assigned to them:
- bprashanth
- rjnagal
- vmarmol
2017-07-13 10:02:21 -07:00
Chao Xu 60604f8818 run hack/update-all 2017-06-22 11:31:03 -07:00
Chao Xu f4989a45a5 run root-rewrite-v1-..., compile 2017-06-22 10:25:57 -07:00
mbohlool c91a12d205 Remove all references to types.UnixUserID and types.UnixGroupID 2017-06-21 04:09:07 -07:00
Matthew Wong 5e788a6a67 Don't provision for PVCs with AccessModes unsupported by plugin 2017-06-12 12:56:41 -04:00
FengyunPan 1f47323187 Wait for attach operation to finish rather than returning nil 2017-06-06 22:58:44 +08:00
deads2k 954eb3ceb9 move labels to components which own the APIs 2017-05-31 10:32:06 -04:00
FengyunPan 300f531389 Wait for detach operation to complete
When a volume's status is 'detaching', the controller-manager will detach
it again and return an error. It is necessary to wait for the detach
operation to complete within the allotted time.
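A hedged sketch of the waiting loop using apimachinery's wait helpers (the interval, timeout, and attachment check are illustrative):

```go
package openstack

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitDiskDetached polls until the volume leaves the 'detaching' state
// instead of immediately issuing another detach and returning an error.
func waitDiskDetached(instanceID, volumeID string) error {
	return wait.Poll(2*time.Second, 60*time.Second, func() (bool, error) {
		attached, err := diskIsAttached(instanceID, volumeID) // hypothetical helper
		if err != nil {
			return false, err
		}
		return !attached, nil // done once the volume is fully detached
	})
}

// diskIsAttached is a stub for illustration.
func diskIsAttached(instanceID, volumeID string) (bool, error) {
	return false, nil
}
```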
2017-05-31 07:52:15 +08:00
NickrenREN a02d6cd5d8 Add createdby annotation for rbd and quobyte and make the dynamic createdby key a const
2017-05-26 16:55:11 +08:00
Jamie Hannaford 4bd71a3b77 Refactor to use Volume IDs and remove ambiguity 2017-05-24 12:59:16 +02:00
FengyunPan 4a6e1f2a1d Don't return err when volume's status is 'attaching'
When a volume's status is 'attaching', its attachments will be None,
so the controller-manager can't get the device path and emits spurious
failure events. This state is normal, so don't return an error for it.
2017-05-12 19:53:50 +08:00
Kubernetes Submit Queue 49626c975b Merge pull request #44798 from zetaab/master
Automatic merge from submit-queue

Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones

**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storageclass, the volume is provisioned to the zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones, which is currently not possible with statefulsets and the cinder storageclass. With this new solution, if the zone is left empty, an algorithm chooses the zone for the cinder volume, in the same style as the aws and gce storageclass solutions.

**Which issue this PR fixes** fixes #44735

**Special notes for your reviewer**:

example:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```

If this example is deployed, it will automatically create one replica per AZ. This helps us a lot in building HA databases.

The current storageclass for cinder is not perfect for statefulsets. Let's assume the cinder storageclass is defined to be in the zone called nova; because labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, a cinder volume located in zone x cannot be used from zone y. Should we offer the possibility of cross-zone cinder mounts at all? In my opinion, mounting a volume from a zone other than the pod's is not a good way of doing things (it means more network traffic between zones). What do you think? The new solution does not allow it anymore (should we make it possible to allow it? That would mean removing the labels from the PV).

There might be some things that still need to be fixed in this release, and I need help with that. Some parts of the code are not perfect.

Issues I am thinking about (I need some help with these):
1) Can everybody see in OpenStack which AZ their servers are in? Could there be an access policy that hides it? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really necessary to create a new service client using openstack.NewComputeV2, or could I somehow reuse an existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1 and there are other servers in the same tenant in zone 2, a cinder volume might be provisioned to zone 2 even though Kubernetes has no nodes there, so the pod cannot start. Could we have a better way to fetch the zones of the Kubernetes nodes? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels to the nodes manually. If the zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place them.


cc @rootfs @anguslees @jsafrane 

```release-note
The default behaviour of the cinder storageclass has changed. If availability is not specified, the zone is chosen by an algorithm, which makes it possible to spread stateful pods across many zones.
```
2017-05-09 08:10:44 -07:00
Jan Safranek 016c34ff40 Tune Cinder approvers 2017-05-09 12:59:35 +02:00
Jamie Hannaford 9440a68744 Use dedicated Unix User and Group ID types 2017-05-05 14:07:38 +02:00
Jesse Haka 66e49eecca add the possibility to leave the AZ empty, in which case a zone is chosen automatically
update bazel

fix gofmt

make getzones function lowercase

add az to log
2017-05-03 16:37:20 +03:00
saadali eacc48373b Remove rkouj from owners files. 2017-04-28 17:14:38 -07:00
Mike Danese a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
wlan0 a68c783dc8 Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local information to obtain it. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node. Only one active instance of it will be run in the cluster.

Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for a node in an external database (such as
the instanceID in the aws cloud).
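For example, the provider ID is carried on the node object itself (AWS format shown; each cloud has its own scheme):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// A ProviderID uniquely identifies the instance in the cloud's own
	// database, unlike a node name, which is only unique within the cluster.
	n := &v1.Node{}
	n.Spec.ProviderID = "aws:///us-west-2a/i-0123456789abcdef0"
	fmt.Println(n.Spec.ProviderID)
}
```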
2017-03-27 23:13:13 -07:00
Hemant Kumar 786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
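A sketch of the optional-interface approach (the signature is assumed from the description; plugins that don't implement it keep the per-volume path):

```go
package volume

import "k8s.io/apimachinery/pkg/types"

// Spec is stubbed for illustration; the real type wraps a volume definition.
type Spec struct{}

// BulkVolumeVerifier is implemented only by plugins that can verify many
// attachments in a single cloud call; all other plugins keep being polled
// volume by volume.
type BulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[types.NodeName][]*Spec) (map[types.NodeName]map[*Spec]bool, error)
}
```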
2017-03-02 14:59:59 -05:00
Hemant Kumar 2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
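Sketched, assuming the beta annotation key in use at the time (mount options later moved into the PV spec and, per the commit above, into storageClass.mountOptions):

```go
package volumeutil

import (
	"strings"

	v1 "k8s.io/api/core/v1"
)

// mountOptionsFromPV reads mount options from the PV annotation; a missing
// annotation means no extra options.
func mountOptionsFromPV(pv *v1.PersistentVolume) []string {
	opts, ok := pv.Annotations["volume.beta.kubernetes.io/mount-options"]
	if !ok {
		return nil
	}
	return strings.Split(opts, ",")
}
```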
2017-03-01 11:50:40 -05:00
Ferdinand Hübner 8fd0624bc4 resolve udevadm from PATH 2017-02-10 22:22:32 +01:00
Kubernetes Submit Queue 6ea92b47eb Merge pull request #39998 from DukeXar/cinder_instance_id
Automatic merge from submit-queue (batch tested with PRs 41246, 39998)

Cinder volume attacher: use instanceID instead of NodeID when verifying attachment

**What this PR does / why we need it**: The Cinder volume attacher incorrectly uses the NodeID instead of the OpenStack instance ID, causing reconciliation to fail.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39978 

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-02-10 07:53:58 -08:00
Kubernetes Submit Queue 1cd06fbcf0 Merge pull request #38797 from aaron12134/spell-obsession
Automatic merge from submit-queue (batch tested with PRs 38772, 38797, 40732, 40740)

Synchronous spellcheck for pkg/volume/*

**What this PR does / why we need it**: Increase code readability

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: Minor contribution 

**Release note**:

```release-note
```
2017-01-31 11:00:47 -08:00
Dr. Stefan Schimanski 44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00