Commit Graph

105 Commits (f32b390cf08a9afd9f9899e0d97a90eb162b32a8)

Author SHA1 Message Date
Chao Xu 60604f8818 run hack/update-all 2017-06-22 11:31:03 -07:00
Chao Xu f4989a45a5 run root-rewrite-v1-..., compile 2017-06-22 10:25:57 -07:00
mbohlool c91a12d205 Remove all references to types.UnixUserID and types.UnixGroupID 2017-06-21 04:09:07 -07:00
Matthew Wong 5e788a6a67 Don't provision for PVCs with AccessModes unsupported by plugin 2017-06-12 12:56:41 -04:00
FengyunPan 1f47323187 Wait for attach operation to finish rather than returning nil 2017-06-06 22:58:44 +08:00
deads2k 954eb3ceb9 move labels to components which own the APIs 2017-05-31 10:32:06 -04:00
FengyunPan 300f531389 Wait for detach operation to complete
When a volume's status is 'detaching', the controller-manager would
detach it again and return an error. Instead, it is necessary to wait
for the detach operation to complete within the allotted time.
2017-05-31 07:52:15 +08:00
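
A minimal sketch of that wait loop, assuming a hypothetical VolumeGetter client (the real code polls the Cinder API):

```go
package cinder

import (
	"errors"
	"time"
)

// VolumeGetter is a hypothetical stand-in for the Cinder volume API.
type VolumeGetter interface {
	GetStatus(volumeID string) (string, error)
}

// waitForDetach polls the volume status until it leaves "detaching",
// rather than issuing a second detach and failing immediately.
func waitForDetach(c VolumeGetter, volumeID string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		status, err := c.GetStatus(volumeID)
		if err != nil {
			return err
		}
		if status != "detaching" {
			return nil // detach finished (or volume reached another terminal state)
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for volume to finish detaching")
}
```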
NickrenREN a02d6cd5d8 Add createdby annotation for rbd and quobyte and make the dynamic createdby key a const
2017-05-26 16:55:11 +08:00
Jamie Hannaford 4bd71a3b77 Refactor to use Volume IDs and remove ambiguity 2017-05-24 12:59:16 +02:00
FengyunPan 4a6e1f2a1d Don't return err when volume's status is 'attaching'
When a volume's status is 'attaching', its attachments are None, so the
controller-manager cannot get a device path and emits spurious failure
events. This state is normal, so treat it as attach-in-progress rather
than an error.
2017-05-12 19:53:50 +08:00
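
A hedged sketch of that fix: when the volume reports 'attaching', signal attach-in-progress instead of an error. Names here are illustrative, not the actual cinder plugin API:

```go
package cinder

import "fmt"

// diskStatus is a hypothetical snapshot of a Cinder volume's state.
type diskStatus struct {
	Status     string
	DevicePath string
}

// devicePathForAttach returns the device path once the volume is attached.
// While the volume is still "attaching" it has no attachments yet, so we
// report "pending" rather than an error and let the caller retry.
func devicePathForAttach(d diskStatus) (path string, pending bool, err error) {
	switch d.Status {
	case "attaching":
		return "", true, nil // attach in progress: no device path yet, not an error
	case "in-use":
		return d.DevicePath, false, nil
	default:
		return "", false, fmt.Errorf("unexpected volume status %q", d.Status)
	}
}
```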
Kubernetes Submit Queue 49626c975b Merge pull request #44798 from zetaab/master
Automatic merge from submit-queue

Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones

**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storage class, the volume is provisioned to a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones, which is currently not possible with statefulsets and the cinder storage class. With this new solution, if the zone is left empty, the algorithm chooses a zone for the cinder volume in a similar style to the aws and gce storage class solutions.

**Which issue this PR fixes** fixes #44735

**Special notes for your reviewer**:

example:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```

If this example is deployed, it will automatically create one replica per AZ, which helps a lot when building HA databases.
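
The spreading behaviour can be sketched roughly like the aws/gce approach the PR references: hash the claim's base name and add its trailing ordinal to index into the sorted zone list, so claims like storage-mysql-0/1/2 rotate through zones. This is an illustrative sketch, not the exact upstream function:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
	"strings"
)

// chooseZoneForVolume spreads claims across zones by hashing the claim's
// base name and adding its trailing ordinal (if any), so StatefulSet
// claims like storage-mysql-0, storage-mysql-1 land in different zones.
func chooseZoneForVolume(zones []string, pvcName string) string {
	sort.Strings(zones)
	base, index := pvcName, 0
	if i := strings.LastIndex(pvcName, "-"); i != -1 {
		if n, err := strconv.Atoi(pvcName[i+1:]); err == nil {
			base, index = pvcName[:i], n
		}
	}
	h := fnv.New32a()
	h.Write([]byte(base))
	return zones[(int(h.Sum32())+index)%len(zones)]
}

func main() {
	zones := []string{"zone-1", "zone-2", "zone-3"}
	for i := 0; i < 3; i++ {
		pvc := fmt.Sprintf("storage-mysql-%d", i)
		fmt.Println(pvc, "->", chooseZoneForVolume(zones, pvc))
	}
}
```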

The current storage class for cinder is not ideal for statefulsets. Let's assume the cinder storage class is defined to be in the zone called nova; because labels are not added to the PV, pods can still be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a cinder volume located in zone x from zone y. Still, should we offer the possibility of cross-zone cinder mounts or not? In my opinion, mounting a volume from a zone other than the one where the pod is located is not a good way of doing things (it means more network traffic between zones). What do you think? The current new solution no longer allows it (should we allow it? That would mean removing the labels from the PV).

There might still be some things that need to be fixed in this release, and I need help with that; some parts of the code are not perfect.

Issues I am still thinking about (I need some help with these):
1) Can everybody in OpenStack see what AZ their servers are in? Could there be an access policy that hides it? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really necessary to create a new service client using openstack.NewComputeV2, or could I somehow reuse the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases kubernetes may be deployed to only one zone. If the kube servers are located in zone 1 and there are other servers in the same tenant in zone 2, a cinder volume might be provisioned to zone-2 even though no pod can start there, because kubernetes has no nodes in zone-2. Could we fetch the kubernetes nodes' zones in a better way? Currently that information is not added to kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels to the nodes manually. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place them.

cc @rootfs @anguslees @jsafrane

```release-note
The default behaviour of the cinder storage class has changed: if availability is not specified, the zone is chosen by an algorithm, which makes it possible to spread stateful pods across many zones.
```
2017-05-09 08:10:44 -07:00
Jamie Hannaford 9440a68744 Use dedicated Unix User and Group ID types 2017-05-05 14:07:38 +02:00
Jesse Haka 66e49eecca add possibility to leave the AZ empty, in which case a zone is chosen for it automatically
update bazel

fix gofmt

make getzones function lowercase

add az to log
2017-05-03 16:37:20 +03:00
saadali eacc48373b Remove rkouj from owners files. 2017-04-28 17:14:38 -07:00
Mike Danese a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
wlan0 a68c783dc8 Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary(by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider for information
about the node it is running on. Some cloudproviders obtained this information
by querying local state; in the new world, local state cannot be relied on,
since cloud-controller-manager will not run on every node. Only one active
instance of it will run in the cluster.

Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
Addressing nodes by nodename will not work in the future, because local services
cannot be queried to uniquely identify a node in the cloud. Therefore, I propose
that we perform all cloudprovider calls based on ProviderID, a unique identifier
for a node in an external database (such as the instanceID in the aws cloud).
2017-03-27 23:13:13 -07:00
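
A sketch of what ProviderID-based addressing looks like, with hypothetical interfaces (the real cloudprovider API differs in detail):

```go
package cloudprovider

import (
	"fmt"
	"strings"
)

// Instances is a hypothetical slice of the cloudprovider contract after the
// refactor: nodes are addressed by a cloud-unique ProviderID, not by the
// kubernetes node name (which is only unique within one cluster).
type Instances interface {
	InstanceExistsByProviderID(providerID string) (bool, error)
	InstanceShutdownByProviderID(providerID string) (bool, error)
}

// parseProviderID splits "<provider>://<path>" (e.g. "aws://us-east-1a/i-0abc")
// into the provider name and the cloud-specific remainder.
func parseProviderID(providerID string) (provider, rest string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("invalid ProviderID %q", providerID)
	}
	return parts[0], parts[1], nil
}
```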
Hemant Kumar 786da1de12 Implement bulk polling of volumes
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564

But it changes the implementation to use an interface
and doesn't affect other implementations.
2017-03-02 14:59:59 -05:00
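
A sketch of the interface-based approach, with illustrative names: plugins that can verify many volumes per node in one cloud call implement the interface, and callers type-assert, falling back to per-volume polling otherwise:

```go
package volume

// BulkVolumeVerifier is a hypothetical opt-in interface for attachers that
// can check many volumes in a single cloud API round trip.
type BulkVolumeVerifier interface {
	// BulkVerifyVolumes reports, per node, which of the given volume IDs
	// are still attached.
	BulkVerifyVolumes(volumesByNode map[string][]string) (map[string]map[string]bool, error)
}

// verify batches the check when the plugin supports it; otherwise the
// caller falls back to one-at-a-time polling, leaving other plugin
// implementations unaffected.
func verify(plugin interface{}, volumesByNode map[string][]string) (map[string]map[string]bool, bool) {
	bv, ok := plugin.(BulkVolumeVerifier)
	if !ok {
		return nil, false // no batch support: fall back to per-volume checks
	}
	res, err := bv.BulkVerifyVolumes(volumesByNode)
	if err != nil {
		return nil, false
	}
	return res, true
}
```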
Hemant Kumar 2d3008fc56 Implement support for mount options in PVs
Add support for mount options via annotations on PVs
2017-03-01 11:50:40 -05:00
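
A sketch of reading comma-separated mount options from the PV annotation; the beta key shown is the one this change used, while the parsing helper itself is illustrative:

```go
package volumeutil

import "strings"

const mountOptionAnnotation = "volume.beta.kubernetes.io/mount-options"

// mountOptionsFromAnnotations extracts mount options from a PV's
// annotations map, e.g. "ro,noatime" -> ["ro", "noatime"].
func mountOptionsFromAnnotations(annotations map[string]string) []string {
	raw, ok := annotations[mountOptionAnnotation]
	if !ok || raw == "" {
		return nil
	}
	var opts []string
	for _, o := range strings.Split(raw, ",") {
		if o = strings.TrimSpace(o); o != "" {
			opts = append(opts, o)
		}
	}
	return opts
}
```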
Ferdinand Hübner 8fd0624bc4 resolve udevadm from PATH 2017-02-10 22:22:32 +01:00
Kubernetes Submit Queue 6ea92b47eb Merge pull request #39998 from DukeXar/cinder_instance_id
Automatic merge from submit-queue (batch tested with PRs 41246, 39998)

Cinder volume attacher: use instanceID instead of NodeID when verifying attachment

**What this PR does / why we need it**: The Cinder volume attacher incorrectly uses the NodeID instead of the OpenStack instance ID, so reconciliation fails.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39978 

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-02-10 07:53:58 -08:00
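
Roughly, the fix resolves the node to its OpenStack instance ID before comparing it against the volume's attachment records (hypothetical types, for illustration):

```go
package cinder

// attachment mirrors what Cinder reports for an attached volume.
type attachment struct {
	ServerID string // the OpenStack instance UUID, not the kubernetes node name
}

// diskIsAttached must compare against the instance ID resolved from the
// node, not the raw NodeID/node name, or reconciliation never matches.
func diskIsAttached(instanceID string, attachments []attachment) bool {
	for _, a := range attachments {
		if a.ServerID == instanceID {
			return true
		}
	}
	return false
}
```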
Kubernetes Submit Queue 1cd06fbcf0 Merge pull request #38797 from aaron12134/spell-obsession
Automatic merge from submit-queue (batch tested with PRs 38772, 38797, 40732, 40740)

Synchronous spellcheck for pkg/volume/*

**What this PR does / why we need it**: Increase code readability

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: Minor contribution 

**Release note**:

```release-note
```
2017-01-31 11:00:47 -08:00
Dr. Stefan Schimanski 44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
Dr. Stefan Schimanski bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
Kubernetes Submit Queue f18a921a03 Merge pull request #40311 from deads2k/client-13-move-util
Automatic merge from submit-queue (batch tested with PRs 40299, 40311)

move authoritative client-go util out of pkg

Move the authoritative `client-go/pkg/util` packages to `client-go/util` to make it easier to reason about what comes from where.
2017-01-24 08:59:59 -08:00
Kubernetes Submit Queue 68f123dfa0 Merge pull request #37275 from xiangfeiz/cinder-rescan-scsi
Automatic merge from submit-queue

Adding rescan scsi controller for cinder

For the lsilogic SCSI controller, an attached cinder volume does not
appear under /dev/ automatically unless a rescan is performed.
This approach was used in the vSphere volume provider before PR #27496
dropped support for the lsilogic SCSI controller.
2017-01-24 06:24:59 -08:00
deads2k 5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
deads2k ee6752ef20 find and replace 2017-01-20 08:04:53 -05:00
Kubernetes Submit Queue f56b606985 Merge pull request #36520 from apelisse/owners-pkg-volume
Automatic merge from submit-queue

Curating Owners: pkg/volume

cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in what directories.  Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull-request is made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future from the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use an ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example).
2017-01-17 19:56:39 -08:00
Anton Klautsan 084d801e0a Add unit-tests for DisksAreAttached 2017-01-18 01:55:39 +00:00
Anton Klautsan 2267588d95 Cinder volume attacher: use instanceID not NodeID 2017-01-18 01:52:46 +00:00
Saad Ali 16cbb574e4 Update OWNERS 2017-01-17 16:20:24 -08:00
Clayton Coleman 9a2a50cda7 refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
rkouj 32766e3b6d Check if path exists before performing unmount 2017-01-11 14:33:05 -08:00
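
A small sketch of that guard, using os.Stat; the helper name is illustrative:

```go
package mount

import (
	"fmt"
	"os"
)

// unmountIfExists skips the unmount when the mount path is already gone,
// instead of surfacing a spurious error from the unmount syscall.
func unmountIfExists(path string, unmount func(string) error) error {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		return nil // nothing at this path; treat as already unmounted
	} else if err != nil {
		return fmt.Errorf("checking %s: %v", path, err)
	}
	return unmount(path)
}
```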
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Jeff Grafton 20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
zdj6373 84316ad559 "Attach" function records information collation 2017-01-04 16:42:24 +08:00
Mike Danese 161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Kubernetes Submit Queue d4bf500e73 Merge pull request #39055 from anguslees/detach
Automatic merge from submit-queue (batch tested with PRs 39152, 39142, 39055)

openstack: Forcibly detach an attached cinder volume before attaching elsewhere

Fixes #33288

**What this PR does / why we need it**:
Without this fix, we can't preemptively reschedule pods with persistent volumes to other hosts (for rebalancing or hardware failure recovery).

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #33288

**Special notes for your reviewer**:
(This is a resurrection/cleanup of PR #33734, originally authored by @Rotwang)

**Release note**:
2016-12-27 17:10:14 -08:00
Angus Lees fa1d6f3838 Forcibly detach an attached volume before attaching elsewhere
Fixes #33288

Co-Authored-By: @Rotwang
2016-12-21 11:57:10 +11:00
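
The behaviour sketches as: if the volume is already attached to a different instance, detach it there first, then attach (hypothetical client interface):

```go
package cinder

import "fmt"

// cinderClient is a hypothetical subset of the volume API used here.
type cinderClient interface {
	AttachedInstance(volumeID string) (instanceID string, attached bool, err error)
	Detach(volumeID, instanceID string) error
	Attach(volumeID, instanceID string) error
}

// attachWithPreemption forcibly frees a volume attached elsewhere so a
// rescheduled pod can use it on its new node.
func attachWithPreemption(c cinderClient, volumeID, targetInstance string) error {
	current, attached, err := c.AttachedInstance(volumeID)
	if err != nil {
		return err
	}
	if attached && current != targetInstance {
		// Volume is held by another (possibly dead) node: detach it first.
		if err := c.Detach(volumeID, current); err != nil {
			return fmt.Errorf("preemptive detach from %s: %v", current, err)
		}
	}
	return c.Attach(volumeID, targetInstance)
}
```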
NickrenREN 430abfbdfe cinder attacher GetDeviceMountPath
Add a function to test the GetDeviceMountPath return value
2016-12-20 10:15:34 +08:00
aaronxu 37f5d4d719 Synchronous spellcheck for pkg/volume/* 2016-12-14 20:07:10 -08:00
Mike Danese c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Chao Xu bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Chao Xu bb675d395f dependencies: pkg/volume 2016-11-23 15:53:09 -08:00
Xiangfei Zhu 89c0aa735a Adding rescan scsi controller for cinder
For the lsilogic SCSI controller, an attached cinder volume does not
appear under /dev/ automatically unless a rescan is performed.
This approach was used in the vSphere volume provider before PR #27496
dropped support for the lsilogic SCSI controller.
2016-11-21 22:49:18 -08:00
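
On Linux, such a rescan amounts to writing "- - -" to each SCSI host's scan file; a hedged sketch with simplified error handling:

```go
package scsi

import (
	"os"
	"path/filepath"
)

// rescanSCSIBus asks every SCSI host controller to rescan its bus so a
// newly attached volume shows up under /dev/ without a reboot.
func rescanSCSIBus() error {
	hosts, err := filepath.Glob("/sys/class/scsi_host/host*")
	if err != nil {
		return err
	}
	for _, host := range hosts {
		// "- - -" means: scan all channels, all targets, all LUNs.
		scanFile := filepath.Join(host, "scan")
		if err := os.WriteFile(scanFile, []byte("- - -"), 0o200); err != nil {
			return err
		}
	}
	return nil
}
```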
Jing Xu 3d3e44e77e fix issue in converting aws volume id from mount paths
This PR fixes the issue of converting an aws volume id from its mount
path. Currently three aws volume id formats are supported. The
following lists an example of each format and its corresponding
global mount path:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1/vol-123456)

For the first two cases, we need to check the mount path and convert
it back to the original format.
2016-11-16 22:35:48 -08:00
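
The conversion can be sketched as: take the path suffix after mounts/aws/ and rebuild the aws:// form when a zone component is present (illustrative and simplified):

```go
package aws

import (
	"fmt"
	"strings"
)

// volumeIDFromMountPath reverses a global mount path back to the volume
// ID format the rest of the code expects.
func volumeIDFromMountPath(mountPath string) (string, error) {
	const marker = "mounts/aws/"
	i := strings.Index(mountPath, marker)
	if i == -1 {
		return "", fmt.Errorf("unexpected mount path %q", mountPath)
	}
	suffix := mountPath[i+len(marker):] // "vol-123456" or "us-east-1/vol-123456"
	parts := strings.Split(suffix, "/")
	switch len(parts) {
	case 1:
		return parts[0], nil // bare "vol-123456"
	case 2:
		return fmt.Sprintf("aws://%s/%s", parts[0], parts[1]), nil
	default:
		return "", fmt.Errorf("unexpected mount path %q", mountPath)
	}
}
```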
Rajat Ramesh Koujalagi d81e216fc6 Better error messages when the host is missing volume components needed to perform a mount 2016-11-09 15:16:11 -08:00
Antoine Pelisse fd510b1207 Update OWNERS approvers and reviewers: pkg/volume 2016-11-09 10:17:36 -08:00
Kubernetes Submit Queue 33dab1d555 Merge pull request #35629 from hpcloud/bug/33128-unused-waitfordetach
Automatic merge from submit-queue

Remove unused WaitForDetach from Detacher interface and plugins

See issue #33128 and PR #33270

We can't rely on the device name provided by OpenStack Cinder, and thus
must perform detection based on the drive serial number (a.k.a. its cinder ID)
on the kubelet itself.

This needs to be removed now, as part of #33128, as the code can't be
updated to attempt device detection and fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if cinder has
reported a deviceName that another volume has in reality used,
then this will block forever (or until that other, unrelated
volume has been detached)
2016-11-06 04:52:23 -08:00
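
On the kubelet, device detection by serial number can be sketched as scanning /dev/disk/by-id for a link whose name embeds the (possibly truncated) Cinder volume ID; the 20-character truncation assumed here is typical of virtio, but details vary by hypervisor:

```go
package cinder

import (
	"fmt"
	"path/filepath"
)

// findDeviceByVolumeID locates the block device for a Cinder volume by its
// serial number instead of trusting the deviceName Cinder reported.
func findDeviceByVolumeID(volumeID string) (string, error) {
	// virtio exposes only a truncated serial, so match on a prefix.
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20]
	}
	candidates, err := filepath.Glob("/dev/disk/by-id/*" + serial + "*")
	if err != nil || len(candidates) == 0 {
		return "", fmt.Errorf("no device found for volume %s: %v", volumeID, err)
	}
	// Resolve the by-id symlink to the real device node, e.g. /dev/vdb.
	return filepath.EvalSymlinks(candidates[0])
}
```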
Kubernetes Submit Queue 43a915e628 Merge pull request #35491 from pmorie/byebye-getrootcontext
Automatic merge from submit-queue

Remove GetRootContext method from VolumeHost interface

Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.

Per #33951 and #35127.

Depends on #33663; only the last commit is relevant to this PR.
2016-11-06 01:09:19 -08:00