Automatic merge from submit-queue (batch tested with PRs 53101, 53158, 52165). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
[OpenStack] Service LoadBalancer defaults to external
**What this PR does / why we need it**:
Let "service.beta.kubernetes.io/openstack-internal-load-balancer" default to false.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
fixes #53078
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
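A minimal sketch of the changed default (the helper and import path below are illustrative, not the PR's exact code):
```
package openstack

import v1 "k8s.io/api/core/v1"

const internalAnnotation = "service.beta.kubernetes.io/openstack-internal-load-balancer"

// internalLB reads the annotation off the Service; with this PR, a missing
// annotation now means an external load balancer.
func internalLB(service *v1.Service) bool {
	val, ok := service.Annotations[internalAnnotation]
	if !ok {
		return false // default: external
	}
	return val == "true"
}
```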
Automatic merge from submit-queue (batch tested with PRs 53157, 52628). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Added openstack instance metadata search order
**What this PR does / why we need it**: This PR adds a search order for the instance metadata retrieval on openstack. More information and discussion can be found on #52378
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52378
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
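A hedged sketch of the ordered lookup; the source names ("configDrive", "metadataService") and helper parameters are assumptions for illustration:
```
package openstack

import (
	"fmt"
	"strings"
)

type instanceMetadata struct{ UUID, Name string }

// getMetadata walks the configured search order (e.g. "configDrive,metadataService")
// and returns the first source that yields metadata.
func getMetadata(searchOrder string, fromConfigDrive, fromMetadataService func() (*instanceMetadata, error)) (*instanceMetadata, error) {
	for _, src := range strings.Split(searchOrder, ",") {
		switch strings.TrimSpace(src) {
		case "configDrive":
			if md, err := fromConfigDrive(); err == nil {
				return md, nil
			}
		case "metadataService":
			if md, err := fromMetadataService(); err == nil {
				return md, nil
			}
		}
	}
	return nil, fmt.Errorf("no metadata source in %q succeeded", searchOrder)
}
```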
When running Kubernetes against an installation of DevStack which
deploys the Cinder service at a path rather than a port (ex:
http://foo.bar/volume rather than http://foo.bar:xxx), the version
detection fails. It is better to use the OpenStack service catalog.
On the other hand, when initializing the cinder client, Kubernetes
already checks the endpoint from the OpenStack service catalog, so we
can perform this version detection through it.
Automatic merge from submit-queue (batch tested with PRs 52751, 52898, 52633, 52611, 52609). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Only register floatingIP for external loadbalancer service
If the user has provided the floating-ip options, then it's safe
to assume they want (only) the floating-ip to be the ingress IP;
if they have not provided floating-ip options, then the LB IP is
the only relevant value.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #52566
**Release note**:
```release-note
Only register the floating IP in the LoadBalancer ingress field for external load balancer services
```
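A minimal sketch of the resulting status assembly, with illustrative parameter names:
```
package openstack

import v1 "k8s.io/api/core/v1"

// lbStatus reports only the floating IP as the ingress IP when one was
// requested; otherwise the Neutron VIP is the only relevant value.
func lbStatus(floatingIP, vip string) *v1.LoadBalancerStatus {
	status := &v1.LoadBalancerStatus{}
	if floatingIP != "" {
		status.Ingress = []v1.LoadBalancerIngress{{IP: floatingIP}}
	} else {
		status.Ingress = []v1.LoadBalancerIngress{{IP: vip}}
	}
	return status
}
```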
Automatic merge from submit-queue (batch tested with PRs 52751, 52898, 52633, 52611, 52609). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix missing floatingip when calling GetLoadBalancer()
If the user specifies floating-network-id, a floating IP and a VIP will
be assigned to the LoadBalancer service, so its status contains both a
floating IP and a VIP, but GetLoadBalancer() only returns the VIP.
**Release note**:
```release-note
GetLoadBalancer() now returns the floating IP when the user specifies floating-network-id, and the LB VIP otherwise.
```
Automatic merge from submit-queue (batch tested with PRs 52880, 52855, 52761, 52885, 52929). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove cloud provider rackspace
**What this PR does / why we need it**:
For now, we have to implement functions in both the `rackspace` and `openstack` packages if we want to add functionality for Cinder, for example [resize for cinder](https://github.com/kubernetes/kubernetes/pull/51498). Since openstack has implemented all the functions rackspace has, and rackspace has been considered deprecated for a long time ([rackspace deprecated](https://github.com/rackspace/gophercloud/issues/592)),
after talking with @mikedanese and @jamiehannaford offline, I sent this PR to remove `rackspace` in favor of `openstack`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52854
**Special notes for your reviewer**:
**Release note**:
```release-note
The Rackspace cloud provider has been removed after a long deprecation period. It was deprecated because it duplicates a lot of the OpenStack logic and can no longer be maintained. Please use the OpenStack cloud provider instead.
```
If the user specifies floating-network-id, a floating IP is assigned to
the LoadBalancer service, so its status contains a floating IP, but
GetLoadBalancer() only returns the VIP.
Automatic merge from submit-queue
Implement GetZoneByProviderID and GetZoneByNodeName for openstack
This is part of #50926
cc @wlan0
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)
Add InstanceExistsByProviderID to cloud provider interface for CCM
**What this PR does / why we need it**:
Currently, [`MonitorNode()`](02b520f0a4/pkg/controller/cloud/nodecontroller.go (L240)) in the node controller checks with the CCM if a node still exists by calling `ExternalID(nodeName)`. `ExternalID` is supposed to return the provider id of a node, which is not supported on every cloud. This means that any cloud that cannot infer the provider id from the node name at a remote location will never remove nodes that no longer exist.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50985
**Special notes for your reviewer**:
We'll want to create a subsequent issue to track the implementation of these two new methods in the cloud providers.
**Release note**:
```release-note
Adds `InstanceExists` and `InstanceExistsByProviderID` to cloud provider interface for the cloud controller manager
```
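An excerpt-style sketch of the interface addition (hedged; the exact signature and the surrounding methods may differ by release):
```
package cloudprovider

// Instances excerpt: the new existence check keyed by provider ID.
type Instances interface {
	// InstanceExistsByProviderID returns whether the instance identified by
	// providerID still exists in the cloud. The CCM node controller can use
	// this instead of ExternalID(nodeName), which not every cloud can answer.
	InstanceExistsByProviderID(providerID string) (bool, error)
	// ... existing methods (NodeAddresses, InstanceID, InstanceType, ...) elided
}
```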
/cc @wlan0 @thockin @andrewsykim @luxas @jhorwit2
/area cloudprovider
/sig cluster-lifecycle
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)
Support for specifying external LoadBalancerIP on openstack
1. Support ServiceAnnotationLoadBalancerFloatingNetworkId for LB v1
2. Support for specifying external LoadBalancerIP on openstack
Add ServiceAnnotationLoadBalancerInternal annotation to distinguish
between internal LoadBalancerIP and external LoadBalancerIP.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #50851
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51244, 50559, 49770, 51194, 50901)
Fix the matching rule of instance ProviderID
url.Parse() can't parse a ProviderID that contains ':///'.
This PR uses a regexp to match the ProviderID instead.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #49769
**Release note**:
```release-note
NONE
```
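A hedged sketch of the regexp approach; the exact pattern in the PR may differ:
```
package openstack

import (
	"fmt"
	"regexp"
)

// providerIDRegexp matches "openstack:///<instance-id>"; url.Parse rejects
// the ":///" form, hence the switch to a regular expression.
var providerIDRegexp = regexp.MustCompile(`^openstack:///([^/]+)$`)

func instanceIDFromProviderID(providerID string) (string, error) {
	matches := providerIDRegexp.FindStringSubmatch(providerID)
	if len(matches) != 2 {
		return "", fmt.Errorf("providerID %q does not match openstack:///<instance-id>", providerID)
	}
	return matches[1], nil
}
```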
Automatic merge from submit-queue
cloudprovider/openstack bug fix: don't try to append pool id if pool doesn't exist
**What this PR does / why we need it**:
This fixes a bug in the OpenStack cloud provider that could cause a panic.
Consider what will happen in the current `LbaasV2.EnsureLoadBalancerDeleted` code if `nil, ErrNotFound` is returned by `getPoolByListenerID`.
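The shape of the guard, reduced to a self-contained sketch (names illustrative):
```
package openstack

import "errors"

var ErrNotFound = errors.New("failed to find object")

type pool struct{ ID string }

// collectPoolID tolerates ErrNotFound and only appends the pool ID when a
// pool actually exists; the old code dereferenced a nil pool and panicked.
func collectPoolID(get func() (*pool, error), poolIDs []string) ([]string, error) {
	p, err := get()
	if err != nil && err != ErrNotFound {
		return poolIDs, err
	}
	if p != nil {
		poolIDs = append(poolIDs, p.ID)
	}
	return poolIDs, nil
}
```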
Automatic merge from submit-queue (batch tested with PRs 38947, 50239, 51115, 51094, 51116)
Mark the volumes as detached when node does not exist
If the node does not exist, its volumes will be detached
automatically and become available, so mark them as detached and do not return an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#50200
**Release note**:
```release-note
NONE
```
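A sketch of the idea with illustrative names (the real code lives in the provider's attach/detach paths):
```
package openstack

import "errors"

var errInstanceNotFound = errors.New("instance not found")

// disksAreAttached: when the node is gone, Nova has already detached its
// volumes, so report every volume as detached instead of returning an error.
func disksAreAttached(lookupInstance func(nodeName string) (string, error), nodeName string, volumeIDs []string) (map[string]bool, error) {
	attached := make(map[string]bool)
	if _, err := lookupInstance(nodeName); errors.Is(err, errInstanceNotFound) {
		for _, id := range volumeIDs {
			attached[id] = false
		}
		return attached, nil
	}
	// ... normal per-volume attachment check elided ...
	return attached, nil
}
```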
If the user specifies floating-network-id via the annotation rather than the cloud
provider file, the openstack cloud provider doesn't delete the floating IP when
deleting the LoadBalancer service.
Currently, if the user doesn't specify subnet-id, or specifies an unsafe
one, the openstack cloud provider can't create a correct LoadBalancer
service.
We can actually determine it automatically; this patch makes that improvement.
This is a part of #50726
If node doesn't exist, OpenStack Nova will assume the volumes
are not attached to it. So mark the volumes as detached and
return false without error.
Fix: #50200
Automatic merge from submit-queue (batch tested with PRs 47724, 49984, 49785, 49803, 49618)
Fix conflict about getPortByIp
**What this PR does / why we need it**:
Currently getPortByIp() gets the port of an instance based only on its IP.
If two instances are in different networks and the CIDRs of
their subnets are the same, getPortByIp() produces conflicting results.
My PR gets the port based on both the IP and the name of the instance.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #43909
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
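A self-contained sketch of the disambiguation; the PR keys on the instance's name, while `DeviceID` here stands in generically for "the port's owner":
```
package openstack

import "fmt"

type neutronPort struct {
	DeviceID string
	FixedIPs []string
}

// getPortByIPAndInstance matches a port on both its fixed IP and its owning
// instance, so identical CIDRs in different networks no longer collide.
func getPortByIPAndInstance(ports []neutronPort, ip, owner string) (*neutronPort, error) {
	for i := range ports {
		if ports[i].DeviceID != owner {
			continue
		}
		for _, fip := range ports[i].FixedIPs {
			if fip == ip {
				return &ports[i], nil
			}
		}
	}
	return nil, fmt.Errorf("no port with IP %s owned by %s", ip, owner)
}
```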
Automatic merge from submit-queue
Ignore the available volume when calling DetachDisk
Fix #50207
If the user detaches the volume via nova in an openstack environment, the volume
becomes available. If the nova instance is deleted, nova detaches the volume
automatically and it becomes available. So "available" is fine, since it means the
volume is already detached from the instance.
**Release note**:
```release-note
NONE
```
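The short-circuit, reduced to a sketch (types and helpers illustrative):
```
package openstack

import "log"

type cinderVolume struct{ ID, Status string }

// detachDisk treats an "available" volume as already detached and succeeds.
func detachDisk(v cinderVolume, novaDetach func(volumeID string) error) error {
	if v.Status == "available" {
		log.Printf("volume %s is available; nothing to detach", v.ID)
		return nil
	}
	return novaDetach(v.ID)
}
```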
Automatic merge from submit-queue (batch tested with PRs 49524, 46760, 50206, 50166, 49603)
[OpenStack] Add more detail error message
I get the same terse error message, "Unable to initialize cinder client
for region: RegionOne", from controller-manager, but I cannot find the
reason. We should add the detail from "err" to the glog.Errorf call.
Currently NewBlockStorageV2() returns an err when it fails to get the cinder endpoint, but there is no code that outputs the message of that err.
**Release note**:
```release-note
NONE
```
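The shape of the logging fix as a sketch (the real code uses glog and the provider's client plumbing):
```
package openstack

import (
	"fmt"
	"log"
)

// newCinderClient surfaces the underlying err in the log line instead of
// reporting only the region.
func newCinderClient(region string, dial func() (interface{}, error)) (interface{}, error) {
	client, err := dial()
	if err != nil {
		log.Printf("Unable to initialize cinder client for region: %s, err: %v", region, err)
		return nil, fmt.Errorf("initializing cinder client: %v", err)
	}
	return client, nil
}
```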
Automatic merge from submit-queue (batch tested with PRs 50087, 39587, 50042, 50241, 49914)
AttachDisk should not call detach inside of Cinder volume provider
If the user detaches the volume via nova in an openstack environment, the volume
becomes available. If the nova instance is deleted, nova detaches the volume
automatically. So "available" is fine, since it means the
volume is already detached from the instance.
Automatic merge from submit-queue (batch tested with PRs 47416, 47408, 49697, 49860, 50162)
add possibility to use multiple floatingip pools in openstack loadbalancer
**What this PR does / why we need it**: Currently only one floating pool is supported in the kubernetes openstack cloud provider. This is quite a big issue for us, because we want to run only a single kubernetes cluster while offering both external and internal services. That means we need the ability to create services with internal as well as external pools.
**Which issue this PR fixes**: fixes #49147
**Special notes for your reviewer**: service labels may not be the correct place to define this floating pool id. However, I could not easily find a better place, and I do not want to start modifying the service API structure.
**Release note**:
```release-note
Add possibility to use multiple floatingip pools in openstack loadbalancer
```
Example of how it works:
```
cat /etc/kubernetes/cloud-config
[Global]
auth-url=https://xxxx
username=xxxx
password=xxxx
region=yyy
tenant-id=b23efb65b1d44b5abd561511f40c565d
domain-name=foobar
[LoadBalancer]
lb-version=v2
subnet-id=aed26269-cd01-4d4e-b0d8-9ec726c4c2ba
lb-method=ROUND_ROBIN
floating-network-id=56e523e7-76cb-477f-80e4-2dc8cf32e3b4
create-monitor=yes
monitor-delay=10s
monitor-timeout=2000s
monitor-max-retries=3
```
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web-ext
  name: web-ext
  namespace: default
spec:
  selector:
    run: web
  ports:
  - port: 80
    name: https
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web-int
    floatingPool: a2a84887-4915-42bf-aaff-2b76688a4ec7
  name: web-int
  namespace: default
spec:
  selector:
    run: web
  ports:
  - port: 80
    name: https
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
```
```
% kubectl create -f example.yaml
deployment "nginx-deployment" created
service "web-ext" created
service "web-int" created
% kubectl get svc -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.254.0.1 <none> 443/TCP 2m <none>
web-ext 10.254.23.153 192.168.1.57,193.xx.xxx.xxx 80:30151/TCP 52s run=web
web-int 10.254.128.141 192.168.1.58,10.222.130.80 80:32431/TCP 52s run=web
```
cc @anguslees @k8s-sig-openstack-feature-requests @dims
if not needed here
load network ids from gophercloud api
fix to getnetworkbyname
update godeps, add networks library
fix gofmt and boilerplate
gofmt
use annotations
fix
remove enableflag
add comment to annotationvalue
Automatic merge from submit-queue (batch tested with PRs 49081, 49318, 49219, 48989, 48486)
Better message if we don't find an appropriate BlockStorage API
**What this PR does / why we need it**:
With the latest devstack, v1 and v2 are DEPRECATED and v3 is marked
as CURRENT. So we fail to attach the disk; the error message is
shown when one does "kubectl describe pod", but the operator has
to dig in to find the problem.
So log a better message if we can't find an appropriate version
of the API that we support, with an explicit error message, so
the operator can see how to fix the situation.
Note support for the v3 block storage API is being added to gophercloud
and will take a bit of time before we can support it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514)
Avoid looking up instance id until we need it
**What this PR does / why we need it**:
Currently kube-controller-manager cannot run outside of a VM started
by openstack (with --cloud-provider=openstack params). We should read
the instance id from the metadata provider, the config drive, or the
file location only when we really need it. In the normal scenario, the
controller-manager uses the node name to get the instance id.
41541910e1/pkg/volume/cinder/attacher.go (L149)
The localInstanceID is currently used only in the test case, so let
us not read it until it is really needed.
So let's try to find the instance-id only when we need it.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
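A hedged sketch of the lazy lookup; the PR itself simply moves the read to the call site, and the caching shown here is illustrative:
```
package openstack

type openStack struct {
	localInstanceID string
}

// instanceID resolves the local instance ID (metadata service, config drive,
// or file) only when a caller actually needs it, so controller-manager can
// run on hosts that are not OpenStack VMs.
func (os *openStack) instanceID(read func() (string, error)) (string, error) {
	if os.localInstanceID == "" {
		id, err := read()
		if err != nil {
			return "", err
		}
		os.localInstanceID = id
	}
	return os.localInstanceID, nil
}
```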
Automatic merge from submit-queue (batch tested with PRs 49276, 49235)
Don't fail fast if LoadBalancer section is missing
**What this PR does / why we need it**:
We should allow scenarios where cinder can be used even if the
operator does not want to use the openstack load balancer. So
let's warn at startup if subnet-id is missing, but fail only
if someone actually tries to use the load balancer.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Current devstack seems to return "id", and an upcoming change using
nova's microversion will be returning "original_name":
https://blueprints.launchpad.net/nova/+spec/instance-flavor-api
So let's just inspect what is present and use that to figure out
the instance type.
Automatic merge from submit-queue (batch tested with PRs 48594, 47042, 48801, 48641, 48243)
Fix panic of DeleteRoute()
Fix #48800
It should be 'addr_pairs', not 'routes'.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47948, 48631, 48693, 48549, 47593)
OpenStack for cloud-controller-manager
**What this PR does / why we need it**:
This implements the `NodeAddressesByProviderID` and `InstanceTypeByProviderID` methods used by the cloud-controller-manager to the OpenStack provider. The instance type returned is the flavor name, for consistency `InstanceType` has been implemented too returning the same value.
```release-note
NONE
```
This is part of #47257 cc @wlan0
Automatic merge from submit-queue
Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, a pool has no monitor,
and attempting to delete the nonexistent monitor fails.
**Release note**:
```release-note
NONE
```
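The guard, as a sketch (the delete helper stands in for the gophercloud call):
```
package openstack

// deleteMonitorIfAny only issues a delete when a monitor ID is actually set;
// with create-monitor=false the pool never had one.
func deleteMonitorIfAny(monitorID string, del func(id string) error) error {
	if monitorID == "" {
		return nil
	}
	return del(monitorID)
}
```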
Automatic merge from submit-queue (batch tested with PRs 47234, 48410, 48514, 48529, 48348)
Check opts of cloud config file
Fix #48347
Check the opts when registering the OpenStack CloudProvider, rather than
returning an error later when the opts are used to create or access cloud resources.
**Release note**:
```release-note
NONE
```
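A sketch of front-loading the validation; the field names are assumptions modeled on the provider's [Global] section:
```
package openstack

import "fmt"

type globalOpts struct {
	AuthURL  string
	Username string
	Password string
}

// checkOpenStackOpts fails at provider registration instead of at first use.
func checkOpenStackOpts(opts *globalOpts) error {
	if opts.AuthURL == "" {
		return fmt.Errorf("auth-url is required in the cloud config")
	}
	if opts.Username == "" || opts.Password == "" {
		return fmt.Errorf("credentials are required in the cloud config")
	}
	return nil
}
```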
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
fix comment mistake
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
update openstack metadata-service url
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Ignore ErrNotFound when delete LB resources
An IsNotFound error is fine, since it means the object is
already deleted, so check for it before returning an error.
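The pattern, reduced to a sketch (errNotFound stands in for gophercloud's 404 error type):
```
package openstack

import "errors"

var errNotFound = errors.New("resource not found")

// deleteIgnoringNotFound: a not-found during teardown means the object is
// already gone, which is the desired end state.
func deleteIgnoringNotFound(del func() error) error {
	if err := del(); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}
```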
Automatic merge from submit-queue
Adapt loadbalancer deleting/updating when using cloudprovider openstack in openstack/liberty
**What this PR does / why we need it**:
Make an extra verification on the returned listeners and pools, because the gophercloud query doesn't filter the results by loadbalancerID / listenerID respectively when using **openstack/liberty**.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#33759
**Special notes for your reviewer**:
#33759 was supposed to have been fixed by an earlier pull request, but the release 1.5 load balancer code doesn't use that patched code.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR provides each cloud provider the ability to generate kubernetes clients. Either the full-access or the service-account client builder is passed from the controller manager. Cloud providers may need to retrieve information from the cluster that isn't provided through defined interfaces, and this seems preferable to adding parameters.
Please leave your thoughts/comments.
**Release note**:
```release-note
NONE
```
When a volume's status is 'attaching', its attachments will be None, so the
controller manager can't get the device path and emits spurious failure events.
But this state is normal; let's fix it.
Automatic merge from submit-queue
Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones
**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storageclass, the volume is provisioned in the zone called nova. However, as mentioned in the issue, we want to spread a statefulset across 3 different zones, which is currently not possible with statefulsets and the cinder storageclass. With this new solution, if the zone is left empty, the algorithm will choose the zone for the cinder volume, in a style similar to the aws and gce storageclass solutions.
**Which issue this PR fixes** fixes #44735
**Special notes for your reviewer**:
example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```
If this example is deployed it will automatically create one replica per AZ. This helps us a lot making HA databases.
The current storageclass for cinder is not perfect for statefulsets. Let's assume the cinder storageclass is defined to be in the zone called nova; because labels are not added to the PV, pods can still be started in any zone. The problem is that, at least in our openstack, it is not possible to use a cinder drive located in zone x from zone y. However, should we have the possibility to choose between cross-zone cinder mounts or not? IMO mounting a volume from a zone other than the one where the pod is located is not a good way of doing things (it means more network traffic between zones). What do you think? The current new solution does not allow that anymore (should we have the possibility to allow it? That would mean removing the labels from the PV).
There might be some things that still need to be fixed in this release, and I need help with that. Some parts of the code are not perfect.
Issues I am thinking about (I need some help with these):
1) Can everybody see in openstack which AZ their servers are in? Could there be an access policy that hides it? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really necessary to create a new serviceclient using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an openstack tenant (project). However, in some cases kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, a cinder drive might be provisioned in zone-2 even though no pod can start there, because kubernetes has no nodes in zone-2. Could we have a better way to fetch the kubernetes nodes' zones? Currently that information is not added to kubernetes node labels automatically in openstack (which I think it should be); I have added those labels manually to the nodes. If the zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place them.
cc @rootfs @anguslees @jsafrane
```release-note
The default behaviour of the cinder storageclass has changed: if availability is not specified, the zone is chosen by an algorithm, making it possible to spread stateful pods across many zones.
```
Automatic merge from submit-queue
Use provided VipPortID for OpenStack LB
**What this PR does / why we need it**:
When creating an OpenStack LoadBalancer, Kubernetes will search through the tenant trying to match the LB's VIP with a port. This is problematic because multiple ports may have the same fixed IP, therefore leading to routing inconsistencies. We should use the port ID provided by the LB's response body instead.
**Which issue this PR fixes**:
https://github.com/kubernetes/kubernetes/issues/43909
**Special notes for your reviewer**:
Since this involves non-deterministic testing, it'd be best if we can run this in a staging environment for a few days before merging (say until early next week).
**Release note**:
```release-note
Fixes an issue during LB creation where ports were incorrectly assigned to a floating IP
```
Automatic merge from submit-queue
cinder: Add support for the KVM virtio-scsi driver
**What this PR does / why we need it**:
The VirtIO SCSI driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
Without this, volume attaching on an openstack cluster using this kvm driver doesn't work
**Special notes for your reviewer**:
Does this need e2e tests? I couldn't find anywhere to add another openstack configuration used in the e2e tests.
Wiki page about this: https://wiki.openstack.org/wiki/Virtio-scsi-for-bdm
**Release note**:
```release-note
cinder: Add support for the KVM virtio-scsi driver
```
Automatic merge from submit-queue (batch tested with PRs 44555, 44238)
openstack: remove field flavor_to_resource
I believe there is no usage of `flavor_to_resource`, and I think there is no need to build that information either.
cc @anguslees
**Release note:**
```release-note
NONE
```
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary(by vendors) called
cloud-controller-manager. The Kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local information to obtain this information. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node. Only one active instance of it will be run in the cluster.
Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This ID is
a unique identifier for identifying a node on an external database (such as
the instanceID in aws cloud).
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)
Add custom CA file to openstack cloud provider config
**What this PR does / why we need it**: Adds ability to specify custom CA bundle file to verify OpenStack endpoint against. Useful in tests and PoC deployments. Similar to what https://github.com/kubernetes/kubernetes/pull/35488 did for authentication.
**Which issue this PR fixes**: None
**Special notes for your reviewer**: Based on https://github.com/kubernetes/kubernetes/pull/35488 which added support for custom CA file for authentication.
**Release note**:
This change migrates the 'openstack' provider and 'keystone'
authenticator plugin to the newer gophercloud/gophercloud library.
Note the 'rackspace' provider still uses rackspace/gophercloud.
Fixes #30404
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)
fix variables in openstack.go to keep camel casing and remove unused var
In cases where an insecure OpenStack endpoint is to be used
(e.g., when testing), gophercloud will fail to connect
to such endpoints. This patch adds support for custom CA
file configuration option, which, when provided, will
make gophercloud validate OpenStack endpoint against
certificate(s) read from file specified in that option.
Automatic merge from submit-queue
Curating Owners: pkg/cloudprovider
cc @runseb @justinsb @kerneltime @mikedanese @svanharmelen @anguslees @brendandburns @abrarshivani @imkin @luomiao @colemickens @ngtuna @dagnello @abithap
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people that have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable, please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
This method has been unused by k8s for some time, and yet is the last
piece of the cloud provider API that encourages provider names to be
human-friendly strings (this method applies a regex to instance names).
Actually removing this deprecated method is part of a long effort to
migrate from instance names to instance IDs in at least the OpenStack
provider plugin.
Automatic merge from submit-queue
openstack: Implement the `Routes` provider API
```release-note
Implement the Routes provider API for OpenStack using Neutron extraroute extension. This removes the need for flannel/etc where supported. To use, ensure all your nodes are on the same Neutron (private) network and specify the router ID in new `[Route]` section of provider config:
[Route]
router-id = <router UUID>
```
This change implements the Routes API using Neutron's "extraroute"
extension.
To use, this requires all the nodes to be on the same Neutron network
and the UUID of the Neutron router on that network.
Required cloud provider config section:
[Route]
router-id = <UUID of Neutron router>
Ensure kube-controllermanager is started with (non-default)
`--allocate-node-cidrs=true` and set `--cluster-cidr` to the POD
super-subnet (a private /16 would be reasonable).
Based on an earlier version by @timbyr (#19473)
Update EnsureLoadBalancer/UpdateLoadBalancer API to use node objects.
In particular, this allows us to take the node address directly from the
node.Status.Addresses and avoids a name -> instance lookup.
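A sketch of what the new signature enables (helper name and address preference are illustrative):
```
package openstack

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeAddressForLB picks the LB target address straight from the Node object,
// avoiding a name -> instance lookup.
func nodeAddressForLB(node *v1.Node) (string, error) {
	for _, addr := range node.Status.Addresses {
		if addr.Type == v1.NodeInternalIP {
			return addr.Address, nil
		}
	}
	return "", fmt.Errorf("no InternalIP found for node %s", node.Name)
}
```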
Automatic merge from submit-queue
Don't rely on device name provided by Cinder
See issue #33128
We can't rely on the device name provided by Cinder, and thus must perform
detection based on the drive serial number (aka its Cinder ID) on the
kubelet itself.
This patch re-works the cinder volume attacher to ignore the supplied
deviceName, and instead defer to the pre-existing GetDevicePath method to
discover the device path based on its serial number and /dev/disk/by-id
mapping.
This new behavior is controlled by a config option, as falling back
to the cinder value when we can't discover a device would risk devices
not showing up, falling back to cinder's guess, and detecting the wrong
disk as attached.
Automatic merge from submit-queue
Correct filtering of OpenStack LBaaS resources to delete
Neutron's API ignores unknown parameters. When listing pools etc, K8
attempts to filter on "LoadBalancerID", which is not a valid filter.
As such, it is ignored by Neutron, and a list of all pools is
returned. K8 then proceeds to delete each of the pools.
Instead, we now double check the resources really belong to the LB
we're trying to delete.
Fixes issue #33759
GetDevicePath was currently coded to only support Nova+KVM style device
paths, update so we also support Nova+ESXi and leave the code such that
new pattern additions are easy.
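A hedged sketch of the pattern-based probing; the exact /dev/disk/by-id prefixes are assumptions, and the list is meant to be easy to extend:
```
package openstack

import (
	"os"
	"strings"
)

// getDevicePath probes candidate /dev/disk/by-id entries for a Cinder volume
// by serial number rather than trusting the device name Cinder reports.
func getDevicePath(volumeID string) string {
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20] // udev truncates long serial numbers
	}
	candidates := []string{
		"/dev/disk/by-id/virtio-" + serial,                   // Nova + KVM (virtio-blk)
		"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_" + serial, // KVM virtio-scsi
		"/dev/disk/by-id/wwn-0x" + strings.Replace(volumeID, "-", "", -1),
	}
	for _, path := range candidates {
		if _, err := os.Stat(path); err == nil {
			return path
		}
	}
	return ""
}
```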
Neutron's API ignores unknown parameters. When listing pools etc, K8
attempts to filter on "LoadBalancerID", which is not a valid filter.
As such, it is ignored by Neutron, and a list of all pools is
returned. K8 then proceeds to update each of the pools.
Instead, we now double check the resources really belong to the LB
we're trying to update.
At the master volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if the node is terminated (the volume
is detached automatically). In that situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back, so pods created on those nodes will fail to mount.
This PR adds the logic to periodically sync up the truth for attached volumes kept in the actual state cache. If the volume is no longer attached to the node, the actual state will be updated to reflect the truth. In turn, reconciler will take actions if needed.
To avoid issuing many concurrent operations against the cloud provider, this PR
adds a batch operation that checks whether a list of volumes is
attached to a node, instead of sending one request per volume.
More details are explained in PR #33760
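The batch shape, as a sketch (the cloud call is abstracted behind a parameter):
```
package openstack

// volumesAreAttached answers for a whole list of volumes with a single cloud
// query per node, instead of one request per volume.
func volumesAreAttached(volumeIDs []string, attachedOnNode func() (map[string]bool, error)) (map[string]bool, error) {
	onNode, err := attachedOnNode()
	if err != nil {
		return nil, err
	}
	result := make(map[string]bool, len(volumeIDs))
	for _, id := range volumeIDs {
		result[id] = onNode[id]
	}
	return result, nil
}
```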
In order to be able to use new mounter library, this PR adds the
mounterPath flag to kubelet which passes the flag to the mount
interface. If flag is empty, mount uses default mount path.
This allows security groups to be created and attached to the neutron
port that the loadbalancer is using on the subnet.
The security group ID that is assigned to the nodes needs to be
provided, to allow traffic from the loadbalancer to the nodePort
to be reflected in the rules.
This adds two config items to the LoadBalancer options -
ManageSecurityGroups (bool)
NodeSecurityGroupID (string)
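A sketch of the two new fields; the gcfg tags are assumptions modeled on the [LoadBalancer] section naming:
```
package openstack

// LoadBalancerOpts excerpt showing only the additions described above.
type LoadBalancerOpts struct {
	ManageSecurityGroups bool   `gcfg:"manage-security-groups"`
	NodeSecurityGroupID  string `gcfg:"node-security-group"`
}
```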
Automatic merge from submit-queue
openstack: Support config-drive and improve CurrentNodeName, GetZone
This PR adds support for fetching local instance metadata via config-drive (as well as querying metadata service), and surfaces some additional metadata information (from either source):
- `CurrentNodeName` now returns the OpenStack instance name, rather than the current hostname (they might not be the same)
- `GetZone` includes availability zone label in `FailureDomain`
Thanks to @kiall for a WIP implementation of the latter.
Previously the OpenStack provider just returned the hostname in
CurrentNodeName. With this change, we return the local OpenStack
instance name, as the API intended.
Config-drive is an alternate no-network method for publishing local
instance metadata on OpenStack. This change implements support for
fetching data from config-drive, and tries it before querying the
network metadata service (since config-drive will fail quickly if not
available).
Note config-drive involves mounting the filesystem with label
"config-2", so anyone using config-drive and running kubelet in a
container will need to ensure /dev/disk/by-label/config-2 is available
inside the container (read-only).
We had another bug where we confused the hostname with the NodeName.
To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
Automatic merge from submit-queue
Fixed a bug that causes k8s to delete all healthmonitors on your OpenStack tenant
**What this PR does / why we need it**:
The OpenStack LBaaS v2 api does not support filtering health monitors by pool_id, so /lbaas/healthmonitors?pool_id=abc123 will always return all health monitors in your OpenStack tenant.
This presents a problem when, in the very next block of code, we loop over the list of monitorIDs and delete them one-by-one. This will delete all the health monitors in your tenant without warning.
Fortunately, we already got the healthmonitor IDs when we built the list of pools. Using those, we can delete only those healthmonitors associated with our pool(s).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
The main issue here was the use of v2_monitors.List(lbaas.network, v2_monitors.ListOpts{PoolID: poolID}). This is trying to filter healthmonitors by pool_id, but that is not supported by the API. It creates a call like /lbaas/healthmonitors?pool_id=abc123. The API server ignores the pool_id parameter and returns a list of all healthmonitors (which k8s then tries to delete).
**Release note**:
```release-note
```
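The fix's shape, as a sketch (the pool struct is pared down for illustration):
```
package openstack

type lbPool struct{ MonitorID string }

// monitorIDsForPools collects monitor IDs from the pools already listed for
// this LB, since the API ignores the pool_id filter on /lbaas/healthmonitors.
func monitorIDsForPools(pools []lbPool) []string {
	var ids []string
	for _, p := range pools {
		if p.MonitorID != "" {
			ids = append(ids, p.MonitorID)
		}
	}
	return ids
}
```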
Automatic merge from submit-queue
update pkg/cloudprovider OWNERS to spread the review load
This is going to make the mungebot start assigning reviews in your cloudprovider packages.
fyi @runseb @dagnello @imkin @anguslees @dagnello
Automatic merge from submit-queue
fix Openstack provider to allow more than one service port for lbaas v2
This resolves bug #30477, where if a service defines multiple ports for the load balancer, the plugin fails with "multiple ports are not supported".
@anguslees @jianhuiz
Automatic merge from submit-queue
openstack: Autodetect LBaaS v1 vs v2
```release-note
* openstack: autodetect LBaaS v1/v2 by querying for available extensions. For most installs, this effectively changes the default from v1 to v2. Existing installs can add "lb-version = v1" to the provider config file to continue to use v1.
```
This removes the need to manually specify the version in all but unusual
cases.
For most installs this will effectively flip the default from
v1 (deprecated) to v2 so conservative existing installs may want to
manually configure "lb-version = v1" before upgrading.
In OpenStack Mitaka, the name field for members was added as an optional
field but does not exist in Liberty. Therefore the current
implementation for lbaas v2 will not work in Liberty.
Automatic merge from submit-queue
Rackspace improvements (OpenStack Cinder)
This adds PV support via Cinder on Rackspace clusters. Rackspace Cloud Block Storage is pretty much vanilla OpenStack Cinder, so there is no need for a separate Volume Plugin. Instead I refactored the Cinder/OpenStack interaction a bit (by introducing a CinderProvider Interface and moving the device path detection logic to the OpenStack part).
Right now this is limited to `AttachDisk` and `DetachDisk`. Creation and deletion of Block Storage is not in scope of this PR.
Also the `ExternalID` and `InstanceID` cloud provider methods have been implemented for Rackspace.
This is a better abstraction than passing in specific pieces of the
Service that each of the cloudproviders may or may not need. For
instance, many of the providers don't need a region, yet this is passed
in. Similarly many of the providers want a string IP for the load
balancer, but it passes in a converted net ip. Affinity is unused by
AWS. A provider change may also require adding a new parameter which has
an effect on all other cloud provider implementations.
Further, this will simplify adding provider specific load balancer
options, such as with labels or some other metadata. For example, we
could add labels for configuring the details of an AWS elastic load
balancer, such as idle timeout on connections, whether it is
internal or external, cross-zone load balancing, and so on.
Authors: @chbatey, @jsravn
Had to move other things around too to avoid a weird api ->
cloudprovider dependency.
Also adding fixes per code reviews.
(This is a squash of the previously approved commits)
We return an error if the user specifies a non-0.0.0.0/0 load balancer
source restriction on OpenStack, where we can't enforce the restriction
(currently).
This refactors #21431 to pull a lot of the code into cloudprovider so it
can be reused by AWS.
It also changes the name of the annotation to be non-GCE specific:
service.beta.kubernetes.io/load-balancer-source-ranges
Fix #21651
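A sketch of the provider-side guard this enables on OpenStack (helper is illustrative; the real annotation parsing lives in shared code):
```
package openstack

import "fmt"

// checkSourceRanges rejects any restriction other than allow-all, since the
// OpenStack provider cannot enforce source ranges (currently).
func checkSourceRanges(sourceRanges []string) error {
	allowAll := len(sourceRanges) == 1 && sourceRanges[0] == "0.0.0.0/0"
	if !allowAll {
		return fmt.Errorf("source range restrictions are not supported for openstack load balancers")
	}
	return nil
}
```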