Automatic merge from submit-queue
Error message says the opposite of its intended meaning
When master is not equal to expectedMaster, the message should say that the master is unexpected:
master, err := mesosCloud.Master(clusterName)
if master != expectedMaster {
t.Fatalf("Master returns the expected value: (expected: %#v, actual: %#v", expectedMaster, master)
Automatic merge from submit-queue
Format verb count not consistent with the number of arguments
The glog.Infof format string has one verb too few; a %s should be added for the second argument because loadBalancerName is a string:
func (c *Cloud) ensureLoadBalancer(namespacedName types.NamespacedName, loadBalancerName string, ...
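A call with matching verbs would look something like this (the format string and first argument are illustrative, not the provider's actual log line):
```go
glog.Infof("Ensuring load balancer %v with name %s", namespacedName, loadBalancerName)
```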
Automatic merge from submit-queue
AWS: Added experimental option to skip zone check
This pull request resolves #28380. In the vast majority of cases, it is appropriate to validate the AWS region against a known set of regions. However, there is an edge case where this is undesirable: Kubernetes may be deployed in an AWS-like environment where the region is not one of the known regions.
By adding the optional **DisableStrictZoneCheck true** to the **[Global]** section of the aws.conf file (e.g. /etc/aws/aws.conf), one can bypass the region validation.
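A minimal aws.conf sketch of the option described above (any other settings in the file are omitted):
```
[Global]
DisableStrictZoneCheck = true
```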
This allows the user to set "working-dir" in their vsphere.cfg file. The value should be a path in the vSphere datastore in which the provider will look for VMs.
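A vsphere.cfg sketch; the section name and the example folder path are assumptions for illustration:
```
[Global]
working-dir = "kubernetes/nodes"
```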
Automatic merge from submit-queue
vSphere provider - Getting node data by IP instead of UUID
To get the UUID we need the service to be running as root. This change allows us to run the controller-manager and API server as non-root.
Automatic merge from submit-queue
Adding SCSI controller type filter for vSphere disk attach
Hot plug of disks to a SCSI controller of type lsilogic doesn't work as expected. When a device is detached from the controller, it fails to remove the device from the /dev path, which causes subsequent attaches to the node to fail. With SCSI controller types lsilogic-sas or paravirtual this seems to work well. This patch filters the existing controllers for these types, and if it doesn't find one, it creates a new controller for disk attach.
This PR is dependent on https://github.com/kubernetes/kubernetes/pull/26658 (1st commit); also targeting this for 1.3.
Automatic merge from submit-queue
Removing name field from Member for compatibility with OpenStack Liberty
In OpenStack Mitaka, the name field for members was added as an optional field, but it does not exist in Liberty. Therefore the current implementation for LBaaS v2 will not work in Liberty.
Automatic merge from submit-queue
AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.
We hash the volume name so we don't bias to the first few zones.
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset. In that case we hash
the base name.
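A minimal sketch of the heuristic described above (the hash function, helper names, and zone list handling are illustrative, not the actual implementation):
```go
package main

import (
	"fmt"
	"hash/fnv"
	"regexp"
	"sort"
	"strconv"
)

// petSetVolume matches names that "look like" PetSet volume names, e.g. "data-web-3".
var petSetVolume = regexp.MustCompile(`^(.+)-([0-9]+)$`)

// chooseZoneForVolume spreads volumes across zones by hashing the volume name,
// so we don't bias toward the first few zones. For PetSet-style names the
// trailing ordinal is used as an offset and the base name is hashed instead,
// so members of one set land in different zones.
func chooseZoneForVolume(zones []string, volumeName string) string {
	sort.Strings(zones) // stable ordering: the same name always maps to the same zone
	hashInput, offset := volumeName, 0
	if m := petSetVolume.FindStringSubmatch(volumeName); m != nil {
		hashInput = m[1]
		offset, _ = strconv.Atoi(m[2])
	}
	h := fnv.New32a()
	h.Write([]byte(hashInput))
	return zones[(h.Sum32()+uint32(offset))%uint32(len(zones))]
}

func main() {
	zones := []string{"us-east-1a", "us-east-1b", "us-east-1c"}
	for _, name := range []string{"data-web-0", "data-web-1", "data-web-2"} {
		fmt.Printf("%s -> %s\n", name, chooseZoneForVolume(zones, name))
	}
}
```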
Automatic merge from submit-queue
GCE provider: Create TargetPool with 200 instances, then update with rest
Tested with 2000 nodes, this actually meets the GCE API specifications (which is nutty). Previous PR (#25178) was based on a mistaken understanding of a poorly documented set of limitations, and even poorer testing, for which I am embarrassed.
Also includes the revert of #25178 (review commits separately).
Lots of comments describing the heuristics, how it fits together and the
limitations.
In particular, we can't guarantee correct volume placement if the set of zones changes between volume allocations.
Filters can't exceed 4k, and GET requests against the GCE API are also
limited, so these break down in different ways at different cluster
counts. Fix it by introducing an advisory node-instance-prefix
configuration in the GCE provider that can hint the
EnsureLoadBalancer/UpdateLoadBalancer code (and the firewall
creation/update code). If it's not there, or wrong (a hostname that's
registered violates it), just ignore it and grab the whole project.
Hot attach of a disk to a SCSI controller will work only if the controller type is lsilogic-sas or paravirtual. This patch filters the existing controllers for these types; if it doesn't find one, it creates a new SCSI controller.
Automatic merge from submit-queue
AWS volumes: Use /dev/xvdXX names with EC2
We are using HVM-style device names, which cannot be paravirtual-style names.
See
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
This also fixes problems introduced when moving volume mounting to the kube-controller-manager (KCM).
Fix #27534
Automatic merge from submit-queue
refuse to create a firewall rule with no target tag
fixes #25145
This modification in gce.firewallObject() will return an error when trying to create or update a firewall rule if no node tag can be found. Also adds a unit test for this modification.
We had a long-lasting bug which prevented creation of volumes in
non-master zones, because the cloudprovider in the volume label
admission controller is not initialized with the multizone setting
(issue #27656).
This implements a simple workaround: if the volume is created with the
failure-domain zone label, we look for the volume in that zone. This is
more efficient, avoids introducing a new semantic, and allows users (and
the dynamic provisioner) to create volumes in non-master zones.
Fixes #27657
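A rough sketch of the workaround (the function shape is illustrative; only the zone label is the real well-known Kubernetes label):
```go
package main

import "fmt"

// zoneLabel is the well-known failure-domain zone label set on PersistentVolumes.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// zonesToSearch returns the zones the cloud provider should search for a
// volume: just the labeled zone when the label is present, otherwise every
// zone in the region (the old, slower behaviour).
func zonesToSearch(pvLabels map[string]string, allZones []string) []string {
	if zone := pvLabels[zoneLabel]; zone != "" {
		return []string{zone}
	}
	return allZones
}

func main() {
	all := []string{"us-east-1a", "us-east-1b", "us-east-1c"}
	fmt.Println(zonesToSearch(map[string]string{zoneLabel: "us-east-1c"}, all)) // [us-east-1c]
	fmt.Println(zonesToSearch(nil, all))                                        // all zones
}
```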
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.
We hash the volume name so we don't bias to the first few zones.
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset. In that case we hash
the base name.
Fixes #27256
- replaces probeVolume with scsiHostRescan to scan hot attached disks
- fixes substring match of UUID returned from AttachDisk
- changes DetachDisk to take volumePath argument instead of diskID
- fixes delayed failure at mount rather than attach disk
- removes cloning of virtual disk in AttachDisk
Automatic merge from submit-queue
AWS: cache instances during service reload to avoid rate limiting on restart
Fixes #25610 by reducing redundant calls to DescribeInstances()
```release-note
* The AWS cloudprovider will cache results from DescribeInstances() if the set of nodes hasn't changed
```
Also move int/stringSlicesEqual from servicecontroller.go to pkg/util/slice
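A minimal sketch of the caching idea (the type, field names, and string stand-ins are illustrative, not the provider's actual code):
```go
package main

import (
	"fmt"
	"reflect"
	"sort"
	"sync"
)

// instanceCache reuses the last DescribeInstances() result for as long as the
// requested set of node names is unchanged, so reconciling many services on
// restart doesn't hammer the AWS API.
type instanceCache struct {
	mu        sync.Mutex
	nodeNames []string
	instances []string // stands in for the real []*ec2.Instance
}

func (c *instanceCache) get(nodeNames []string, describe func() []string) []string {
	sorted := append([]string(nil), nodeNames...)
	sort.Strings(sorted)

	c.mu.Lock()
	defer c.mu.Unlock()
	if c.instances != nil && reflect.DeepEqual(sorted, c.nodeNames) {
		return c.instances // node set unchanged: reuse the cached answer
	}
	c.nodeNames = sorted
	c.instances = describe()
	return c.instances
}

func main() {
	calls := 0
	cache := &instanceCache{}
	describe := func() []string { calls++; return []string{"i-0123456789abcdef0"} }
	cache.get([]string{"node-a", "node-b"}, describe)
	cache.get([]string{"node-b", "node-a"}, describe)
	fmt.Println("DescribeInstances calls:", calls) // 1
}
```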
Automatic merge from submit-queue
AWS: support mixed plaintext/encrypted ports in ELBs via service.beta.kubernetes.io/aws-load-balancer-ssl-ports annotation
Fixes #26268
Implements the second SSL ELB annotation, per #24978
`service.beta.kubernetes.io/aws-load-balancer-ssl-ports=*` (comma-separated list of port numbers or e.g. `https`)
If not specified, all ports are secure (SSL or HTTPS).
Automatic merge from submit-queue
LBaaS v2 Support for OpenStack Cloud Provider Plugin
Resolves #19774.
This work is based on Gophercloud support for LBaaS v2 currently in review (this will have to merge first):
https://github.com/rackspace/gophercloud/pull/575
These changes include the addition of a new load balancer configuration option: **LBVersion**. If this configuration attribute is missing or set to anything other than "v2", the LBaaS v1 implementation will be used.
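A cloud config sketch; the section and the exact gcfg key spelling are assumptions based on the description above:
```
[LoadBalancer]
lb-version = v2
```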
Fixes #26268
Implements the second SSL ELB annotation, per #24978
service.beta.kubernetes.io/aws-load-balancer-ssl-ports=* (or e.g. https)
If not specified, all ports are secure (SSL or HTTPS).
Add ELB proxy protocol support via the annotation
"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol". This
allows servers like Nginx and HAProxy to retrieve the real IP address of
a remote client.
Implements #25145
Instead of N pointers, we were returning N null pointers, followed by the real
thing. It's not clear why we didn't trip on this until now; maybe there is a new server-side check for empty subnetID strings.
Instead of just rate limits to operation polling, send all API calls
through a rate limited RoundTripper.
This isn't a perfect solution, since the QPS is obviously getting
split between different controllers, etc., but it's also spread across
different APIs, which, in practice, rate limit differently.
Fixes #26119 (hopefully)
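A minimal sketch of routing every call through a rate-limited http.RoundTripper (the limiter and QPS value are illustrative, not the merged implementation):
```go
package main

import (
	"net/http"
	"time"
)

// rateLimitedRoundTripper makes every outgoing request wait for a token, so
// all API calls through this client share one QPS budget instead of only
// rate limiting operation polling.
type rateLimitedRoundTripper struct {
	delegate http.RoundTripper
	tokens   <-chan time.Time
}

func (rt *rateLimitedRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	<-rt.tokens // block until the next slot is available
	return rt.delegate.RoundTrip(req)
}

func main() {
	client := &http.Client{
		Transport: &rateLimitedRoundTripper{
			delegate: http.DefaultTransport,
			tokens:   time.Tick(100 * time.Millisecond), // ~10 QPS shared by all callers
		},
	}
	_ = client // hand this client to the GCE API wrappers
}
```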
Automatic merge from submit-queue
AWS: Move enforcement of attached AWS device limit from kubelet to scheduler
The limit on the number of EBS volumes attached to a node is now enforced by the scheduler. It can be adjusted there via the `KUBE_MAX_PD_VOLS` env. variable. Therefore we don't need the same check in kubelet; if the system admin wants to attach more, we should allow it.
The kubelet limit is now 650 attached volumes ('ba'..'zz').
Note that the scheduler counts only *pods* assigned to a node. When a pod is deleted and a new pod is scheduled on a node, kubelet starts (slowly) detaching the old volume and (slowly) attaching the new volume. Depending on AWS speed **it may happen that more than KUBE_MAX_PD_VOLS volumes are actually attached to a node for some time!** Kubelet will clean it up in a few seconds/minutes (both attach and detach are quite slow).
Fixes #22994
Automatic merge from submit-queue
AWS: Allow cross-region image pulling with ECR
Fixes #23298
Definitely should be in the release notes; should maybe get merged in 1.2 along with #23594 after some soaking. Documentation changes to follow.
cc @justinsb @erictune @rata @miguelfrde
This is step two. We now create long-lived, lazy ECR providers in all regions.
When first used, they will create the actual ECR providers doing the work
behind the scenes, namely talking to ECR in the region where the image lives,
rather than the one our instance is running in.
Also:
- moved the list of AWS regions out of the AWS cloudprovider and into the
credentialprovider, then exported it from there.
- improved logging
Behold, running in us-east-1:
```
aws_credentials.go:127] Creating ecrProvider for us-west-2
aws_credentials.go:63] AWS request: ecr:GetAuthorizationToken in us-west-2
aws_credentials.go:217] Adding credentials for user AWS in us-west-2
Successfully pulled image "123456789012.dkr.ecr.us-west-2.amazonaws.com/test:latest"
```
*"One small step for a pod, one giant leap for Kube-kind."*
Automatic merge from submit-queue
AWS: SSL support for ELB listeners through annotations
In the API, ports have only TCP or UDP as their protocol, but ELB distinguishes HTTPS->HTTP[S]? listeners from SSL->(SSL|TCP) listeners.
Per #24978, this is implemented through two separate annotations:
`service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`
`service.beta.kubernetes.io/aws-load-balancer-backend-protocol=(https|http|ssl|tcp)`
Mixing plain-text and encrypted listeners will be in a separate PR, implementing #24978's `aws-load-balancer-ssl-ports=LIST`
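As a usage illustration only (resource names, ports, and the certificate ARN below are placeholders), a Service combining the two annotations might look like:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-https-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8080
```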
Automatic merge from submit-queue
vSphere Cloud Provider Implementation
This is the first PR towards implementation for vSphere cloud provider support in Kubernetes (ref. issue #23932).
Use constructor for ecrProvider
Rename package to "credentials" like golint requests
Don't wrap the lazy provider with a caching provider
Add immediate compile-time interface conformance checks for the interfaces
Added comments
When the vSphere cloud provider object is instantiated, the VM name of the Node where this object is being created needs to be set. This patch also includes vSphere as part of the cloud provider package.
This patch includes implementation for the following Instance object
interfaces:
* NodeAddresses
* ExternalID
* InstanceID
Also minor refactoring in overall Instance implementation.
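As a rough orientation only (signatures simplified; the real cloudprovider.Instances interface returns Kubernetes API types and has more methods), the three methods implemented here look roughly like:
```go
package vsphere

// Simplified view of the Instances methods implemented in this PR.
type instances interface {
	// NodeAddresses returns the addresses (IPs, hostname) of the named node.
	NodeAddresses(nodeName string) ([]string, error)
	// ExternalID returns the cloud provider ID of the named node.
	ExternalID(nodeName string) (string, error)
	// InstanceID returns the cloud provider ID of the instance backing the named node.
	InstanceID(nodeName string) (string, error)
}
```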
Automatic merge from submit-queue
AWS: Add support for ap-northeast-2 region (Seoul)
This PR does:
- Support AWS Seoul region: ap-northeast-2.
Currently, I cannot set up Kubernetes in the AWS Seoul region.
Error Messages:
>
> ip-10-0-0-50 core # docker logs 0697db
> I0419 07:57:44.569174 1 aws.go:466] Zone not specified in configuration file; querying AWS metadata service
> F0419 07:57:44.570380 1 controllermanager.go:279] Cloud provider could not be initialized: could not init cloud provider "aws": not a valid AWS zone (unknown region): ap-northeast-2a
Automatic merge from submit-queue
Rackspace improvements (OpenStack Cinder)
This adds PV support via Cinder on Rackspace clusters. Rackspace Cloud Block Storage is pretty much vanilla OpenStack Cinder, so there is no need for a separate Volume Plugin. Instead I refactored the Cinder/OpenStack interaction a bit (by introducing a CinderProvider Interface and moving the device path detection logic to the OpenStack part).
Right now this is limited to `AttachDisk` and `DetachDisk`. Creation and deletion of Block Storage is not in scope of this PR.
Also the `ExternalID` and `InstanceID` cloud provider methods have been implemented for Rackspace.
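A sketch of what such a CinderProvider interface could look like (the method set and signatures here are assumptions based on the description, not the merged code):
```go
package cloudprovider

// CinderProvider abstracts the Cinder operations the volume plugin needs, so
// both the OpenStack and Rackspace providers can satisfy it. Device path
// detection moves behind this interface rather than living in the plugin.
type CinderProvider interface {
	AttachDisk(instanceID, volumeID string) (devicePath string, err error)
	DetachDisk(instanceID, volumeID string) error
	GetDevicePath(volumeID string) string
}
```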
This is a better abstraction than passing in specific pieces of the
Service that each of the cloudproviders may or may not need. For
instance, many of the providers don't need a region, yet this is passed
in. Similarly many of the providers want a string IP for the load
balancer, but it passes in a converted net ip. Affinity is unused by
AWS. A provider change may also require adding a new parameter which has
an effect on all other cloud provider implementations.
Further, this will simplify adding provider specific load balancer
options, such as with labels or some other metadata. For example, we
could add labels for configuring the details of an AWS elastic load
balancer, such as idle timeout on connections, whether it is
internal or external, cross-zone load balancing, and so on.
Authors: @chbatey, @jsravn
The previous logic was incorrect; if we saw two untagged security groups before seeing the first tagged security group, we would incorrectly return an error.
Fix #23339
AWS has a soft support limit of 40 attached EBS devices. Assuming there is just one root device, use the rest for persistent volumes.
The devices will have names /dev/xvdba - /dev/xvdcm, leaving /dev/sda - /dev/sdz to the system.
Also, add better error handling and propagate the error "Too many EBS volumes attached to node XYZ" to a pod.
We have previously tried building a full cloudprovider in e2e for AWS;
this wasn't the best idea, because e2e runs on a different machine than
normal operations, and often doesn't even run in AWS. In turn, this
meant that the cloudprovider had to do extra work and have extra code,
which we would like to get rid of. Indeed, I got rid of some code which
tolerated not running in AWS, and this broke e2e.
There are known issues with the attached-volume state cache that just aren't
possible to fix with the current interface.
Replace it with a map of the active attach jobs (that was the original
requirement, to avoid a nasty race condition).
This costs us an extra DescribeInstance call on attach/detach, but that
seems worth it if it ends this class of bugs.
Fix #15073
Either ELB is slow to delete (in which case the bumped timeout will
help), or the security groups are otherwise blocked (in which case
logging them will help us track this down).
Fix #17626
We know the ELB call will fail, so we error out early rather than
hitting the API. Preserves rate limit quota, and also allows us to give
a more self-evident message.
Fix #21993
Now that we can't build an awsInstance from metadata, because of the
PrivateDnsName issue, we might as well simplify the arguments.
Create a 'placeholder' method though - newAWSInstanceFromMetadata - that
documents the desire to use metadata, shows how we would get it, but
links to the bug which explains why we can't use it.
Had to move other things around too to avoid a weird api ->
cloudprovider dependency.
Also adding fixes per code reviews.
(This is a squash of the previously approved commits)
This has two main advantages:
* The use of the mock package to verify API calls against the aws SDK
* Nicer error messages for asserts without having to use if statements
We return an error if the user specifies a non 0.0.0.0/0 load balancer
source restriction on OpenStack, where we can't enforce the restriction
(currently).
This refactors #21431 to pull a lot of the code into cloudprovider so it
can be reused by AWS.
It also changes the name of the annotation to be non-GCE specific:
service.beta.kubernetes.io/load-balancer-source-ranges
Fix #21651
for Instance.List and Routes.List which we will definitely have
more than 500 of when supporting 1000 nodes.
Add TODOs for other GCE List API calls to do similar fixes.
Add more logging to GCE's routecontroller.go when creating or deleting routes.
Fix the AWS subnet lookup that checks if a subnet is public, which was
missing a few cases:
- Subnets without explicit routing tables, which use the main VPC
routing table.
- Routing tables not tagged with KubernetesCluster. The filter for this
is now removed.
Like everything else AWS, we differentiate between k8s-owned security
groups and k8s-not-owned security groups using tags.
When we are setting up the ingress rule for ELBs, pick the security
group that is tagged over any others.
We continue to tolerate a single security group being untagged, but
having multiple security groups without tagging is now an error, as it
leads to undefined behaviour.
We also log at startup if the cluster tag is not defined.
Fix #21986
Follow up from #20731. I have no way of testing this.
There's an additional group of functions (Get|Delete|Reserve)GlobalStaticIP that can create an IP without the
service description, but those are not called anywhere in the Kubernetes codebase and are probably for the
Ingress project. I'm leaving those alone for now.
Add aws cloud config:
[global]
disableSecurityGroupIngress = true
The aws provider creates an inbound rule per load balancer on the node
security group. However, this can quickly run into the AWS security
group rule limit of 50.
This disables the automatic ingress creation. It requires that the user
has setup a rule that allows inbound traffic on kubelet ports from the
local VPC subnet (so load balancers can access it). E.g. `10.82.0.0/16
30000-32000`.
Limits: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html#vpc-limits-security-groups
Authors: @jsravn, @balooo
When finding an instance by node name in AWS, only retrieve running instances. Otherwise, terminated old nodes can show up with the same tag when rebuilding nodes in the cluster.
Another improvement made is to filter instances by the node names
provided, rather than selecting all instances and filtering in code.
Authors: @jsravn, @chbatey, @balooo
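A sketch with the aws-sdk-go EC2 API (the use of private-dns-name as the node-name attribute is an assumption for illustration):
```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// describeRunningNodes asks EC2 only for running instances matching the given
// node names, instead of listing everything and filtering in code.
func describeRunningNodes(svc *ec2.EC2, nodeNames []string) ([]*ec2.Instance, error) {
	resp, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("instance-state-name"), Values: aws.StringSlice([]string{"running"})},
			{Name: aws.String("private-dns-name"), Values: aws.StringSlice(nodeNames)},
		},
	})
	if err != nil {
		return nil, err
	}
	var instances []*ec2.Instance
	for _, r := range resp.Reservations {
		instances = append(instances, r.Instances...)
	}
	return instances, nil
}

func main() {
	svc := ec2.New(session.New())
	instances, err := describeRunningNodes(svc, []string{"ip-10-0-0-50.ec2.internal"})
	fmt.Println(len(instances), err)
}
```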
This applies a cross-request time delay when we observe
RequestLimitExceeded errors, unlike the default library behaviour which
only applies a *per-request* backoff.
Issue #12121
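A minimal sketch of a shared, cross-request delay (the backoff policy and numbers are illustrative, not the merged implementation):
```go
package main

import (
	"sync"
	"time"
)

// crossRequestRetryDelay remembers that we were recently throttled, so every
// subsequent AWS call waits, rather than only the request that failed.
type crossRequestRetryDelay struct {
	mu    sync.Mutex
	until time.Time
	step  time.Duration
}

// beforeCall blocks if a RequestLimitExceeded was seen recently.
func (d *crossRequestRetryDelay) beforeCall() {
	d.mu.Lock()
	wait := d.until.Sub(time.Now())
	d.mu.Unlock()
	if wait > 0 {
		time.Sleep(wait)
	}
}

// onThrottle extends the shared delay window after a throttling error.
func (d *crossRequestRetryDelay) onThrottle() {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.step == 0 {
		d.step = 500 * time.Millisecond
	} else if d.step < 30*time.Second {
		d.step *= 2
	}
	d.until = time.Now().Add(d.step)
}

func main() {
	d := &crossRequestRetryDelay{}
	d.onThrottle() // pretend AWS returned RequestLimitExceeded
	d.beforeCall() // any caller now waits before the next API call
}
```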
In the AWS API (generally) we tag things we create, and then we filter
to find them. However, creation & tagging are typically two separate
calls. So there is a chance that we will create an object, but fail to
tag it.
We fix this (done here in the case of security groups, but we can do
this more generally) by retrieving the resource without a tag filter.
If the retrieved resource has the correct tags, great. If it has the
tags for another cluster, that's a problem, and we raise an error. If
it has no tags at all, we add the tags.
This only works where the resource is uniquely named (or we can
otherwise retrieve it uniquely). For security groups, the SG name comes
from the service UUID, so that's unique.
Fixes #11324
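A sketch of the recovery logic (the function shape is illustrative; KubernetesCluster is the tag key the AWS provider traditionally uses to mark ownership):
```go
package main

import "fmt"

const clusterTagKey = "KubernetesCluster" // tag marking resources owned by a cluster

// ensureClusterTags looks at the tags of a resource retrieved *without* a tag
// filter and decides: correct tag -> fine; another cluster's tag -> error;
// no tag at all -> the earlier create succeeded but tagging failed, so tag it now.
func ensureClusterTags(tags map[string]string, cluster string, addTags func() error) error {
	owner, tagged := tags[clusterTagKey]
	switch {
	case tagged && owner == cluster:
		return nil
	case tagged:
		return fmt.Errorf("resource is tagged for another cluster %q", owner)
	default:
		return addTags()
	}
}

func main() {
	err := ensureClusterTags(map[string]string{}, "my-cluster", func() error {
		fmt.Println("re-tagging untagged security group")
		return nil
	})
	fmt.Println("err:", err)
}
```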
Volume names now have the format <cluster-name>-dynamic-<pv-name>.
pv-name is guaranteed to be unique in Kubernetes cluster, adding
<cluster-name> ensures we don't conflict with any running cluster
in the cloud project (kube-controller-manager --cluster-name=XXX).
'kubernetes' is the default cluster name.
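For illustration (the PV name below is arbitrary), the naming boils down to:
```go
package main

import "fmt"

func main() {
	clusterName, pvName := "kubernetes", "pvc-3f86a6e2" // "kubernetes" is the default cluster name
	fmt.Printf("%s-dynamic-%s\n", clusterName, pvName)  // kubernetes-dynamic-pvc-3f86a6e2
}
```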
AWS doesn't support type=LoadBalancer with UDP services. For now, we
simply skip over the test with type=LoadBalancer on AWS for the UDP
service.
Fix #20911
This commit allows the AWS cloud provider plugin to work on EC2 instances
that do not have a public IP. The EC2 metadata service returns a 404 for the
'public-ipv4' endpoint for private instances, and the plugin was bubbling this
up as a fatal error.