We previously tried building a full cloudprovider in e2e for AWS;
this wasn't a good idea, because e2e runs on a different machine than
normal operations, and often doesn't even run in AWS. In turn, this
meant that the cloudprovider had to do extra work and carry extra code,
which we would like to get rid of. Indeed, I removed some code that
tolerated not running in AWS, and this broke e2e.
There are known issues with the attached-volume state cache that just aren't
possible to fix with the current interface.
Replace it with a map of the active attach jobs (that was the original
requirement, to avoid a nasty race condition).
This costs us an extra DescribeInstance call on attach/detach, but that
seems worth it if it ends this class of bugs.
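Conceptually the change looks something like the sketch below (names and fields are illustrative, not the provider's actual code): keep a small, mutex-guarded map of in-flight attach jobs and consult it directly instead of a long-lived attachment cache.

```go
package aws

import (
	"fmt"
	"sync"
)

// attachTracker is a minimal sketch: track in-flight attach requests per
// volume so a concurrent attach is detected directly, instead of trusting
// a cached attachment state that can go stale.
type attachTracker struct {
	mu        sync.Mutex
	attaching map[string]string // volume ID -> device the volume is being attached as
}

func newAttachTracker() *attachTracker {
	return &attachTracker{attaching: map[string]string{}}
}

func (t *attachTracker) startAttach(volumeID, device string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if existing, ok := t.attaching[volumeID]; ok {
		return fmt.Errorf("volume %q is already being attached as %q", volumeID, existing)
	}
	t.attaching[volumeID] = device
	return nil
}

func (t *attachTracker) endAttach(volumeID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.attaching, volumeID)
}
```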
Fix #15073
Either ELB is slow to delete (in which case the bumped timeout will
help), or the security groups are otherwise blocked (in which case
logging them will help us track this down).
Fix #17626
We know the ELB call will fail, so we error out early rather than
hitting the API. This preserves rate-limit quota, and also lets us
return a clearer, more self-explanatory message.
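The shape of the change is roughly this (the helper name and the specific condition below are assumptions for illustration):

```go
package aws

import "fmt"

// checkELBPreconditions sketches failing fast: if we already know the ELB
// API call cannot succeed (here, because no VPC was found for the cluster),
// return a clear error instead of spending rate-limit quota on a request
// that is guaranteed to fail.
func checkELBPreconditions(vpcID string) error {
	if vpcID == "" {
		return fmt.Errorf("cannot create an ELB: no VPC was found for this cluster")
	}
	return nil
}
```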
Fix #21993
Now that we can't build an awsInstance from metadata, because of the
PrivateDnsName issue, we might as well simplify the arguments.
Create a 'placeholder' method, though - newAWSInstanceFromMetadata - that
documents the desire to use metadata, shows how we would get it, and
links to the bug that explains why we can't use it yet.
Other things had to move around too, to avoid an awkward api ->
cloudprovider dependency.
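As a rough illustration of what such a placeholder might look like (the interface, type, and metadata paths below are assumptions, not the provider's actual code):

```go
package aws

// metadataClient is an illustrative stand-in for the EC2 metadata accessor.
type metadataClient interface {
	GetMetadata(path string) (string, error)
}

// awsInstance is a simplified stand-in for the provider's instance type.
type awsInstance struct {
	awsID            string
	availabilityZone string
	nodeName         string
}

// newAWSInstanceFromMetadata documents how we would like to build an
// awsInstance purely from instance metadata, and why we currently cannot:
// metadata does not give us the PrivateDnsName value we need (see the bug
// referenced above), so callers must pass that in separately for now.
func newAWSInstanceFromMetadata(md metadataClient) (*awsInstance, error) {
	instanceID, err := md.GetMetadata("instance-id")
	if err != nil {
		return nil, err
	}
	az, err := md.GetMetadata("placement/availability-zone")
	if err != nil {
		return nil, err
	}
	// TODO: nodeName should come from the PrivateDnsName, but metadata does
	// not reliably expose the value we need; see the linked bug.
	return &awsInstance{awsID: instanceID, availabilityZone: az}, nil
}
```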
Also adding fixes per code reviews.
(This is a squash of the previously approved commits)
This has two main advantages:
* The use of the mock package to verify API calls against the AWS SDK
* Nicer error messages for asserts without having to use if statements
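A minimal sketch of the pattern, assuming testify's mock and assert packages (the mocked interface and test below are illustrative, not the actual provider tests):

```go
package aws

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

// mockedEC2 mocks a narrow, illustrative EC2 interface; the real tests mock
// the aws-sdk-go interfaces the provider actually uses.
type mockedEC2 struct {
	mock.Mock
}

func (m *mockedEC2) DescribeInstances(instanceIDs []string) ([]string, error) {
	args := m.Called(instanceIDs)
	return args.Get(0).([]string), args.Error(1)
}

func TestDescribeInstancesIsCalled(t *testing.T) {
	ec2 := new(mockedEC2)
	ec2.On("DescribeInstances", []string{"i-123"}).Return([]string{"i-123"}, nil)

	ids, err := ec2.DescribeInstances([]string{"i-123"})

	// assert gives readable failure messages without wrapping each check in an if.
	assert.NoError(t, err)
	assert.Equal(t, []string{"i-123"}, ids)
	ec2.AssertExpectations(t)
}
```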
We return an error if the user specifies a load balancer source
restriction other than 0.0.0.0/0 on OpenStack, where we can't
(currently) enforce the restriction.
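Roughly (the helper name and signature are assumptions):

```go
package openstack

import (
	"fmt"
	"net"
)

// checkSourceRanges rejects anything other than the permit-all range up
// front, because OpenStack cannot currently enforce load balancer source
// restrictions.
func checkSourceRanges(sourceRanges []string) error {
	for _, cidr := range sourceRanges {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return err
		}
		if ipnet.String() != "0.0.0.0/0" {
			return fmt.Errorf("source range restrictions are not supported on OpenStack: %q", cidr)
		}
	}
	return nil
}
```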
This refactors #21431 to pull a lot of the code into cloudprovider so it
can be reused by AWS.
It also changes the name of the annotation to be non-GCE-specific:
service.beta.kubernetes.io/load-balancer-source-ranges
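A sketch of the kind of shared helper this enables (the function name and package are assumptions; the actual code in the Kubernetes tree may differ):

```go
package service

import (
	"net"
	"strings"
)

const annotationLoadBalancerSourceRanges = "service.beta.kubernetes.io/load-balancer-source-ranges"

// getLoadBalancerSourceRanges parses the comma-separated CIDR list from the
// provider-neutral annotation; with no annotation set, traffic is allowed
// from anywhere.
func getLoadBalancerSourceRanges(annotations map[string]string) ([]*net.IPNet, error) {
	value, ok := annotations[annotationLoadBalancerSourceRanges]
	if !ok || strings.TrimSpace(value) == "" {
		_, all, err := net.ParseCIDR("0.0.0.0/0")
		if err != nil {
			return nil, err
		}
		return []*net.IPNet{all}, nil
	}
	var nets []*net.IPNet
	for _, spec := range strings.Split(value, ",") {
		_, ipnet, err := net.ParseCIDR(strings.TrimSpace(spec))
		if err != nil {
			return nil, err
		}
		nets = append(nets, ipnet)
	}
	return nets, nil
}
```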
Fix #21651
Handle paginated results for Instances.List and Routes.List, which we
will definitely have more than 500 of when supporting 1000 nodes.
Add TODOs for other GCE List API calls to apply similar fixes.
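A sketch of the paginated pattern using the compute API's page tokens (the function name and the trimmed error handling are illustrative):

```go
package gce

import (
	compute "google.golang.org/api/compute/v1"
)

// listAllInstances follows NextPageToken until it is empty, because GCE List
// calls return at most one page of results (500 by default) per request.
func listAllInstances(svc *compute.Service, project, zone string) ([]*compute.Instance, error) {
	var instances []*compute.Instance
	pageToken := ""
	for {
		call := svc.Instances.List(project, zone)
		if pageToken != "" {
			call = call.PageToken(pageToken)
		}
		page, err := call.Do()
		if err != nil {
			return nil, err
		}
		instances = append(instances, page.Items...)
		pageToken = page.NextPageToken
		if pageToken == "" {
			return instances, nil
		}
	}
}
```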
Add more logging to GCE's routecontroller.go when creating or deleting routes.
Fix the AWS subnet lookup that checks if a subnet is public, which was
missing a few cases:
- Subnets without explicit routing tables, which use the main VPC
routing table.
- Routing tables not tagged with KubernetesCluster. The filter for this
is now removed.
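A sketch of the corrected lookup (the types are simplified stand-ins for the EC2 route table structs):

```go
package aws

import "strings"

// routeTable is an illustrative stand-in for the EC2 route table shape.
type routeTable struct {
	subnetIDs  []string // subnets explicitly associated with this table
	main       bool     // true if this is the VPC's main route table
	gatewayIDs []string // gateway IDs of the table's routes
}

// subnetIsPublic prefers the route table explicitly associated with the
// subnet; if there is none, it falls back to the VPC's main route table.
// The subnet is public if that table routes to an internet gateway ("igw-...").
func subnetIsPublic(tables []routeTable, subnetID string) bool {
	var chosen *routeTable
	for i := range tables {
		for _, id := range tables[i].subnetIDs {
			if id == subnetID {
				chosen = &tables[i]
			}
		}
	}
	if chosen == nil {
		// No explicit association: the subnet uses the main route table.
		for i := range tables {
			if tables[i].main {
				chosen = &tables[i]
			}
		}
	}
	if chosen == nil {
		return false
	}
	for _, gw := range chosen.gatewayIDs {
		if strings.HasPrefix(gw, "igw-") {
			return true
		}
	}
	return false
}
```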
As with everything else in AWS, we differentiate between k8s-owned
security groups and non-k8s-owned security groups using tags.
When we are setting up the ingress rule for ELBs, pick the security
group that is tagged over any others.
We continue to tolerate a single security group being untagged, but
having multiple security groups without tagging is now an error, as it
leads to undefined behaviour.
We also log at startup if the cluster tag is not defined.
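A sketch of the selection logic (types and names are simplified stand-ins, not the provider's actual code):

```go
package aws

import "errors"

// securityGroup is an illustrative stand-in for ec2.SecurityGroup.
type securityGroup struct {
	id     string
	tagged bool // true if the group carries the cluster tag
}

// chooseSecurityGroup prefers a group carrying our cluster tag, tolerates
// exactly one untagged group for compatibility, and treats multiple untagged
// groups as an error because the choice would be undefined.
func chooseSecurityGroup(groups []securityGroup) (string, error) {
	var untagged []string
	for _, g := range groups {
		if g.tagged {
			return g.id, nil
		}
		untagged = append(untagged, g.id)
	}
	if len(untagged) == 1 {
		return untagged[0], nil
	}
	return "", errors.New("multiple untagged security groups found; tag the security group that should be used")
}
```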
Fix #21986
Follow up from #20731. I have no way of testing this.
There's an additional group of functions (Get|Delete|Reserve)GlobalStaticIP that can create an IP without the
service description, but those are not called anywhere in the Kubernetes codebase and are probably for the
Ingress project. I'm leaving those alone for now.
Add AWS cloud config:
[global]
disableSecurityGroupIngress = true
The AWS provider creates an inbound rule per load balancer on the node
security group. However, this can quickly run into the AWS security
group rule limit of 50.
This disables the automatic ingress creation. It requires that the user
has set up a rule that allows inbound traffic on kubelet ports from the
local VPC subnet (so load balancers can access it). E.g. `10.82.0.0/16
30000-32000`.
Limits: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html#vpc-limits-security-groups
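A sketch of how such an option might be read with gcfg, which the cloud providers already use for INI-style cloud configs (the struct and function names below are assumptions):

```go
package aws

import (
	"io"

	gcfg "gopkg.in/gcfg.v1"
)

// cloudConfig sketches surfacing the flag from the [global] section of the
// INI-style cloud config shown above.
type cloudConfig struct {
	Global struct {
		DisableSecurityGroupIngress bool
	}
}

func readCloudConfig(r io.Reader) (*cloudConfig, error) {
	var cfg cloudConfig
	if err := gcfg.ReadInto(&cfg, r); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```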
Authors: @jsravn, @balooo
When finding an instance by node name in AWS, only retrieve running
instances. Otherwise, terminated old nodes can show up with the same
tag when rebuilding nodes in the cluster.
Another improvement made is to filter instances by the node names
provided, rather than selecting all instances and filtering in code.
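A sketch of the narrowed query against the EC2 API (the helper is illustrative; the filter names are standard EC2 filters):

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// describeRunningInstanceByNodeName asks EC2 only for running instances whose
// private DNS name matches the node name, instead of listing everything and
// filtering in code.
func describeRunningInstanceByNodeName(svc *ec2.EC2, nodeName string) (*ec2.DescribeInstancesOutput, error) {
	input := &ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{
				Name:   aws.String("private-dns-name"),
				Values: []*string{aws.String(nodeName)},
			},
			{
				Name:   aws.String("instance-state-name"),
				Values: []*string{aws.String("running")},
			},
		},
	}
	return svc.DescribeInstances(input)
}
```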
Authors: @jsravn, @chbatey, @balooo