This removes a panic I mistakenly introduced when an instance is not
found, and restores the exact prior behaviour of getInstanceByName:
it returns cloudprovider.InstanceNotFound in that case.
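As a rough sketch of the restored behaviour (the receiver type is named here
only for illustration; findInstanceByNodeName is the existing lookup helper),
an absent instance maps to the sentinel error rather than a panic:

    func (c *AWSCloud) getInstanceByName(nodeName string) (*ec2.Instance, error) {
        instance, err := c.findInstanceByNodeName(nodeName)
        if err != nil {
            return nil, err
        }
        if instance == nil {
            // Do not panic; report the well-known "not found" error so that
            // callers can handle it.
            return nil, cloudprovider.InstanceNotFound
        }
        return instance, nil
    }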
We adapt the existing code to work across all zones in a region.
We require a feature flag to enable Ubernetes-Lite.
Reasons:
* There are some behavioural changes if users create volumes with
the same name in two zones.
* We don't want to make one API call per zone if we're not running
Ubernetes-Lite.
* Ubernetes-Lite is still experimental.
There isn't a parallel flag implemented for AWS, because at the moment
it would not change any behaviour there.
findInstancesByNodeNames was a simple loop around
findInstanceByNodeName, which made one EC2 API call per node.
We've had trouble with this sort of behaviour hitting EC2 rate limits on
bigger clusters (e.g. #11979).
Instead, change this method to fetch _all_ the tagged EC2 instances, and
then loop through the local results. This is one API call (modulo
paging).
We are currently only using findInstancesByNodeNames for the load
balancer, where we attach every node, so we were fetching all but one of
the instances anyway.
Issue #11979
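A minimal sketch of the batched lookup, assuming a hypothetical
describeTaggedInstances helper that issues the single (paged)
DescribeInstances call, and assuming node names are the EC2 private DNS names:

    func (c *AWSCloud) findInstancesByNodeNames(nodeNames []string) ([]*ec2.Instance, error) {
        // One API call (modulo paging) for every instance tagged for this cluster.
        allInstances, err := c.describeTaggedInstances()
        if err != nil {
            return nil, err
        }

        wanted := make(map[string]bool, len(nodeNames))
        for _, name := range nodeNames {
            wanted[name] = true
        }

        // Filter locally instead of issuing one DescribeInstances call per node.
        var matches []*ec2.Instance
        for _, instance := range allInstances {
            if wanted[aws.StringValue(instance.PrivateDnsName)] {
                matches = append(matches, instance)
            }
        }
        return matches, nil
    }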
If a route already exists but is invalid (e.g. from a crash), we
automatically delete it before trying to create a route that would
otherwise conflict.
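A rough sketch of that recovery path, with hypothetical variable and field
names (the real code goes through the cloud provider's route interface):

    // Delete any stale route (e.g. left behind by a crash) that would
    // conflict with the one we are about to create, then create the route.
    for _, existing := range existingRoutes {
        if existing.DestinationCIDR == desired.DestinationCIDR &&
            existing.TargetInstance != desired.TargetInstance {
            if err := routes.DeleteRoute(clusterName, existing); err != nil {
                return err
            }
        }
    }
    return routes.CreateRoute(clusterName, nameHint, desired)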
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and
to support decoupled operation (e.g. bare metal), we tag the volume with
our placement labels. On AWS this is done automatically by an admission
controller when a PersistentVolume backed by an EBS volume is created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
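A minimal sketch of the labelling the admission controller performs (the
function name and the exact label keys are assumptions of this sketch): the
EBS volume's zone and region become labels on the PersistentVolume, so the
scheduler can match them against node labels without a cloud API call.

    // addZoneLabels records the volume's placement on the PersistentVolume's
    // labels; zone/region are looked up from the EBS volume itself.
    func addZoneLabels(pvLabels map[string]string, zone, region string) {
        pvLabels["failure-domain.beta.kubernetes.io/zone"] = zone     // e.g. "us-east-1a"
        pvLabels["failure-domain.beta.kubernetes.io/region"] = region // e.g. "us-east-1"
    }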
This synchronizes Cinder with the AWS EBS code, where we already tag volumes
with claim.Namespace and claim.Name (and pv.Name, as suggested in a separate PR).
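For illustration, the kind of tag map this implies (the exact tag keys shown
are an assumption, mirroring the AWS EBS scheme):

    func volumeTags(claimNamespace, claimName, pvName string) map[string]string {
        return map[string]string{
            "kubernetes.io/created-for/pvc/namespace": claimNamespace,
            "kubernetes.io/created-for/pvc/name":      claimName,
            "kubernetes.io/created-for/pv/name":       pvName,
        }
    }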
The general purpose SSD ('gp2') volume type is only slightly more expensive
than Magnetic ('standard', the default in AWS), while the performance gain is
pretty significant.
So far, the volumes have been created only during testing, where the extra cost
won't make any difference. In the future, we plan to introduce QoS classes, where
users can choose SSD/Magnetic depending on their use cases.
'gp2' is just the default volume type for the (hopefully) short period before
these QoS classes are implemented.
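A sketch of what the default amounts to when creating the EBS volume (variable
names are illustrative; the field names are from aws-sdk-go):

    request := &ec2.CreateVolumeInput{
        AvailabilityZone: aws.String(availabilityZone),
        Size:             aws.Int64(sizeGiB),
        // Default to general purpose SSD until QoS classes let users choose.
        VolumeType: aws.String("gp2"),
    }
    volume, err := ec2Client.CreateVolume(request)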
For some reason, MiB was used in the public functions and the AWS cloud provider
recalculated it to GiB. Let's expose what AWS really supports and not hide the
real allocation units.
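The hidden conversion looked roughly like this (a sketch; the real function
names differ), which is exactly what exposing GiB in the public API removes:

    // EBS allocates whole GiB, so a size requested in MiB had to be rounded
    // up internally before being sent to AWS.
    const mebibytesPerGibibyte = 1024

    func mebibytesToGibibytesRoundUp(sizeMiB int64) int64 {
        return (sizeMiB + mebibytesPerGibibyte - 1) / mebibytesPerGibibyte
    }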
Static IPs are now held only while load balancers are being created, not in
the steady state once they've been created. This makes it much less likely
that users will run into static IP quota issues.
Also add slightly more parallelism to the deletion of load balancers
now that I realize the static IPs can be deleted in parallel with
forwarding rules :)
Previously we'd just tear everything down and recreate it, which makes
for a pretty bad experience because it causes downtime whenever the
service controller restarts and has to make sure everything is in the
desired state.
This adds more code than I'd prefer, but makes it much cleaner and more
organized than it was before, in my opinion. I didn't bother
parallelizing anything because it's complex enough as it is, right now.
It's consistently passing the existing e2es and worked when I tested
manually, but this could definitely use additional e2e tests and/or some
serious refactoring to make real unit tests feasible. I'll follow up
with one or two e2e tests that make sense (updating an LB or killing the
controller manager, perhaps).
This will cut down on the amount of time it takes to delete an external
load balancer, which should reduce the likelihood of resource leaks when
clusters are deleted.
This code was in rough shape, so I've fixed the issues with the original
PR and made a few other changes:
1. Clarify the error messages related to the "gce Addresses" to make it
clear we're talking about static IP addresses
2. Fix the bug in the original PR, which was a nil pointer dereference
   from passing op to waitForRegionOp when the address doesn't exist
   (see the sketch after this list).
3. Rearrange the steps of EnsureTCPLoadBalancerDeleted to be the reverse
of EnsureCreated, which mostly just seems like good practice to me.
This is also supported by the following two bugs I found :(
4. Fix an independent bug of returning too early if the target pool
doesn't exist, effectively stranding the firewall. This was likely
introduced because target pools used to be the last thing deleted,
so it was previously safe to return there.
5. Fix an independent bug where no error was returned when waiting for
   the target pool to be deleted failed. This was very possibly causing
   target pool leaks in our e2e tests. It was similarly due to assuming
   that the target pool was the last thing deleted in the function, and
   then having the firewall deletion added in after it.
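A sketch of the fix for item 2 (field and helper names are assumptions;
waitForRegionOp is the existing wait helper): the static IP may already be
gone, in which case the delete returns a 404 and no operation, so a nil op
must never reach the wait.

    op, err := gce.service.Addresses.Delete(gce.projectID, region, name).Do()
    if err != nil && !isHTTPErrorCode(err, http.StatusNotFound) {
        return err
    }
    if op != nil {
        if err := gce.waitForRegionOp(op, region); err != nil {
            return err
        }
    }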
Take only the first available subnet in an AZ; ignore other subnets and
log a warning about them.
Remove the AWS region comparison for subnet AZs: a VPC exists only in a
single AWS region.
Fixes #12381
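A sketch of the subnet selection (using aws-sdk-go types; the surrounding
wiring is omitted): keep the first subnet seen per AZ and warn about the rest.

    subnetsByAZ := map[string]*ec2.Subnet{}
    for _, subnet := range subnets {
        az := aws.StringValue(subnet.AvailabilityZone)
        if existing, found := subnetsByAZ[az]; found {
            glog.Warningf("Found multiple subnets in AZ %q; using %q and ignoring %q",
                az, aws.StringValue(existing.SubnetId), aws.StringValue(subnet.SubnetId))
            continue
        }
        subnetsByAZ[az] = subnet
    }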
The ELB client lookup isn't necessary because the service
does not operate across regions. Instead the client should
be built like the others by querying for the region from
the master node's metadata service.
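A sketch with aws-sdk-go of building the client like the others (variable
names are illustrative): query the region from the instance metadata service
and construct the ELB client for that region.

    sess := session.Must(session.NewSession())
    region, err := ec2metadata.New(sess).Region()
    if err != nil {
        return nil, fmt.Errorf("error querying instance metadata for region: %v", err)
    }
    elbClient := elb.New(sess, aws.NewConfig().WithRegion(region))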
Inverting the code path in CreateTcploadBalancer to avoid branch divergence.
Removing the useless variable vipAddr, as vip already holds the needed information.
Renaming the 'error' variable in EnsureTCPLoadBalancerDeleted to be consistent.
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Avoid creating a new 'err' variable in the 'if' branch, shadowing the one
in the outer scope.
Any error from the subsequent 'cloud, err = GetCloudProvider()' was not
propagated to the 'err' variable in the outer scope, and thus errors were
never returned from this function.
This is a hard-to-debug error on OpenStack: when the content of the
--cloud-config= file is wrong or the connection to OpenStack fails, the
error is never logged and Kubernetes thinks everything is OK.
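A self-contained sketch of the shadowing bug and the fix (getCloudProvider
stands in for the real GetCloudProvider):

    package main

    import (
        "errors"
        "fmt"
    )

    func getCloudProvider() (string, error) {
        return "", errors.New("cannot connect to OpenStack")
    }

    func buggy() error {
        var err error
        if true {
            // ":=" declares a *new* 'cloud' and a *new* 'err' scoped to this
            // block, shadowing the outer err.
            cloud, err := getCloudProvider()
            _ = cloud
            fmt.Println("inside the branch err is:", err) // non-nil here...
        }
        return err // ...but the outer err is still nil: the error is swallowed
    }

    func fixed() (string, error) {
        var cloud string
        var err error
        if true {
            // Plain "=" assigns to the outer variables, so the error is
            // propagated to the caller.
            cloud, err = getCloudProvider()
        }
        return cloud, err
    }

    func main() {
        fmt.Println("buggy:", buggy()) // buggy: <nil>
        fmt.Println(fixed())           // "" cannot connect to OpenStack
    }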
This allows authentication with the AWS API using the
~/.aws/credentials file, which is created by running 'aws configure' on
a node.
Signed-off-by: Sami Wagiaalla <swagiaal@redhat.com>
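A sketch of the kind of aws-sdk-go provider chain this enables (the exact
chain in the real code may differ):

    // buildCredentials falls back to the shared ~/.aws/credentials file
    // (written by `aws configure`) after environment variables, and finally
    // to the EC2 instance role.
    func buildCredentials(sess *session.Session) *credentials.Credentials {
        return credentials.NewChainCredentials([]credentials.Provider{
            &credentials.EnvProvider{},               // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
            &credentials.SharedCredentialsProvider{}, // ~/.aws/credentials
            &ec2rolecreds.EC2RoleProvider{Client: ec2metadata.New(sess)},
        })
    }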
ELB will automatically create a health check, but if we update the
listeners the old health check port sticks around, and all the instances
are marked offline.
Update the health checks to match the listeners: we just check the first
valid service port, with some hard-coded options for timeouts, retries, etc.
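A sketch of that update using the aws-sdk-go ELB API (the numbers and
variable names are illustrative, not the exact values used):

    _, err := elbClient.ConfigureHealthCheck(&elb.ConfigureHealthCheckInput{
        LoadBalancerName: aws.String(loadBalancerName),
        HealthCheck: &elb.HealthCheck{
            // Probe the first valid service port so the check matches the
            // current listeners.
            Target:             aws.String(fmt.Sprintf("TCP:%d", firstServicePort)),
            Interval:           aws.Int64(10),
            Timeout:            aws.Int64(5),
            HealthyThreshold:   aws.Int64(2),
            UnhealthyThreshold: aws.Int64(6),
        },
    })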