GCE disks don't have tags, so we encode the tags into the Description field.
They are encoded as JSON, which is both human- and machine-readable:
description: '{"kubernetes.io/created-for/pv/name":"pv-gce-oxwts","kubernetes.io/created-for/pvc/name":"myclaim","kubernetes.io/created-for/pvc/namespace":"default"}'
The ip permission method now checks for containment, not equality, so the
order of parameters matters. This change fixes
`removeSecurityGroupIngress` to pass in the removal permission first to
compare against the existing permission.
Change isEqualIPPermission to consider the entire list of security group
IDs when checking whether a security group ID has already been added.
This is used, for example, when adding and removing ingress rules that give
an Elastic Load Balancer access to the cluster nodes. Without this, once there
are multiple load balancers, the method incorrectly returns false even when
the security group ID is present in the list of group IDs. This
causes a few problems: dangling security groups which fill up an
account's limit since they don't get removed, and inability to recreate
load balancers in certain situations (receiving an
InvalidPermission.Duplicate from AWS when adding the same security
group).
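A hedged sketch of the containment rule (the helper name is illustrative, not an actual aws.go function): the removal permission matches an existing permission if every group ID it names appears in the existing permission's list, regardless of what else that list contains.

```go
// groupIDsContain reports whether every group ID in the removal permission
// is present in the existing permission's list of group IDs.
func groupIDsContain(existing []string, removal []string) bool {
	for _, want := range removal {
		found := false
		for _, have := range existing {
			if have == want {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}
```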
According to AWS, the ELB healthy threshold is "Number of consecutive health check successes before declaring an EC2 instance healthy." It has an unusual interaction with Kubernetes, since all nodes will enter either an unhealthy state or a healthy state together depending on the service's healthiness as a whole.
We have observed that if our service goes down for the unhealthy threshold (which is 2 checks at 30 second intervals = 60 seconds), then the ELB will stop serving traffic to all nodes in the cluster, and will wait for the healthy threshold (currently 10 * 30 = 300 seconds) AFTER the service is restored to add back the cluster nodes, meaning it remains unreachable for an extra 300 seconds.
With the new settings, the ELB will still time out dead nodes after 60 seconds, but will restore healthy nodes after 20 seconds. The minimum value for healthyThreshold is 2, and the minimum value for interval is 5 seconds. I chose 10 seconds instead of the minimum somewhat arbitrarily, because I was not sure how much this value might affect the scalability of clusters in EC2, as it does put some extra load on kube-proxy.
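A hedged sketch of the resulting configuration using aws-sdk-go; the timeout and the exact unhealthy threshold are assumptions derived from the numbers above (60s at a 10s interval), not quotes from the provider code.

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elb"
)

// configureHealthCheck applies the thresholds discussed above to an ELB.
func configureHealthCheck(client *elb.ELB, lbName string, port int64) error {
	_, err := client.ConfigureHealthCheck(&elb.ConfigureHealthCheckInput{
		LoadBalancerName: aws.String(lbName),
		HealthCheck: &elb.HealthCheck{
			Target:             aws.String(fmt.Sprintf("TCP:%d", port)),
			Interval:           aws.Int64(10), // seconds between checks
			Timeout:            aws.Int64(5),  // assumption: per-check timeout
			HealthyThreshold:   aws.Int64(2),  // 2 * 10s = 20s to restore a node
			UnhealthyThreshold: aws.Int64(6),  // 6 * 10s = 60s to drop a node
		},
	})
	return err
}
```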
We don't cope well if a PD exists in multiple zones, but this is actually
fairly easy to detect. The check is probably justified purely on the basis
that we never want to delete the wrong volume (in DeleteDisk), but it also
means that we now warn on creation (via the labeling admission controller)
if a disk exists in multiple zones.
This also means that, with the scheduling predicate in place, many of our
volume problems "go away" in practice: you still can't create or delete a
volume when its zone is ambiguous, but thereafter the volume will be labeled
with its zone, that label will match it only to nodes in the same zone, and
we then query for the volume in that zone when we attach/detach it.
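A hedged sketch of the ambiguity check (the lookup callback stands in for the per-zone GCE API query; names are illustrative):

```go
import "fmt"

// findDiskZones returns the zones in which a disk with this name exists, and
// refuses to proceed if the name is ambiguous across zones.
func findDiskZones(diskName string, zones []string, existsInZone func(zone, name string) bool) ([]string, error) {
	var found []string
	for _, zone := range zones {
		if existsInZone(zone, diskName) {
			found = append(found, zone)
		}
	}
	if len(found) > 1 {
		return nil, fmt.Errorf("disk %q found in multiple zones %v; refusing to guess", diskName, found)
	}
	return found, nil
}
```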
This removes a panic I mistakenly introduced when an instance is not
found, and also restores the exact prior behaviour for
getInstanceByName, where it returns cloudprovider.InstanceNotFound when
the instance is not found.
We adapt the existing code to work across all zones in a region.
We require a feature flag to enable Ubernetes-Lite.
Reasons:
* There are some behavioural changes if users create volumes with
the same name in two zones.
* We don't want to make one API call per zone if we're not running
Ubernetes-Lite.
* Ubernetes-Lite is still experimental.
There isn't a parallel flag implemented for AWS, because at the moment this
would not change any behaviour there.
findInstancesByNodeNames was a simple loop around findInstanceByNodeName,
which made one EC2 API call per node.
We've had trouble with this sort of behaviour hitting EC2 rate limits on
bigger clusters (e.g. #11979).
Instead, change this method to fetch _all_ the tagged EC2 instances, and
then loop through the local results. This is one API call (modulo
paging).
We are currently only using findInstancesByNodeNames for the load
balancer, where we attach every node, so we were fetching all but one of
the instances anyway.
Issue #11979
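A hedged sketch of the single-call approach with aws-sdk-go; the cluster tag filter and the use of the private DNS name as the node name are assumptions, and paging is omitted.

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// findInstancesByNodeNames fetches all instances carrying the cluster tag in
// one DescribeInstances call and then filters locally by node name.
func findInstancesByNodeNames(client *ec2.EC2, clusterTagKey string, nodeNames []string) ([]*ec2.Instance, error) {
	out, err := client.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag-key"), Values: []*string{aws.String(clusterTagKey)}},
		},
	})
	if err != nil {
		return nil, err
	}
	wanted := make(map[string]bool, len(nodeNames))
	for _, name := range nodeNames {
		wanted[name] = true
	}
	var matches []*ec2.Instance
	for _, reservation := range out.Reservations {
		for _, instance := range reservation.Instances {
			if wanted[aws.StringValue(instance.PrivateDnsName)] {
				matches = append(matches, instance)
			}
		}
	}
	return matches, nil
}
```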
If a route already exists but is invalid (e.g. from a crash), we
automatically delete it before trying to create a route that would
otherwise conflict.
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and
to support decoupled operation (e.g. bare metal) we tag the volume with
our placement labels. This is done automatically by means of an
admission controller on AWS when a PersistentVolume is created backed by
an EBS volume.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
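For illustration, the placement labels attached by the admission controller might look like this; the exact label keys shown are the commonly used failure-domain labels and are an assumption here, not a quote from the code.

```go
// zoneLabels builds the placement labels applied to a PersistentVolume backed
// by an EBS volume, so the scheduler can match it to nodes without querying
// the cloud provider.
func zoneLabels(zone, region string) map[string]string {
	return map[string]string{
		"failure-domain.beta.kubernetes.io/zone":   zone,
		"failure-domain.beta.kubernetes.io/region": region,
	}
}
```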
This synchronizes Cinder with the AWS EBS code, where we already tag volumes
with claim.Namespace and claim.Name (and pv.Name, as suggested in a separate
PR).
The General Purpose SSD ('gp2') volume type is only slightly more expensive
than Magnetic ('standard', the AWS default), while the performance gain is
pretty significant.
So far, the volumes have been created only during testing, where the extra
cost won't make any difference. In the future, we plan to introduce QoS
classes, where users could choose SSD/Magnetic depending on their use cases.
'gp2' is just the default volume type for the (hopefully) short period before
these QoS classes are implemented.
For some reason, MiB was used in the public functions and the AWS cloud
provider recalculated it to GiB. Let's expose what AWS really supports and
not hide the real allocation units.
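Illustrative only (not the provider's code): since EBS allocates in whole GiB, a caller that still has a size in bytes would round up directly rather than recalculating through MiB.

```go
// roundUpToGiB converts a requested size in bytes to the number of whole GiB
// that must be allocated, rounding up.
func roundUpToGiB(sizeBytes int64) int64 {
	const GiB int64 = 1024 * 1024 * 1024
	return (sizeBytes + GiB - 1) / GiB
}
```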
Static IPs are now only held while load balancers are being created, not in
the steady state once they've been created. This makes it much less likely
that users will run into static IP quota issues.
Also add slightly more parallelism to the deletion of load balancers
now that I realize the static IPs can be deleted in parallel with
forwarding rules :)
Previously we'd just tear everything down and recreate it, which makes
for a pretty bad experience because it causes downtime whenever the
service controller restarts and has to make sure everything is in the
desired state.
This adds more code than I'd prefer, but makes it much cleaner and more
organized than it was before, in my opinion. I didn't bother
parallelizing anything because it's complex enough as it is, right now.
It's consistently passing the existing e2es and worked when I tested
manually, but this could definitely use additional e2e tests and/or some
serious refactoring to make real unit tests feasible. I'll follow up
with one or two e2e tests that make sense (updating an LB or killing the
controller manager, perhaps).
This will cut down on the amount of time it takes to delete an external
load balancer, which should reduce the likelihood of resource leaks when
clusters are deleted.
This code was in rough shape, so I've fixed the issues with the original
PR as well as a few other changes:
1. Clarify the error messages related to the "gce Addresses" to make it
clear we're talking about static IP addresses
2. Fix the bug in the original PR, which was a nil pointer dereference
from passing op to waitForRegionOp when the address doesn't exist.
3. Rearrange the steps of EnsureTCPLoadBalancerDeleted to be the reverse
of EnsureCreated, which mostly just seems like good practice to me.
This is also supported by the following two bugs I found :(
4. Fix an independent bug of returning too early if the target pool
doesn't exist, effectively stranding the firewall. This was likely
introduced because target pools used to be the last thing deleted,
so it was previously safe to return there.
5. Fix an independent bug of not returning an error when waiting for the
target pool to be deleted failed. This was very possibly causing
target pool leaks in our e2e tests. This was similarly due to
assuming that the target pool was the last thing deleted in the
function, and then having the firewall deletion added after it.
Only take the first available subnet in an AZ; ignore other subnets
and log a warning about this.
Remove the AWS region comparison for subnet AZs. A VPC only exists in a
single AWS region.
Fixes #12381
The ELB client lookup isn't necessary because the service
does not operate across regions. Instead the client should
be built like the others by querying for the region from
the master node's metadata service.
Inverting the code path in CreateTCPLoadBalancer to avoid branch divergence.
Removing the useless variable vipAddr, as vip already holds the information needed.
Renaming the 'error' variable in EnsureTCPLoadBalancerDeleted to be consistent.
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Avoid creating a new 'err' variable in the 'if' branch, shadowing the one
in the outer scope.
Any error from the subsequent 'cloud, err = GetCloudProvider()' was not
propagated to the 'err' variable in the outer scope, and thus errors were
never returned from this function.
This is a hard-to-debug error on OpenStack: when the content of the
--cloud-config= file is wrong or the connection to OpenStack fails, the error
is never logged and Kubernetes thinks everything is OK.
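A minimal, generic illustration of the shadowing bug (the function and variable names are placeholders, not the actual Cinder code):

```go
import "fmt"

func getCloud(configPath string) (string, error) {
	var cloud string
	var err error
	if configPath != "" {
		// Buggy form: using ":=" here declares new 'cloud' and 'err'
		// variables scoped to the if-block, so the outer 'err' stays nil
		// and the caller never sees the failure:
		//
		//     cloud, err := loadCloud(configPath)
		//
		// Fixed form: plain "=" assigns to the outer variables.
		cloud, err = loadCloud(configPath)
	}
	return cloud, err
}

// loadCloud stands in for GetCloudProvider() in this sketch.
func loadCloud(path string) (string, error) {
	if path == "/nonexistent" {
		return "", fmt.Errorf("cannot read cloud config %q", path)
	}
	return "openstack", nil
}
```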
This allows authentication with the AWS API using the
~/.aws/credentials file, which is created by running 'aws configure' on
a node.
Signed-off-by: Sami Wagiaalla <swagiaal@redhat.com>
ELB will automatically create a health check, but if we update the
listeners the old health check port sticks around, and all the instances
are marked offline.
Update the health-checks to match the listeners: we just check the first
valid service port, with some hard-coded options for timeouts / retries etc.
This turned out to be a little convoluted, but is needed because deleting an ELB on AWS
is a painful UX - it won't have the same endpoint when it is recreated.
Also started splitting the provider into files, but only for new functions (so far!)
Previously the servicecontroller would do the delete, but by having the cloudprovider
take that task on, we can later remove it from the servicecontroller, and the
cloudprovider can do something more efficient.
The current code assumes the full domain name will not be included,
which is not always the case. This patch adds support for computing the
host tag from a fully qualified domain name.
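One plausible way to handle this, shown only as a hedged sketch (the helper name is made up, and the actual patch may differ): derive the short host name when a fully qualified name is supplied.

```go
import "strings"

// hostTag returns the short host name, trimming any domain suffix from a
// fully qualified domain name.
func hostTag(hostname string) string {
	if i := strings.Index(hostname, "."); i != -1 {
		return hostname[:i]
	}
	return hostname
}
```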
This should ensure all load balancers get deleted even if a reordering of
watch events causes us to strand one after its service has been deleted,
because the sync will notice that the service controller's cache has a
service in it that no longer exists in the apiserver.
It could still leak in the case that the controller manager is killed
between when it leaks something and the sync runs, but this should
improve things.
We need to find the ID for a named security group, or create a new one.
We do this by listing the security groups and then creating one if we cannot
find it. This is racy, though: against another thread even if the AWS API
were consistent, and more generally because the AWS API is actually
eventually consistent.
We wrap it in a retry loop.
When we create a security-group in the AWS API, there is sometimes
a delay before we can tag it (the AWS API is eventually consistent).
So we wrap CreateTags in a simple retry loop.
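A hedged sketch of that retry loop (the retry count and backoff are assumptions, not the provider's constants):

```go
import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// createTagsWithRetry keeps retrying CreateTags, because tagging a freshly
// created security group can fail until the eventually consistent AWS API
// catches up.
func createTagsWithRetry(client *ec2.EC2, resourceID string, tags []*ec2.Tag) error {
	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		_, lastErr = client.CreateTags(&ec2.CreateTagsInput{
			Resources: []*string{aws.String(resourceID)},
			Tags:      tags,
		})
		if lastErr == nil {
			return nil
		}
		time.Sleep(2 * time.Second) // crude fixed backoff for the sketch
	}
	return fmt.Errorf("timed out tagging %s: %v", resourceID, lastErr)
}
```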
It seems impossible to determine from outside the instance which device name
was actually used. Thankfully we're running the attachment from inside the
instance, so we can check for /dev/sdX or /dev/xvdX.
More modern images seem to be moving to /dev/xvdX.
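An illustrative check only (not the actual attach code): probe both prefixes from inside the instance.

```go
import (
	"fmt"
	"os"
)

// findLocalDevice looks for the block device under either naming scheme,
// given the device suffix (e.g. "f" for /dev/sdf or /dev/xvdf).
func findLocalDevice(suffix string) (string, error) {
	for _, prefix := range []string{"/dev/sd", "/dev/xvd"} {
		path := prefix + suffix
		if _, err := os.Stat(path); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("no local device found for suffix %q", suffix)
}
```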
This is a partial reversion of #9728, and should fix #10612.
#9728 used the AWS instance ID as the node name. But proxy, logs
and exec all used the node name as the host name for contacting the minion.
It is possible to resolve a host to the IP, and this fixes logs. But
exec and proxy also require an SSL certificate match on the hostname,
and this is harder to fix.
So the sensible fix seems to be a minimal reversion of the changes in #9728,
and we can revisit this post 1.0.
ExternalID must return "", cloudprovider.InstanceNotFound when the instance
is not found, so that the nodecontroller can remove nodes corresponding to
deleted instances.
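A hedged sketch of the contract (the lookup function is hypothetical and the import paths reflect the tree of that era; only the sentinel error is the point):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"

	"k8s.io/kubernetes/pkg/cloudprovider"
)

// externalID returns the instance ID for a node, or
// cloudprovider.InstanceNotFound if the instance no longer exists, so the
// nodecontroller can delete the corresponding Node object.
func externalID(nodeName string, lookup func(string) (*ec2.Instance, error)) (string, error) {
	instance, err := lookup(nodeName)
	if err != nil {
		return "", err
	}
	if instance == nil {
		return "", cloudprovider.InstanceNotFound
	}
	return aws.StringValue(instance.InstanceId), nil
}
```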
We need the last state seen for interpreting the change stream; separately,
we need to track the last state we successfully applied to the load
balancer.
The most reliable way seems to be to deauthorize the LB security group from
other groups, then delete the LB itself, then repeatedly retry to delete the LB
security group.
We can't delete the LB security group until the LB is actually completely
deleted, but the LB is hidden from the API during deletion. So our only real
option is to retry deletion of the LB security group until the expected error
goes away when the LB is fully deleted.
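A hedged sketch of that final retry (the error-code check and timings are assumptions):

```go
import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteLBSecurityGroup retries DeleteSecurityGroup until the dependency on
// the still-deleting (but already invisible) ELB goes away.
func deleteLBSecurityGroup(client *ec2.EC2, groupID string) error {
	for attempt := 0; attempt < 30; attempt++ {
		_, err := client.DeleteSecurityGroup(&ec2.DeleteSecurityGroupInput{
			GroupId: aws.String(groupID),
		})
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); !ok || awsErr.Code() != "DependencyViolation" {
			return err // some other failure; don't keep retrying
		}
		time.Sleep(10 * time.Second) // the ELB is still tearing down
	}
	return fmt.Errorf("timed out deleting security group %s", groupID)
}
```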