Automatic merge from submit-queue (batch tested with PRs 60435, 60334, 60458, 59301, 60125). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Return missing ClusterID error instead of ignoring it
This fixes issue #57382. In the cases I'm aware of, kubelet cannot function if it can't detect the cluster it is running in, so the error should be passed up to the caller, preventing initialization when kubelet would fail anyway. This way the error can be detected and kubelet startup can be attempted again later (giving AWS time to apply the tags).
```release-note
On AWS kubelet returns an error when started under conditions that do not allow it to work (AWS has not yet tagged the instance).
```
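A minimal sketch of the idea (not the actual cloud-provider code): the tag lookup returns an error rather than an empty ClusterID, so initialization fails fast and can be retried once AWS has applied the tags. The `findClusterID` helper and the exact tag-key constants are illustrative; the shared-tag naming follows the `kubernetes.io/cluster/<clusterid>` convention described later in these notes.
```go
package awscloud

import (
	"fmt"
	"strings"
)

const (
	tagKeyClusterLegacy = "KubernetesCluster"      // legacy per-cluster tag
	tagKeyClusterPrefix = "kubernetes.io/cluster/" // shared tag prefix
)

// findClusterID returns the cluster ID recorded in the instance tags, or an
// error when no cluster tag is present, so the caller can abort kubelet
// initialization and retry later instead of silently continuing.
func findClusterID(instanceTags map[string]string) (string, error) {
	if id := instanceTags[tagKeyClusterLegacy]; id != "" {
		return id, nil
	}
	for key := range instanceTags {
		if strings.HasPrefix(key, tagKeyClusterPrefix) && len(key) > len(tagKeyClusterPrefix) {
			return strings.TrimPrefix(key, tagKeyClusterPrefix), nil
		}
	}
	return "", fmt.Errorf("could not determine ClusterID: no cluster tag found on the instance")
}
```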
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Tag Security Group created for AWS ELB with same additional tags as ELB
/sig aws
(I worked on this with @bkochendorfer)
Tags the SG created for the ELB with the same additional tags the ELB gets from the `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags` annotation. This is useful for identifying orphaned resources.
We think that reusing the annotation is a simpler and less intrusive approach than adding a new annotation, and most users will want the same set of tags applied.
We weren't sure how to write a test for this because it looks like the fake EC2 code doesn't store the state of the security groups. If new tests are a requirement for merging, we'll need help writing them.
Fixes #53489
```release-note
AWS Security Groups created for ELBs will now be tagged with the same additional tags as the ELB (i.e. the tags specified by the `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags` annotation).
```
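A rough sketch of what this amounts to, assuming the aws-sdk-go v1 EC2 API; the `tagSecurityGroup` helper is illustrative, not the PR's actual code:
```go
package awscloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// tagSecurityGroup applies the same additional tags that were requested for
// the ELB (via the additional-resource-tags annotation) to the security
// group the cloud provider created for that ELB.
func tagSecurityGroup(client *ec2.EC2, securityGroupID string, additionalTags map[string]string) error {
	if len(additionalTags) == 0 {
		return nil
	}
	tags := make([]*ec2.Tag, 0, len(additionalTags))
	for k, v := range additionalTags {
		tags = append(tags, &ec2.Tag{Key: aws.String(k), Value: aws.String(v)})
	}
	_, err := client.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{aws.String(securityGroupID)},
		Tags:      tags,
	})
	return err
}
```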
Currently the AWS cloud provider uses the EC2 instance role when
interacting with AWS APIs. This change gives the option to provide an IAM
role that the cloud provider will assume before calling the APIs. All
resources created by the role will be owned by that account instead of
the account where the EC2 instance is running.
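A minimal sketch of the assume-role option using the aws-sdk-go v1 `stscreds` package; the `newEC2WithAssumedRole` helper is illustrative, and the real change wires the credentials into all AWS service clients, not just EC2:
```go
package awscloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// newEC2WithAssumedRole returns an EC2 client whose calls use temporary
// credentials for roleARN instead of the EC2 instance role, so resources it
// creates are owned by the role's account.
func newEC2WithAssumedRole(roleARN string) *ec2.EC2 {
	sess := session.Must(session.NewSession())
	if roleARN == "" {
		// No role configured: fall back to the default credential chain
		// (which on EC2 resolves to the instance role).
		return ec2.New(sess)
	}
	creds := stscreds.NewCredentials(sess, roleARN)
	return ec2.New(sess, &aws.Config{Credentials: creds})
}
```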
This adds context to all the relevant cloud provider interface signatures.
Callers of those APIs are currently satisfied using context.TODO().
There will be follow-on PRs to push the context through the stack.
For an idea of the full scope of this change please look at PR #58532.
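To illustrate the shape of the change (the method below is simplified; the real interface in PR #58532 has more methods and richer signatures):
```go
package awscloud

import "context"

// LoadBalancer sketches how the cloud provider interfaces change: every
// method now takes a context.Context as its first argument.
type LoadBalancer interface {
	GetLoadBalancer(ctx context.Context, clusterName, serviceName string) (exists bool, err error)
}

// Callers that do not yet have a context to thread through satisfy the new
// signatures with context.TODO() until follow-up PRs push a real context
// down the stack.
func callerExample(lb LoadBalancer) (bool, error) {
	return lb.GetLoadBalancer(context.TODO(), "my-cluster", "my-service")
}
```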
Reuse the
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags
annotation for these tags.
We think that this is a simpler and less intrusive approach than adding
a new annotation, and most users will want the same set of tags applied.
Signed-off-by: Brett Kochendorfer <brett.kochendorfer@gmail.com>
Automatic merge from submit-queue (batch tested with PRs 51301, 50497, 50112, 48184, 50993)
AWS: handle multiple IPs when using more than 1 network interface per ec2 instance
**What this PR does / why we need it**:
Adds support for kubelets running with the AWS cloud provider on ec2 instances with multiple network interfaces. If the active interface is not eth0, the AWS cloud provider currently reports the wrong node IP.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44686
**Special notes for your reviewer**:
There is also some work needed to handle multiple DNS names and similar cases, but that is not addressed in this PR.
**Release note**:
```release-note
Fixed bug in AWS provider to handle multiple IPs when using more than 1 network interface per ec2 instance.
```
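A sketch of the approach using the EC2 instance metadata service via aws-sdk-go's `ec2metadata` package: enumerate all interfaces, order them by device-number, and report their private IPs in that order. The `nodeAddresses` helper is illustrative, not the PR's exact code.
```go
package awscloud

import (
	"sort"
	"strconv"
	"strings"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
)

// nodeAddresses returns the instance's private IPs ordered by interface
// device-number, so the primary interface's IP comes first even when the
// instance has several ENIs and the active one is not eth0.
func nodeAddresses(md *ec2metadata.EC2Metadata) ([]string, error) {
	macsList, err := md.GetMetadata("network/interfaces/macs/")
	if err != nil {
		return nil, err
	}
	type iface struct {
		device int
		ips    []string
	}
	var ifaces []iface
	for _, mac := range strings.Fields(macsList) { // entries look like "0a:...:ff/"
		devStr, err := md.GetMetadata("network/interfaces/macs/" + mac + "device-number")
		if err != nil {
			return nil, err
		}
		dev, err := strconv.Atoi(strings.TrimSpace(devStr))
		if err != nil {
			return nil, err
		}
		ips, err := md.GetMetadata("network/interfaces/macs/" + mac + "local-ipv4s")
		if err != nil {
			return nil, err
		}
		ifaces = append(ifaces, iface{device: dev, ips: strings.Fields(ips)})
	}
	sort.Slice(ifaces, func(i, j int) bool { return ifaces[i].device < ifaces[j].device })
	var addrs []string
	for _, it := range ifaces {
		addrs = append(addrs, it.ips...)
	}
	return addrs, nil
}
```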
The AWS CreateVolume call does not check whether the referenced encryption key
actually exists; it returns a seemingly valid new AWS EBS volume even though an
invalid key was specified. Later on, AWS silently removes the EBS volume when its
encryption fails. To work around this buggy behavior we manually check that the
key exists before calling CreateVolume.
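A sketch of the workaround, assuming the aws-sdk-go v1 KMS API; `checkEncryptionKey` is an illustrative helper name:
```go
package awscloud

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/kms"
)

// checkEncryptionKey verifies that the KMS key referenced for an encrypted
// volume actually exists before CreateVolume is called, because CreateVolume
// accepts a bogus key and the volume is later deleted silently when its
// encryption fails.
func checkEncryptionKey(client *kms.KMS, keyID string) error {
	if _, err := client.DescribeKey(&kms.DescribeKeyInput{KeyId: aws.String(keyID)}); err != nil {
		return fmt.Errorf("KMS key %q not found or not accessible: %v", keyID, err)
	}
	return nil
}
```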
Automatic merge from submit-queue (batch tested with PRs 47915, 47856, 44086, 47575, 47475)
AWS: Fix suspicious loop comparing permissions
Because we only ever call it with a single UserId/GroupId, this would
not have been a problem in practice, but this fixes the code.
Fixes #36902
```release-note
NONE
```
Automatic merge from submit-queue
New annotation to add existing Security Groups to ELBs created by AWS cloudprovider
**What this PR does / why we need it**:
When a K8s cluster is deployed into an existing VPC, there may be a need to attach extra Security Groups to the ELB created by the AWS cloud provider, for example when those Security Groups are maintained by another team.
**Special notes for your reviewer**:
For tests to pass, this depends on https://github.com/kubernetes/kubernetes/pull/45168 and therefore includes it.
**Release note**:
```release-note
New 'service.beta.kubernetes.io/aws-load-balancer-extra-security-groups' Service annotation to specify extra Security Groups to be added to the ELB created by the AWS cloud provider
```
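Conceptually, the annotation value is a comma-separated list of security group IDs that get appended to the group the cloud provider manages itself. A small sketch (the `elbSecurityGroups` helper is illustrative):
```go
package awscloud

import "strings"

// Annotation name from the release note above.
const extraSecurityGroupsAnnotation = "service.beta.kubernetes.io/aws-load-balancer-extra-security-groups"

// elbSecurityGroups appends the security groups listed in the annotation
// (comma separated, e.g. "sg-aaa,sg-bbb") to the group the cloud provider
// manages itself.
func elbSecurityGroups(managedGroupID string, annotations map[string]string) []string {
	groups := []string{managedGroupID}
	if extra, ok := annotations[extraSecurityGroupsAnnotation]; ok {
		for _, sg := range strings.Split(extra, ",") {
			if sg = strings.TrimSpace(sg); sg != "" {
				groups = append(groups, sg)
			}
		}
	}
	return groups
}
```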
We maintain a cache of all instances, and we invalidate the cache
whenever we see a new instance. For ELBs that should be sufficient,
because our usage is limited to instance ids and security groups, which
should not change.
Fixes #45050
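A minimal sketch of that caching strategy; the `instanceCache` type and its callback are illustrative, not the provider's actual data structures:
```go
package awscloud

import "sync"

// instanceCache keeps a snapshot of the instances we have already described.
// Because the ELB code only needs instance IDs and security groups (which do
// not change), the snapshot only has to be refreshed when a previously unseen
// instance shows up.
type instanceCache struct {
	mu       sync.Mutex
	snapshot map[string]instanceInfo // keyed by instance ID
}

type instanceInfo struct {
	SecurityGroupIDs []string
}

// get returns the cached entry, rebuilding the snapshot (via the caller-supplied
// describeAll func) only when the instance is not in the cache yet.
func (c *instanceCache) get(instanceID string, describeAll func() (map[string]instanceInfo, error)) (instanceInfo, bool, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if info, ok := c.snapshot[instanceID]; ok {
		return info, true, nil
	}
	// New (unseen) instance: invalidate and rebuild the snapshot.
	fresh, err := describeAll()
	if err != nil {
		return instanceInfo{}, false, err
	}
	c.snapshot = fresh
	info, ok := c.snapshot[instanceID]
	return info, ok, nil
}
```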
Automatic merge from submit-queue (batch tested with PRs 47510, 47516, 47482, 47521, 47537)
Batch AWS getInstancesByNodeNames calls with FilterNodeLimit
We are going to limit the getInstancesByNodeNames call with a batch
size of 150.
Fixes #47271
```release-note
AWS: Batch DescribeInstance calls with nodeNames to 150 limit, to stay within AWS filter limits.
```
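The batching itself is just slicing the node-name list into chunks; a sketch (the constant and helper names are illustrative):
```go
package awscloud

// filterNodeLimit mirrors the batch size described above: AWS filter values
// are limited, so DescribeInstances calls are split into chunks of at most
// 150 node names.
const filterNodeLimit = 150

// batchNodeNames splits names into slices of at most filterNodeLimit entries;
// one DescribeInstances call is then issued per batch.
func batchNodeNames(names []string) [][]string {
	var batches [][]string
	for len(names) > 0 {
		n := len(names)
		if n > filterNodeLimit {
			n = filterNodeLimit
		}
		batches = append(batches, names[:n])
		names = names[n:]
	}
	return batches
}
```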
Service objects can be annotated with
`service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`
to specify existing security groups to be added to the ELB
created by the AWS cloud provider.
This PR adds support for tagging AWS ELBs using information provided in an
annotation as a list of comma-separated key-value pairs.
Closes https://github.com/kubernetes/community/pull/404
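A sketch of parsing that annotation value, assuming `Key1=Val1,Key2=Val2` as the pair syntax; the `parseAdditionalTags` helper is illustrative:
```go
package awscloud

import (
	"fmt"
	"strings"
)

// parseAdditionalTags turns the annotation value ("Key1=Val1,Key2=Val2") into
// a tag map that is later applied to the ELB.
func parseAdditionalTags(value string) (map[string]string, error) {
	tags := map[string]string{}
	for _, pair := range strings.Split(value, ",") {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue
		}
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid tag %q: expected Key=Value", pair)
		}
		tags[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return tags, nil
}
```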
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)
AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`
We recognize an additional cluster tag:
kubernetes.io/cluster/<clusterid>
This now allows us to share resources, in particular subnets.
In addition, the value is used to track ownership/lifecycle. When we
create objects, we record the value as "owned".
We also refactor out tags into its own file & class, as we are touching
most of these functions anyway.
```release-note
AWS: Support shared tag `kubernetes.io/cluster/<clusterid>`
```
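In sketch form, the shared tag is just a per-cluster key whose value records the lifecycle; the helper names here are illustrative:
```go
package awscloud

const (
	clusterTagPrefix       = "kubernetes.io/cluster/"
	resourceLifecycleOwned = "owned" // recorded on resources we create ourselves
)

// clusterTagKey builds the shared tag key for a cluster,
// e.g. "kubernetes.io/cluster/my-cluster".
func clusterTagKey(clusterID string) string {
	return clusterTagPrefix + clusterID
}

// hasClusterTag reports whether a resource is usable by clusterID; this is
// what lets several clusters share a resource such as a subnet.
func hasClusterTag(tags map[string]string, clusterID string) bool {
	_, ok := tags[clusterTagKey(clusterID)]
	return ok
}

// ownedByCluster reports whether the resource was created by (and is
// lifecycle-managed by) this cluster, i.e. the tag value is "owned".
func ownedByCluster(tags map[string]string, clusterID string) bool {
	return tags[clusterTagKey(clusterID)] == resourceLifecycleOwned
}
```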
This means we can run in newly announced regions without a code change.
We don't register the ECR provider in new regions, so we will still need
a code change for now.
This also means we do trust config / instance metadata, and don't reject
incorrectly configured zones.
Fixes #35014
This method has been unused by k8s for some time, and yet is the last
piece of the cloud provider API that encourages provider names to be
human-friendly strings (this method applies a regex to instance names).
Actually removing this deprecated method is part of a long effort to
migrate from instance names to instance IDs in at least the OpenStack
provider plugin.
We are more liberal in what we accept as a volume id in k8s, and indeed
we ourselves generate names that look like `aws://<zone>/<id>` for
dynamic volumes.
This volume id (hereafter a KubernetesVolumeID) cannot directly be
compared to an AWS volume ID (hereafter an awsVolumeID).
We introduce types for each, to prevent accidental comparison or
confusion.
Issue #35746
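A sketch of the two types and the conversion between them; the exact parsing and validation in the provider is stricter than shown here:
```go
package awscloud

import (
	"fmt"
	"strings"
)

// KubernetesVolumeID is the (more liberal) volume name used inside k8s,
// e.g. "aws://us-east-1a/vol-0123456789abcdef0" or just "vol-0123...".
type KubernetesVolumeID string

// awsVolumeID is the raw EBS volume ID as the EC2 API knows it ("vol-...").
// Keeping the two as distinct types prevents accidentally comparing them.
type awsVolumeID string

// MapToAWSVolumeID extracts the awsVolumeID from a KubernetesVolumeID.
func (name KubernetesVolumeID) MapToAWSVolumeID() (awsVolumeID, error) {
	s := string(name)
	if strings.HasPrefix(s, "aws://") {
		// aws://<zone>/<volume id>; the zone may be empty.
		parts := strings.Split(strings.TrimPrefix(s, "aws://"), "/")
		s = parts[len(parts)-1]
	}
	if !strings.HasPrefix(s, "vol-") {
		return "", fmt.Errorf("invalid EBS volume id %q", name)
	}
	return awsVolumeID(s), nil
}
```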
We had another bug where we confused the hostname with the NodeName.
To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.
A tedious but mechanical commit therefore, to change all uses of the
node name to use types.NodeName
Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
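The typedef itself is tiny; its value is that the compiler now rejects call sites that pass a bare hostname or instance ID where a Node.Name is required. A sketch (the lookup function is a placeholder):
```go
package awscloud

import "fmt"

// NodeName mirrors types.NodeName: a string type used wherever a value is
// specifically Node.Name, which on AWS is not the hostname and on GCE is not
// the instance ID.
type NodeName string

// instanceIDByNodeName is a placeholder showing the self-documenting
// signature: a plain string cannot be passed here without an explicit
// NodeName conversion.
func instanceIDByNodeName(nodeName NodeName) (string, error) {
	return "", fmt.Errorf("lookup not implemented for node %q", nodeName)
}
```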
Automatic merge from submit-queue
AWS: More ELB attributes via service annotations
Replaces #25015 and addresses all of @justinsb's feedback therein. This is a new PR because I was unable to reopen #25015 to amend it.
I noticed recently that there is existing (but undocumented) precedent for the AWS cloud provider to manage ELB-specific load balancer configuration based on service annotations. In particular, one can _already_ designate an ELB as "internal" or enable PROXY protocol.
This PR extends this capability to the management of ELB attributes, which includes the following items:
* Access logs:
* Enabled / disabled
* Emit interval
* S3 bucket name
* S3 bucket prefix
* Connection draining:
* Enabled / disabled
* Timeout
* Connection:
* Idle timeout
* Cross-zone load balancing:
* Enabled / disabled
Some of these are possibly more useful than others. Use cases that immediately come to mind:
* Enabling cross-zone load balancing is potentially useful for "Ubernetes Light," or anyone otherwise attempting to spread worker nodes around multiple AZs.
* Increasing idle timeout is useful for the benefit of anyone dealing with long-running requests. An example I personally care about would be git pushes to Deis' builder component.
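To make the attribute list above concrete, here is a sketch of pushing such values onto an ELB with the aws-sdk-go v1 `elb` API in a single ModifyLoadBalancerAttributes call; the function signature and the way values would be read from the annotations are illustrative:
```go
package awscloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elb"
)

// applyELBAttributes applies attribute values (already parsed from the
// service annotations) to the named ELB.
func applyELBAttributes(client *elb.ELB, name string, accessLogsEnabled bool, accessLogsBucket string, idleTimeoutSeconds, drainingTimeoutSeconds int64, crossZone bool) error {
	attrs := &elb.LoadBalancerAttributes{
		AccessLog: &elb.AccessLog{
			Enabled:      aws.Bool(accessLogsEnabled),
			S3BucketName: aws.String(accessLogsBucket),
		},
		ConnectionDraining: &elb.ConnectionDraining{
			Enabled: aws.Bool(drainingTimeoutSeconds > 0),
			Timeout: aws.Int64(drainingTimeoutSeconds),
		},
		ConnectionSettings: &elb.ConnectionSettings{
			IdleTimeout: aws.Int64(idleTimeoutSeconds),
		},
		CrossZoneLoadBalancing: &elb.CrossZoneLoadBalancing{
			Enabled: aws.Bool(crossZone),
		},
	}
	_, err := client.ModifyLoadBalancerAttributes(&elb.ModifyLoadBalancerAttributesInput{
		LoadBalancerName:       aws.String(name),
		LoadBalancerAttributes: attrs,
	})
	return err
}
```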