Automatic merge from submit-queue
Trusty: fix breakage by #26413 and #26109
The code in https://github.com/kubernetes/kubernetes/tree/master/cluster/gce/trusty is broken in both the master and release-1.2 branches. Although we have already switched to cluster/gce/gci for the 1.3 branch, we still need to maintain the cluster/gce/trusty code as long as the release-1.2 branch is not deprecated. I will make a couple of PRs and cherry-picks to bring cluster/gce/trusty back to normal. This PR is one of them; it will not be cherry-picked to release-1.2.
@roberthbailey @dchen1107 @fabioy @zmerlynn
cc/ @kubernetes/goog-image
Automatic merge from submit-queue
Fix fake event recorder race
The event recorder should wait some time to receive all expected events, because an event may be written by another goroutine that has only just finished.
This should not slow down the test in most cases; the full wait is only paid when there is a bug and an expected event is never sent.
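A minimal, self-contained sketch of that waiting pattern, assuming a channel-backed fake recorder; `waitForEvents` and the plain string channel are illustrative stand-ins for the actual test helpers:

```go
// Illustrative only: poll a fake recorder's event channel with a timeout
// instead of reading it immediately, so events emitted by goroutines that
// have only just finished are still observed.
package main

import (
	"fmt"
	"time"
)

func waitForEvents(events <-chan string, want int, timeout time.Duration) ([]string, error) {
	got := make([]string, 0, want)
	deadline := time.After(timeout)
	for len(got) < want {
		select {
		case e := <-events:
			got = append(got, e)
		case <-deadline:
			return got, fmt.Errorf("timed out: received %d of %d expected events", len(got), want)
		}
	}
	return got, nil
}

func main() {
	events := make(chan string, 10)
	events <- "Warning ProvisioningFailed example event" // hypothetical event text

	// In the happy path this returns immediately; the timeout is only paid
	// when an expected event never arrives, i.e. when there is a real bug.
	got, err := waitForEvents(events, 1, 2*time.Second)
	fmt.Println(got, err)
}
```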
Fixes #26578
Using P2 to speed up merge and to prevent further flakes.
@kubernetes/sig-storage
Automatic merge from submit-queue
Change Kubelet test to run also in large clusters
This should fix the recent failure of the kubelet test in a 2000-node cluster.
@zmerlynn - FYI
Automatic merge from submit-queue
GCI: cherry-pick the fix in PR #25670
This PR simply copies the fix in #25670 into the GCI support.
cc/ @kubernetes/goog-image @dchen1107 @roberthbailey
Automatic merge from submit-queue
Don't set the env var CC when not cross-compiling
I noticed that this script was trying to use `arm-linux-gnueabi-gcc` even when running natively on arm.
When running natively, `CC` should always be `gcc` (which is also the default).
Also added `federation-controller-manager` to the static list; I think someone forgot to do that.
@ixdy @david-mcmahon @spiffxp @spxtr
Automatic merge from submit-queue
SSH e2e: Limit to 100 nodes, limit combinatorics
This limits the "for all hosts" section to 100 nodes, and also limits the
combinatorial section so that we only run the other SSH command variants
on the first host rather than on *all* of the hosts. I also removed one of
the variants because it didn't seem to be testing anything important.
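A rough sketch of that capping logic (not the actual e2e code; `runSSH`, `testCase`, and the host names are made up):

```go
// Illustrative sketch: cap the "all hosts" pass at 100 nodes and exercise
// the full command-variant matrix only on the first host.
package main

import "fmt"

type testCase struct {
	cmd string
}

func runSSH(host, cmd string) error {
	fmt.Printf("ssh %s -- %s\n", host, cmd)
	return nil
}

func runSSHTests(hosts []string, cases []testCase) {
	const maxHosts = 100
	if len(hosts) > maxHosts {
		hosts = hosts[:maxHosts]
	}
	for i, host := range hosts {
		for _, tc := range cases {
			// Only the first host gets every variant; the remaining hosts
			// run a single representative command to keep run time bounded.
			if i > 0 && tc.cmd != cases[0].cmd {
				continue
			}
			runSSH(host, tc.cmd)
		}
	}
}

func main() {
	runSSHTests([]string{"node-1", "node-2"}, []testCase{{"echo hello"}, {"uname -a"}})
}
```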
Fixes #26600
Automatic merge from submit-queue
retry GetThirdPartyGroupVersions
GetThirdPartyGroupVersions() may return a "NotFound" error if a thirdparty group is deleted in the interim between group discovery and resource discovery. This causes e2e flakes in all tests that run kubectl, because test/e2e/thirdparty.go creates and deletes thirdparty groups.
Fix #26425
The e2e flakes will have the following pattern:
1. The test calls kubectl.
2. The error message is `Error from server: the server could not find the requested resource`.
3. In the apiserver log, you should see `GET /apis/company.com/v1: (518.944µs) 404 [[kubectl/v1.3.0 (linux/amd64) kubernetes/ae28564] 104.154.110.118:46043]`.
For detail see [here](https://github.com/kubernetes/kubernetes/issues/26425#issuecomment-222844523)
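A hedged sketch of the retry idea, assuming a simple NotFound check; `discoverWithRetry`, `isNotFound`, and the attempt count are illustrative and not the actual kubectl code:

```go
// Illustrative only: if discovery fails with a NotFound error (a thirdparty
// group was deleted between group discovery and resource discovery), retry
// a few times before giving up; any other error is returned immediately.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("the server could not find the requested resource")

func isNotFound(err error) bool { return errors.Is(err, errNotFound) }

func discoverWithRetry(discover func() ([]string, error), attempts int) ([]string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		groups, err := discover()
		if err == nil {
			return groups, nil
		}
		if !isNotFound(err) {
			return nil, err // only NotFound is treated as transient here
		}
		lastErr = err
		time.Sleep(100 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	calls := 0
	groups, err := discoverWithRetry(func() ([]string, error) {
		calls++
		if calls == 1 {
			return nil, errNotFound // simulate the group vanishing mid-discovery
		}
		return []string{"company.com/v1"}, nil
	}, 3)
	fmt.Println(groups, err)
}
```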
cc @janetkuo @brendanburns
Automatic merge from submit-queue
federation cluster controller: load secretRef only if it is present
Ref https://github.com/kubernetes/kubernetes/pull/26132#issuecomment-222614042
Updated the code to load the secretRef only if it is present in the spec. This allows unsecured access to the cluster.
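A small sketch of the nil-check pattern, with illustrative types (`clusterSpec`, `loadSecret`, `restConfig`) rather than the federation controller's real API:

```go
// Illustrative only: build a client config that only dereferences the
// secret reference when it is actually set; a nil reference means the
// cluster is accessed without credentials.
package main

import "fmt"

type secretReference struct{ Name string }

type clusterSpec struct {
	ServerAddress string
	SecretRef     *secretReference // optional: nil means unsecured access
}

type restConfig struct {
	Host        string
	BearerToken string
}

func loadSecret(name string) (string, error) { return "token-for-" + name, nil }

func buildConfig(spec clusterSpec) (*restConfig, error) {
	cfg := &restConfig{Host: spec.ServerAddress}
	// Load the secret only when the reference is present, so clusters
	// configured for unsecured access are still usable.
	if spec.SecretRef != nil {
		token, err := loadSecret(spec.SecretRef.Name)
		if err != nil {
			return nil, err
		}
		cfg.BearerToken = token
	}
	return cfg, nil
}

func main() {
	cfg, _ := buildConfig(clusterSpec{ServerAddress: "http://10.0.0.1:8080"})
	fmt.Printf("%+v\n", cfg)
}
```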
cc @kubernetes/sig-cluster-federation @mfanjie
Automatic merge from submit-queue
Stabilize controller unit tests.
Remove test "5-1", it's flaky as it depends on order of execution of goroutines. When the controller starts, existing claim is enqueued as "initial sync event" and a new volume is enqueued to separate goroutine. It is not deterministic which goroutine processes its events first and there is no way how to tell that the claim event was processed.
Also, force resync of the controllers after the test to make sure all events are processed.
Fixes unit test flakes.
@kubernetes/sig-storage
Automatic merge from submit-queue
Switch DNS addons from skydns to kubedns
Change GCI and trusty cluster-helper scripts to use kubedns instead of skydns.
Automatic merge from submit-queue
kubelet e2e: set cpu/memory limits for docker 1.11
Docker 1.11 consumes more memory. Bump the limit to fix the tests. Also add
new limits for the 100-pod resource usage tracking test.
This fixes #26495
Automatic merge from submit-queue
AWS: ELB proxy protocol support via annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol
This is a ~~work in progress~~ branch that adds support for the Proxy Protocol with Elastic Load Balancers. The proxy protocol is documented here: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt. It allows us to pass the "real IP" address of a client to pods behind services.
As it stands now, we create an ELB policy on the load balancer that enables the proxy protocol. We then enumerate each node port assigned to the load balancer and add our newly created policy to it. The manual process is documented here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
Right now, I’m looking to get some feedback on the approach before I dive much deeper into the code. More precisely, I have questions regarding the following:
1) Right now I just check that a certain annotation exists on the service regardless of what its value is. Assuming we’re going to enable this feature via an annotation, what is the expected experience? This decision likely depends on the answers to the next questions.
2) Right now the implementation enables the proxy protocol on every ELB backend. The actual ELB API expects you to add the policy for each configured backend. Do we want the ability to configure the proxy protocol on a per-service-port basis? For example, if a service exposes TCP 80 and 443, would we want the ability to only enable the proxy protocol on port 443? Does this overcomplicate the implementation? If we wanted to go in this direction, we could do something like the following (see the parsing sketch after this list):
```
{
  "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "tcp:80,tcp:443"
}
```
3) I avoided this because I was concerned with scope creep and our organization doesn’t need it, but could/should our implementation be adjusted to just handle ELB policies in general? I hadn’t used the ELB API until I started working on this branch so I don’t know how realistic this is. I also don't know how common this use case is as our organization has used our own load balancing setup prior to Kubernetes. This page has a couple of examples at the bottom: http://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer-policy.html
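For illustration, here is how the per-port annotation value proposed in (2) might be parsed; the `"tcp:80,tcp:443"` format is only a suggestion from this PR discussion, not an agreed-upon API:

```go
// Illustrative only: turn an annotation value like "tcp:80,tcp:443" into
// the set of service ports that should get the proxy-protocol policy.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseProxyProtocolPorts(value string) (map[int]bool, error) {
	ports := make(map[int]bool)
	for _, entry := range strings.Split(value, ",") {
		parts := strings.SplitN(strings.TrimSpace(entry), ":", 2)
		if len(parts) != 2 || parts[0] != "tcp" {
			return nil, fmt.Errorf("unrecognized entry %q", entry)
		}
		port, err := strconv.Atoi(parts[1])
		if err != nil {
			return nil, fmt.Errorf("invalid port in %q: %v", entry, err)
		}
		ports[port] = true
	}
	return ports, nil
}

func main() {
	ports, err := parseProxyProtocolPorts("tcp:80,tcp:443")
	fmt.Println(ports, err)
}
```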
cc @justinsb