Automatic merge from submit-queue
Add generic NoExecute Toleration to NPD
Ref. #44445
cc @davidopp
```release-note
Add generic Toleration for NoExecute Taints to NodeProblemDetector
```
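For readers unfamiliar with the shape of such a toleration, the stanza added to a Pod/DaemonSet spec looks roughly like the sketch below (standard Pod spec fields; the exact NPD manifest change may differ slightly):
```
# Sketch: a generic toleration that matches any NoExecute taint.
# These are standard Kubernetes Pod spec fields, not the literal NPD diff.
cat <<'EOF'
tolerations:
- operator: "Exists"    # match taints of any key/value
  effect: "NoExecute"   # ...as long as their effect is NoExecute
EOF
```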
Automatic merge from submit-queue
fix typo in build.sh
**What this PR does / why we need it**:
fix typo in build.sh
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
proxy_handler now uses the endpoint router to map the cluster IP to
appropriate endpoint (Pod) IP for the given resource.
Added code to allow aggregator routing to be optional.
Updated bazel build.
Fixes to cover JLiggit comments.
Added util ResourceLocation method based on Listers.
Fixed issues from verification steps.
Updated to add an interface to obfuscate some of the routing logic.
Collapsed cluster IP resolution into the aggregator routing
implementation.
Added 2 simple unit tests for ResolveEndpoint
Automatic merge from submit-queue (batch tested with PRs 46501, 45944, 46473)
Enable the ip-masq-agent on GCE installs
Setting this will trigger cluster/addons/ip-masq-agent/ip-masq-agent.yaml to be installed as an addon, which disables IP masquerade for all of RFC 1918, rather
than just 10.0.0.0/8.
Because the flag defaulted to 10.0.0.0/8, we can't just change the default. I think anyone who needs IP masquerade set up should probably use this instead.
@justinsb @kubernetes/sig-cluster-lifecycle-misc
Fixes #11204
@dnardo - any reason not to do this?
Release Note:
```release-note
GCE installs will now avoid IP masquerade for all RFC-1918 IP blocks, rather than just 10.0.0.0/8. This means that clusters can
be created in 192.168.0.0/16 and 172.16.0.0/12 while preserving the container IPs (which would be lost before).
```
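For reference, the ip-masq-agent reads its non-masquerade ranges from a small YAML file (typically mounted from a ConfigMap); a hedged sketch covering all three RFC 1918 blocks might look like this (key names are assumptions based on the agent's documented format, not copied from this PR):
```
# Sketch (hedged): ip-masq-agent config listing all RFC 1918 ranges as
# non-masquerade CIDRs. Key names are assumptions.
cat <<'EOF' > config
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
resyncInterval: 60s
EOF
# Typically consumed via a ConfigMap, e.g.:
# kubectl -n kube-system create configmap ip-masq-agent --from-file=config
```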
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)
Bump Go version to 1.8.3
This PR also removes the patched version of Go 1.8.1 that we had been using to work around a performance problem in Go 1.8.1.
Fix https://github.com/kubernetes/kubernetes/issues/45216
Ref #46391
@timothysc @bradfitz
Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)
Bump elasticsearch and kibana to 5.4.0
**What this PR does / why we need it**: Updates the Elasticsearch and Kibana docker image assets to version 5.4.0
**Release note**:
```release-note
Upgrade Elasticsearch Addon to v5.4.0
```
Setting this will trigger
cluster/addons/ip-masq-agent/ip-masq-agent.yaml to be installed as an
addon, which disables IP masquerade for all of RFC 1918, rather
than just 10.0.0.0/8.
Automatic merge from submit-queue (batch tested with PRs 44774, 46266, 46248, 46403, 46430)
kube-proxy: ratelimit runs of iptables by sync-period flags
This bounds how frequently iptables can be synced: by default, it will be synced no more often than every 10 seconds and no less often than every minute.
@timothysc FYI
@dcbw @freehan FYI
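As a concrete illustration, the bounds correspond to the kube-proxy sync-period flags roughly as follows (a sketch; confirm the flag names with `kube-proxy --help` on your branch):
```
# Sketch: bounding how often kube-proxy syncs iptables rules.
# --iptables-min-sync-period: never sync more often than this.
# --iptables-sync-period:     always sync at least this often.
kube-proxy \
  --iptables-min-sync-period=10s \
  --iptables-sync-period=1m
```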
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
GCE - Retrieve subnetwork name/url from gce.conf
**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is type manual.
**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not provided through this feature. Users must specify the subnetwork name/url through the gce.conf.
Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs.
**Release note**:
```release-note
NONE
```
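A hedged sketch of the gce.conf stanza this enables (field names are my best recollection of the gcfg keys; verify against the cloud provider source):
```
# Sketch (hedged): pointing the GCE cloud provider at an explicit subnetwork
# for LB creation on a manual-mode network. Key names and path are assumptions.
cat <<'EOF' > /etc/gce.conf
[global]
project-id      = my-project
network-name    = my-custom-network
subnetwork-name = my-subnet-in-region
EOF
```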
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Create a subnet for reserving the service cluster IP range
This will be done if IP aliases are enabled on GCP.
```release-note
NONE
```
Automatic merge from submit-queue
Allow the /logs handler on the apiserver to be toggled.
Adds a flag to kube-apiserver, and plumbs through an environment variable in configure-helper.sh
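If memory serves, the toggle surfaces as an `--enable-logs-handler`-style flag; a hedged sketch of disabling it (the flag name is an assumption, confirm with `kube-apiserver --help`):
```
# Sketch (hedged): turning off the /logs handler. Flag name is an assumption.
kube-apiserver --enable-logs-handler=false   # plus your usual apiserver flags
```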
Automatic merge from submit-queue
Enable "kick the tires" support for Nvidia GPUs in COS
This PR provides an installation daemonset that will install Nvidia CUDA drivers on Google Container Optimized OS (COS).
User-space libraries and debug utilities from the Nvidia driver installation are made available in a special directory on the host:
* `/home/kubernetes/bin/nvidia/lib` for libraries
* `/home/kubernetes/bin/nvidia/bin` for debug utilities
Containers that run CUDA applications on COS are expected to consume the libraries and debug utilities (if necessary) from the host directories using `HostPath` volumes.
Note: This solution requires updating Pod Spec across distros. This is a known issue and will be addressed in the future. Until then CUDA workloads will not be portable.
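As a hedged sketch of what such a Pod spec looks like today (the image name and container-side mount paths are illustrative assumptions):
```
# Sketch (hedged): a CUDA Pod consuming the Nvidia user-space libraries and
# debug utilities from the COS host via HostPath volumes. The image name and
# container-side paths are illustrative assumptions.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cuda-example
spec:
  containers:
  - name: cuda-app
    image: example/cuda-app:latest   # hypothetical CUDA application image
    env:
    - name: LD_LIBRARY_PATH
      value: /usr/local/nvidia/lib
    volumeMounts:
    - name: nvidia-libs
      mountPath: /usr/local/nvidia/lib
    - name: nvidia-debug-tools
      mountPath: /usr/local/nvidia/bin
  volumes:
  - name: nvidia-libs
    hostPath:
      path: /home/kubernetes/bin/nvidia/lib
  - name: nvidia-debug-tools
    hostPath:
      path: /home/kubernetes/bin/nvidia/bin
EOF
```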
This PR updates the COS base image version to m59. The image update is coupled with the GPU changes in this PR for the following reasons:
1. Driver installation requires disabling a kernel feature in COS.
2. The kernel API for disabling this interface changed across COS versions
3. If the COS image update is not handled in this PR, then a subsequent COS image update will break GPU integration and will require an update to the installation scripts in this PR.
4. Instead of having to post `3` PRs, one each for adding the basic installer, updating COS to m59, and then updating the installer again, this PR combines all the changes to reduce review overhead and latency, and additional noise that will be created when GPU tests break.
**Try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1" cluster/kube-up.sh`
4. `kubectl create -f cluster/gce/gci/nvidia-gpus/cos-installer-daemonset.yaml`
5. Run your CUDA app in a pod.
**Another option is to run an e2e manually to try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1"`
4. `go run hack/e2e.go -- --up`
5. `hack/ginkgo-e2e.sh --ginkgo.focus="\[Feature:GPU\]"`
The e2e will install the drivers automatically using the daemonset and then run test workloads to validate driver integration.
TODO:
- [x] Update COS image version to m59 release.
- [x] Remove sleep from the install script and add it to the daemonset
- [x] Add an e2e that will run the daemonset and run a sample CUDA app on COS clusters.
- [x] Setup a test project with necessary quota to run GPU tests against HEAD to start with https://github.com/kubernetes/test-infra/pull/2759
- [x] Update node e2e serial configs to install nvidia drivers on COS by default
Automatic merge from submit-queue (batch tested with PRs 46201, 45952, 45427, 46247, 46062)
remove the elasticsearch template
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Loading file-based index template has been disabled since 2.0.0-beta1 version of Elasticsearch. https://www.elastic.co/guide/en/elasticsearch/reference/2.0/breaking_20_index_api_changes.html#_file_based_index_templates
So `template-k8s-logstash.json` is no longer useful.
On the other hand, as https://github.com/kubernetes/kubernetes/issues/25127 indicated, it might be better to curl the Elasticsearch API to load this template.
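For example, loading it through the index-template API would look roughly like this (service host/port are the addon defaults as I recall them; treat them as assumptions):
```
# Sketch (hedged): pushing the template via the Elasticsearch API instead of
# the removed file-based template. Host, port, and template name are assumptions.
curl -XPUT "http://elasticsearch-logging:9200/_template/k8s-logstash" \
  -H 'Content-Type: application/json' \
  -d @template-k8s-logstash.json
```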
Automatic merge from submit-queue
Add version for fluentd-gcp config
Fluentd-gcp config should be versioned, because otherwise a race can happen during an update and the new pod can mount the old config
Packaged the script as a docker container stored in gcr.io/google-containers
A daemonset deployment is included to make it easy to consume the installer
A cluster e2e has been added to test the installation daemonset along with verifying installation
by using a sample CUDA application.
Node e2e for GPUs updated to avoid running on nodes without GPU devices.
Signed-off-by: Vishnu kannan <vishnuk@google.com>
Automatic merge from submit-queue
Update Calico add-on
**What this PR does / why we need it:**
Updates Calico to the latest version using self-hosted install as a DaemonSet, removes Calico's dependency on etcd.
- [x] Remove [last bits of Calico salt](175fe62720/cluster/saltbase/salt/calico/master.sls (L3))
- [x] Failing on the master since no kube-proxy to access API.
- [x] Fix outgoing NAT
- [x] Tweak to work on both debian / GCI (not just GCI)
- [x] Add the portmap plugin for host port support
Maybe:
- [ ] Add integration test
**Which issue this PR fixes:**
https://github.com/kubernetes/kubernetes/issues/32625
**Try it out**
Clone the PR, then:
```
make quick-release
export NETWORK_POLICY_PROVIDER=calico
export NODE_OS_DISTRIBUTION=gci
export MASTER_SIZE=n1-standard-4
./cluster/kube-up.sh
```
**Release note:**
```release-note
The Calico version included in kube-up for GCE has been updated to v2.2.
```
Automatic merge from submit-queue (batch tested with PRs 44606, 46038)
Add ip-masq-agent addon to the addons folder.
This also ensures that under GCE we add this DaemonSet if the non-masquerade CIDR is set to 0/0.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Add ip-masq-agent addon to the addons folder which is used in GCE if --non-masquerade-cidr is set to 0/0
```
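A hedged example of opting in on GCE, assuming the existing NON_MASQUERADE_CIDR setting in the cluster scripts is what feeds --non-masquerade-cidr:
```
# Sketch (hedged): trigger installation of the ip-masq-agent addon by setting
# the non-masquerade CIDR to 0.0.0.0/0 before bringing the cluster up.
# The env var name is an assumption based on the cluster/gce config scripts.
export NON_MASQUERADE_CIDR="0.0.0.0/0"
./cluster/kube-up.sh
```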
Automatic merge from submit-queue
Make real proxier in hollow-proxy optional (default=true)
Ref https://github.com/kubernetes/kubernetes/pull/45622
This allows using the real proxier for hollow-proxy, but we use the fake one by default.
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
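If I recall correctly the switch is a `--use-real-proxier` flag on the hollow-node binary; a hedged sketch (flag and morph names are assumptions):
```
# Sketch (hedged): explicitly choosing the proxier for a kubemark hollow node.
# Flag names are assumptions; check the hollow-node flags on your branch.
hollow-node --morph=proxy --use-real-proxier=true    # real iptables proxier
hollow-node --morph=proxy --use-real-proxier=false   # fake (no-op) proxier
```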
Automatic merge from submit-queue
Update cluster startup scripts to use gcloud beta for alias IP support
The feature has gone from alpha to beta.
```release-note
NONE
```
Automatic merge from submit-queue
Update dashboard-controller.yaml
**What this PR does / why we need it**: Updates Dashboard addon to newest version. Changelog can be found at https://github.com/kubernetes/dashboard/releases/tag/v1.6.1.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Update Dashboard version to 1.6.1
```
Automatic merge from submit-queue
fix: required openstack heat version for conditions is 2016-10-14 / newton
This fix sets the required heat version to 2016-10-14.
In OpenStack heat the conditions statement was introduced in version 2016-10-14 | newton, according to:
https://docs.openstack.org/releasenotes/heat/newton.html
and more specific:
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html
The conditions are used to make the assignment of public IPs / floating IPs optional (added in commit 4eef540876). However this template is not compatible with OpenStack heat releases prior to newton and produces the following error:
```
ERROR: Failed to validate: : resources.kube_minions: : "condition" is not a valid keyword inside a output definition
```
PR without a release note:
```release-note
NONE
```
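For illustration, a HOT template that uses conditions has to declare at least the Newton template version, e.g.:
```
# Sketch: minimal heat template fragment showing the version required for
# the conditions section (2016-10-14 / Newton).
cat <<'EOF' > stack-with-conditions.yaml
heat_template_version: 2016-10-14   # older versions reject "conditions"

parameters:
  assign_floating_ip:
    type: boolean
    default: false

conditions:
  use_floating_ip: {get_param: assign_floating_ip}
EOF
```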
Automatic merge from submit-queue (batch tested with PRs 45860, 45119, 44525, 45625, 44403)
Support running StatefulSetBasic e2e tests with local-up-cluster
**What this PR does / why we need it**:
Currently StatefulSet(s) fail when you use local-up-cluster without
setting a cloud provider. In this PR, we set the
kubernetes.io/host-path provisioner as the default provisioner when
CLOUD_PROVIDER is not specified. This enables e2e test(s)
(specifically StatefulSetBasic) to work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
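A hedged sketch of exercising this locally (the e2e invocation details are assumptions and may differ on your branch):
```
# Sketch (hedged): bring up a local cluster with no cloud provider so the
# kubernetes.io/host-path provisioner becomes the default, then focus the
# e2e run on the basic StatefulSet tests.
CLOUD_PROVIDER="" hack/local-up-cluster.sh &
# ...wait for the cluster to come up, then:
KUBERNETES_PROVIDER=local hack/ginkgo-e2e.sh --ginkgo.focus="StatefulSetBasic"
```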
Automatic merge from submit-queue (batch tested with PRs 44337, 45775, 45832, 45574, 45758)
Fix lint failures on kubernetes-e2e charm
**What this PR does / why we need it**:
This fixes a test failure on the kubernetes-e2e charm relating to tox and flake8:
```DEBUG: /bin/sh: 1: flake8: not found```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
This is a follow-up to https://github.com/kubernetes/kubernetes/pull/45494 where the same thing was done for kubernetes-master.
**Release note**:
```release-note
Fix lint failures on kubernetes-e2e charm
```
Automatic merge from submit-queue (batch tested with PRs 44337, 45775, 45832, 45574, 45758)
Refactor gcr.io/google_containers/elasticsearch to alpine
**What this PR does / why we need it**:
This reduces the image size of the gcr.io/google_containers/elasticsearch image.
Before:
```
REPOSITORY                                TAG        IMAGE ID       CREATED       SIZE
gcr.io/google_containers/elasticsearch    v2.4.1-2   6941e43df81a   4 weeks ago   419MB
```
After:
```
REPOSITORY                                TAG        IMAGE ID       CREATED             SIZE
gcr.io/google_containers/elasticsearch    v2.4.1-2   24ad40c21a52   About an hour ago   178MB
```
**Special notes for your reviewer**:
I used a workaround to make the elasticsearch_logging_discovery binary work with alpine (see [stackoverflow](https://stackoverflow.com/questions/34729748/installed-go-binary-not-found-in-path-on-alpine-linux-docker/35613430#35613430)). Alternatively this can be solved by setting ```CGO_ENABLED=0``` when compiling the binary. I didn't feel comfortable changing the Makefile though, since I'm no golang expert. Feedback wanted!
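For reference, the two approaches look roughly like this (the symlink target and build invocation are assumptions about the image and Makefile):
```
# Option A (the workaround used here, per the stackoverflow link): give the
# glibc-linked binary a dynamic loader path that exists on alpine/musl.
mkdir -p /lib64
ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

# Option B (the alternative mentioned above): build a static binary so no
# libc is needed at runtime.
CGO_ENABLED=0 go build -a -o elasticsearch_logging_discovery .
```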