Automatic merge from submit-queue
Migrates addons from RCs to Deployments
Fixes #33698.
The following addons are being migrated:
- kube-dns
- GLBC default backend
- Dashboard UI
- Kibana
For the new Deployments, the version suffixes are removed from their names. Version-related labels are also removed because they are confusing and no longer needed given how Deployments and the new Addon Manager work.
The `replicas` field in the `kube-dns` Deployment manifest is removed in preparation for the upcoming DNS horizontal autoscaling feature (#33239).
The `replicas` field in the Dashboard Deployment manifest is also removed, because the rescheduler e2e test scales it manually.
Some resource-limit-related fields in `heapster-controller.yaml` are removed, as they will be set by the `addon resizer` containers. Detailed reasoning is in #34513.
Three e2e tests are modified:
- `rescheduler.go`: Changed to resize Dashboard UI Deployment instead of ReplicationController.
- `addon_update.go`: Some namespace-related changes to make it compatible with the new Addon Manager.
- `dns_autoscaling.go`: Changed to examine kube-dns Deployment instead of ReplicationController.
All of the above tests passed on my own cluster. The upgrade process from old addons with RCs to new addons with Deployments was also tested and worked as expected.
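For reference, a quick way to sanity-check the migrated addons on a live cluster; this is a minimal sketch, and the resource names below are assumptions for illustration rather than taken verbatim from the manifests:
```
# The addons should now show up as Deployments (no version suffix in the name),
# and the old suffixed ReplicationControllers should be gone after the upgrade.
kubectl get deployments --namespace=kube-system
kubectl get rc --namespace=kube-system

# Scale the Dashboard Deployment the way the rescheduler e2e test now does
# ("kubernetes-dashboard" is an assumed name for illustration):
kubectl scale deployment kubernetes-dashboard --namespace=kube-system --replicas=2
```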
The last commit upgrades the Addon Manager to v6.0. It is still a work in progress, currently waiting for #35220 to be finished. (The Addon Manager image in use comes from a non-official registry; it mostly works, except for some corner cases.)
@piosz @gmarek could you please review the heapster part and the rescheduler test?
@mikedanese @thockin
cc @kubernetes/sig-cluster-lifecycle
---
Notes:
- The kube-dns manifest still uses the *-rc.yaml file name for the new Deployment. The stale file names are kept here to speed up review; a follow-up PR may reorganize kube-dns's file names after this.
- The Heapster Deployment keeps its old-style name (with the `-v1.2.0` suffix) to avoid having to describe this upgrade transition explicitly. This way we don't need to attach fake apply labels to the old Deployments.
Automatic merge from submit-queue
Use new fluentd-gcp image version
In #35618 we built a new version of the fluentd agent, which includes a new version of jemalloc; this change switches to that image.
Additionally, we came up with a hacky way to encourage the Ruby GC to run more often, using the RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR variable.
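For illustration, the tuning amounts to setting one environment variable before the agent starts; a minimal sketch, where the value and launch command are placeholders rather than what actually ships in the image:
```
# Lower the old-object growth factor (Ruby's default is 2.0) so major GCs run
# more often and memory is returned sooner. 0.9 is a placeholder value.
export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9
exec fluentd -c /etc/fluent/fluent.conf   # placeholder launch command
```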
@piosz
Automatic merge from submit-queue
Add support for vSphere cloud provider in kube-up
**What this PR does / why we need it**:
The vSphere cloud provider added in 1.3 was not configured when deploying via kube-up.
**Release note**:
```release-note
Add support for vSphere Cloud Provider when deploying via kubeup on vSphere.
```
When deploying on vSphere using kube-up, add configuration for the vSphere cloud provider.
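For context, a minimal sketch of the kube-up invocation this change targets, assuming the standard `KUBERNETES_PROVIDER` switch and that vSphere credentials/config have been set up as usual beforehand:
```
# Bring up a cluster on vSphere; with this change the nodes come up with the
# vSphere cloud provider configured.
export KUBERNETES_PROVIDER=vsphere
cluster/kube-up.sh
```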
Automatic merge from submit-queue
Bump glbc version to 0.8.0
Picks up k8s.io godeps for v1.4, thereby fixing an int overflow bug in the upstream delayed-workqueue pkg. Without this, the controller spams logs with retries in the "soft error" case, which is easy to hit when users, e.g., create Ingresses that point to non-existent Services.
Should go into 1.4.1, because 1.4.0 is pretty much out at this point.
https://github.com/kubernetes/kubernetes/issues/33279
Automatic merge from submit-queue
Enable hostpath provisioner for vagrant environment
This flag is required to run e2e tests for certain features (petset), and for manual tests and debugging.
related: https://github.com/kubernetes/kubernetes/issues/32119
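A minimal sketch of how this would be used, assuming the flag is exposed as a kube-up environment variable like the other cluster options:
```
# Bring up the vagrant cluster with the hostpath provisioner enabled
# (ENABLE_HOSTPATH_PROVISIONER as an env var is an assumption here).
ENABLE_HOSTPATH_PROVISIONER=true KUBERNETES_PROVIDER=vagrant cluster/kube-up.sh
```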
Tell systemd to keep trying to restart kubelet without limit. Without
this change, at some stage systemd will stop trying to restart kubelet
and mark it failed.
These are the settings we're using elsewhere (e.g. Docker).
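For illustration, a sketch of the kind of systemd drop-in that achieves this; the file path and values are illustrative, not copied from this change:
```
# Restart kubelet forever; StartLimitInterval=0 disables systemd's restart
# rate limit, so the unit is never marked failed for restarting too often.
cat > /etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
[Service]
Restart=always
RestartSec=10
StartLimitInterval=0
EOF
systemctl daemon-reload
systemctl restart kubelet
```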
It is required to run automated tests for certain features (petset),
and for manual tests and debugging.
Change-Id: I9203aab6d67c8ff0cc4574473e8d0af888fe1804
Automatic merge from submit-queue
Update container image version for downward api volume tests
Some tests were using 0.7, and some were using 0.6, so updating all to 0.7.
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Use etcd 2.3.7
This will switch to etcd 2.3.7 for release 1.4, to resolve issues rolling back from 1.4 to 1.3 (while preventing those same issues rolling back to 1.4.0 from a release including etcd 3.0.x).
Fixes #32253.
See #32253 (comment) for etcd roadmap.
Automatic merge from submit-queue
Configure webhook
**What this PR does / why we need it**: this configures the image policy webhook + admission controller for gce/gci.
addresses: #22888
**Release note**:
```release-note
Configure image verification admission controller and webhook on gce.
```
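For orientation, the controller is switched on via apiserver flags roughly like the following; an illustrative sketch, not the exact gce/gci wiring from this PR (the admission chain and config file path are assumptions):
```
# ImagePolicyWebhook is enabled in the admission chain, and its config file
# points the controller at the image verification webhook.
kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ImagePolicyWebhook \
  --admission-control-config-file=/etc/kubernetes/admission-control-config.json
```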
Automatic merge from submit-queue
Update core etcd references to use 3.0.4
This updates the core references to use 3.0.4.
There are still legacy references in the code base that should be cleaned up or simply removed, but I'm reluctant to purge them.
/cc @kubernetes/sig-scalability
Automatic merge from submit-queue
Add support for kube-up.sh to deploy Calico network policy to GCI masters
Also removes the requirement for calicoctl on Debian / salt-installed nodes and cleans things up a little by deploying calico-node with a manifest rather than via calicoctl. This also makes it more reliable by retrying properly.
How to use:
```
make quick-release
NETWORK_POLICY_PROVIDER=calico cluster/kube-up.sh
```
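Once the cluster is up, one way to check that the policy components started; the namespace and label selector below are assumptions, not taken from the manifests:
```
# calico-node should be running on each node (the label is an assumption).
kubectl get pods --namespace=kube-system -l k8s-app=calico-node -o wide
```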
One place where I was uncertain:
- CPU allocations (on the master particularly, where there's very little spare capacity). I took some from etcd, but if there's a better way to decide this, I'm happy to change it.