Automatic merge from submit-queue
Don't overwrite FLANNEL_OTHER_NET_CONFIG in ubuntu config
Make it easier to pass options to flannel through environment variables.
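A minimal sketch of the intended usage, assuming the ubuntu scripts keep any `FLANNEL_OTHER_NET_CONFIG` already set in the caller's environment; the value below is illustrative, not taken from this PR:
```sh
# Hypothetical invocation: export extra flannel network config before running
# the ubuntu deploy scripts, so config-default.sh no longer clobbers it.
export FLANNEL_OTHER_NET_CONFIG=', "SubnetLen": 24'
KUBERNETES_PROVIDER=ubuntu ./cluster/kube-up.sh
```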
Automatic merge from submit-queue
WIP: Remove the legacy networking mode
**What this PR does / why we need it**:
Removes the deprecated configure-cbr0 flag and networking mode, to avoid carrying untested and possibly unstable code in the kubelet; see #33789
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
fixes #30589, fixes #31937
**Special notes for your reviewer**: There are a lot of deployments that rely on this networking mode. Not sure how we should deal with that: force a switch to kubenet, or just delete the old deployment?
But please review the code changes first (the first commit)
**Release note**:
```release-note
Removed the deprecated kubelet --configure-cbr0 flag, and with that the "classic" networking mode as well
```
PTAL @kubernetes/sig-network @kubernetes/sig-node @mikedanese
Automatic merge from submit-queue
Added an option to cluster/ubuntu to specify the flannel backend
```release-note
```
Generalized the cluster/ubuntu scripting so that there is a way to
specify the Flannel "backend" to use.
Also updated the default setting of ADMISSION_CONTROL, to match that
recommended for the latest release in
http://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use,
and updated the comment on that setting to explain it.
Also made `cluster/ubuntu/reconfDocker.sh` sensitive to the `DEBUG` envar.
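A minimal sketch of how these knobs could be driven from the environment when bringing up the ubuntu cluster; `FLANNEL_BACKEND` and `DEBUG` follow the description above, and the values are illustrative:
```sh
# Hypothetical usage of the settings described above.
export FLANNEL_BACKEND=vxlan   # choose the flannel backend instead of the default
export DEBUG=true              # make cluster/ubuntu/reconfDocker.sh more verbose
KUBERNETES_PROVIDER=ubuntu ./cluster/kube-up.sh
```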
Automatic merge from submit-queue
Do not cache hyperkube package installation
**What this PR does / why we need it**:
The hyperkube build process could use a cached layer containing out-of-date packages. For example, the v1.4.0 image contains packages with security vulnerabilities whose fixes should have been available as of the release build date.
This was surfaced by quay.io/clair scanning of the hyperkube images:
17bc61b54e047da8cd1b23316aac6961db?tab=vulnerabilities
This patch adds a cache-busting comment to the RUN command that installs/updates packages.
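A sketch of the cache-busting idea (not the actual Dockerfile contents): any textual change to the RUN instruction, such as bumping a dated comment, changes the instruction's cache key, so Docker rebuilds that layer instead of reusing a stale one.
```sh
# Hypothetical excerpt; in the Dockerfile this command is prefixed with RUN.
# Bumping the CACHEBUST string below forces the package update to re-run.
echo "CACHEBUST 2016-10-19" && apt-get update && apt-get dist-upgrade -y
```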
Automatic merge from submit-queue
Elasticsearch and Kibana update
```release-note
Updated Elasticsearch image from version 1.5.1 to version 2.4.1. Updated Kibana image from version 4.0.2 to version 4.6.1.
```
Updated the Elasticsearch and Kibana images, and made the image versions match the Elasticsearch/Kibana versions they contain.
ref #19149
Automatic merge from submit-queue
Added INSTANCE_PREFIX to project hash to avoid S3 bucket clash
**What this PR does / why we need it**:
Fixes an issue where, if you run multiple k8s clusters in the same region, S3 resources are overwritten and node bootstrapping therefore stalls, e.g. when using auto scaling.
**Special notes for your reviewer**:
By adding the `INSTANCE_PREFIX` to the project hash used for the S3 bucket, each cluster gets its own bucket and the contents are no longer overwritten.
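A minimal sketch of the idea, assuming the staging bucket name is derived from a hash of the account/project identity; the variable names and hashing details are illustrative, not the actual kube-up code:
```sh
# Hypothetical sketch: fold INSTANCE_PREFIX into the hash that names the S3
# staging bucket, so two clusters in the same account/region get distinct buckets.
project_hash=$(echo -n "${AWS_ACCOUNT}:${INSTANCE_PREFIX}" | md5sum | cut -c1-32)
AWS_S3_BUCKET="kubernetes-staging-${project_hash}"
```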
**Release note**:
```release-note
```
Automatic merge from submit-queue
Make Rackspace deploy scripts compatible with Kubernetes v1.3.0
* Use the current stable CoreOS image
* Switch to etcd2
* Launch flanneld on master to make nodes accessible
* Generate Service Account certificate and enable admission controls
Automatic merge from submit-queue
Delete all deployments when tearing down cluster live resources
Delete all deployments when tearing down a cluster's live resources.
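A minimal sketch of the kind of cleanup step this adds, assuming the teardown script iterates over namespaces with kubectl; the `KUBECTL` variable and flags are illustrative:
```sh
# Hypothetical teardown snippet: remove every Deployment in every namespace
# before deleting the remaining cluster resources.
for ns in $("${KUBECTL}" get namespaces -o name | cut -d/ -f2); do
  "${KUBECTL}" delete deployments --all --namespace="${ns}"
done
```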
Automatic merge from submit-queue
Upgrade addon-manager with kubectl apply
The first step of #33698.
Use `kubectl apply` to replace addon-manager's previous logic.
The most important issue this PR targets is the upgrade from 1.4 to 1.5; the procedure is as below (a condensed shell sketch follows the two lists):
1. Precondition: After the master is upgraded, new addon-manager starts and all the old resources on nodes are running normally.
2. Annotate the old ReplicationController resources with `kubectl.kubernetes.io/last-applied-configuration=""`
3. Call `kubectl apply --prune=false` on addons folder to create new addons, including the new Deployments.
4. Wait for one minute for the new addons to be spun up.
5. Enter the periodical loop of `kubectl apply --prune=true`. The old RCs will be pruned at the first call.
Procedure of a normal startup:
1. Addon-manager starts and no addon resources are running.
2. Annotate nothing.
3. Call `kubectl apply --prune=false` to create all new addons.
4. No need to explain the rest.
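A condensed shell sketch of both procedures, assuming the addon manifests live under one directory and the label/annotation names shown below; this is illustrative, not the actual addon-manager script:
```sh
# Hypothetical sketch of the addon-manager flow described above.
ADDON_PATH=/etc/kubernetes/addons

# Upgrade step 2: mark pre-existing RCs so `kubectl apply --prune` will manage them.
kubectl annotate --overwrite rc --namespace=kube-system -l kubernetes.io/cluster-service=true \
  kubectl.kubernetes.io/last-applied-configuration=''

# Step 3: create/update all addons without pruning yet.
kubectl apply -f "${ADDON_PATH}" --namespace=kube-system --prune=false

# Step 4: give the new Deployments a minute to come up.
sleep 60

# Step 5: periodic reconcile; the first pruning pass deletes the old RCs.
while true; do
  kubectl apply -f "${ADDON_PATH}" --namespace=kube-system \
    --prune=true -l kubernetes.io/cluster-service=true
  sleep 60
done
```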
Remaining issues:
- Need to add `--type` flag to `kubectl apply --prune`, mentioned [here](https://github.com/kubernetes/kubernetes/pull/33075#discussion_r80814070).
- This addon manager is not working properly with the current heapster Deployment, which runs [addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer) in the same pod and changes the resource limit configuration through the apiserver. `kubectl apply` fights with the addon-resizer. Maybe we should remove the initial resource limit field from the configuration file for this specific Deployment, as we removed the replica count.
@mikedanese @thockin @bprashanth
---
Below are some logical points that may need clarification; feel free to **OMIT** them as they are verbose:
- For upgrade, the old RCs will not fight with the new Deployments during the overlap period, even if they use the same labels in the pod template:
  - A Deployment will not recognize the old pods, because it needs to match an additional "pod-template-hash" label.
  - A ReplicationController will not manage the new pods (created by the Deployment) because of the [`controllerRef`](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/controller-ref.md) feature.
- As we are moving all addons to Deployments, all old RCs will be removed. Attaching the empty annotation to the RCs is only to let `kubectl apply --prune` recognize them; the content does not matter.
- We might need to also annotate other resource types if we plan to upgrade them in the 1.5 release:
  - They don't need this placeholder annotation attached if they keep the same name; `kubectl apply` can recognize them by name/type/namespace.
  - Otherwise, attaching empty annotations to them will still work. Since the plan is to use a label selector for the annotate step, some unrelated old resources may also get empty annotations attached; they fall into one of the two cases below:
    - Resources that need to be bumped to a newer version (mainly due to a significant update, such as changing disallowed fields, that cannot be handled by the update path of `kubectl apply`) are fine with this placeholder annotation, as the old resources will be deleted and new resources created. The content of the annotation does not matter.
    - Resources that need to stay under the management of `kubectl apply` are also fine, since `kubectl apply` will [generate a 3-way merge patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strategicpatch/patch.go#L1202-L1226); this empty annotation is harmless.
Automatic merge from submit-queue
Fixed downloading of flannel 0.6.x releases in the ubuntu installer; 0.5.x works as well
**What this PR does / why we need it**:
This PR fixes compatibility of the ubuntu installer with flannel releases 0.6.0 and 0.6.1, where the download URL changed.
**Release note**:
```release-note
NONE
```
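A minimal sketch of the kind of version branch the installer needs; the exact asset names below are illustrative of the URL layout change, not copied from the script:
```sh
# Hypothetical sketch: pick the flannel download URL by release series, since the
# published tarball name changed between 0.5.x and 0.6.x.
FLANNEL_VERSION=${FLANNEL_VERSION:-0.6.1}
case "${FLANNEL_VERSION}" in
  0.5.*) FLANNEL_URL="https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz" ;;
  *)     FLANNEL_URL="https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-v${FLANNEL_VERSION}-linux-amd64.tar.gz" ;;
esac
curl -L "${FLANNEL_URL}" -o flannel.tar.gz
```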
Automatic merge from submit-queue
Fix sed command that fails on macOS
The bash command `sed -i ...` fails on macOS; it should be `sed -i.back ...`.
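For context: GNU sed (Linux) accepts `-i` with no backup suffix, while BSD sed (macOS) requires a suffix argument, so always passing one works on both.
```sh
sed -i 's/foo/bar/' file.txt        # fails on macOS: -i consumes the script as the backup suffix
sed -i.back 's/foo/bar/' file.txt   # works with both GNU and BSD sed; leaves file.txt.back behind
rm -f file.txt.back                 # remove the backup file afterwards if it is not wanted
```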
Automatic merge from submit-queue
Delete all firewall rules (and optionally network) on GCE/GKE cluster teardown
Not entirely ready for review yet; I want to see what Jenkins thinks of this.
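A minimal sketch of what such a teardown needs to do with standard gcloud commands; the network name and filter expression are illustrative, not the actual kube-down code:
```sh
# Hypothetical sketch: delete every firewall rule attached to the cluster's
# network, then (optionally) delete the network itself.
NETWORK=${NETWORK:-my-cluster-network}
for rule in $(gcloud compute firewall-rules list \
    --filter="network:${NETWORK}" --format="value(name)"); do
  gcloud compute firewall-rules delete "${rule}" --quiet
done
gcloud compute networks delete "${NETWORK}" --quiet
```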
Automatic merge from submit-queue
Update `gcloud docker` commands to use `gcloud docker -- ARGS`
We can then avoid the following warning:
```
WARNING: The '--' argument must be specified between gcloud specific args on the left and DOCKER_ARGS on the right. IMPORTANT: previously, commands allowed the omission of the --, and unparsed arguments were treated as implementation args. This usage is being deprecated and will be removed in March 2017.
This will be strictly enforced in March 2017. Use 'gcloud beta docker' to see new behavior.
```
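For example, a push changes shape like this (the image name is illustrative):
```sh
# Old form, which now triggers the deprecation warning:
gcloud docker push gcr.io/my-project/kube-apiserver:v1.4.0
# New form, with "--" separating gcloud's own args from the docker args:
gcloud docker -- push gcr.io/my-project/kube-apiserver:v1.4.0
```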
Automatic merge from submit-queue
log-dump.sh: Fix kubemark log-dump.sh
**What this PR does / why we need it**: Using `log-dump.sh` with the `kubemark` synthetic provider is broken.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #34446
Automatic merge from submit-queue
Update the series and the README to reflect the change.
This PR updates the juju charm code to support the latest series (xenial 16.04). We changed the README to reflect this change and how that changes the juju commands.
fixes #30373
`release-note-none`
Automatic merge from submit-queue
Add gcl cluster logging test
This PR changes the default logging destination for tests to GCP and adds a test for cluster logging using Google Cloud Logging.
Fixes #20760
Automatic merge from submit-queue
[WIP] AWS compatibility for federation cluster and e2e
I've been testing this and have reached a point where the e2e tests run, and some test failures are popping up that do not appear to be related to anything AWS-specific.
```sh
SSSSSSSSSSSSSSSS
Summarizing 5 Failures:
[Fail] [k8s.io] [Feature:Federation] Federated Services DNS [BeforeEach] should be able to discover a federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federation-util.go:233
[Fail] [k8s.io] [Feature:Federation] Federated Services Service creation [It] should create matching services in underlying clusters
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federation-util.go:233
[Fail] [k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses [It] should create and update matching ingresses in underlying clusters
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-ingress.go:289
[Fail] [k8s.io] [Feature:Federation] Federated Services DNS [BeforeEach] non-local federated service [Slow] missing local service should never find DNS entries for a missing local service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federation-util.go:233
[Fail] [k8s.io] [Feature:Federation] Federated Services DNS [BeforeEach] non-local federated service should be able to discover a non-local federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federation-util.go:233
Ran 16 of 383 Specs in 519.872 seconds
FAIL! -- 11 Passed | 5 Failed | 1 Pending | 366 Skipped --- FAIL: TestE2E (519.89s)
```
\cc @quinton-hoole @madhusudancs for advice. Should I investigate further?