Automatic merge from submit-queue
Update StatefulSet example files for 1.5
1. Remove the initialized annotation from StatefulSet examples
2. Update the storage class annotation to beta in StatefulSet examples (sketched below)
3. Remove the alpha limitation on PetSet in the Cassandra example
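For item 2, a minimal sketch of the beta annotation form on a volume claim template, assuming the `volume.beta.kubernetes.io/storage-class` annotation used in 1.5 (the manifest, names, and sizes are illustrative, not taken from the actual example files):
```
kubectl apply -f - <<EOF
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        # beta annotation replacing the earlier alpha form
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
```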
cc @erictune @foxish @kow3ns @enisoc @chrislovecnm @kubernetes/sig-apps
```release-note
NONE
```
Automatic merge from submit-queue
Fixes Addon Manager's pruning issue for old Deployments
Fixes #37641.
Attaches the `last-applied` annotation to the existing Deployments for pruning.
The following images have been built and pushed:
- gcr.io/google-containers/kube-addon-manager:v6.1
- gcr.io/google-containers/kube-addon-manager-amd64:v6.1
- gcr.io/google-containers/kube-addon-manager-arm:v6.1
- gcr.io/google-containers/kube-addon-manager-arm64:v6.1
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.1
@mikedanese
cc @saad-ali @krousey
Automatic merge from submit-queue
Set Dashboard UI version to v1.5.0-beta1
There will be one more such PR for the 1.5 release, in about one week.
Setting the release note to none; notes will be set in the final-version PR.
Github release info:
https://github.com/kubernetes/dashboard/releases/tag/v1.5.0-beta1
Automatic merge from submit-queue
Deploy a default StorageClass instance on AWS and GCE
This needs a newer kubectl in the kube-addon-manager container. It's quite tricky to test, as I cannot push a new container image to gcr.io and must copy the newer container manually.
cc @kubernetes/sig-storage
**Release note**:
```release-note
Kubernetes now installs a default StorageClass object when deployed on AWS, GCE and
OpenStack with kube-up.sh scripts. This StorageClass will automatically provision
a PersistentVolume in the corresponding cloud for a PersistentVolumeClaim that cannot be
satisfied by any existing matching PersistentVolume in Kubernetes.
To override this default provisioning, administrators must manually delete this default StorageClass.
```
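Roughly what such a default StorageClass looks like on GCE, as a hedged sketch (the exact manifest shipped by kube-up.sh may differ; the `is-default-class` annotation is what marks it as the default):
```
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
EOF
```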
Automatic merge from submit-queue
Bumps up Addon Manager to v6.0 with full support of kubectl apply
The following images have been built and pushed:
- gcr.io/google-containers/kube-addon-manager:v6.0
- gcr.io/google-containers/kube-addon-manager-amd64:v6.0
- gcr.io/google-containers/kube-addon-manager-arm:v6.0
- gcr.io/google-containers/kube-addon-manager-arm64:v6.0
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.0
The actual change is upgrading the kubectl version from `v1.5.0-alpha.1` to `v1.5.0-beta.1`, which was released today.
@mikedanese
@saad-ali This needs to get into 1.5 because Addon Manager v6.0-alpha.1 (currently in use) does not have full support for `kubectl apply --prune`.
Automatic merge from submit-queue
fix: elasticsearch template mapping to parse kubernetes.labels
**What this PR does / why we need it**:
This PR updates the field mappings for the Elasticsearch template that ships with the EFK stack implementation.
Specifically, Elasticsearch cannot parse the `kubernetes.labels` object because it attempts to treat it as a string and produces an error. This update treats `kubernetes.labels` as an object and all of the properties within it as strings, allowing accurate indexing and letting users in Kibana search on `kubernetes.labels.*`.
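A hedged sketch of the kind of mapping change this implies, using an Elasticsearch 2.x dynamic template so every `kubernetes.labels.*` field is indexed as a plain string (index pattern, type name, and endpoint are illustrative):
```
curl -XPUT http://localhost:9200/_template/kubernetes-labels -d '{
  "template": "logstash-*",
  "mappings": {
    "fluentd": {
      "dynamic_templates": [
        {
          "kubernetes_labels": {
            "path_match": "kubernetes.labels.*",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ],
      "properties": {
        "kubernetes": {
          "properties": {
            "labels": { "type": "object" }
          }
        }
      }
    }
  }
}'
```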
**Release note**:
```release-note
Fluentd/Elasticsearch add-on: correctly parse and index kubernetes labels
```
Automatic merge from submit-queue
Add kube-proxy logs to fluentd configs
Related to https://github.com/kubernetes/kubernetes/issues/37107
Makes fluentd collect logs from kube-proxy. It's a completely backward-compatible change that does not cause problems currently, so I suggest not bumping the version.
cc @piosz
- Adds command line flags --config-map, --config-map-ns.
- Fixes 36194 (https://github.com/kubernetes/kubernetes/issues/36194)
- Update kube-dns yamls
- Update bazel (hack/update-bazel.sh)
- Update known command line flags
- Temporarily reference new kube-dns image (this will be fixed with
a separate commit when the DNS image is created)
Automatic merge from submit-queue
Update Dashboard UI version to 1.4.2
**What this PR does / why we need it**:
Dashboard 1.4.2 contains a fix for an XSS security bug, so I think it would be prudent to update the Dashboard version 'shipped' with Kubernetes to this version.
**Special notes for your reviewer**:
**Release note**:
- Updated Dashboard version in addons to 1.4.2
Automatic merge from submit-queue
Fix Docker Registry image version to 2.5.1
`registry:2` is constantly being updated with new versions (see https://hub.docker.com/r/library/registry/tags/). This means there's a possibility that the image may change unintentionally: for example, when the Pod is rescheduled on a node that does not already have the image, depending on the time of the pull, `registry:2` may resolve to a different image.
Pin this to the latest `registry:2.5.1` instead to avoid this problem.
@uluyol @freehan
Automatic merge from submit-queue
Migrates addons from RCs to Deployments
Fixes #33698.
Below addons are being migrated:
- kube-dns
- GLBC default backend
- Dashboard UI
- Kibana
For the new Deployments, the version suffixes are removed from their names. Version-related labels are also removed because they are confusing and no longer needed given how Deployments and the new Addon Manager work.
The `replicas` field in the `kube-dns` Deployment manifest is removed for the incoming DNS horizontal autoscaling feature (#33239).
The `replicas` field in the Dashboard Deployment manifest is also removed because the rescheduler e2e test manually scales it.
Some resource-limit-related fields in `heapster-controller.yaml` are removed, as they will be set up by the `addon resizer` containers. Detailed reasons are in #34513.
Three e2e tests are modified:
- `rescheduler.go`: Changed to resize Dashboard UI Deployment instead of ReplicationController.
- `addon_update.go`: Some namespace related changes in order to make it compatible with the new Addon Manager.
- `dns_autoscaling.go`: Changed to examine kube-dns Deployment instead of ReplicationController.
The above tests passed on my own cluster. The upgrade process --- from old addons with RCs to new addons with Deployments --- was also tested and worked as expected.
The last commit upgrades Addon Manager to v6.0. It is still a work in progress and is currently waiting for #35220 to be finished. (The Addon Manager image in use comes from a non-official registry, but it mostly works except for some corner cases.)
@piosz @gmarek could you please review the heapster part and the rescheduler test?
@mikedanese @thockin
cc @kubernetes/sig-cluster-lifecycle
---
Notes:
- The kube-dns manifest still uses the *-rc.yaml name for the new Deployment. The stale file names are preserved here to get a faster review. I may send out a PR to reorganize kube-dns's file names after this.
- The Heapster Deployment's name remains in the old style (with the `-v1.2.0` suffix) to avoid having to describe this upgrade transition explicitly. This way we don't need to attach fake apply labels to the old Deployments.
Automatic merge from submit-queue
Fix startup script bug in kibana image
Big thanks to @lhopki01 for noticing this!
As mentioned in the discussion in https://github.com/kubernetes/kubernetes/pull/36103, the current image crashes if we don't want to work behind a proxy, because of string interpolation in bash.
@piosz
Automatic merge from submit-queue
Fixes token_found bug in addon manager
From #35832.
The above PR exposed the addon manager's logs on Jenkins, and the error below was found in the GCE e2e test artifacts:
```
Error from server: serviceaccounts "default" not found
error executing template "{{with index .secrets 0}}{{.name}}{{end}}": template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil
== default service account in the kube-system namespace has token Error executing template: template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil. Printing more information for debugging the template:
template was:
{{with index .secrets 0}}{{.name}}{{end}}
raw data was:
{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"default","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/serviceaccounts/default","uid":"de3f2f85-9d6a-11e6-9df3-42010af00002","resourceVersion":"48","creationTimestamp":"2016-10-29T00:01:40Z"}}
object given to template engine was:
map[apiVersion:v1 metadata:map[selfLink:/api/v1/namespaces/kube-system/serviceaccounts/default uid:de3f2f85-9d6a-11e6-9df3-42010af00002 resourceVersion:48 creationTimestamp:2016-10-29T00:01:40Z name:default namespace:kube-system] kind:ServiceAccount] ==
```
It seems the script failed to retrieve the service account token on the first attempt and mistakenly used the error message as the token content. Fixed by replacing `|| true` with an if condition, as sketched below.
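A rough sketch of the shape of the fix (variable names and the exact kubectl invocation are illustrative, not the script's actual code): the token is only accepted when the kubectl call succeeds, instead of masking failures with `|| true` and capturing the error text.
```
if token=$(kubectl get serviceaccount default --namespace=kube-system \
    -o go-template='{{with index .secrets 0}}{{.name}}{{end}}' 2>/dev/null); then
  token_found="${token}"
  echo "== default service account in the kube-system namespace has token ${token_found} =="
else
  echo "== default service account token not ready yet; will retry =="
fi
```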
Automatic merge from submit-queue
Update fluentd-gcp configuration
Related to #32762
Though it's not a final solution to the fluentd OOM problems, it increases the number of logs that can be handled without losses by:
- switching to file buffering, making the buffering mechanism more resilient
- decreasing the size of the buffer, reducing the amount of memory needed
- decreasing the number of threads handling the load, since the number of chunks is lower than the previous number of threads
which results in a decrease in theoretical throughput. Tests to confirm the cases covered by this change will follow.
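A hedged sketch of what these buffering changes translate to in the fluentd output configuration (the plugin name follows the GCP output plugin; the path, sizes, and counts are illustrative rather than the exact values used by the addon):
```
cat > /etc/fluent/config.d/output.conf <<'EOF'
<match **>
  type google_cloud
  # file buffering instead of in-memory: more resilient to restarts
  buffer_type file
  buffer_path /var/log/fluentd-buffers/stackdriver.buffer
  # smaller buffer, so less memory is needed
  buffer_chunk_limit 2M
  buffer_queue_limit 24
  # fewer flush threads, matching the lower number of chunks
  num_threads 2
  flush_interval 5s
</match>
EOF
```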
cc @piosz @edsiper @repeatedly please take a look and confirm that all of these changes are meaningful.
Automatic merge from submit-queue
Rename PetSet to StatefulSet in docs and examples.
**What this PR does / why we need it**: Addresses some of the pre-code-freeze changes for implementing the PetSet --> StatefulSet rename. (#35534)
**Special notes for your reviewer**: This PR only changes docs and examples, as #35731 hasn't been merged yet and I don't want to create merge conflicts. I'll open another PR for any remaining code changes needed after that PR is merged. /cc @erictune @janetkuo @chrislovecnm
Automatic merge from submit-queue
cluster-addons: enable Jemalloc for Fluentd based images
**What this PR does / why we need it**:
This pull request includes two patches that enable the recommended use of the Jemalloc memory allocator for container images that are based on Fluentd. The patches apply to the following cluster addons:
- fluentd-es-image
- fluentd-gcp-image
**Which issue this PR fixes**
This PR is part of the solution for issues:
- kubernetes/kubernetes/issues/32762
- GoogleCloudPlatform/fluent-plugin-google-cloud/issues/87
When Fluentd runs in high-load environments, the default operating system memory allocator is likely to generate high fragmentation, ending up in high memory usage. In order to reduce fragmentation and decrease memory usage, an alternative memory allocator such as Jemalloc is used.
![](https://cloud.githubusercontent.com/assets/369718/19498577/eaa9f324-954e-11e6-9a6b-6b30310a66a3.png)
For the record: fluentd-es-image uses the [td-agent](https://docs.treasuredata.com/articles/td-agent) Fluentd package maintained by Treasure Data, which contains Jemalloc 4.2.1 (the latest stable version). The google-fluentd package used in fluentd-gcp-image comes with Jemalloc 2.2.5, which has many known issues; I strongly suggest the google-fluentd package gets updated.
**Special notes for your reviewer**:
@piosz and @Crassirostris have been involved in the research of this topic.
Automatic merge from submit-queue
Fixed kibana image and controller to work through proxy
As described in #34969, the new Kibana image doesn't work properly with proxies without additional configuration.
@piosz
Automatic merge from submit-queue
Update grafana in kubernetes to version 3.1.1
Fixes #33775.
```release-note
Update grafana version used by default in kubernetes to 3.1.1
```
@piosz
Automatic merge from submit-queue
Update elasticsearch and kibana usage
```release-note
Updated default Elasticsearch and Kibana used for elasticsearch logging destination to versions 2.4.1 and 4.6.1 respectively.
```
Updated the controllers for Elasticsearch and Kibana to use newer image versions. Fixed the e2e test because of backward-incompatible Elasticsearch API changes.
Fixed the out-of-sync Elasticsearch controller for CoreOS.
@piosz
Automatic merge from submit-queue
Updated addon manager READMEs
Updates the addon-manager's README, based on the precondition that the addon manager keeps the current "reconciled" pattern instead of "fire-once".
@mikedanese
The current Dockerfile builds an image using the td-agent package, but it lets
the service run with the default memory allocator provided by glibc.
In high-load environments, using a customized memory allocator such as Jemalloc
is highly desirable. Otherwise the service will face high memory fragmentation,
ending up in 'high memory' usage from a monitoring perspective.
The td-agent package installs Jemalloc by default and sets the LD_PRELOAD
environment variable through its init script, but since the service is
launched through Docker, the env variable needs to be set manually.
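A minimal sketch of setting the variable manually for the containerized service, assuming the td-agent embedded Jemalloc path shown in the output below (an `ENV` line in the Dockerfile achieves the same effect):
```
# Preload Jemalloc before launching td-agent so glibc's allocator is not used.
export LD_PRELOAD=/opt/td-agent/embedded/lib/libjemalloc.so.2
exec /usr/sbin/td-agent "$@"
```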
After this patch, when running the td-agent container image it is now possible
to see that Jemalloc is in use:
root@monotop:/proc/18810# cat maps |grep jemall
7f251eddd000-7f251ee1b000 ... /opt/td-agent/embedded/lib/libjemalloc.so.2
7f251ee1b000-7f251f01b000 ... /opt/td-agent/embedded/lib/libjemalloc.so.2
7f251f01b000-7f251f01d000 ... /opt/td-agent/embedded/lib/libjemalloc.so.2
7f251f01d000-7f251f01e000 ... /opt/td-agent/embedded/lib/libjemalloc.so.2
For a reference on the memory usage difference between malloc and Jemalloc,
please refer to the following chart:
https://goo.gl/dVYTmw
Signed-off-by: Eduardo Silva <eduardo@treasure-data.com>
Automatic merge from submit-queue
Elasticsearch and Kibana update
```release-note
Updated Elasticsearch image from version 1.5.1 to version 2.4.1. Updated Kibana image from version 4.0.2 to version 4.6.1.
```
Updated the Elasticsearch and Kibana images. Made the image versions match the Elasticsearch/Kibana versions they contain.
ref #19149
Automatic merge from submit-queue
Upgrade addon-manager with kubectl apply
The first step of #33698.
Use `kubectl apply` to replace addon-manager's previous logic.
The most important issue this PR targets is the upgrade from 1.4 to 1.5. The procedure is as follows:
1. Precondition: after the master is upgraded, the new addon-manager starts and all the old resources on nodes are running normally.
2. Annotate the old ReplicationController resources with kubectl.kubernetes.io/last-applied-configuration=""
3. Call `kubectl apply --prune=false` on the addons folder to create the new addons, including the new Deployments.
4. Wait one minute for the new addons to be spun up.
5. Enter the periodic loop of `kubectl apply --prune=true`. The old RCs will be pruned on the first call (sketched below).
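A rough sketch of what this upgrade sequence amounts to, assuming the addon manifests live under /etc/kubernetes/addons and carry the kubernetes.io/cluster-service=true label (paths, selectors, and timings are illustrative):
```
# Step 2: mark pre-existing RCs so that `kubectl apply --prune` will consider them.
for rc in $(kubectl get rc --namespace=kube-system -o name); do
  kubectl annotate --namespace=kube-system --overwrite "${rc}" \
    kubectl.kubernetes.io/last-applied-configuration=""
done
# Step 3: create the new addons without pruning anything yet.
kubectl apply -f /etc/kubernetes/addons --recursive --prune=false --namespace=kube-system
# Step 4: give the new addons time to come up.
sleep 60
# Step 5: periodic reconcile loop; the first pruning pass deletes the annotated old RCs.
while true; do
  kubectl apply -f /etc/kubernetes/addons --recursive --namespace=kube-system \
    --prune=true -l kubernetes.io/cluster-service=true
  sleep 60
done
```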
Procedure of a normal startup:
1. Addon-manager starts and no addon resources are running.
2. Annotate nothing.
3. Call `kubectl apply --prune=false` to create all new addons.
4. The rest needs no explanation.
Remaining issues:
- Need to add a `--type` flag to `kubectl apply --prune`, mentioned [here](https://github.com/kubernetes/kubernetes/pull/33075#discussion_r80814070).
- This addon manager does not work properly with the current Heapster Deployment, which runs [addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer) in the same pod and changes the resource limit configuration through the apiserver. `kubectl apply` fights with the addon-resizer. Maybe we should remove the initial resource limit fields in the configuration file for this specific Deployment, as we removed the replica count.
@mikedanese @thockin @bprashanth
---
Below are some logical points that may need to be clarified; feel free to **OMIT** them as they are verbose:
- For the upgrade, the old RCs will not fight with the new Deployments during the overlap period, even if they use the same labels in the pod template:
  - A Deployment will not recognize the old pods because it needs to match an additional "pod-template-hash" label.
  - A ReplicationController will not manage the new pods (created by the Deployment) because of the [`controllerRef`](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/controller-ref.md) feature.
- As we are moving all addons to Deployments, all old RCs will be removed. Attaching an empty annotation to the RCs is only for letting `kubectl apply --prune` recognize them; the content does not matter.
- We might also need to annotate other resource types if we plan to upgrade them in the 1.5 release:
  - They don't need this fake annotation attached if they keep the same name; `kubectl apply` can recognize them by name/type/namespace.
  - Otherwise, attaching empty annotations to them will still work. As the plan is to use a label selector for annotating, some innocent old resources may also get empty annotations attached; they fall into the two cases below:
    - Resources that need to be bumped up to a newer version (mainly due to a significant update --- changing disallowed fields --- that could not be handled by the update feature of `kubectl apply`) are fine with this fake annotation, as old resources will be deleted and new ones will be created. The content of the annotation does not matter.
    - Resources that need to stay under the management of `kubectl apply` are also fine, as `kubectl apply` will [generate a 3-way merge patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strategicpatch/patch.go#L1202-L1226); this empty annotation is harmless enough.
Automatic merge from submit-queue
fix sed command run failed on mac os
The bash command `sed -i ...` fails on macOS; it should be `sed -i.back ...`.
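A hedged illustration of the portable form (the file name and expression are made up):
```
# BSD sed on macOS requires an explicit backup suffix after -i; GNU sed accepts it too.
sed -i.back -e 's/old/new/g' cluster/some-script.sh
rm -f cluster/some-script.sh.back
```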
We can then avoid the following warning:
```
WARNING: The '--' argument must be specified between gcloud specific args on the left and DOCKER_ARGS on the right. IMPORTANT: previously, commands allowed the omission of the --, and unparsed arguments were treated as implementation args. This usage is being deprecated and will be removed in March 2017.
This will be strictly enforced in March 2017. Use 'gcloud beta docker' to see new behavior.
```
Signed-off-by: Jess Frazelle <acidburn@google.com>
Automatic merge from submit-queue
Added --log-facility flag to enhance dnsmasq logging
Fixes #31010.
Dnsmasq in the kube-dns pod logs with the default facility, which is somewhat hard to locate. Add the --log-facility=- flag to redirect logs to the standard stream.
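How the flag reads in practice; the other dnsmasq arguments here are illustrative, not the exact ones from the kube-dns manifest:
```
# "--log-facility=-" sends dnsmasq's log stream to stderr, so `kubectl logs` picks it up.
dnsmasq --keep-in-foreground --no-resolv --cache-size=1000 --log-facility=-
```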
@girishkalele
Automatic merge from submit-queue
Use a Deployment for kube-dns
Attempt to fix #31554.
Switching kube-dns from using Replication Controller to Deployment.
The outdated kube-dns YAML files in the coreos and juju dirs are also updated. Most of the specific memory limits in the files remain unchanged because it seems people were modifying them explicitly (c8d82fc2a9). Only the memory limit for healthz is increased due to this pending investigation (#29688).
The YAML files stay in the *-rc.yaml naming format, considering there are a lot of scripts in the cluster and hack dirs using this format. But it may be fine to change them all.
@bprashanth @girishkalele
Automatic merge from submit-queue
Reduce size of images fluentd-gcp and fluentd-elasticsearch
replaces #26652
```
aledbf/fluentd-elasticsearch 1.19 769ece5c8ba8 About an hour ago 269.9 MB
gcr.io/google_containers/fluentd-elasticsearch 1.18 0a8cbfbea7f7 5 weeks ago 530.3 MB
aledbf/fluentd-gcp 1.22 ef979b82a767 About an hour ago 307.9 MB
gcr.io/google_containers/fluentd-gcp 1.21 0ef09b1bcfd7 2 weeks ago 498.5 MB
```
closes #29782
Automatic merge from submit-queue
Add user-specified kubectl arguments to addons start script
This is a simple way, using the same environment variable paradigm used throughout these scripts, to let a user specify kubectl arguments to the addons script.
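A purely hypothetical usage example following that environment-variable paradigm; the actual variable name and script path are whatever the addons start script defines:
```
# KUBECTL_OPTS and the script path below are illustrative placeholders.
KUBECTL_OPTS="--v=2" /etc/kubernetes/kube-addons.sh
```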
fixes #30371
Automatic merge from submit-queue
Add support for kube-up.sh to deploy Calico network policy to GCI masters
Also removes the requirement for calicoctl on Debian / Salt-installed nodes and cleans it up a little by deploying calico-node with a manifest rather than via calicoctl. This also makes it more reliable by retrying properly.
How to use:
```
make quick-release
NETWORK_POLICY_PROVIDER=calico cluster/kube-up.sh
```
One place where I was uncertain:
- CPU allocations (on the master particularly, where there's very little spare capacity). I took some from etcd, but if there's a better way to decide this, I'm happy to change it.
Automatic merge from submit-queue
Add cleanup addon pod to remove empty keys from etcd
Namespace deletion leaves a trace of empty keys in etcd. This PR adds an addon pod to periodically check for those empty keys in etcd and remove them.
fixes #27307
Automatic merge from submit-queue
Bump exechealthz image
With the new image, if we observe an exec container taking more RAM than it should (like the OOM situation, which shouldn't happen today because of the increased limits), we can at least kubectl exec and check the pprof endpoints.
Note that I'm not bumping the rc version, because I just did so with: https://github.com/kubernetes/kubernetes/pull/29693.
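Illustrative only; the pod name, container name, and port are made up, and this assumes the exechealthz container ships a shell and wget:
```
kubectl exec --namespace=kube-system kube-dns-v20-abcde -c healthz -- \
  wget -qO- http://localhost:8081/debug/pprof/heap > heap.pprof
```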
Only run the systemd-journal plugin when on a platform that requests it.
The plugin crashes the fluentd process if the journal isn't present, so
it can't just be run blindly in all configurations.
Automatic merge from submit-queue
Update to dnsmasq:1.3 and make hyperkube always use the latest addons
This bumps dnsmasq to a version that works on all architectures: https://github.com/kubernetes/contrib/pull/1192 (which has to be pushed first, indeed).
I also removed the manifests in the hyperkube addons in favor of machine-generated ones, which will avoid mistakes.
This one is required for `v1.3`, so it has to be cherry-picked, I think...
It makes the docker and docker-multinode addons work again...
(Yes, we'll probably get rid of docker in favor of minikube, but we'll have to have it in this release at least.)
@girishkalele @thockin @ArtfulCoder @david-mcmahon @bgrant0607 @mikedanese
Automatic merge from submit-queue
Re-enable node problem detector by default
Re-enable the node problem detector, started by default in GCE clusters.
For now, on the master node, the node problem detector will be started and do nothing (see https://github.com/kubernetes/node-problem-detector/pull/13).
But in fact, in my test cluster, the master has no extra CPU to run the node problem detector, so the node problem detector is started on all nodes except the master, which is what we want but not what was expected...
@dchen1107
/cc @kubernetes/sig-node
/cc @andyzheng0831 for the gci script change.
Automatic merge from submit-queue
Add collection of the new glbc and cluster-autoscaler logs
I've incremented the version numbers by 2 to avoid conflicting with #26652. I'll make sure the potential conflict between the images gets resolved reasonably.
cc @piosz @bprashanth @aledbf
Automatic merge from submit-queue
Switch DNS addons from skydns to kubedns
Change GCI and trusty cluster-helper scripts to use kubedns instead of skydns.
Unified the skydns templates using a simple underscore-based template and
added sed transform scripts to transform it into salt and sed yaml
templates.
Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns.
Automatic merge from submit-queue
Add node problem detector as an addon pod.
```release-note
Introduce a new add-on pod NodeProblemDetector.
NodeProblemDetector is a DaemonSet running on each node, monitoring node health and reporting
node problems as NodeCondition and Event. Currently it already supports kernel log monitoring, and
will support more problem detection in the future. It is enabled by default on gce now.
```
This PR enables NodeProblemDetector as an add-on pod.
/cc @mikedanese @kubernetes/sig-node
Automatic merge from submit-queue
add index template for es aggregations
This index template helps us do Elasticsearch aggregations on namespace_name, pod_name and container_name. Then, after doing the aggregations, we will get the whole name, not the split pieces.
Fixes #25127.
Automatic merge from submit-queue
Initial kube-up support for VMware's Photon Controller
This is for: https://github.com/kubernetes/kubernetes/issues/24121
Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/
This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons
It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.
Internally we have implemented continuous integration testing and will
run it multiple times per day against the Kubernetes master branch
once this is integrated so we can quickly react to problems.
A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown
Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.
We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some
users get started more quickly. It will be documented.
We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
Automatic merge from submit-queue
Fix GLBC cluster addon README link
Fix the link to L7 load balancer controller in GLBC cluster addon README.
Fixed #24462.
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
pause is built into containervm; if it's not on the machine we should just pull
it. Nobody that I'm aware of uses kube-registry-proxy, and it makes build/deployment
more complicated and slower.
Automatic merge from submit-queue
Make kube2sky and skydns docker images cross-platform
ARM tracking issue: #17981
Continues on: #19216
Make it possible to create `kube2sky` and `skydns` docker images for ARM and other architectures too
Build in a container, so `golang` isn't a dependency
I've preserved the original default behaviour:
- `skydns`: It just compiles with go on host
- `kube2sky`: Build an image
@brendandburns @dchen1107 @ArtfulCoder @thockin @fgrzadkowski
It includes some performance improvements for parsing JSON (which is
very important for us, since all Docker logs are JSON) as well as a
couple of new settings, like forcing a flush of multiline logs after a
time period rather than having to wait until a new log is seen before
feeling confident flushing the previous one.
I didn't expect glog to split single log statements onto multiple lines,
but apparently it does if they're long enough. This groups them back
together appropriately.