Automatic merge from submit-queue (batch tested with PRs 46850, 47984)
Update addon-resizer version
Update addon-resizer version and remove the flags that have been deprecated in the new version.
**What this PR does / why we need it**:
ref kubernetes/contrib#2623
**Special notes for your reviewer**:
Need to wait for merging kubernetes/contrib#2623 first.
**Release note**:
```release-note
addon-resizer flapping behavior was removed.
```
Automatic merge from submit-queue
Allow log-dumping only N randomly-chosen nodes in the cluster
This should save "lots" (~3-4 hours) of time in our 5000-node cluster scale tests, since we currently copy logs from all the nodes to the Jenkins worker and then upload all of them to GCS, even though we don't need that many.
This will also prevent the Jenkins container from hitting the "No space left on device" error while dumping logs, which we saw in runs 12-13 of gce-enormous-cluster.
The long-term fix will be to enable [logexporter](https://github.com/kubernetes/test-infra/tree/master/logexporter) for our tests.
cc @kubernetes/sig-scalability-misc @kubernetes/test-infra-maintainers @gmarek @fejta
Automatic merge from submit-queue (batch tested with PRs 48004, 48205, 48130, 48207)
Bumped Heapster to v1.4.0
```release-note
Bumped Heapster to v1.4.0.
More details about the release https://github.com/kubernetes/heapster/releases/tag/v1.4.0
```
Follow-up to #47961.
The release candidate `v1.4.0-beta.0` turned out to be stable.
Automatic merge from submit-queue (batch tested with PRs 48004, 48205, 48130, 48207)
Do not set CNI in cases where there is a private master and network policy provider is set.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
In GCE and in a "private master" setup, do not set the network-plugin provider to CNI by default if a network policy provider is given.
```
Automatic merge from submit-queue (batch tested with PRs 48192, 48182)
Add generic NoSchedule toleration to fluentd in gcp config as a quick-fix for #44445
Automatic merge from submit-queue (batch tested with PRs 48139, 48042, 47645, 48054, 48003)
Add a failsafe for etcd not returning a connection string
**What this PR does / why we need it**: Removing a kubernetes-master will fail as described on this issue: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/311
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/311
**Special notes for your reviewer**: This is a two-line defensive fix. I am not totally sold on this patch; it might not be the right place to address the above issue. However, solving the problem on the etcd side and updating the interface scope to be unit (as suggested) seems much more involved.
**Release note**:
```
Fix error when removing juju kubernetes-master unit
```
Automatic merge from submit-queue
Make big clusters work again after introduction of subnets
This PR does two things:
- make IP aliases automatically pick the Node IP range based on the number of Nodes,
- fix the logic for starting clusters with more than 4095 Nodes, which was broken by the introduction of subnets.
cc @wojtek-t @shyamjvs
```release-note
Setting the env var ENABLE_BIG_CLUSTER_SUBNETS=true will allow kube-up.sh to start clusters bigger than 4095 Nodes on GCE.
```
Ref https://github.com/kubernetes/kubernetes/issues/47344
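For reference, a minimal invocation exercising this path might look like the following; the node count is illustrative and `NUM_NODES` is the standard kube-up variable rather than something introduced here:
```
ENABLE_BIG_CLUSTER_SUBNETS=true NUM_NODES=5000 ./cluster/kube-up.sh
```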
Automatic merge from submit-queue
Add Cynerva and Kjackal to the approvers list
**What this PR does / why we need it**:
Per the membership reviews, we're looking to promote Konstantinos and
George to approvers to help distribute the review/bug load for the `cluster/juju` code
tree.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
**Special notes for your reviewer**:
cc @marcoceppi and @tvansteenburgh
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48092, 47894, 47983)
fix systemd service file for custom args.
`KUBE_SCHEDULER_ARGS` and `KUBELET_ARGS` are used by users to pass custom args to the scheduler or the kubelet.
But if there is more than one param in `KUBELET_ARGS`, for example KUBELET_ARGS="--cgroups-per-qos=false --enforce-node-allocatable=", the kubelet treats `false --enforce-node-allocatable=` as the value of `cgroups-per-qos`, because `${KUBELET_ARGS}` in kubelet.service expands the variable into a single word. If I use `$KUBELET_ARGS` instead, the kubelet works perfectly.
For more info, see [EnvironmentFiles and support for /etc/sysconfig files](http://fedoraproject.org/wiki/Packaging:Systemd#EnvironmentFiles_and_support_for_.2Fetc.2Fsysconfig_files). This bug was reported by @huanxingyouyoutoo, and I am submitting this PR on her behalf to fix it.
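To illustrate the difference, here is a minimal sketch of the relevant unit-file lines; the file path and the `EnvironmentFile` location are assumptions, only the variable expansion matters:
```
# Excerpt from a kubelet unit file (paths are illustrative)
[Service]
EnvironmentFile=-/etc/kubernetes/kubelet
# ${KUBELET_ARGS} is substituted as a single word, so multiple flags break:
#ExecStart=/usr/bin/kubelet ${KUBELET_ARGS}
# $KUBELET_ARGS is split on whitespace into separate arguments:
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
```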
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 48012, 47443, 47702, 47178)
Fix setting juju worker labels during deployment
**What this PR does / why we need it**: Allows for setting the labels of juju workers during deployment (eg inside a bundle)
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47176
**Special notes for your reviewer**:
**Release note**:
```
Fix bug in setting Juju kubernetes-worker labels in bundle.yaml files.
```
Automatic merge from submit-queue (batch tested with PRs 47860, 47170)
Fix restart action on juju kubernetes-master
**What this PR does / why we need it**: The restart action of the Juju kubernetes-master charm is not functioning.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/299
**Special notes for your reviewer**:
**Release note**:
```
Fix: Restart action of juju's kubernetes-master restarts the respective snap based services
```
Automatic merge from submit-queue (batch tested with PRs 47860, 47170)
Make fluentd log to stdio instead of a dedicated file
Lower verbosity also, to reduce volume of system logs exported to the backend.
Fix https://github.com/kubernetes/kubernetes/issues/43772
/cc @piosz
Automatic merge from submit-queue (batch tested with PRs 47961, 46276)
Bumped Heapster to v1.4.0-beta.0
Heapster release candidate for Kubernetes 1.7
cc @dchen1107 @caesarxuchao
Automatic merge from submit-queue
Parameterize the binary path and host arch for the hyperkube image
As [cluster/images/hyperkube/README.md](https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube) shows, I ran the command `make build VERSION=test ARCH=ppc64le` but got the errors below; this PR fixes that.
```
ARCH=ppc64le
cp -r ./* /tmp/hyperkubeTFbYrI
mkdir -p /tmp/hyperkubeTFbYrI/cni-bin
cp ../../../_output/dockerized/bin/linux/ppc64le/hyperkube /tmp/hyperkubeTFbYrI
cp: cannot stat '../../../_output/dockerized/bin/linux/ppc64le/hyperkube': No such file or directory
Makefile:62: recipe for target 'build' failed
make: *** [build] Error 1
```
Automatic merge from submit-queue (batch tested with PRs 47993, 47892, 47591, 47469, 47845)
Bump up npd version to v0.4.1
```
Bump up npd version to v0.4.1
```
Fixes #47219
Automatic merge from submit-queue (batch tested with PRs 47993, 47892, 47591, 47469, 47845)
Use a different env var to enable the ip-masq-agent addon.
We shouldn't mix setting the non-masq-cidr with enabling the addon.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
https://github.com/kubernetes/kubernetes/issues/47865
Automatic merge from submit-queue (batch tested with PRs 47883, 47179, 46966, 47982, 47945)
Strip versions from known api groups in audit policy
Props to @CaoShuFeng for catching this.
Issue: kubernetes/features#22
/cc @ericchiang
Automatic merge from submit-queue (batch tested with PRs 46151, 47602, 47507, 46203, 47471)
Add RBAC support to fluentd-elasticsearch cluster addon
**What this PR does / why we need it**:
Adds rbac support to the fluentd-elasticsearch addon
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46023
**Special notes for your reviewer**:
**Release note**:
```release-note
Add RBAC support to fluentd-elasticsearch cluster addon
```
Automatic merge from submit-queue (batch tested with PRs 46151, 47602, 47507, 46203, 47471)
es discovery support args apiserver-host and kubeconfig
Elasticsearch discovery now goes through the Kubernetes client, but it does not yet support specifying apiserver-host or a kubeconfig when creating that client.
Automatic merge from submit-queue (batch tested with PRs 47403, 46646, 46906, 46527, 46792)
Avoid redundant copying of tars during kube-up for gce if the same file already exists
**What this PR does / why we need it**:
Whenever I execute cluster/kube-up.sh it copies my tar files to google cloud, even if the files haven't changed. This PR checks to see whether the files already exist, and avoids uploading them again. These files are large and can take a long time to upload.
**Which issue this PR fixes**: fixes #46791
**Special notes for your reviewer**:
Here is the new output:
```
cluster/kube-up.sh
... Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
... calling verify-kube-binaries
... calling kube-up
Project: PROJECT
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-PROJECT/kubernetes-devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded earlier, cloud and local file md5 match (md5 = 3a095kcf27267a71fe58f91f89fab1bc)
```
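The general idea can be sketched as follows; this is not the actual kube-up.sh code, and the bucket path and variable names are illustrative:
```
# Skip the upload when the object already exists in GCS and its MD5 matches
# the local tarball. GCS reports the MD5 in base64, so compute the local hash
# the same way.
tar="kubernetes-server-linux-amd64.tar.gz"
dst="gs://kubernetes-staging-PROJECT/kubernetes-devel/${tar}"
local_md5=$(openssl md5 -binary "${tar}" | base64)
remote_md5=$(gsutil stat "${dst}" 2>/dev/null | awk '/Hash \(md5\)/ {print $NF}')
if [[ -n "${remote_md5}" && "${remote_md5}" == "${local_md5}" ]]; then
  echo "+++ ${tar} uploaded earlier, cloud and local file md5 match"
else
  gsutil cp "${tar}" "${dst}"
fi
```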
**Release note**:
```release-note
cluster/kube-up.sh on gce now avoids redundant copying of kubernetes tars if the local and cloud files' md5 hashes match
```
Automatic merge from submit-queue
Remove limits from ip-masq-agent for now and disable ip-masq-agent in GCE
When ip-masq-agent issues an iptables-save, it reads all configured iptables rules on the node. This means that the ip-masq-agent's memory requirements would grow with the number of iptables rules (i.e. services) on the node.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#47865
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Don't audit log tokens in TokenReviews
We don't want to leak auth tokens in the audit logs, so only log TokenReview requests at the metadata level.
Issue: kubernetes/features#22
Automatic merge from submit-queue
Fix broken command in registry addon document
**What this PR does / why we need it**:
Fix a command example in registry addon document so it matches the example yaml above.
Automatic merge from submit-queue (batch tested with PRs 47906, 47910)
Reduce scheduler CPU request to 75m
On a 1 cpu master we are over budget with CPU requests. Components like npd or cluster autoscaler don't have *any* space to run. We need to reduce some requests.
cc: @gmarek @mikedanese @roberthbailey @davidopp @dchen1107
Automatic merge from submit-queue
Add aleksandra-malinowska to cluster-autoscaler salt definition owners
@aleksandra-malinowska is working on Cluster Autoscaler so she should be added to reviewers and approvers.
Automatic merge from submit-queue
Reduce Cluster Autoscaler cpu request to 10m
We are super tight on the 1-CPU master node. With the recent changes we cannot fit on the master if the request is bigger than 10m.
cc: @gmarek @MaciekPytel @aleksandra-malinowska
Automatic merge from submit-queue
Update addons with upstream CVE fixes
**What this PR does / why we need it**: refreshes the kube-dns, metadata-proxy, fluentd-gcp, event-exporter, prometheus-to-sd, and ip-masq-agent addons with new base images containing fixes for the following vulnerabilities:
* CVE-2016-4448
* CVE-2016-9841
* CVE-2016-9843
* CVE-2017-1000366
* CVE-2017-2616
* CVE-2017-9526
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47386 (yay!)
**Special notes for your reviewer**:
**Release note**:
```release-note
Update kube-dns, metadata-proxy, and fluentd-gcp, event-exporter, prometheus-to-sd, and ip-masq-agent addons with new base images containing fixes for CVE-2016-4448, CVE-2016-9841, CVE-2016-9843, CVE-2017-1000366, CVE-2017-2616, and CVE-2017-9526.
```
/assign @bowei @MrHohn @Q-Lee @crassirostris @dnardo
/cc @dchen1107 @timstclair
Automatic merge from submit-queue (batch tested with PRs 42252, 42251, 42249, 47512, 47887)
Bump the memory request/limit for ip-masq-daemon.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
issue #47865
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Add ip-masq-agent readiness label by default.
Since we are setting the non-masq-cidr in the kubelet to 0.0.0.0/0, we need to ensure the ip-masq-agent runs.
PR #46473 made NON_MASQUERADE_CIDR default to 0.0.0.0/0, which means we need to have this label set now.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #47752
**Special notes for your reviewer**:
**Release note**:
```release-note
ip-masq-agent is now the default for GCE
```
Automatic merge from submit-queue
fix validate-cluster.sh
Attempt to fix #47379.
Without this fix, the validate-cluster.sh never retries if `kubectl-retry get cs` fails.
cc @dchen1107
Automatic merge from submit-queue (batch tested with PRs 45268, 47573, 47632, 47818)
NODE_TAINTS in gce startup scripts
Currently there is no way to pass a list of taints that should be added on node registration (at least not in GCE or other salt-based deployments). This PR adds the necessary plumbing to pass the taints from the user or instance group template to the kubelet startup flags.
```release-note
Taints support in gce/salt startup scripts.
```
The PR was manually tested.
```
NODE_TAINTS: 'dedicated=ml:NoSchedule'
```
in kube-env results in
```
spec:
[...]
taints:
- effect: NoSchedule
key: dedicated
timeAdded: null
value: ml
```
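Under the hood this plumbing ends up as a kubelet registration flag; a hand-run equivalent would be roughly the following (the flag name is not quoted in the PR text, so treat it as an assumption, and all other kubelet flags are omitted):
```
kubelet --register-with-taints=dedicated=ml:NoSchedule
```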
cc: @davidopp @gmarek @dchen1107 @MaciekPytel
Add node label pre-req back to ip-masq-agent.
Make gce test consistent with gce default scripts.
Automatic merge from submit-queue (batch tested with PRs 46604, 47634)
Set price expander in Cluster Autoscaler for GCE
With CA 0.6 we will make the price-preferred node expander the default one for GCE. For other cloud providers we will stick to the default one (random) until the community implements the required interfaces in the CA repo.
https://github.com/kubernetes/autoscaler/issues/82
cc: @MaciekPytel @aleksandra-malinowska
Automatic merge from submit-queue (batch tested with PRs 47669, 40284, 47356, 47458, 47701)
Standardize on home/kubernetes/bin for CNI
**What this PR does / why we need it**:
Standardizes where CNI plugins get installed on GCE.
**Which issue this PR fixes**
Fixes: https://github.com/kubernetes/kubernetes/issues/47453
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47669, 40284, 47356, 47458, 47701)
Mark Static pods on the Master as critical
fixes #47277.
A known issue with static pods is that they do not interact well with evictions. If a static pod is evicted or oom killed, then it will never be recreated. To mitigate this, we do not evict static pods that are critical. In addition, non-critical pods are candidates for preemption if a critical pod is scheduled to the node. If there are not enough allocatable resources on the node, this causes the static pod to be preempted.
This PR marks all static pods in the kube-system namespace as critical.
cc @vishh @dchen1107
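At the time, criticality was conveyed via a pod annotation; a quick way to see which master manifests carry it (the annotation name follows the 1.7-era convention, and the manifest path is the usual GCE location, which may differ on other setups):
```
grep -l 'scheduler.alpha.kubernetes.io/critical-pod' /etc/kubernetes/manifests/*
```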
Automatic merge from submit-queue (batch tested with PRs 46327, 47166)
mark --network-plugin-dir deprecated for kubelet
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #43967
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47530, 47679)
Use cos-stable-59-9460-64-0 instead of cos-beta-59-9460-20-0.
Remove dead code that has now moved to another repo as part of #47467
**Release note**:
```release-note
NONE
```
/sig node
Automatic merge from submit-queue (batch tested with PRs 47626, 47674, 47683, 47290, 47688)
Use echoserver:1.6 for better debugging and XSS prevention.
**What this PR does / why we need it**: This updates our test code to use a newer echoserver with XSS preventions.
**Which issue this PR fixes**: fixes #47682
**Special notes for your reviewer**: Marking as 1.7 since it's a fix to test code.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47626, 47674, 47683, 47290, 47688)
Fix Juju kubernetes-master idle status never being set
**What this PR does / why we need it**:
This fixes a problem with the kubernetes-master charm where the "Kubernetes master running." status message never gets set.
This happens because the `kube-api-endpoint.connected` state that it's waiting for doesn't exist. The state we need is `kube-api-endpoint.available` as seen [here](https://github.com/juju-solutions/interface-http/blob/master/provides.py#L12).
Additionally, we need to add the relation arguments to idle_status so it doesn't break when called.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47676
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix Juju kubernetes-master idle status never being set
```
Automatic merge from submit-queue (batch tested with PRs 47626, 47674, 47683, 47290, 47688)
The KUBE-METADATA-SERVER firewall rule must be applied before the universal tcp ACCEPT
**What this PR does / why we need it**: the metadata firewall rule was broken by being appended after the universal tcp accept.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 38751, 44282, 46382, 47603, 47606)
Working on fixing #43716.
This will create the necessary certificates.
On GCE it will upload those certificates to metadata.
They are then pulled down onto the kube-apiserver.
They are written to the /etc/src/kubernetes/pki directory.
Finally they are loaded via the appropriate command line flags.
The requestheader-client-ca-file can be seen by running the following:
kubectl get ConfigMap extension-apiserver-authentication --namespace=kube-system -o yaml
Minor bug fixes.
Made sure AGGR_MASTER_NAME is set up in all configs.
Cleaned up variable names.
Added additional requestheader configuration parameters.
Added a check so that if there are no Aggregator CA contents we won't start the aggregator with the relevant flags.
**What this PR does / why we need it**:
This PR creates a request header CA. It also creates a proxy client cert/key pair.
It causes these files to end up on the kube-apiserver and sets the CLI flags so they are properly loaded.
Without it the customer either has to set them up themselves or re-use the master CA, which is a security vulnerability.
Currently this creates everything on GCE.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43716
**Special notes for your reviewer**:
This is a reapply of pull/47094 with the GKE issue resolved.
**Release note**: None
- It contains a fix for IP aliasing.
- It contains a fix which decouples GPU driver installation from the kernel version.
Remove dead code that has now moved to another repo as part of #47467
Automatic merge from submit-queue
Don't start any Typha instances if not using Calico
**What this PR does / why we need it**:
Don't start any Typha instances if Calico isn't being used. A recent change now includes all add-ons on the master, but we don't always want a Typha replica.
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/47622
**Release note**:
```release-note
NONE
```
cc @dnardo
Automatic merge from submit-queue (batch tested with PRs 47562, 47605)
Adding option in node start script to add "volume-plugin-dir" flag to kubelet.
**What this PR does / why we need it**: Adds a variable to allow specifying FlexVolume driver directory through cluster/kube-up.sh. Without this, the process of setting up FlexVolume in a non-default directory is very manual.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47561
Automatic merge from submit-queue
Add encryption provider support via environment variables
These changes are needed to allow cloud providers to use the encryption providers as an alpha feature. The version checks can be done by the respective cloud providers.
Context: #46460 and #46916
@destijl @jcbsmpsn @smarterclayton
Automatic merge from submit-queue
[GCE] Bump GLBC version to 0.9.5
Fixes #47559
```release-note
Bump GLBC version to 0.9.5 - fixes [loss of manually modified GCLB health check settings](https://github.com/kubernetes/kubernetes/issues/47559) upon upgrade from pre-1.6.4 to either 1.6.4 or 1.6.5, or from pre-1.5.8 to 1.5.8.
```
Automatic merge from submit-queue (batch tested with PRs 47492, 47542, 46800, 47545, 45764)
Update addons with upstream CVE fixes
**What this PR does / why we need it**: refreshes the cluster-proportional-autoscaler, metadata-proxy, and fluentd-gcp addons with new base images with fixes for the following vulnerabilities:
* CVE-2016-4448
* CVE-2016-8859
* CVE-2016-9841
* CVE-2016-9843
* CVE-2017-9526
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: x-ref #47386, though there are still a few images left to update
**Release note**:
```release-note
Update cluster-proportional-autoscaler, metadata-proxy, and fluentd-gcp addons with fixes for CVE-2016-4448, CVE-2016-8859, CVE-2016-9841, CVE-2016-9843, and CVE-2017-9526.
```
/cc @timstclair @MrHohn @Q-Lee @crassirostris
Automatic merge from submit-queue
Fix dangling reference to gcloud alpha API for GCI (should be beta)
This reference to the alpha API was missed (fixed in GCE, but not GCI)
Fixes #47494
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 47510, 47516, 47482, 47521, 47537)
Allow autoscaler min at 0 in GCE
Allow scaling MIGs to zero in GCE startup scripts. This only makes sense when there is more than one MIG. The main use case (for now) will be to test scaling to zero in e2e tests.
Automatic merge from submit-queue (batch tested with PRs 47302, 47389, 47402, 47468, 47459)
Change port on which fluentd exposes its metrics
Fix https://github.com/kubernetes/kubernetes/issues/47397
/cc @Q-Lee @nicksardo
```release-note
Stackdriver Logging deployment exposes metrics on node port 31337 when enabled.
```
Automatic merge from submit-queue (batch tested with PRs 47302, 47389, 47402, 47468, 47459)
Update to kube-addon-manager:v6.4-beta.2: kubectl v1.6.4 and refreshed base images
**What this PR does / why we need it**: refreshes base images for kube-addon-manager with fixes for CVE-2016-9841 and CVE-2016-9843.
x-ref https://github.com/kubernetes/kubernetes/issues/47386
**Special notes for your reviewer**: the updated images are not yet pushed, so tests will fail until that's done.
**Release note**:
```release-note
```
/assign @MrHohn
Automatic merge from submit-queue (batch tested with PRs 46441, 43987, 46921, 46823, 47276)
Enable Node authorizer and NodeRestriction admission in kubemark
xref https://github.com/kubernetes/features/issues/279
We want to ensure scale testing covers use of the authorizer/admission pair that partitions nodes. This includes enabling the authorizer, which populates a graph of existing nodes and pods.
Kubemark is still running all nodes with a single credential, so a follow-up step is to generate unique credentials per node (or enable TLS bootstrapping) and remove the temporary rolebinding added in this PR so the node authorizer is the one authorizing each call by a hollow node.
Automatic merge from submit-queue (batch tested with PRs 47000, 47188, 47094, 47323, 47124)
Set up proxy certs for Aggregator.
Working on fixing https://github.com/kubernetes/kubernetes/issues/43716.
This will create the necessary certificates.
On GCE it will upload those certificates to metadata.
They are then pulled down onto the kube-apiserver.
They are written to the /etc/src/kubernetes/pki directory.
Finally they are loaded via the appropriate command line flags.
The requestheader-client-ca-file can be seen by running the following:
kubectl get ConfigMap extension-apiserver-authentication --namespace=kube-system -o yaml
**What this PR does / why we need it**:
This PR creates a request header CA. It also creates a proxy client cert/key pair.
It causes these files to end up on the kube-apiserver and sets the CLI flags so they are properly loaded.
Without it the customer either has to set them up themselves or re-use the master CA, which is a security vulnerability.
Currently this creates everything on GCE.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43716
**Special notes for your reviewer**:
Automatic merge from submit-queue (batch tested with PRs 47000, 47188, 47094, 47323, 47124)
Add Calico typha agent
**What this PR does / why we need it**:
- Adds the Calico typha agent with autoscaling to the GCE scripts.
- Adds logic to adjust Calico resource requests based on cluster size.
Fixes https://github.com/kubernetes/kubernetes/issues/47269
**Special notes for your reviewer**:
CC @dnardo
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Make fluentd-gcp run with host network
Fluentd-gcp should have access to the instance's platform-dependent service account in order to work.
/cc @piosz
Automatic merge from submit-queue
Remove e2e-rbac-bindings.
Replace todo-grabbag binding w/ more specific heapster roles/bindings.
Move kubelet binding.
**What this PR does / why we need it**:
The "e2e-rbac-bindings" held 2 leftovers from the 1.6 RBAC rollout process:
- One is the "kubelet-binding" which grants the "system:node" role to kubelet. This is needed until we enable the node authorizer. I moved this to the folder w/ some other kubelet related bindings.
- The other is the "todo-remove-grabbag-cluster-admin" binding, which grants the cluster-admin role to the default service account in the kube-system namespace. This appears to only be required for heapster. Heapster will instead use a "heapster" service account, bound to a "system:heapster" role on the cluster (no write perms), and a "system:pod-nanny" role in the kube-system namespace.
**Which issue this PR fixes**: Addresses part of #39990
**Release Note**:
```release-note
New and upgraded 1.7 GCE/GKE clusters no longer have an RBAC ClusterRoleBinding that grants the `cluster-admin` ClusterRole to the `default` service account in the `kube-system` namespace.
If this permission is still desired, run the following command to explicitly grant it, either before or after upgrading to 1.7:
kubectl create clusterrolebinding kube-system-default --serviceaccount=kube-system:default --clusterrole=cluster-admin
```
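The heapster-specific grants described above could be expressed imperatively as follows (illustrative only; the addon actually ships them as manifests, and the binding names here are made up):
```
kubectl create clusterrolebinding heapster-binding \
  --clusterrole=system:heapster \
  --serviceaccount=kube-system:heapster
kubectl create rolebinding heapster-nanny --namespace=kube-system \
  --role=system:pod-nanny \
  --serviceaccount=kube-system:heapster
```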
Automatic merge from submit-queue
Audit webhook config for GCE
Add an `ADVANCED_AUDIT_BACKEND` (comma-delimited list) environment variable to the GCE cluster config to select the audit backend, and add configuration for the webhook backend.
~~Based on the first commit from https://github.com/kubernetes/kubernetes/pull/46557~~
For kubernetes/features#22
Since this is GCE-only configuration plumbing, I think this should be exempt from code-freeze.
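For example (the value is illustrative, assuming "log" and "webhook" as the recognized backend names):
```
export ADVANCED_AUDIT_BACKEND="log,webhook"
```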
Automatic merge from submit-queue
Remove Initializers from admission-control in kubernetes-master charm for pre-1.7
**What this PR does / why we need it**:
This fixes a problem with the kubernetes-master charm where kube-apiserver never comes up:
```
failed to initialize admission: Unknown admission plugin: Initializers
```
The Initializers plugin does not exist before Kubernetes 1.7. The charm needs to support 1.6 as well.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47062
**Special notes for your reviewer**:
This fixes a problem introduced by https://github.com/kubernetes/kubernetes/pull/36721
**Release note**:
```release-note
Remove Initializers from admission-control in kubernetes-master charm for pre-1.7
```
Automatic merge from submit-queue
Fixes #47182
**What this PR does / why we need it**: Adds some state guards to the idle_status message to speed up the deployment
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47182
**Special notes for your reviewer**:
This adds additional state guards to the idle_status method, which will
prevent it from being run until a worker has joined the relationship.
Previous invocations may have some messaging inconsistencies but will reach
eventual consistency once a worker has joined.
This prevents the polling loop from executing too soon, bloating the
installation time by, at bare minimum, an additional 10 minutes.
**Release note**:
```release-note
Added state guards to the idle_status messaging in the kubernetes-master charm to make deployment faster on initial deployment.
```
Automatic merge from submit-queue
Bump up npd version to v0.4.0
Fixes #47070.
Bump up npd version to [v0.4.0](https://github.com/kubernetes/node-problem-detector/releases/tag/v0.4.0).
```release-note
Bump up Node Problem Detector version to v0.4.0, which added support of parsing log from /dev/kmsg and ABRT.
```
/cc @dchen1107 @ajitak
Automatic merge from submit-queue (batch tested with PRs 46718, 46828, 46988)
Update docs/ links to point to main site
**What this PR does / why we need it**:
This updates various links to either point to kubernetes.io or to the kubernetes/community repo instead of the legacy docs/ tree in k/k
Pre-requisite for #46813
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-docs-maintainers @chenopis @ahmetb @thockin
Automatic merge from submit-queue (batch tested with PRs 46897, 46899, 46864, 46854, 46875)
Write audit policy file for GCE/GKE configuration
Setup the audit policy configuration for GCE & GKE. Here is the high level summary of the policy:
- Default logging everything at `Metadata`
- Known write APIs default to `RequestResponse`
- Known read-only APIs default to `Request`
- Except secrets & configmaps are logged at `Metadata`
- Don't log events
- Don't log `/version`, swagger or healthchecks
In addition to the above, I spent time analyzing the noisiest lines in the audit log from a cluster that soaked for 24 hours (and ran a batch of e2e tests). Of those top requests, the ones identified as low-risk (all read-only, except updates of kube-system endpoints by controllers) are dropped.
I suspect we'll want to tweak this a bit more once we've had time to soak it on some real clusters.
For kubernetes/features#22
/cc @sttts @ericchiang
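A rough sketch of a policy in that spirit is below; this is not the actual GCE/GKE policy file, and the API version and exact rule fields follow the later audit policy schema, which may differ slightly from the 1.7 alpha one:
```
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1alpha1
kind: Policy
rules:
# Don't log healthchecks, /version, or swagger.
- level: None
  nonResourceURLs: ["/healthz*", "/version", "/swagger*"]
# Don't log events.
- level: None
  resources:
  - group: ""
    resources: ["events"]
# Secrets and configmaps only at Metadata.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Read-only requests at Request, writes at RequestResponse.
- level: Request
  verbs: ["get", "list", "watch"]
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
# Everything else at Metadata.
- level: Metadata
EOF
```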
Automatic merge from submit-queue
Add event exporter deployment to the fluentd-gcp addon
Introduce event exporter deployment to the fluentd-gcp addon so that by default if logging to Stackdriver is enabled, events will be available there also.
In this release, event exporter is a non-critical pod in BestEffort QoS class to avoid preempting actual workload in tightly loaded clusters. It will become critical in one of the future releases.
```release-note
Stackdriver cluster logging now deploys a new component to export Kubernetes events.
```
Automatic merge from submit-queue (batch tested with PRs 46972, 42829, 46799, 46802, 46844)
promote tls-bootstrap to beta
last commit of this PR.
Towards https://github.com/kubernetes/kubernetes/issues/46999
```release-note
Promote kubelet tls bootstrap to beta. Add a non-experimental flag to use it and deprecate the old flag.
```
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
Add iptables lock-file mount to kube-proxy manifest
**What this PR does / why we need it**: kube-proxy is broken in make bazel-release. The new iptables binary uses a lockfile in "/run", but the directory doesn't exist. This causes iptables-restore to fail. We need to share the same lock-file amongst all containers, so mount the host /run dir.
This is similar to #46132 but expediency matters, since builds are broken.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46103
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Respect PDBs during node upgrades and add test coverage to the ServiceTest upgrade test.
This is still a WIP... needs to be squashed at least, and I don't think it's currently passing until I increase the scale of the RC, but please have a look at the general outline. Thanks!
Fixes #38336
@kow3ns @bdbauer @krousey @erictune @maisem @davidopp
```
On GCE, node upgrades will now respect PodDisruptionBudgets, if present.
```
Automatic merge from submit-queue
Add some initial resource limits to the ip-masq-agent.
These limits were based on observing the agent over roughly a day; RES was typically ~4M for me, but I'd like to make sure we have some headroom. If there were a huge config map this could increase slightly, but not significantly, since we only allow 64 entries.
```
VmPeak: 11164 kB
VmSize: 11164 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 7652 kB
VmRSS: 4260 kB
VmData: 7612 kB
VmStk: 136 kB
VmExe: 1856 kB
VmLib: 0 kB
VmPTE: 40 kB
VmPMD: 20 kB
VmSwap: 0 kB
```
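For context, the config map the agent consumes is small (at most 64 non-masquerade entries); a sketch of creating one, assuming the agent's conventional ConfigMap name and `config` key, with illustrative CIDRs:
```
cat > config <<EOF
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 192.168.0.0/16
resyncInterval: 60s
EOF
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
```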
Automatic merge from submit-queue
Adding a metadata proxy addon
**What this PR does / why we need it**: adds a metadata server proxy daemonset to hide kubelet secrets.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: this partially addresses #8867
**Special notes for your reviewer**:
**Release note**: the gce metadata server can be hidden behind a proxy, hiding the kubelet's token.
```release-note
The gce metadata server can be hidden behind a proxy, hiding the kubelet's token.
```
Automatic merge from submit-queue
Add initializer support to admission and uninitialized filtering to rest storage
Initializers are the opposite of finalizers - they allow API clients to react to object creation and populate fields prior to other clients seeing them.
High level description:
1. Add `metadata.initializers` field to all objects
2. By default, filter objects with > 0 initializers from LIST and WATCH to preserve legacy client behavior (known as partially-initialized objects)
3. Add an admission controller that populates .initializer values per type, and denies mutation of initializers except by certain privilege levels (you must have the `initialize` verb on a resource)
4. Allow partially-initialized objects to be viewed via LIST and WATCH for initializer types
5. When creating objects, the object is "held" by the server until the initializers list is empty
6. Allow some creators to bypass initialization (set initializers to `[]`), or to have the result returned immediately when the object is created.
The code here should be backwards compatible for all clients because they do not see partially initialized objects unless they GET the resource directly. The watch cache makes checking for partially initialized objects cheap. Some reflectors may need to change to ask for partially-initialized objects.
```release-note
Kubernetes resources, when the `Initializers` admission controller is enabled, can be initialized (defaulting or other additive functions) by other agents in the system prior to those resources being visible to other clients. An initialized resource is not visible to clients unless they request (for get, list, or watch) to see uninitialized resources with the `?includeUninitialized=true` query parameter. Once the initializers have completed the resource is then visible. Clients must have the ability to perform the `initialize` action on a resource in order to modify it prior to initialization being completed.
```
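As a concrete illustration of the query parameter mentioned above (the proxy port and the pods path are just a generic example, not something defined by this change):
```
kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?includeUninitialized=true'
```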
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)
Move tolerations to PodSpec for ip-masq-agent.yaml.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)
Move tolerations to PodSpec for calico-node.yaml.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)
Configure NPD version through env variable
This lets the user specify the NPD version to be installed with kubernetes.
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)
change kubemark image project to match new cos image project
The old project is not available anymore.
https://github.com/kubernetes/kubernetes/pull/45136
Automatic merge from submit-queue (batch tested with PRs 46648, 46500, 46238, 46668, 46557)
Add an e2e test for AdvancedAuditing
Enable a simple "advanced auditing" setup for e2e tests running on GCE, and add an e2e test that creates & deletes a pod, a secret, and verifies that they're audited.
Includes https://github.com/kubernetes/kubernetes/pull/46548
For https://github.com/kubernetes/features/issues/22
/cc @ericchiang @sttts @soltysh @ihmccreery
Respect PDBs during node upgrades and add test coverage to the
ServiceTest upgrade test. Modified that test so that we include pod
anti-affinity constraints and a PDB.
Automatic merge from submit-queue
Set Kubelet Disk Defaults for the 1.7 release
The `--low-diskspace-threshold-mb` flag has been deprecated since 1.6.
This PR sets the default to `0`, and sets defaults for disk eviction based on the values used for our [e2e tests](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/services/kubelet.go#L145).
This also removes the custom defaults for vagrant, as the new defaults should work for it as well.
/assign @derekwaynecarr
cc @vishh
```release-note
By default, --low-diskspace-threshold-mb is not set, and --eviction-hard includes "nodefs.available<10%,nodefs.inodesFree<5%"
```
Automatic merge from submit-queue (batch tested with PRs 46661, 46562, 46657, 46655, 46640)
remove openvpn and nginx from salt
They were only used in the Azure deployment, which no longer exists.
Automatic merge from submit-queue
Plumb through the ENABLE_LEGACY_ABAC flag for GKE kube-up.
**What this PR does / why we need it**:
Makes the "gke" provider in `cluster/` respect the `ENABLE_LEGACY_ABAC` env var by plumbing it through to the `--enable-legacy-authorization` gcloud flag.
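For example (an illustrative invocation; `KUBERNETES_PROVIDER` is the usual cluster/ provider selector, not something added here):
```
ENABLE_LEGACY_ABAC=false KUBERNETES_PROVIDER=gke ./cluster/kube-up.sh
```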
Automatic merge from submit-queue
Support storageclass storage updates to v1
**What this PR does / why we need it**: enable cluster administrators to update storageclasses stored in etcd from storage.k8s.io/v1beta1 to storage.k8s.io/v1.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: I had a hard time getting the test to work with how it was handling KUBE_API_VERSIONS and RUNTIME_CONFIG. I would appreciate some extra review attention there. Also, I had to hack in a `cluster-scoped` "namespace" to get the verification portions of the test script to work. I'm definitely open to ideas for how to improve that if needed.
**Release note**:
```release-note
Support updating storageclasses in etcd to storage.k8s.io/v1. You must do this prior to upgrading to 1.8.
```
cc @kubernetes/sig-storage-pr-reviews @kubernetes/sig-api-machinery-pr-reviews @jsafrane @deads2k @saad-ali @enj
Automatic merge from submit-queue
gcloud command syntax changed between alpha and beta versions
The syntax for secondary-ranges changed from `name=NAME,range=RANGE` to `NAME=RANGE`.
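Concretely, a subnet creation call changes roughly as follows; the subnet, network, and range values are made up, and the alpha form is as quoted above:
```
# alpha:
gcloud alpha compute networks subnets create my-subnet \
  --network my-network --region us-central1 --range 10.128.0.0/20 \
  --secondary-range name=pods,range=10.64.0.0/14
# beta:
gcloud beta compute networks subnets create my-subnet \
  --network my-network --region us-central1 --range 10.128.0.0/20 \
  --secondary-range pods=10.64.0.0/14
```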
Automatic merge from submit-queue
Add generic NoExecute Toleration to NPD
Ref. #44445
cc @davidopp
```release-note
Add generic Toleration for NoExecute Taints to NodeProblemDetector
```
Automatic merge from submit-queue
fix typo in build.sh
**What this PR does / why we need it**:
fix typo in build.sh
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
proxy_handler now uses the endpoint router to map the cluster IP to the
appropriate endpoint (Pod) IP for the given resource.
Added code to allow aggregator routing to be optional.
Updated bazel build.
Fixes to cover JLiggit comments.
Added util ResourceLocation method based on Listers.
Fixed issues from verification steps.
Updated to add an interface to obfuscate some of the routing logic.
Collapsed cluster IP resolution into the aggregator routing
implementation.
Added 2 simple unit tests for ResolveEndpoint
Automatic merge from submit-queue (batch tested with PRs 46501, 45944, 46473)
Enable the ip-masq-agent on GCE installs
Setting this will trigger cluster/addons/ip-masq-agent/ip-masq-agent.yaml to be installed as an addon, which configures IP masquerade to be skipped for all of RFC 1918, rather than just 10.0/8.
Because the flag defaulted to 10.0/8 we can't just change the default. I think anyone who needs IP masquerade set up should probably use this instead.
@justinsb @kubernetes/sig-cluster-lifecycle-misc
Fixes #11204
@dnardo - any reason not to do this?
Release Note:
```release-note
GCE installs will now avoid IP masquerade for all RFC-1918 IP blocks, rather than just 10.0.0.0/8. This means that clusters can be created in 192.168.0.0/16 and 172.16.0.0/12 while preserving the container IPs (which would be lost before).
```
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)
Bump Go version to 1.8.3
This PR also removes the patched version of Go 1.8.1 which we used to work around a performance problem in Go 1.8.1.
Fix https://github.com/kubernetes/kubernetes/issues/45216
Ref #46391
@timothysc @bradfitz
Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)
Bump elasticsearch and kibana to 5.4.0
**What this PR does / why we need it**: Updates elasticsearch and kibana docker image assets to 5.4.0 version
**Release note**:
```release-note
Upgrade Elasticsearch Addon to v5.4.0
```
Automatic merge from submit-queue (batch tested with PRs 44774, 46266, 46248, 46403, 46430)
kube-proxy: ratelimit runs of iptables by sync-period flags
This bounds how frequently iptables can be synced. It will be no more often than every 10 seconds and no less often than every 1 minute, by default.
@timothysc FYI
@dcbw @freehan FYI
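The exact flag names are not quoted above, but assuming the kube-proxy `--iptables-min-sync-period`/`--iptables-sync-period` pair, the default bounds described correspond roughly to:
```
kube-proxy --iptables-min-sync-period=10s --iptables-sync-period=1m
```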
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
GCE - Retrieve subnetwork name/url from gce.conf
**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is type manual.
**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not provided through this feature. Users must specify the subnetwork name/url through the gce.conf.
Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Create a subnet for reserving the service cluster IP range
This will be done if IP aliases is enabled on GCP.
```release-note
NONE
```
Automatic merge from submit-queue
Allow the /logs handler on the apiserver to be toggled.
Adds a flag to kube-apiserver, and plumbs through an environment variable in configure-helper.sh.
Automatic merge from submit-queue
Enable "kick the tires" support for Nvidia GPUs in COS
This PR provides an installation daemonset that will install Nvidia CUDA drivers on Google Container Optimized OS (COS).
User space libraries and debug utilities from the Nvidia driver installation are made available in a special directory on the host:
* `/home/kubernetes/bin/nvidia/lib` for libraries
* `/home/kubernetes/bin/nvidia/bin` for debug utilities
Containers that run CUDA applications on COS are expected to consume the libraries and debug utilities (if necessary) from the host directories using `HostPath` volumes.
Note: This solution requires updating Pod Spec across distros. This is a known issue and will be addressed in the future. Until then CUDA workloads will not be portable.
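A sketch of what such a pod spec could look like (the image name, the in-container mount path, and the GPU resource name are assumptions; only the host path comes from the list above):
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-example
spec:
  containers:
  - name: cuda-app
    image: example.com/cuda-app:latest
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
    volumeMounts:
    - name: nvidia-libraries
      mountPath: /usr/local/nvidia/lib
  volumes:
  - name: nvidia-libraries
    hostPath:
      path: /home/kubernetes/bin/nvidia/lib
EOF
```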
This PR updates the COS base image version to m59. This is coupled with this PR for the following reasons:
1. Driver installation requires disabling a kernel feature in COS.
2. The kernel API for disabling this interface changed across COS versions
3. If the COS image update is not handled in this PR, then a subsequent COS image update will break GPU integration and will require an update to the installation scripts in this PR.
4. Instead of having to post `3` PRs, one each for adding the basic installer, updating COS to m59, and then updating the installer again, this PR combines all the changes to reduce review overhead and latency, and additional noise that will be created when GPU tests break.
**Try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1" cluster/kube-up.sh`
4. `kubectl create -f cluster/gce/gci/nvidia-gpus/cos-installer-daemonset.yaml`
5. Run your CUDA app in a pod.
**Another option is to run a e2e manually to try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1"`
4. `go run hack/e2e.go -- --up`
5. `hack/ginkgo-e2e.sh --ginkgo.focus="\[Feature:GPU\]"`
The e2e will install the drivers automatically using the daemonset and then run test workloads to validate driver integration.
TODO:
- [x] Update COS image version to m59 release.
- [x] Remove sleep from the install script and add it to the daemonset
- [x] Add an e2e that will run the daemonset and run a sample CUDA app on COS clusters.
- [x] Setup a test project with necessary quota to run GPU tests against HEAD to start with https://github.com/kubernetes/test-infra/pull/2759
- [x] Update node e2e serial configs to install nvidia drivers on COS by default
Automatic merge from submit-queue (batch tested with PRs 46201, 45952, 45427, 46247, 46062)
remove the elasticsearch template
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Loading file-based index templates has been disabled since Elasticsearch 2.0.0-beta1. https://www.elastic.co/guide/en/elasticsearch/reference/2.0/breaking_20_index_api_changes.html#_file_based_index_templates
So `template-k8s-logstash.json` is no longer useful.
On the other hand, as https://github.com/kubernetes/kubernetes/issues/25127 indicated, we would be better off curling the Elasticsearch API to load this template.
Automatic merge from submit-queue
Add version for fluentd-gcp config
The fluentd-gcp config should be versioned, because otherwise a race can happen during an update and the new pod can mount the old config.
Packaged the script as a docker container stored in gcr.io/google-containers
A daemonset deployment is included to make it easy to consume the installer
A cluster e2e has been added to test the installation daemonset along with verifying installation
by using a sample CUDA application.
Node e2e for GPUs updated to avoid running on nodes without GPU devices.
Signed-off-by: Vishnu kannan <vishnuk@google.com>
Automatic merge from submit-queue
Update Calico add-on
**What this PR does / why we need it:**
Updates Calico to the latest version using self-hosted install as a DaemonSet, removes Calico's dependency on etcd.
- [x] Remove [last bits of Calico salt](175fe62720/cluster/saltbase/salt/calico/master.sls (L3))
- [x] Failing on the master since no kube-proxy to access API.
- [x] Fix outgoing NAT
- [x] Tweak to work on both debian / GCI (not just GCI)
- [x] Add the portmap plugin for host port support
Maybe:
- [ ] Add integration test
**Which issue this PR fixes:**
https://github.com/kubernetes/kubernetes/issues/32625
**Try it out**
Clone the PR, then:
```
make quick-release
export NETWORK_POLICY_PROVIDER=calico
export NODE_OS_DISTRIBUTION=gci
export MASTER_SIZE=n1-standard-4
./cluster/kube-up.sh
```
**Release note:**
```release-note
The Calico version included in kube-up for GCE has been updated to v2.2.
```
Automatic merge from submit-queue (batch tested with PRs 44606, 46038)
Add ip-masq-agent addon to the addons folder.
This also ensures that under gce we add this DaemonSet if the non-masq-cidr
is set to 0/0.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Add ip-masq-agent addon to the addons folder which is used in GCE if --non-masquerade-cidr is set to 0/0
```
Automatic merge from submit-queue
Make real proxier in hollow-proxy optional (default=true)
Ref https://github.com/kubernetes/kubernetes/pull/45622
This allows using real proxier for hollow proxy, but we use the fake one by default.
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue
Update cluster startup scripts to use gcloud beta for alias IP support
The feature has gone from alpha to beta.
```release-note
NONE
```
Automatic merge from submit-queue
Update dashboard-controller.yaml
**What this PR does / why we need it**: Updates Dashboard addon to newest version. Changelog can be found at https://github.com/kubernetes/dashboard/releases/tag/v1.6.1.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Update Dashboard version to 1.6.1
```
Automatic merge from submit-queue
fix: required openstack heat version for conditions is 2016-10-14 / newton
This fix sets the required heat version to 2016-10-14.
In OpenStack heat the conditions statement was introduced in version 2016-10-14 | newton, according to:
https://docs.openstack.org/releasenotes/heat/newton.html
and more specific:
https://docs.openstack.org/developer/heat/template_guide/hot_spec.html
The conditions are used to make the assignment of public ips / floating ips optional (added in commit 4eef540876). However this template is not compatible with OpenStack heat releases prior to Newton and produces the following error:
```
ERROR: Failed to validate: : resources.kube_minions: : "condition" is not a valid keyword inside a output definition
```
PR without a release note:
```release-note
NONE
```
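In template terms the fix amounts to requiring at least the Newton HOT version in the template header:
```
heat_template_version: 2016-10-14
```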
Automatic merge from submit-queue (batch tested with PRs 45860, 45119, 44525, 45625, 44403)
Support running StatefulSetBasic e2e tests with local-up-cluster
**What this PR does / why we need it**:
Currently StatefulSets fail when you use local-up-cluster without
setting a cloud provider. In this PR, we set the
kubernetes.io/host-path provisioner as the default provisioner when
CLOUD_PROVIDER is not specified. This enables e2e tests
(specifically StatefulSetBasic) to work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 44337, 45775, 45832, 45574, 45758)
Fix lint failures on kubernetes-e2e charm
**What this PR does / why we need it**:
This fixes a test failure on the kubernetes-e2e charm relating to tox and flake8:
```DEBUG:runner:/bin/sh: 1: flake8: not found```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
This is a follow-up to https://github.com/kubernetes/kubernetes/pull/45494 where the same thing was done for kubernetes-master.
**Release note**:
```release-note
Fix lint failures on kubernetes-e2e charm
```
Automatic merge from submit-queue (batch tested with PRs 44337, 45775, 45832, 45574, 45758)
Refactor gcr.io/google_containers/elasticsearch to alpine
**What this PR does / why we need it**:
This reduces the image size of the gcr.io/google_containers/elasticsearch image.
Before:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/elasticsearch v2.4.1-2 6941e43df81a 4 weeks ago 419MB
```
After:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/elasticsearch v2.4.1-2 24ad40c21a52 About an hour ago 178MB
```
**Special notes for your reviewer**:
I used a workaround to make the elasticsearch_logging_discovery binary work with alpine. (See [stackoverflow](https://stackoverflow.com/questions/34729748/installed-go-binary-not-found-in-path-on-alpine-linux-docker/35613430#35613430)). Alternatively this can be solved by setting ```CGO_ENABLED=0``` when compiling the binary. I didn't feel comfortable changing the Makefile though, since I'm no golang expert. Feedback wanted!
Automatic merge from submit-queue (batch tested with PRs 45070, 45821, 45732, 45494, 45789)
Fix lint errors in juju kubernetes master and e2e charms
**What this PR does / why we need it**: Fixes style error in the Juju charms
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
Code style fixes in Juju charms
```
Automatic merge from submit-queue (batch tested with PRs 45653, 45719, 45729, 45730, 44250)
Remove kubemark.sh as we don't use pod IP from it anymore
This has been pending for some time now. We no longer seem to actually depend on the downward API for the pod IP (hollow-proxy, for example, now gets it using an API call).
cc @wojtek-t @gmarek
Automatic merge from submit-queue
Remove unused file cluster/images/kubemark/build-kubemark.sh
It's irrelevant and we don't seem to use/need it anymore.
cc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 45691, 45667, 45698, 45715)
Add general NoExecute Toleration to fluentd in gcp configuration
Ref #44445
Once merged I'll create a cherry-pick that will be picked up in GKE together with the next patch release.
cc @JorritSalverda @davidopp @aveshagarwal @nimeshksingh @piosz
```release-note
fluentd will tolerate all NoExecute Taints when run in gcp configuration.
```
Automatic merge from submit-queue
Update kube-dns version to 1.14.2
```release-note
Updates kube-dns to 1.14.2
- Support kube-master-url flag without kubeconfig
- Fix concurrent R/Ws in dns.go
- Fix confusing logging when initialize server
- Fix printf in cmd/kube-dns/app/server.go
- Fix version on startup and --version flag
- Support specifying port number for nameserver in stubDomains
```
Changes:
- Support kube-master-url flag without kubeconfig
- Fix concurrent R/Ws in dns.go
- Fix confusing logging when initialize server
- Fix printf in cmd/kube-dns/app/server.go
- Fix version on startup and --version flag
- Support specifying port number for nameserver in stubDomains
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Don't append :443 to registry domain in the kubernetes-worker layer registry action
**What this PR does / why we need it**: Fixes #45547
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45547
**Special notes for your reviewer**:
**Release note**:
```
Fix #45547 - don't append :443 to juju created docker registry config
```
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Enable kernel memcg notification for node and cluster GCI/COS testing.
Sets --experimental-kernel-memcg-notification=true when running on the GCI/COS image. It sets this for master and nodes for cluster e2e tests, and for the node in node e2e tests.
Issue #42676
cc @dchen1107 @Random-Liu
IP aliases are an alpha feature, and node accelerators are a beta
feature. $gcloud determines which is appropriate.
Before, this would try to run "gcloud alpha beta", which is incoherent.
Automatic merge from submit-queue
util/iptables: grab iptables locks if iptables-restore doesn't support --wait
When iptables-restore doesn't support --wait (which < 1.6.2 don't), it may
conflict with other iptables users on the system, like docker, because it
doesn't acquire the iptables lock before changing iptables rules. This causes
sporadic docker failures when starting containers.
To ensure those don't happen, essentially duplicate the iptables locking
logic inside util/iptables when we know iptables-restore doesn't support
the --wait option.
Unfortunately iptables uses two different locking mechanisms, one until
1.4.x (abstract socket based) and another from 1.6.x (/run/xtables.lock
flock() based). We have to grab both locks, because we don't know what
version of iptables-restore exists since iptables-restore doesn't have
a --version option before 1.6.2. Plus, distros (like RHEL) backport the
/run/xtables.lock patch to 1.4.x versions.
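For illustration, a bash-level sketch of the same serialization idea; it only takes the flock()-based lock (the abstract-socket lock used by 1.4.x iptables cannot be grabbed from shell, which is why util/iptables takes both from Go), and the rules file path is illustrative.
```bash
# Hedged sketch: hold /run/xtables.lock while restoring rules so we don't race
# with docker when iptables-restore has no --wait option.
(
  flock -x -w 5 9 || { echo "timed out waiting for xtables lock" >&2; exit 1; }
  iptables-restore --noflush < /tmp/kube-proxy-rules.v4   # illustrative path
) 9>/run/xtables.lock
```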
Related: https://github.com/kubernetes/kubernetes/pull/43575
See also: https://github.com/openshift/origin/pull/13845
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@kubernetes/rh-networking @kubernetes/sig-network-misc @eparis @knobunc @danwinship @thockin @freehan
Automatic merge from submit-queue (batch tested with PRs 44830, 45130)
Adding support for Accelerators to GCE clusters.
```release-note
Create clusters with GPUs in GKE by specifying "type=<gpu-type>,count=<gpu-count>" to NODE_ACCELERATORS env var.
List of available GPUs - https://cloud.google.com/compute/docs/gpus/#introduction
```
Automatic merge from submit-queue (batch tested with PRs 44590, 44969, 45325, 45208, 44714)
Enable basic auth username rotation for GCI
When changing basic auth creds, just delete the whole file, in order to be able to rotate username in addition to password.
Automatic merge from submit-queue (batch tested with PRs 45309, 45376)
Allow passing --enable-kubernetes-alpha to GKE e2e tests
**What this PR does / why we need it**:
This allows us to pass --enable-kubernetes-alpha when running GKE e2e tests.
**Release note**:
```
NONE
```
@dchen1107
Automatic merge from submit-queue (batch tested with PRs 45285, 45162)
mounter.go: format return err.
**What this PR does / why we need it**:
when an error returned is nil, it's preferred to explicitly return nil.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45283, 45289, 45248, 44295)
Use docker_bundle rule from new rules_docker repo
**What this PR does / why we need it**: switched to using the new `docker_bundle` rule from `rules_docker` instead of my patched `docker_build` rule. This also brings in some fixes for the docker rules that were missing from my fork.
Additionally, I switched out the `git_repository` rules for `http_archive` rules, since that seems to be recommended by the bazel docs (and might be faster).
Lastly, I updated the `pkg_tar` rules to use my patch, which doesn't prepend `./` to files inside the tarballs.
This one should likely be merged upstream in the near future.
I think this is the last of the changes necessary to have `bazel run //:ci-artifacts` working properly to support using bazel for e2e in CI.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45283, 45289, 45248, 44295)
Remove offending code due to bad rebase
**What this PR does / why we need it**: Fix bug introduced by bad rebasing
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43884, 44712, 45124, 43883)
Increase Dashboard memory limits
**What this PR does / why we need it**: Increases memory requests and limits for Dashboard.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/dashboard/issues/1431
**Special notes for your reviewer**: Dashboard crashes on large clusters, this change should fix that problem.
**Release note**:
```release-note
Increase Dashboard's memory requests and limits
```
Automatic merge from submit-queue
Support arbitrary alphanumeric strings as prerelease identifiers
**What this PR does / why we need it**: this is basically an extension of #43642, but supports more general prerelease identifiers, per the spec at http://semver.org/#spec-item-9.
These regular expressions are still a bit more restrictive than the SemVer spec allows (we disallow hyphens, and we require the format `-foo.N` instead of arbitrary `-foo.X.bar.Y.bazZ`), but this should support our needs without changing too much more logic or breaking other assumptions.
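A small bash illustration of the kind of restricted pattern described (not the actual regex from this PR): an alphanumeric identifier followed by a dot and a number, with no hyphens inside the identifier.
```bash
# Hedged sketch: accept prereleases like "v1.7.0-alpha.1" or "v1.7.0-custom.3",
# but not arbitrary SemVer prereleases such as "v1.7.0-foo.1.bar.2".
version_regex='^v?([0-9]+)\.([0-9]+)\.([0-9]+)(-([a-zA-Z0-9]+)\.(0|[1-9][0-9]*))?$'
if [[ "v1.7.0-alpha.1" =~ ${version_regex} ]]; then
  echo "prerelease identifier: ${BASH_REMATCH[5]}, number: ${BASH_REMATCH[6]}"
fi
```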
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
README.md: Update outdated links
**What this PR does / why we need it**:
the PR aims to update some links.
Some links with "#" did not redirect to the right part of the page.
Other links without "#" still work, but they are outdated, so I changed them as well.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
none
Automatic merge from submit-queue
Retry calls so we report config changes quickly.
**What this PR does / why we need it**: In Juju deployments of Kubernetes, the status of the charms is updated when a status-update is triggered periodically. As a result, changes in config variables may take up to 10 minutes to be reflected in the charms' status. See the bug below.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/263
**Special notes for your reviewer**:
**Release note**:
```
Kubernetes clusters deployed with Juju pick up config changes faster.
```
Automatic merge from submit-queue (batch tested with PRs 41583, 45117, 45123)
Adds the cifs-common package
**What this PR does / why we need it**: Enables mounting of CIFS volumes. Required for Azure.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/227
**Release note**:
```release-note
Added CIFS PV support for Juju Charms
```
Automatic merge from submit-queue
Remove the Rackspace provider
**What this PR does / why we need it**:
To aid the effort of moving providers out of the cluster dir, I'm
removing Rackspace and leaving behind a README.md simply as a
placeholder until the entire dir is deleted.
**Which issue this PR fixes**
Fixes #6962
**Release note**:
```release-note
Deployment of Kubernetes clusters on Rackspace using the in-tree bash deployment (i.e. cluster/kube-up.sh or get-kube.sh) is obsolete and support has been removed.
```
Currently StatefulSet(s) fail when you use local-up-cluster without setting a cloud provider. In this PR, we set the kubernetes.io/host-path provisioner as the default provisioner when CLOUD_PROVIDER is not specified. This enables e2e tests (specifically StatefulSetBasic) to work.
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
Fixes juju kubernetes master: 1. Get certs from a dead leader. 2. Append tokens.
**What this PR does / why we need it**:
Fixes two issues with the Juju kubernetes master.
1. Grab certificates from a leader that is already removed.
2. Append (not truncate) auth tokens
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #43563, fixes #43519
**Special notes for your reviewer**:
**Release note**:
```
Recover certificates from leadership context in case all masters die in a Juju deployment
```
Automatic merge from submit-queue
Add metrics exporter to the fluentd-gcp deployment
The metrics exporter container reads metrics from the `/metrics` endpoint in fluentd and exports them directly to Stackdriver. It assumes that the Stackdriver Monitoring API is enabled.
/cc @fgrzadkowski
Automatic merge from submit-queue (batch tested with PRs 42432, 44628, 45101, 44921)
Use correct option name in the kubernetes-worker layer registry action
**What this PR does / why we need it**: It fixes #44920
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44920
**Special notes for your reviewer**:
**Release note**:
```
Ensure kubernetes-worker juju layer registry action uses correct ingress controller option name
```
Automatic merge from submit-queue
Bump GLBC version to 0.9.3
**What this PR does / why we need it**:
Bumps version of GLBC shipped with K8s
https://github.com/kubernetes/ingress/releases/tag/0.9.3
```
Major Changelog:
Bug fix: adding backends to existing backend-services #652
Bug fix: handling of secret-based SSL Certs #639
Add second LB healthcheck/proxy traffic source CIDR #574 #479
Support backside re-encryption (HTTPS) #519
```
The two noted bugs are common occurrences for GKE users
**Release note**:
```release-note
Bump GLBC version to 0.9.3
```
Fixes #6962
To aid the effort of moving providers out of the cluster dir, I'm
removing Rackspace and leaving behind a README.md simply as a
placeholder until the entire dir is deleted.
Automatic merge from submit-queue (batch tested with PRs 42740, 44980, 45039, 41627, 45044)
Update kubernetes-e2e charm to use snaps
**What this PR does / why we need it**:
This updates the kubernetes-e2e charm to use snaps instead of Juju resources for payload delivery.
The main advantage of this is that it decouples the charm from the e2e payload, allowing us to support multiple versions of Kubernetes with a single release of the charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Update kubernetes-e2e charm to use snaps
```
Automatic merge from submit-queue (batch tested with PRs 44591, 44549)
Update repo-infra bazel dependency and use new gcs_upload rule
This PR provides similar functionality to push-build.sh entirely within Bazel rules (though it relies on gsutil).
It's an alternative to #44306.
Depends on https://github.com/kubernetes/repo-infra/pull/13.
**Release note**:
```release-note
NONE
```
The master-components started state triggers a daemon recycle. The guard was there to prevent the daemons from being cycled too often and interrupting normal workflow. The additional state check guards against the etcd connection string changing, preserving the current behavior but triggering a re-configure and recycle of the API control plane when etcd units are scaling up and down.
Automatic merge from submit-queue
Support running Ubuntu image on GCE
**What this PR does / why we need it**:
This PR (on top of #44629) contains the script changes for running Ubuntu image on GCE.
**Special notes for your reviewer**:
We made changes in `gci/node.yaml` and `gci/master.yaml` to ensure that Kubernetes jobs can start automatically after reboot. This is not needed for GCI but is required by Ubuntu. See https://github.com/kubernetes/kubernetes/pull/44744#discussion_r113105970 for details. With this change, Ubuntu can use the same provisioning scripts as GCI.
Ran e2e tests using the following command and all tests passed.
```
KUBE_GCE_NODE_IMAGE=ubuntu-gke-1604-xenial-v20170420-1 KUBE_GCE_NODE_PROJECT=ubuntu-os-gke-cloud KUBE_NODE_OS_DISTRIBUTION=ubuntu GINKGO_PARALLEL=y GINKGO_PARALLEL_NODES=30 go run hack/e2e.go -- -v --build --up --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]" --down
```
Also tested manually for both GCI and Ubuntu images.
**Release note**:
`Support Ubuntu 16.04 image on GCE`
Automatic merge from submit-queue
Send dns details only after cdk-addons are configured
**What this PR does / why we need it**: This is a bugfix on the deployment of Kubernetes via Juju. See issue below.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40386 and
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262
**Special notes for your reviewer**:
**Release note**:
```
Fix KubeDNS issue in Juju deployments.
```
Automatic merge from submit-queue
Allow disabling log dump for nodes (in preparation for using logexporter)
This is, in part, a change required for allowing usage of [logexporter](https://github.com/kubernetes/test-infra/tree/master/logexporter) for dumping node logs to GCS directly, instead of doing it through log-dump.sh.
cc @kubernetes/test-infra-maintainers @wojtek-t @gmarek @fejta
Automatic merge from submit-queue (batch tested with PRs 44931, 44808)
Closes #44392
**What this PR does / why we need it**:
Fix the pause action with regard to the new behavior where
--delete-local-data=false by default. Historically --force was all that
was required, this flag has changed to be more descriptive of the
actions it's taking.
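A hedged sketch of the drain invocation the pause action now has to issue; the node name variable is illustrative.
```bash
# Hedged sketch: the pause action now has to request local-data deletion
# explicitly in addition to --force.
kubectl drain "${NODE_NAME}" --force --delete-local-data
```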
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44392
**Release note**:
```release-note
Added support to the pause action in the kubernetes-worker charm for new flag --delete-local-data
```
Fix the pause action with regard to the new behavior where
--delete-local-data=false by default. Historically --force was all that
was required, this flag has changed to be more descriptive of the
actions it's taking.
Automatic merge from submit-queue
Add namespace-{list, create, delete} actions to the kubernetes-master layer
**What this PR does / why we need it**:
This PR adds namespace-{list,create,delete} actions to the juju kubernetes-master layer.
**Which issue this PR fixes**: fixes #43712
**Special notes for your reviewer**:
Original PR https://github.com/juju-solutions/kubernetes/pull/109
**Release note**:
```
Add namespace-{list,create,delete} actions to the juju kubernetes-master layer
```
Automatic merge from submit-queue (batch tested with PRs 42486, 44780)
Hostname patch for vsphere provider limitations with juju
**What this PR does / why we need it**:
The Juju vSphere provider doesn't set a unique hostname, which causes issues when scaling worker pools: they all have the hostname `ubuntuguest`. Instead we assign JUJU_UNIT_NAME as the hostname to prevent the collision, which allows the master to work out that there are multiple units rather than one unit attempting re-registration.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/237
**Special notes for your reviewer**:
The charm pre-exec hook runs before the charm software is installed, so the validation can happen quickly. Check the hostname output, as well as `kubectl get no`, post deployment.
```release-note
Resolves juju vsphere hostname bug showing only a single node in a scaled node-pool.
```
This patch sets the hostname to a unique identifier (the juju unit name) during pre-deployment of the charm. This may not be an FQDN-resolvable hostname but will prevent hostname collisions.
Using Ubuntu on GCE to run cluster e2e tests requires slightly different node.yaml and master.yaml files than GCI, because Ubuntu uses systemd as PID 1, whereas GCI uses upstart with a systemd delegate. Therefore the e2e tests fail using those files since the kubernetes services are not brought back up after a node/master reboot.
Automatic merge from submit-queue (batch tested with PRs 44722, 44704, 44681, 44494, 39732)
prevent installation of docker from upstream
**What this PR does / why we need it**: Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
```
Automatic merge from submit-queue (batch tested with PRs 42177, 42176, 44721)
Removed fluentd-gcp manifest pod
```release-note
Fluentd manifest pod is no longer created on non-registered master when creating clusters using kube-up.sh.
```
Automatic merge from submit-queue
Remove the old docker-multinode files that were built into the hyperkube image
**What this PR does / why we need it**:
ref: https://goo.gl/VxSaKx
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
The hyperkube image has been slimmed down and no longer includes addon manifests and other various scripts. These were introduced for the now removed docker-multinode setup system.
```
cc @jbeda @brendandburns @bgrant0607 @justinsb @mikedanese
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Retry in get-kube.sh to avoid download flakes.
GCS has up to 2% 5xx rates, so retrying is critical.
This is currently failing about 8 times per day [according to the dashboard](https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Extract#be2f33fb1e6dd2389d12). It could be backported to reduce the flake rate.
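A hedged sketch of the retry idea; the attempt count, delays, and the `tarball_url` variable are illustrative, not taken from this PR.
```bash
# Hedged sketch: retry the tarball download a few times, since GCS can return
# transient 5xx responses.
for attempt in 1 2 3; do
  curl -fL -o kubernetes.tar.gz "${tarball_url}" && break   # tarball_url is illustrative
  echo "download failed (attempt ${attempt}), retrying..." >&2
  sleep $((attempt * 5))
done
```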
Release note:
```release-note
NONE
```
Automatic merge from submit-queue
select one api endpoint at random when deploying kubernetes-core charm
**What this PR does / why we need it**: Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/255
**Release note**:
```release-note
Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
```
Automatic merge from submit-queue
Update gcr.io/google_containers/porter image to 4524579c0e
**What this PR does / why we need it**: updates the porter image to one built at 4524579c0e using go1.8.1.
This incorporates #44638, which has a new dummy certificate that is compliant with go1.8+.
Image has already been pushed.
**Release note**:
```release-note
NONE
```
/assign @liggitt
/cc @luxas @lavalamp
Automatic merge from submit-queue
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
**What this PR does / why we need it**:
This fixes the type of the ceph-secret secret that's created by the kubernetes-master charm.
Without the `kubernetes.io/rbd` type, automatic provisioning of PVCs doesn't work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
```
Automatic merge from submit-queue
issue_43986: fix docu with non-functional proxy
The documentation defines a couple of replication-controllers and services to provision a docker-registry somewhere on the cluster and make it available by name, i.e. an A record of kube-registry.default.svc.<clustername>.
On each node, http proxies are placed as a daemon-set with the kube-registry DNS name set as upstream, so that the registry is available on each host under the endpoint localhost:5000.
Because the selector identifiers in the documentation are the same for the "upstream" registry and the proxies, the proxies themselves register under the service intended for the upstream and then have themselves as upstream under a different port, where connection attempts result in "connection refused".
Adapting the selectors to be unique, as in this patch, fixes the problem.
**What this PR does / why we need it**:
Patch fixes (cf. above) erroneous documentation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #43986
**Special notes for your reviewer**:
Thank you for your consideration.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 43000, 44500, 44457, 44553, 44267)
Add Kubernetes 1.6 support to Juju charms
**What this PR does / why we need it**:
This adds Kubernetes 1.6 support to Juju charms.
This includes some large architectural changes in order to support multiple versions of Kubernetes with a single release of the charms. There are a few bug fixes in here as well, for issues that we discovered during testing.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Thanks to @marcoceppi, @ktsakalozos, @jacekn, @mbruzek, @tvansteenburgh for their work in this feature branch as well!
**Release note**:
```release-note
Add Kubernetes 1.6 support to Juju charms
Add metric collection to charms for autoscaling
Update kubernetes-e2e charm to fail when test suite fails
Update Juju charms to use snaps
Add registry action to the kubernetes-worker charm
Add support for kube-proxy cluster-cidr option to kubernetes-worker charm
Fix kubernetes-master charm starting services before TLS certs are saved
Fix kubernetes-worker charm failures in LXD
Fix stop hook failure on kubernetes-worker charm
Fix handling of juju kubernetes-worker.restart-needed state
Fix nagios checks in charms
```
The documentation defines a couple of replication-controllers and services to provision a docker-registry somewhere on the cluster and make it available by name, i.e. an A record of kube-registry.default.svc.<clustername>.
On each node, http proxies are placed as a daemon-set with the kube-registry DNS name set as upstream, so that the registry is available on each host under the endpoint localhost:5000.
Because the selector identifiers in the documentation are the same for the "upstream" registry and the proxies, the proxies themselves register under the service intended for the upstream and then have themselves as upstream under a different port, where connection attempts result in "connection refused".
Adapting the selectors to be unique, as in this patch, fixes the problem.
modified: cluster/addons/registry/README.md
modified: cluster/addons/registry/registry-rc.yaml
modified: cluster/addons/registry/registry-svc.yaml
Automatic merge from submit-queue (batch tested with PRs 40055, 42085, 44509, 44568, 43956)
Change the default CLUSTER_IP_RANGE used by e2e
The existing choice intersects with the range reserved for auto
subnets and cannot be used with some GCP features.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44406, 41543, 44071, 44374, 44299)
Enable service account token lookup by default
Fixes#24167
```release-note
kube-apiserver: --service-account-lookup now defaults to true, requiring the Secret API object containing the token to exist in order for a service account token to be valid. This enables service account tokens to be revoked by deleting the Secret object containing the token.
```
Automatic merge from submit-queue
Make get-kube.sh work properly with the "ci/latest" pointer
**What this PR does / why we need it**: this is a (late) followup from #36419, fixing a bug discovered in https://github.com/kubernetes/kubernetes/pull/36419#issuecomment-265679578.
Basically, `get-kube-binaries.sh` looks at `$KUBERNETES_RELEASE_URL`, but we weren't properly overriding it in `get-kube.sh` when downloading binaries from the CI release bucket. With this change, we set the variable correctly, and everything works:
```console
$ KUBERNETES_RELEASE=ci/latest ~/code/kubernetes/src/k8s.io/kubernetes/cluster/get-kube.sh
Downloading kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426/kubernetes.tar.gz
to /tmp/foo/kubernetes.tar.gz
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1004 0 --:--:-- --:--:-- --:--:-- 1006
100 6023k 100 6023k 0 0 10.9M 0 --:--:-- --:--:-- --:--:-- 10.9M
Unpacking kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
Kubernetes release: v1.7.0-alpha.0.2068+3a3dc827e45426
Server: linux/amd64 (to override, set KUBERNETES_SERVER_ARCH)
Client: linux/amd64 (autodetected)
Will download kubernetes-server-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Will download and extract kubernetes-client-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 991 0 --:--:-- --:--:-- --:--:-- 987
100 348M 100 348M 0 0 39.1M 0 0:00:08 0:00:08 --:--:-- 34.2M
md5sum(kubernetes-server-linux-amd64.tar.gz)=e71c9b48f6551797a74de2b83b501c44
sha1sum(kubernetes-server-linux-amd64.tar.gz)=688dcf567b60e27e3d9bf97436154543432768cf
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1019 0 --:--:-- --:--:-- --:--:-- 1025
100 29.0M 100 29.0M 0 0 32.2M 0 --:--:-- --:--:-- --:--:-- 95.4M
md5sum(kubernetes-client-linux-amd64.tar.gz)=8e6a90298411ae5a0e943b1c0e182b1d
sha1sum(kubernetes-client-linux-amd64.tar.gz)=187a2d2c1c6ae1ead32ec4c1fa51f695223edaae
Extracting /tmp/foo/kubernetes/client/kubernetes-client-linux-amd64.tar.gz into /tmp/foo/kubernetes/platforms/linux/amd64
Add '/tmp/foo/kubernetes/client/bin' to your PATH to use newly-installed binaries.
Creating a kubernetes on gce...
...
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Ensure only 1 Swift URL is used in cluster operations
**What this PR does / why we need it**:
Extracts only 1 Swift URL if multiple are returned from Keystone.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
https://github.com/kubernetes/kubernetes/issues/34930
**Special notes for your reviewer**:
**Release note**:
```release-note
Heat cluster operations now support environments that have multiple Swift URLs
```
Automatic merge from submit-queue
Use auto mode networks instead of legacy networks in GCP
Use of the --range flag creates legacy networks in GCP.
Legacy networks will not support new GCP features.
```release-note
NONE
```
Automatic merge from submit-queue
Add support for IP aliases for pod IPs (GCP alpha feature)
```release-note
Adds support for allocation of pod IPs via IP aliases.
# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.
## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh
# Adds CloudAllocator to the node CIDR allocator (kube-controller-manager).
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider, and the node controller is
only responsible for reflecting the allocation into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes.
NODE_IP_RANGE will control the node instance IP cidr
KUBE_GCE_IP_ALIAS_SIZE controls the size of each podCIDR
IP_ALIAS_SUBNETWORK controls the name of the subnet created for the cluster
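Putting the knobs above together, an illustrative kube-up invocation (all values are examples, not defaults):
```bash
# Illustrative invocation using the IP-alias knobs described above.
export KUBE_GCE_ENABLE_IP_ALIASES=true
export NODE_IP_RANGE=10.40.0.0/22        # node instance IP CIDR (example value)
export KUBE_GCE_IP_ALIAS_SIZE=/24        # size of each podCIDR (example value)
export IP_ALIAS_SUBNETWORK=my-pods-subnet
cluster/kube-up.sh
```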
Automatic merge from submit-queue (batch tested with PRs 43866, 42748)
hack/cluster: download cfssl if not present
hack/local-up-cluster.sh uses cfssl to generate certificates and
will exit if cfssl is not already installed. But other cluster-up
mechanisms (GCE) that generate certs just download cfssl if not
present. Make local-up-cluster.sh do that too so users don't have
to bother installing it from somewhere.
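A hedged sketch of that download-if-missing behavior; the download URLs and install location are illustrative, not taken from this PR.
```bash
# Hedged sketch: fetch cfssl/cfssljson into a local bin dir when not on PATH.
if ! command -v cfssl >/dev/null 2>&1; then
  mkdir -p "${HOME}/.local/bin"
  curl -sSL -o "${HOME}/.local/bin/cfssl" https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
  curl -sSL -o "${HOME}/.local/bin/cfssljson" https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
  chmod +x "${HOME}/.local/bin/cfssl" "${HOME}/.local/bin/cfssljson"
  export PATH="${HOME}/.local/bin:${PATH}"
fi
```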
Automatic merge from submit-queue
fix kubedns-sa.yaml missing "namespace: kube-system" value
The file kubedns-sa.yaml is missing `namespace: kube-system`, so it creates the ServiceAccount kube-dns in the default namespace; this causes the kube-dns deployment's pods to be blocked forever.
Some logs follow:
> - lastTransitionTime: 2017-04-06T19:02:12Z
> lastUpdateTime: 2017-04-06T19:02:12Z
> message: 'unable to create pods: pods "kube-dns-699984412-" is forbidden: service
> account kube-system/kube-dns was not found, retry after the service account
**Release note**:
```release-note
NONE
```
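For reference, a hedged sketch of what the corrected ServiceAccount manifest boils down to (labels omitted); the point is simply the explicit namespace.
```bash
# Hedged sketch: the ServiceAccount must carry the kube-system namespace
# explicitly, otherwise it lands in "default".
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
EOF
```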
Automatic merge from submit-queue (batch tested with PRs 42025, 44169, 43940)
if we have a dedicated serviceaccount keypair, use it to verify serviceaccounts
Automatic merge from submit-queue
[Federation] Remove FEDERATIONS_DOMAIN_MAP references
Remove all references to FEDERATIONS_DOMAIN_MAP as this method is no longer used and has been replaced by adding the federation domain map to the kube-dns configmap.
cc @madhusudancs @kubernetes/sig-federation-pr-reviews
**Release note**:
```
[Federation] Mechanism of adding `federation domain maps` to kube-dns deployment via `--federations` flag is superseded by adding/updating `federations` key in `kube-system/kube-dns` configmap. If user is using kubefed tool to join cluster federation, adding federation domain maps to kube-dns is already taken care by `kubefed join` and does not need further action.
```
Automatic merge from submit-queue
Use shared informers for proxy endpoints and service configs
Use shared informers instead of creating local controllers/reflectors
for the proxy's endpoints and service configs. This allows downstream
integrators to pass in preexisting shared informers to save on memory &
cpu usage.
This also enables the cache mutation detector for kube-proxy for those
presubmit jobs that already turn it on.
Follow-up to #43295 cc @wojtek-t
Will race with #43937 for conflicting changes 😄 cc @thockin
cc @smarterclayton @sttts @liggitt @deads2k @derekwaynecarr @eparis @kubernetes/rh-cluster-infra
```release-note
kube-apiserver: --service-account-lookup now defaults to true. This enables service account tokens to be revoked by deleting the Secret object containing the token.
```
Automatic merge from submit-queue (batch tested with PRs 44047, 43514, 44037, 43467)
Juju: Enable GPU mode if GPU hardware detected
**What this PR does / why we need it**:
Automatically configures kubernetes-worker node to utilize GPU hardware when such hardware is detected.
layer-nvidia-cuda does the hardware detection, installs CUDA and Nvidia
drivers, and sets a state that the k8s-worker can react to.
When gpu is available, worker updates config and restarts kubelet to
enable gpu mode. Worker then notifies master that it's in gpu mode via
the kube-control relation.
When master sees that a worker is in gpu mode, it updates to privileged
mode and restarts kube-apiserver.
The kube-control interface has subsumed the kube-dns interface
functionality.
An 'allow-privileged' config option has been added to both worker and
master charms. The gpu enablement respects the value of this option;
i.e., we can't enable gpu mode if the operator has set
allow-privileged="false".
**Special notes for your reviewer**:
Quickest test setup is as follows:
```bash
# Bootstrap. If your aws account doesn't have a default vpc, you'll need to
# specify one at bootstrap time so that juju can provision a p2.xlarge.
# Otherwise you can leave out the --config "vpc-id=vpc-xxxxxxxx" bit.
juju bootstrap --config "vpc-id=vpc-xxxxxxxx" --constraints "cores=4 mem=16G root-disk=64G" aws/us-east-1 k8s
# Deploy the bundle containing master and worker charms built from
# https://github.com/tvansteenburgh/kubernetes/tree/gpu-support/cluster/juju/layers
juju deploy cs:~tvansteenburgh/bundle/kubernetes-gpu-support-3
# Setup kubectl locally
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
juju scp kubernetes-master/0:kubectl ./kubectl
# Download a gpu-dependent job spec
wget -O /tmp/nvidia-smi.yaml https://raw.githubusercontent.com/madeden/blogposts/master/k8s-gpu-cloud/src/nvidia-smi.yaml
# Create the job
kubectl create -f /tmp/nvidia-smi.yaml
# You should see a new nvidia-smi-xxxxx pod created
kubectl get pods
# Wait a bit for the job to run, then view logs; you should see the
# nvidia-smi table output
kubectl logs $(kubectl get pods -l name=nvidia-smi -o=name -a)
```
kube-control interface: https://github.com/juju-solutions/interface-kube-control
nvidia-cuda layer: https://github.com/juju-solutions/layer-nvidia-cuda
(Both are registered on http://interfaces.juju.solutions/)
**Release note**:
```release-note
Juju: Enable GPU mode if GPU hardware detected
```
Automatic merge from submit-queue
get-kube-local.sh checks pods with option "--namespace=kube-system"
**What this PR does / why we need it**:
Local cluster creation using get-kube-local.sh never finishes.
The script monitors the running_count of pods such as etcd, master and kube-proxy, but these pods are created under the namespace kube-system. Therefore kubectl can't find these pods and cluster creation never completes.
get-kube-local.sh should monitor the created pods with the option "--namespace=kube-system".
**Which issue this PR fixes** : fixes #42517
**Release note**:
```
`NONE`
```
Automatic merge from submit-queue
cluster/log-dump - chmod files before dumping
We make the files world-readable, so that logs can still be dumped from installations that lock down the logfiles.
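A hedged sketch of the chmod step as it might run over SSH; the node variable and log path are illustrative.
```bash
# Hedged sketch: relax permissions on the node's logs before copying them off.
gcloud compute ssh "${node}" --command "sudo chmod -R a+r /var/log"
```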
Issue https://github.com/kubernetes/test-infra/issues/2397
```release-note
NONE
```
Use shared informers instead of creating local controllers/reflectors
for the proxy's endpoints and service configs. This allows downstream
integrators to pass in preexisting shared informers to save on memory &
cpu usage.
This also enables the cache mutation detector for kube-proxy for those
presubmit jobs that already turn it on.
Automatic merge from submit-queue
Adding more proxy options and header to nginx load-balancer.
**What this PR does / why we need it**: The kubeapi-load-balancer uses nginx to proxy commands to the kube-apiserver. It currently does not support SPDY and therefore the `kubectl exec` command is broken.
**Which issue this PR fixes** :
fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/226
fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/201
**Special notes for your reviewer**: This only changes the nginx configuration; no code change was required.
**Release note**:
```release-note
Using http2 in kubeapi-load-balancer to fix kubectl exec usage
```
hack/local-up-cluster.sh uses cfssl to generate certificates and
will exit if cfssl is not already installed. But other cluster-up
mechanisms (GCE) that generate certs just download cfssl if not
present. Make local-up-cluster.sh do that too.
Automatic merge from submit-queue (batch tested with PRs 42379, 42668, 42876, 41473, 43260)
Silence error messages from the docker rmi call we expect to fail
**What this PR does / why we need it**: when we removed `docker tag -f` in #34361 we added a bunch of `docker rmi` calls to preserve behavior for older docker versions. That step is usually a no-op, however, and results in confusing messages like
```
Tagging docker image gcr.io/google_containers/kube-proxy:c8d0b2e7a06b451117a8ac58fc3bb3d3 as gcr.io/kubernetes-release-test/kube-proxy-amd64:v1.5.4
Error response from daemon: No such image: gcr.io/kubernetes-release-test/kube-proxy-amd64:v1.5.4
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42665
**Special notes for your reviewer**: I could probably remove the `docker rmi` calls entirely, though I don't know if folks are still using docker < 1.10. (I think Jenkins still has 1.9.1.)
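A hedged sketch of the silencing (image/tag variables are illustrative): the cleanup `docker rmi` is expected to fail on newer docker, so its output is simply discarded.
```bash
# Hedged sketch: drop the expected "No such image" noise from the cleanup rmi,
# then retag as before.
docker rmi "${new_tag}" >/dev/null 2>&1 || true
docker tag "${source_image}" "${new_tag}"
```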
**Release note**:
```release-note
NONE
```
cc @jessfraz
Per Clayton's suggestion, move stuff from cluster/lib/util.sh to
hack/lib/util.sh. Also consolidate ensure-temp-dir and use the
hack/lib/util.sh implementation rather than cluster/common.sh.
Automatic merge from submit-queue (batch tested with PRs 43726, 43643)
Make a smaller redis image for testing, based on Alpine.
**What this PR does / why we need it**:
This shrinks gcr.io/google_containers/redis from 400MB to 5MB, which should reduce flakes.
**Which issue this PR fixes**:
fixes#43631
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Moves dns-horizontal-autoscaler to a separate service account
Similar to #38816.
As one of the cluster add-ons, dns-horizontal-autoscaler currently uses the default service account in the kube-system namespace, which was introduced by https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/e2e-rbac-bindings/random-addon-grabbag.yaml for ease of transition. This default service account will be removed in the future.
This PR moves dns-horizontal-autoscaler to a separate service account and sets up the necessary permissions.
@bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43518, 42467)
install/kube-up: fix some errors while installing k8s through kube-up/down.sh
**What this PR does / why we need it**:
etcd 2.3.1 is installed by these scripts, but k8s uses etcd3 as the default storage backend, so the following error always appears:
API server: rpc error: code = 13 desc = transport is closing
so I think we should change the version of etcd.
Thank you!
Automatic merge from submit-queue
Centos provider: generate SSL certificates for etcd cluster.
**What this PR does / why we need it**:
Support a secure etcd cluster for the centos provider by generating SSL certificates for etcd by default. Running it without SSL exposes cluster data to everyone and is not recommended. [#39462](https://github.com/kubernetes/kubernetes/pull/39462#issuecomment-271601547)
/cc @jszczepkowski @zmerlynn
**Release note**:
```release-note
Support secure etcd cluster for centos provider.
```
Automatic merge from submit-queue
added prompt warning if etcd3 media type isn't set during upgrade
**What this PR does / why we need it**:
This adds a prompt confirming the upgrade when `STORAGE_MEDIA_TYPE` is not explicitly set. This is to prevent users from accidentally upgrading to protobuf.
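A hedged sketch of such a confirmation prompt; `STORAGE_MEDIA_TYPE` is from the PR, while the prompt text and exit handling are illustrative.
```bash
# Hedged sketch: warn before an upgrade that may silently switch etcd storage
# media type to protobuf.
if [[ -z "${STORAGE_MEDIA_TYPE:-}" ]]; then
  echo "STORAGE_MEDIA_TYPE is not set; the upgrade may switch etcd storage to protobuf."
  read -r -p "Continue with the upgrade? [y/N] " confirm
  [[ "${confirm}" =~ ^[yY]$ ]] || exit 1
fi
```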
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Along with the docs, addresses #43669
**Special notes for your reviewer**:
Should be cherrypicked onto `release-1.6`
**Release note**:
```release-note
NONE
```