Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable scaling fluentd-gcp resources using ScalingPolicy.
See https://github.com/justinsb/scaler for more details about ScalingPolicy resource.
**What this PR does / why we need it**:
This adds a way to override fluentd-gcp resources in a running cluster. Resource syncing for fluentd-gcp is decoupled from the addon manager.
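For illustration, a minimal sketch of overriding fluentd-gcp resources with a ScalingPolicy, assuming the `scalingpolicy.kope.io/v1alpha1` schema from the justinsb/scaler project linked above; the group/version, field names, and values below are assumptions to check against that repository, not taken from this PR:
```
# Hedged sketch: apply a ScalingPolicy that overrides fluentd-gcp resource requests.
# apiVersion/kind/field names follow the justinsb/scaler project and are assumptions.
kubectl apply -f - <<EOF
apiVersion: scalingpolicy.kope.io/v1alpha1
kind: ScalingPolicy
metadata:
  name: fluentd-gcp-scaling-policy
  namespace: kube-system
spec:
  containers:
  - name: fluentd-gcp
    resources:
      requests:
      - resource: cpu
        base: 100m
      - resource: memory
        base: 200Mi
EOF
```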
**Special notes for your reviewer**:
**Release note**:
```release-note
fluentd-gcp resources can be modified via a ScalingPolicy
```
cc @kawych @justinsb
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Upload container runtime log to sd/es.
I've verified this in my environment. My Stackdriver has an extra `container-runtime` entry for node logs, and it collects container runtime daemon logs correctly.
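For example, one way to spot-check the new entry once logs are flowing is `gcloud logging read`; only the `container-runtime` tag comes from this change, while the overall filter shape and project setup are assumptions:
```
# Hedged sketch: list recent container runtime daemon log entries in Stackdriver.
# The filter shape is an assumption; only the "container-runtime" tag is from this PR.
gcloud logging read \
  'resource.type="gce_instance" AND logName:"logs/container-runtime"' \
  --limit=10 --format=json
```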
@yujuhong @feiskyer @crassirostris @piosz
@kubernetes/sig-node-pr-reviews @kubernetes/sig-instrumentation-pr-reviews
Signed-off-by: Lantao Liu <lantaol@google.com>
**Release note**:
```release-note
Container runtime daemon (e.g. dockerd) logs in GCE clusters will be uploaded to Stackdriver and Elasticsearch with the tag `container-runtime`
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add a new environment variable called KUBE_PROXY_MODE to the GCE startup scripts
**What this PR does / why we need it**:
This PR adds a new environment variable called KUBE_PROXY_MODE to the startup scripts for GCE. This variable lets a user specify the kube-proxy implementation to use, with the choices being ipvs or iptables (iptables is the default).
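For example, a minimal sketch of selecting the IPVS proxier when bringing up a GCE cluster:
```
# Choose the kube-proxy implementation used by the GCE startup scripts.
# Valid values per this PR: "iptables" (default) or "ipvs".
export KUBE_PROXY_MODE=ipvs
cluster/kube-up.sh
```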
Next steps:
1. Remove the use of the feature gate when IPVS goes GA
2. Add logic for loading the required IPVS kernel modules in the scripts
Question: If the proxier is IPVS, is it necessary to have the iptables sync period flags?
**Release note**:
```release-note
None
```
This is the 2nd attempt. The previous one was reverted while we figured
out the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest one. To publish
an image, push to k8s-staging.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging is an alias to
gcr.io/google_containers (the legacy URL).
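For example, a rough sketch of that publish-and-pull flow; the image name and tag are purely illustrative:
```
# Push to the staging registry; the sync to the regional mirrors behind
# k8s.gcr.io happens automatically. Image name and tag are hypothetical.
docker tag my-image:v1 k8s-staging.gcr.io/my-image:v1
docker push k8s-staging.gcr.io/my-image:v1
# Consumers pull through the read-only facade, which routes to the closest region.
docker pull k8s.gcr.io/my-image:v1
```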
When we move off of google-owned projects (working on it), then we just
do a one-time sync, and change the google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
doc: fix typo in cluster
**What this PR does / why we need it**:
fix typo in cluster
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54278, 56259, 56762). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add NODE_LOCAL_SSDS_EXT to config-test
**What this PR does / why we need it**:
Add NODE_LOCAL_SSDS_EXT to config-test so we can specify it for CI.
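For instance, a hedged sketch of setting it for a test run; the value syntax shown ("count,interface,format" groups separated by semicolons) is an assumption about the variable, not something specified in this PR:
```
# Hedged sketch: request extra local SSDs for test clusters.
# The "count,interface,format" value syntax is an assumption.
export NODE_LOCAL_SSDS_EXT="1,scsi,fs;1,nvme,block"
cluster/kube-up.sh
```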
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #57468
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56386, 57204, 55692, 57107, 57177). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
GCE: bump COS image version to cos-stable-63-10032-71-0
```release-note
GCE: bump COS image version to cos-stable-63-10032-71-0
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add CoreDNS as an optional addon in kube-up
**What this PR does / why we need it**:
This PR adds the option of installing CoreDNS as an addon instead of kube-dns in kube-up.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56439
**Special notes for your reviewer**:
**Release note**:
```release-note
kube-up: Add optional addon CoreDNS.
Install CoreDNS instead of kube-dns by setting the CLUSTER_DNS_CORE_DNS value to 'true'.
```
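A minimal sketch of enabling the new addon when bringing up a cluster, using the `CLUSTER_DNS_CORE_DNS` variable from the release note above:
```
# Install CoreDNS instead of kube-dns during kube-up.
export CLUSTER_DNS_CORE_DNS=true
cluster/kube-up.sh
```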
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Run nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached
- Instead of the old `Accelerators` feature that added the `alpha.kubernetes.io/nvidia-gpu` resource, use the new `DevicePlugins` feature that adds vendor-specific resources. (In the case of nvidia GPUs it will add the `nvidia.com/gpu` resource.)
- Add a node label to GCE nodes with accelerators attached. This node label is the same as the one GKE attaches to node pools with accelerators attached. (For example, for the nvidia-tesla-p100 GPU, the label would be `cloud.google.com/gke-accelerator=nvidia-tesla-p100`.) This will help us target accelerator-specific daemonsets etc. to these nodes.
- Run nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached.
- Some minor documentation improvements in addon manager.
**Release note**:
```release-note
GCE nodes with NVIDIA GPUs attached now expose `nvidia.com/gpu` as a resource instead of `alpha.kubernetes.io/nvidia-gpu`.
```
/sig cluster-lifecycle
/sig scheduling
/area hw-accelerators
https://github.com/kubernetes/features/issues/368
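A hedged sketch of a workload targeting these nodes, requesting the `nvidia.com/gpu` resource and selecting on the `cloud.google.com/gke-accelerator` label described above; the pod name and image are hypothetical:
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cuda-example            # hypothetical name
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base # hypothetical image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1       # resource exposed by the device plugin
  restartPolicy: Never
EOF
```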
Automatic merge from submit-queue (batch tested with PRs 55268, 55282, 55419, 48340, 54829). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Run ResourceQuota after GenericAdmissionWebhook admission plugin to avoid charging quota prematurely
This only affects e2e tests.
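For context, a hedged sketch of the resulting ordering on the apiserver command line, assuming the legacy `--admission-control` flag where plugin order follows the flag value; the plugin list here is illustrative, not the exact list used by the e2e scripts:
```
# ResourceQuota runs after GenericAdmissionWebhook so quota is not charged
# for objects a webhook may still reject. Plugin list is illustrative.
kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,GenericAdmissionWebhook,ResourceQuota
```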
Automatic merge from submit-queue (batch tested with PRs 54488, 54838, 54964). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support for alternative container runtimes in `kube-up.sh`
For https://github.com/kubernetes/features/issues/286.
This PR adds 4 new environment variables to `kube-up.sh` to support alternative container runtimes:
1) `KUBE_MASTER_EXTRA_METADATA` and `KUBE_NODE_EXTRA_METADATA`. Add extra metadata on the master and node instances. With these we can specify a different cloud-init for a different container runtime, and also add extra metadata for the new cloud-init, e.g. [master.yaml](7d73966214/test/e2e/master.yaml)
2) `KUBE_CONTAINER_RUNTIME_ENDPOINT`. Specify a different socket for a different container runtime. It is only used when non-empty.
3) `KUBE_LOAD_IMAGE_COMMAND`. Specify a different image load command for a different container runtime.
An example for cri-containerd:
```
export KUBE_MASTER_EXTRA_METADATA="user-data=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/e2e/master.yaml,cri-containerd-configure-sh=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/configure.sh"
export KUBE_NODE_EXTRA_METADATA="user-data=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/e2e/node.yaml,cri-containerd-configure-sh=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/configure.sh"
export KUBE_CONTAINER_RUNTIME="remote"
export KUBE_CONTAINER_RUNTIME_ENDPOINT="/var/run/cri-containerd.sock"
export KUBE_LOAD_IMAGE_COMMAND="/home/cri-containerd/usr/local/bin/cri-containerd load"
export NETWORK_POLICY_PROVIDER="calico"
```
Signed-off-by: Lantao Liu <lantaol@google.com>
```release-note
none
```
/cc @yujuhong @dchen1107 @feiskyer @mikebrow @abhi @mrunalp @runcom
/cc @kubernetes/sig-node-pr-reviews
This node label is the same as the one GKE attaches to node pools with
accelerators attached. This will help us target accelerator-specific
daemonsets etc. to these nodes.
Instead of the old Accelerators feature that added the
alpha.kubernetes.io/nvidia-gpu resource, use the new DevicePlugins
feature that adds vendor-specific resources. (In the case of nvidia it
will add the nvidia.com/gpu resource.)
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable metadata concealment for tests
**What this PR does / why we need it**: Metadata concealment is going to beta for v1.9; enable it by default in tests. Also, just use `ENABLE_METADATA_CONCEALMENT` instead of two different vars. Work toward #8867.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none
**Special notes for your reviewer**:
**Release note**:
```release-note
Metadata concealment on GCE is now controlled by the `ENABLE_METADATA_CONCEALMENT` env var. See cluster/gce/config-default.sh for more info.
```
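A minimal sketch of toggling the consolidated variable for a GCE cluster, per the release note above:
```
# Metadata concealment is now controlled by this single variable
# (see cluster/gce/config-default.sh).
export ENABLE_METADATA_CONCEALMENT=true
cluster/kube-up.sh
```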
Automatic merge from submit-queue (batch tested with PRs 52868, 53196, 54207). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Allow users to configure the service account made available on their nodes
**What this PR does / why we need it**: This allows users (and tests) to configure which GCP service account nodes are given when they are created, so that fewer permissions can be granted to nodes via IAM (instead of scopes). Read more about service accounts and scopes here: https://cloud.google.com/compute/docs/access/service-accounts
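For illustration, a hedged sketch of preparing a minimal-permission service account with `gcloud` and handing it to kube-up; the `NODE_SERVICE_ACCOUNT` variable name and the project/role values are assumptions, since the PR description does not name them:
```
# Create a dedicated service account and grant it only the roles the nodes need
# (project name and role are illustrative; see the GCP docs linked above).
gcloud iam service-accounts create k8s-minimal-nodes
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:k8s-minimal-nodes@my-project.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
# Assumed kube-up variable name; verify against cluster/gce/config-default.sh.
export NODE_SERVICE_ACCOUNT="k8s-minimal-nodes@my-project.iam.gserviceaccount.com"
cluster/kube-up.sh
```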
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53603
**Special notes for your reviewer**:
**Release note**:
```release-note
Allow GCE users to configure the service account made available on their nodes
```