Automatic merge from submit-queue (batch tested with PRs 41139, 41186, 38882, 37698, 42034)
create configmap from-env-file
Allow ConfigMaps to be created from Docker-based env files.
See proposal https://github.com/kubernetes/community/issues/165
**Release-note:**
```release-note
1. `create configmap` has a new option `--from-env-file` that populates a ConfigMap from a file that follows a key=val format on each line.
2. `create secret` has a new option `--from-env-file` that populates a secret from a file that follows a key=val format on each line.
```
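A minimal usage sketch (file contents and names are illustrative):
```console
$ cat app.env
LOG_LEVEL=debug
MAX_CONNECTIONS=100
$ kubectl create configmap app-config --from-env-file=app.env
$ kubectl create secret generic app-secret --from-env-file=app.env
```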
Automatic merge from submit-queue (batch tested with PRs 41794, 42349, 42755, 42901, 42933)
[Federation][e2e] Add framework for upgrade test in federation
Adds a framework for federation upgrade tests. Please refer to #41791
cc @madhusudancs @nikhiljindal @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 42734, 42745, 42758, 42814, 42694)
Implement automated downgrade testing.
Node version cannot be higher than the master version, so we must
switch the node version first. Also, we must use the upgrade script
from the appropriate version for GCE.
Automatic merge from submit-queue
kubeadm: Hook up kubeadm against the BootstrapSigner
**What this PR does / why we need it**:
This PR makes kubeadm able to use the BootstrapSigner.
Depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C
# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION
67a96d.02405a1773564431 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'.
# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo
# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C
# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generate a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.1.115:6443
name: kubernetes
# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```
**Release note**:
```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)
kubelet: enable qos-level memory limits
```release-note
Experimental support to reserve a pod's memory request from being utilized by pods in lower QoS tiers.
```
Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314
**Note: QoS level cgroups have to be enabled for any of this to take effect.**
Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.
For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)` to reserve 50% of the Guaranteed pod's request from being used by the lower QoS tiers.
If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.
The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`
The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.
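A small worked example of the arithmetic, with purely hypothetical numbers:
```shell
# kubelet ... --experimental-qos-reserved=memory=50%
# Assume NodeAllocatable = 10Gi, Guaranteed requests = 2Gi, Burstable requests = 4Gi:
#   Burstable  memory.limit_in_bytes = 10Gi - 50%*2Gi            = 9Gi
#   BestEffort memory.limit_in_bytes = 10Gi - 50%*2Gi - 50%*4Gi  = 7Gi
```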
@derekwaynecarr @vishh
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)
Juju: Disable anonymous auth on kubelet
**What this PR does / why we need it**:
This disables anonymous authentication on kubelet when deployed via Juju.
I've also adjusted a few other TLS options for kubelet and kube-apiserver. The end result is that:
1. kube-apiserver can now authenticate with kubelet
2. kube-apiserver now verifies the integrity of kubelet
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/219
**Special notes for your reviewer**:
This is dependent on PR #41251, where the tactics changes are being merged in separately.
Some useful pages from the documentation:
* [apiserver -> kubelet](https://kubernetes.io/docs/admin/master-node-communication/#apiserver---kubelet)
* [Kubelet authentication/authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/)
**Release note**:
```release-note
Juju: Disable anonymous auth on kubelet
```
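Roughly, this involves flags along the following lines (a sketch only; paths are placeholders, not the exact charm configuration):
```shell
# kubelet: reject anonymous requests and verify client certificates against the cluster CA
kubelet --anonymous-auth=false --client-ca-file=/srv/kubernetes/ca.crt ...
# kube-apiserver: present a client certificate to the kubelet and verify the kubelet's serving cert
kube-apiserver --kubelet-client-certificate=/srv/kubernetes/client.crt \
  --kubelet-client-key=/srv/kubernetes/client.key \
  --kubelet-certificate-authority=/srv/kubernetes/ca.crt ...
```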
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)
kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names
**What this PR does / why we need it**:
- For the beta kubeadm init UX, we need this change
- Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
This PR depends on https://github.com/kubernetes/kubernetes/pull/41897
**Release note**:
```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt
Automatic merge from submit-queue (batch tested with PRs 41984, 41682, 41924, 41928)
Add options to kubefed telling it to generate HTTP Basic and/or token credentials for the Federated API server
Fixes #41265.
**Release notes**:
```release-note
Adds two options to kubefed, `--apiserver-enable-basic-auth` and `--apiserver-enable-token-auth`, which generate an HTTP Basic username/password and a token, respectively, for the Federated API server.
```
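An illustrative invocation (other required `kubefed init` flags omitted):
```console
$ kubefed init myfed --apiserver-enable-basic-auth=true --apiserver-enable-token-auth=true ...
```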
Automatic merge from submit-queue (batch tested with PRs 42128, 42064, 42253, 42309, 42322)
kubeadm: Rename some flags for beta UI and fixup some logic
**What this PR does / why we need it**:
In this PR:
- `--api-advertise-addresses` becomes `--apiserver-advertise-address`
- The API Server's logic here is that if the address is `0.0.0.0`, it chooses the host's default interface's address. kubeadm here uses exactly the same logic. This arg is then passed to `--advertise-address`, and the API Server will advertise that one for the service VIP.
- `--api-port` becomes `--apiserver-bind-port` for clarity
ref the meeting notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
@jbeda @dmmcquay @pires @lukemarsden @dgoodwin @mikedanese
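With the renamed flags, an init invocation looks roughly like this (address and port are illustrative):
```console
$ kubeadm init --apiserver-advertise-address=192.168.1.115 --apiserver-bind-port=6443
```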
This commit switches over the HPA controller to use the custom metrics
API. It also converts the HPA controller to use the generated client
in k8s.io/metrics for the resource metrics API.
In order to enable support, you must enable
`--horizontal-pod-autoscaler-use-rest-clients` on the
controller-manager, which will switch the HPA controller's MetricsClient
implementation over to use the standard rest clients for both custom
metrics and resource metrics. This requires that at least the resource
metrics API is registered with kube-aggregator, and that the controller
manager is pointed at kube-aggregator. For this to work, Heapster
must be serving the new-style API server (`--api-server=true`).
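A minimal sketch of opting in on the controller manager (other flags omitted):
```shell
kube-controller-manager --horizontal-pod-autoscaler-use-rest-clients=true ...
```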
Automatic merge from submit-queue
Extend experimental support to multiple Nvidia GPUs
Extended from #28216
```release-note
`--experimental-nvidia-gpus` flag is **replaced** by `Accelerators` alpha feature gate along with support for multiple Nvidia GPUs.
To use GPUs, pass `Accelerators=true` as part of `--feature-gates` flag.
Works only with Docker runtime.
```
1. Automated testing for this PR is not possible since creation of clusters with GPUs isn't supported yet in GCP.
1. To test this PR locally, use the node e2e.
```shell
TEST_ARGS='--feature-gates=DynamicKubeletConfig=true' FOCUS=GPU SKIP="" make test-e2e-node
```
TODO:
- [x] Run manual tests
- [x] Add node e2e
- [x] Add unit tests for GPU manager (< 100% coverage)
- [ ] Add unit tests in kubelet package
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)
kubeadm: join ux changes
**What this PR does / why we need it**: Update `kubeadm join` UX according to https://github.com/kubernetes/community/pull/381
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/176
**Special notes for your reviewer**: /cc @luxas @jbeda
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42058, 41160, 42065, 42076, 39338)
New command for stand-alone GKE certificates controller
New stand-alone certificates controller for GKE. Rather than requiring the CA's private key on disk, this allows making external calls to GKE in order to sign cluster certificates.
**Which issue this PR fixes**: fixes #39761
**Release note**:
```release-note
New GKE certificates controller.
```
CC @mikedanese @jcbsmpsn
Automatic merge from submit-queue (batch tested with PRs 42044, 41694, 41927, 42050, 41987)
Add apply set-last-applied subcommand
Implements part of https://github.com/kubernetes/community/pull/287; will rebase after https://github.com/kubernetes/kubernetes/pull/41699 is merged. EDIT: since the output-format bug has been confirmed, the output-format behavior will be updated soon.
cc @kubernetes/sig-cli-pr-reviews @AdoHe @pwittrock
```release-note
Support the kubectl apply set-last-applied command to update the last-applied-configuration annotation
```
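An illustrative invocation (the file name is hypothetical):
```console
$ kubectl apply set-last-applied -f deployment.yaml
```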
Automatic merge from submit-queue (batch tested with PRs 41701, 41818, 41897, 41119, 41562)
kubeadm: Secure the control plane communication and add the kubeconfig phase command
**What this PR does / why we need it**:
This generates kubeconfig files for the controller-manager and the scheduler, ref: https://github.com/kubernetes/kubeadm/issues/172
The second commit adds the `kubeadm alpha phase kubeconfig` command as described in the design doc: https://github.com/kubernetes/kubeadm/pull/156
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
@dmmcquay What kind of tests would you like for the kubeconfig phase command?
**Release note**:
```release-note
```
@jbeda @mikedanese @dmmcquay @pires @liggitt @deads2k @errordeveloper
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)
add kubectl can-i to see if you can perform an action
Adds `kubectl auth can-i <verb> <resource> [<name>]` so that a user can see if they are allowed to perform an action.
@kubernetes/sig-cli-pr-reviews @fabianofranz
This particular command satisfies the immediate need of knowing if you can perform an action without trying that action. When using RBAC in a script that is adding permissions, there is a lag between adding the permission and the permission being realized in the RBAC cache. As a user on the CLI, you almost never see it, but as a script adding a binding and then using that new power, you hit it quite often.
There are natural follow-ons to the same area (hence the `auth` subcommand) to figure out if someone else can perform an action, what actions you can perform in total, and who can perform a given action. Someone else is an API we have already, what-can-i-do was a proposed API a while back and a very useful one for interfaces, and who-can is common question if someone is administering a namespace.
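A quick sketch of the intended usage (resource, namespace, and output are illustrative):
```console
$ kubectl auth can-i create deployments --namespace dev
yes
$ kubectl auth can-i delete nodes
no
```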
Automatic merge from submit-queue
Protect kubeproxy deployed via kube-up from system OOMs
This change is necessary until it can be moved to Guaranteed QoS Class.
For #40573
Automatic merge from submit-queue (batch tested with PRs 40665, 41094, 41351, 41721, 41843)
kubeadm: Add a --ca-cert-path flag to kubeadm join
**What this PR does / why we need it**:
This PR makes it possible to customize where the CA file is written
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
@pires @mikedanese @dmmcquay @jbeda @errordeveloper
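An illustrative join with the new flag (token, path, and endpoint are placeholders):
```console
$ kubeadm join --token <token> --ca-cert-path /etc/kubernetes/pki/ca.crt 192.168.1.115:6443
```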
This adds a new stand-alone certificates controller for use on GKE. It
allows calling GKE to sign certificates instead of requiring the CA
private key locally.
It does not aim for 100% feature parity with kube-controller-manager
yet, so for instance, leader election support is omitted.
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)
kube-apiserver: add a bootstrap token authenticator for TLS bootstrapping
Follows up on https://github.com/kubernetes/kubernetes/pull/36101
Still needs:
* More tests.
* To be hooked up to the API server.
- Do I have to do that in a separate PR after k8s.io/apiserver is synced?
* Docs (kubernetes.io PR).
* Figure out caching strategy.
* Release notes.
cc @kubernetes/sig-auth-api-reviews @liggitt @luxas @jbeda
```release-note
Added a new secret type "bootstrap.kubernetes.io/token" for dynamically creating TLS bootstrapping bearer tokens.
```
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)
Lint fixes for the master and worker Python code.
**What this PR does / why we need it**: lint fixes for the python code.
**Which issue this PR fixes**: none
**Special notes for your reviewer**: This is lint fixes for the Juju python code.
**Release note**:
```release-note
NONE
```
Please consider these changes so we can pass flake8 lint tests in our build process.
Automatic merge from submit-queue
Added a basic monitor for providing etcd version related info
Fixes #41071
This tool scrapes metrics partly from etcd's /version and /metrics endpoints and partly using etcdctl, and exposes them as Prometheus metrics at the `http://localhost:9101/metrics` endpoint on the master. Here is a summary of the metrics it exposes (self-explanatory from the code):
- etcdVersionFetchCount = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: "etcd",
Name: "version_info_fetch_count",
Help: "Number of times etcd's version info was fetched, labeled by etcd's server binary and cluster version",
},
[]string{"serverversion", "clusterversion"})
- etcdGRPCRequestsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: namespace,
Name: "grpc_requests_total",
Help: "Counter of received grpc requests, labeled by grpc method and grpc service names",
},
[]string{"grpc_method", "grpc_service"})
For further info on how to run this as a binary/docker-container/kubernetes-pod and checking the metrics, have a look at the README.md file.
cc @fgrzadkowski @wojtek-t @piosz
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)
cluster/gce: Add env var to enable apiserver basic audit log.
For now, this is focused on a fixed set of flags that makes the audit
log show up under /var/log/kube-apiserver-audit.log and behave similarly
to /var/log/kube-apiserver.log. Allowing other customization would
require significantly more complex changes.
Audit log rotation is handled the same as for `kube-apiserver.log`.
**What this PR does / why we need it**:
Add a knob to enable [basic audit logging](https://kubernetes.io/docs/admin/audit/) in GCE.
**Which issue this PR fixes**:
**Special notes for your reviewer**:
We would like to cherrypick/port this to release-1.5 also.
**Release note**:
```release-note
The kube-apiserver [basic audit log](https://kubernetes.io/docs/admin/audit/) can be enabled in GCE by exporting the environment variable `ENABLE_APISERVER_BASIC_AUDIT=true` before running `cluster/kube-up.sh`. This will log to `/var/log/kube-apiserver-audit.log` and use the same `logrotate` settings as `/var/log/kube-apiserver.log`.
```
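A minimal sketch of turning this on for a GCE cluster, per the release note above:
```console
$ ENABLE_APISERVER_BASIC_AUDIT=true cluster/kube-up.sh
```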
This change makes the kubelet use the CRI implementation by default,
unless users opt out explicitly by using --enable-cri=false.
For the rkt integration, the --enable-cri flag will have no effect
since rktnetes does not use CRI.
Also, mark the original --experimental-cri flag hidden and deprecated,
so that we can remove it in the next release.
For now, this is focused on a fixed set of flags that makes the audit
log show up under /var/log/kube-apiserver-audit.log and behave similarly
to /var/log/kube-apiserver.log. Allowing other customization would
require significantly more complex changes.
Audit log rotation is handled externally by the wildcard /var/log/*.log
already configured in configure-helper.sh.
Automatic merge from submit-queue
Added kubectl create role command
Added `kubectl create role` command.
Fixed part of #39596
**Release note**:
```
Added a new command, `kubectl create role`, to help users create a single role from the command line.
```
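An illustrative invocation (role name, verbs, and resource are examples):
```console
$ kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
```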
Automatic merge from submit-queue (batch tested with PRs 39418, 41175, 40355, 41114, 32325)
TaintController
```release-note
This PR adds a manager to NodeController that is responsible for removing Pods from Nodes tainted with NoExecute Taints. This feature is beta (like the rest of taints) and enabled by default. It is gated by the controller-manager's `--enable-taint-manager` flag.
```
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)
[Federation] Add override flags options to kubefed init
**What this PR does / why we need it**:
Allows modification of startup flags (of apiserver and controller manager) through kubefed
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/40398
**Special notes for your reviewer**:
I have intentionally not removed the existing redundant flags (for example --dns-zone-name) to avoid breaking any existing tests that might use them.
That would be better done as a follow-up PR.
@madhusudancs @marun @nikhiljindal
**Release note**:
```
It is now possible for the user to modify any startup flag of federation-apiserver and federation-controller-manager when deployed through kubefed.
There are two new options introduced in kubefed:
--apiserver-arg-overrides and --controllermanager-arg-overrides
Any number of actual federation-apiserver or federation-controller-manager flags can be specified using these options.
Example:
kubefed init "-other options-" ----apiserver-arg-overrides "--flag1=value1,--flag2=value2"
```
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
Remove deprecated kubelet flags that look safe to remove
Removes:
```
--config
--auth-path
--resource-container
--system-container
```
which have all been marked deprecated since at least 1.4 and look safe to remove.
```release-note
The deprecated flags --config, --auth-path, --resource-container, and --system-container were removed.
```
Automatic merge from submit-queue (batch tested with PRs 40873, 40948, 39580, 41065, 40815)
Default target storage in etcd
To make the etcd v2->v3 upgrade work correctly, we need to correctly set the "TARGET_STORAGE" env var. Since head now defaults to etcd v3, this PR also defaults that env var to etcd3, so that upgrades work by default.
@fgrzadkowski @gmarek
Automatic merge from submit-queue (batch tested with PRs 40175, 41107, 41111, 40893, 40919)
remove second CA used for kubelet auth in favor of webhook auth
Partially fixes the upgrade test.
Automatic merge from submit-queue (batch tested with PRs 40175, 41107, 41111, 40893, 40919)
kubeadm: skip integration tests if kubeadm-cmd-skip flag passed
Skips the integration tests for token generation if no kubeadm binary is found at the given --kubeadm-path (or its default value).
**What this PR does / why we need it**: Tests would fail when just running `go test` in the directory, because the tests expect more values to be set. This won't change the behavior of `make test-cmd`, which gets run here:
https://github.com/kubernetes/kubernetes/blob/master/Makefile#L258
**Which issue this PR fixes**: fixes #40155
**Special notes for your reviewer**: /cc @pires @pipejakob @liggitt
```release-note
NONE
```
After today's SIG meeting, we discussed how to proceed with these
types of test-cmd tests. They will live in kubeadm/test/cmd and will
provide a flag (--kubeadm-cmd-skip) that allows you to skip them;
by default they fail if the kubeadm binary is not present.
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
remove the create-external-load-balancer flag in cmd/expose.go
**What this PR does / why we need it**:
In cmd/expose.go there is a TODO, "remove create-external-load-balancer in code on or after Aug 25, 2016.", and that date is long past. So I remove this flag and modify the test cases.
Please check this, thanks!
**Release note**:
```
Remove the deprecated flag "create-external-load-balancer"; use --type="LoadBalancer" instead.
```
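The equivalent usage with the replacement flag (deployment name and port are illustrative):
```console
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
```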
Automatic merge from submit-queue
[Federation][kubefed] Add option to expose federation apiserver on nodeport service
**What this PR does / why we need it**:
This PR adds an option to kubefed to expose the federation API server over a NodePort service. This can be useful for deploying federation in non-cloud environments. This PR targets #39271.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
[Federation] kubefed init learned a new flag, `--api-server-service-type`, that allows service type to be specified for the federation API server.
[Federation] kubefed init also learned a new flag, `--api-server-advertise-address`, that allows specifying advertise address for federation API server in case the service type is NodePort.
```
@kubernetes/sig-federation-misc @madhusudancs
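An illustrative invocation (the address is a placeholder; other required flags omitted):
```console
$ kubefed init myfed --api-server-service-type=NodePort --api-server-advertise-address=192.0.2.10 ...
```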
Automatic merge from submit-queue (batch tested with PRs 40862, 40909)
[Federation][kubefed] Add option to disable persistence storage for etcd
**What this PR does / why we need it**:
This is part of the updates to enable deployment of federation on non-cloud environments. This PR enables disabling persistent storage for etcd via kubefed.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40617
**Special notes for your reviewer**:
**Release note**:
```
[Federation] Add --etcd-persistent-storage flag to kubefed to enable/disable persistent storage for etcd
```
cc: @kubernetes/sig-federation-bugs @madhusudancs
Automatic merge from submit-queue (batch tested with PRs 40812, 39903, 40525, 40729)
test/node_e2e: wire-in cri-enabled local testing
This commit wires-in the pre-existing `--container-runtime` flag for
local node_e2e testing.
This is needed in order to further skip docker specific testing
and validation.
Local CRI node_e2e can now be performed via
`make test-e2e-node RUNTIME=remote REMOTE=false`
which will also take care of passing the appropriate argument to
the kubelet.
Automatic merge from submit-queue
Add flag to node e2e test specifying location of ssh privkey
**What this PR does / why we need it**: in CI, the ssh private key is not always located at `$HOME/.ssh`, so it's helpful to be able to override it.
@krzyzacy here's my resurrected change. I'm not sure why I neglected to follow through on it originally.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Quickly create an HA cluster with the kube-up.sh centos provider
Make the `kube-up.sh` `centos` provider support quickly creating an HA cluster. As I said in [#39430](https://github.com/kubernetes/kubernetes/issues/39430), it's more flexible than `kops` or `kubeadm` for some people in a restricted network region.
I'm new to k8s dev, so if this pull request needs changes, please let me know.
```release-note
Added support for creating HA clusters for centos using kube-up.sh.
```
Fix: cannot get default master advertise address correctly
Set the default values of NUM_MASTERS and NUM_NODES from MASTERS and NODES themselves
Code cleanup and documentation
Use runtime reconfiguration for the etcd cluster instead of etcd discovery
Add exceptions for verify-flags
Automatic merge from submit-queue (batch tested with PRs 40335, 40320, 40324, 39103, 40315)
Splitting master/node services into separate charm layers
**What this PR does / why we need it**:
This branch includes a roll-up series of commits from a fork of the
Kubernetes repository pre the 1.5 release, because we didn't make the code freeze.
This additional effort has been fully tested and has results submitted into
the Gubernator to enhance confidence in this code quality vs. the single
layer posing as both master/node.
To reference the gubernator results, please see:
https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/
Apologies in advance for the large commit; however, we did not want to
submit without having successful upstream automated testing results.
This commit includes:
- Support for CNI networking plugins
- Support for durable storage provided by Ceph
- Building from upstream templates (read: kubedns - no more template
drift!)
- An e2e charm-layer to make running validation tests much simpler/repeatable
- Changes to support the 1.5.x series of Kubernetes
**Special notes for your reviewer**:
Additional note: We will be targeting -all- future work against upstream
so large pull requests of this magnitude will not occur again.
**Release note**:
```release-note
- Splits Juju Charm layers into master/worker roles
- Adds support for 1.5.x series of Kubernetes
- Introduces a tactic for keeping templates in sync with upstream eliminating template drift
- Adds CNI support to the Juju Charms
- Adds durable storage support to the Juju Charms
- Introduces an e2e Charm layer for repeatable testing efforts and validation of clusters
```
Automatic merge from submit-queue (batch tested with PRs 38445, 40292)
Add the ability to edit fields within a config map.
Addresses part of https://github.com/kubernetes/kubernetes/issues/36222
Example command:
```console
$ kubectl edit configmap foo --config-map-data=bar
```
Will open the data element named `bar` in the `ConfigMap` named `foo` in `$EDITOR`; the edited contents are then written back to the ConfigMap.
@kubernetes/sig-cli
```release-note
Add a special purpose tool for editing individual fields in a ConfigMap with kubectl
```
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)
kubeadm: add optional self-hosted deployment
**What this PR does / why we need it**: add an optional self-hosted deployment type to `kubeadm`, for master components only, namely `apiserver`, `controller-manager` and `scheduler`.
**Which issue this PR fixes**: closes #38407
**Special notes for your reviewer**: /cc @aaronlevy @luxas @dgoodwin
**Release note**:
```release-note
kubeadm: add optional self-hosted deployment for apiserver, controller-manager and scheduler.
```
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)
kubelet: storage: teardown terminated pod volumes
This is a continuation of the work done in https://github.com/kubernetes/kubernetes/pull/36779
There really is no reason to keep volumes for terminated pods attached on the node. This PR extends the removal of volumes on the node from memory-backed (the current policy) to all volumes.
@pmorie raised a concern about the impact on debugging volume-related issues if terminated pod volumes are removed. To address this, the PR adds a `--keep-terminated-pod-volumes` flag to the kubelet and sets it for `hack/local-up-cluster.sh`.
For consideration in 1.6.
Fixes #35406
@derekwaynecarr @vishh @dashpole
```release-note
kubelet tears down pod volumes on pod termination rather than pod deletion
```
Automatic merge from submit-queue (batch tested with PRs 39486, 37288, 39477, 39455, 39542)
Allow missing keys in templates by default
Switch to allowing missing keys in jsonpath templates by default.
Add support for allowing/disallowing missing keys in go templates
(default=allow).
Add --allow-missing-template-keys flag to control this behavior (default=true /
allow missing keys).
Fixes #37991
@kubernetes/sig-cli-misc @kubernetes/api-reviewers @smarterclayton @fabianofranz @liggitt @pwittrock
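A rough sketch of the stricter behavior (object and field names are hypothetical):
```console
# Fails instead of silently printing nothing for the missing key:
$ kubectl get pod mypod -o jsonpath='{.metadata.missingField}' --allow-missing-template-keys=false
```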
Automatic merge from submit-queue (batch tested with PRs 36229, 39450)
Bump etcd to 3.0.14 and switch to v3 API in etcd.
Ref #20504
**Release note**:
```release-note
Switch default etcd version to 3.0.14.
Switch default storage backend flag in apiserver to `etcd3` mode.
```
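The corresponding apiserver flags, shown as a sketch (the etcd endpoint is illustrative):
```shell
kube-apiserver --storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 ...
```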
Automatic merge from submit-queue
Coreos kube-up now with less cloud init
This update includes significant refactoring. It moves almost all of the
logic into bash scripts, modeled after the `gci` cluster scripts.
The reasons for doing this are:
1. Avoid duplicating the saltbase manifests by reusing gci's parsing logic (easier maintenance)
2. Take an incremental step towards sharing more code between gci/trusty/coreos, again for better maintenance
3. Pave the way for making future changes (e.g. improved rkt support, kubelet support) easier to share
The primary differences from the gci scripts are the following:
1. Use of the `/opt/kubernetes` directory over `/home/kubernetes`
2. Support for rkt as a runtime
3. No use of logrotate
4. No use of `/etc/default/`
5. No logic related to noexec mounts or gci-specific firewall-stuff
It will make sense to move 2 over to gci, as well as perhaps a few other small improvements. That will be a separate PR for ease of review.
Ref #29720; this is a part of that effort because it removes one copy of those manifests.
Fixes #24165
cc @yifan-gu
Since this logic largely duplicates logic from the gci folder, it would be nice if someone closely familiar with that gave an OK or made sure I didn't fall into any gotchas related to that, so cc @andyzheng0831
Automatic merge from submit-queue (batch tested with PRs 38426, 38917, 38891, 38935)
Remove cluster/mesos from hack/verify-flags/exceptions.txt
The `cluster/mesos` scripts were removed, so remove the entry from `hack/verify-flags/exceptions.txt`.
The diff was generated by `hack/verify-flags-underscore.py -e > hack/verify-flags/exceptions.txt`.
Automatic merge from submit-queue
conversion-gen: add --skip-unsafe flag
We should expose the SkipUnsafe option, for legacy compatibility, so
that conversion-gen can be used in other projects, and for platforms
where unsafe is not available.
Make unsafe code generation the default though, and have the help text
hint that the resulting code is sub-optimal.
Automatic merge from submit-queue
[Federation] Make federation etcd PVC size configurable
This one implements one of the many TODO items pending in the previous set of kubefed PRs.
The design doc PR is at https://github.com/kubernetes/kubernetes/pull/34484
cc @kubernetes/sig-cluster-federation @madhusudancs
**Release note**:
```
[Federation] kubefed init now has a new flag, --etcd-pv-capacity, which can be used to configure the persistent volume capacity for etcd.
```
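An illustrative invocation (the capacity value is an example; other required flags omitted):
```console
$ kubefed init myfed --etcd-pv-capacity=10Gi ...
```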
Automatic merge from submit-queue
kubedns: use initial resource listing as ready signal
Fixes #35140.
Set up the ready signal after the first resource listing has finished for both endpoints and services, instead of listening on the kubernetes service.
@bprashanth @bowei @thockin
**Release note**:
```
```
Automatic merge from submit-queue
Add a configuration for the kubelet to register as a node with taints,
and deprecate --register-schedulable.
Ref #28687 #29178
cc @dchen1107 @davidopp @roberthbailey
Automatic merge from submit-queue
Split inflight requests into read-only and mutating groups
cc @smarterclayton @lavalamp @caesarxuchao
```release-note
The API server now has two separate limits for read-only and mutating inflight requests.
```
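The two limits correspond to separate apiserver flags; a sketch with illustrative values:
```shell
kube-apiserver --max-requests-inflight=400 --max-mutating-requests-inflight=200 ...
```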
Automatic merge from submit-queue
plumb in front proxy group header
Builds on https://github.com/kubernetes/kubernetes/pull/36662 and https://github.com/kubernetes/kubernetes/pull/36774, so only the last commit is unique.
This completes the plumbing for front proxy header information and makes it possible to add just the front proxy header authenticator.
WIP because I'm going to assess it in use downstream.
Automatic merge from submit-queue
kubectl top pod|node should handle when Heapster is somewhere else
OpenShift runs Heapster on HTTPS, which means `top node` and `top pod`
are broken because they hardcode 'http' as the scheme. Provide an
options struct allowing users to specify `--heapster-namespace`,
`--heapster-service`, `--heapster-scheme`, and `--heapster-port` to the
commands (leveraging the existing defaults).
@kubernetes/sig-metrics — this makes `top` a little more useful in other setups.
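An illustrative invocation against a Heapster running elsewhere (namespace and service names are placeholders):
```console
$ kubectl top node --heapster-namespace=openshift-infra --heapster-scheme=https --heapster-service=heapster
```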
Automatic merge from submit-queue (batch tested with PRs 35300, 36709, 37643, 37813, 37697)
Add generated informers
Add informer-gen and the informers it generates. We'll do follow-up PRs to convert everything currently using the hand-written informers to the generated ones.
TODO:
- [x] switch to `GroupVersionResource`
- [x] finish godoc
@deads2k @caesarxuchao @sttts @liggitt
Automatic merge from submit-queue
Add kubernetes-anywhere as a new e2e deployment option.
This change adds support for using `kubernetes-anywhere` as a deployment option for hack/e2e.go. This work is toward the larger goal of being able to run e2e tests against `kubeadm` clusters, which `kubernetes-anywhere` supports.
**Release note**:
```release-note
Add kubernetes-anywhere as a new e2e deployment option
```
The configuration in `getConfig()` comes mostly from the defaults in `kubernetes-anywhere`. In the future, we can add more plumbing to override them via CLI flags.
CC @mikedanese
Automatic merge from submit-queue
create service add create ExternalName service implementation
@kubernetes/kubectl `create service` gains ExternalName support; refer to #34731 for more detail.
```release-note
kubectl create service externalname
```
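An illustrative invocation (service and external name are examples):
```console
$ kubectl create service externalname my-db --external-name=db.example.com
```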
Automatic merge from submit-queue
move parts of the mega generic run struct out
This splits the main `ServerRunOptions` into composable pieces that are bindable separately and adds easy paths for composing servers to run delegating authentication and authorization.
@sttts @ncdc Alright, I think this is as far as I need to go to make the composing servers reasonable to write. I'll try leaving it here.