Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
**What this PR does / why we need it**:
In order to improve the UX when the kubelet is unhealthy or stopped, kubeadm now polls the kubelet's API after 40 and 60 seconds, and then performs an exponential backoff for a total of 155 seconds.
If the kubelet endpoint is not returning `ok` by then, kubeadm gives up and exits.
This will mitigate at least 60% of our "[apiclient] Created API client, waiting for control plane to come up" issues in the kubeadm issue tracker 🎉, as kubeadm now informs the user what's wrong instead of deadlocking like before.
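For reference, a minimal Go sketch (not the actual kubeadm code) of this kind of health-check loop; the endpoint URL, the 5-second retry interval, and the function names are illustrative assumptions:
```go
// healthcheck.go: illustrative sketch of polling a kubelet healthz endpoint
// until it answers 200 OK or an overall timeout expires.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForKubeletHealthz polls url until it returns 200 OK or timeout elapses.
func waitForKubeletHealthz(url string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 5 * time.Second}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for the kubelet at %s to become healthy", url)
		}
		fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy yet; retrying")
		time.Sleep(interval)
	}
}

func main() {
	// 10255 is the kubelet's read-only port that the check in this PR talks to.
	if err := waitForKubeletHealthz("http://localhost:10255/healthz", 5*time.Second, 155*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```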
Demo:
```
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.502199 seconds
[markmaster] Will mark node thegopher as master by adding a label and a taint
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5776d5.91e7ed14f9e274df
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 5776d5.91e7ed14f9e274df 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:6f301ce8c3f5f6558090b2c3599d26d6fc94ffa3c3565ffac952f4f0c7a9b2a9
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
lucas@THEGOPHER:~/luxas/kubernetes$ sudo systemctl stop kubelet
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
- gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
- gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```
In this demo, I first run kubeadm normally and everything works as usual.
In the second case, I explicitly stop the kubelet so it isn't running, and skip the preflight checks so that kubeadm doesn't even try to exec `systemctl start kubelet` like it usually does.
That obviously results in a non-working system, but now kubeadm tells the user what the problem is instead of waiting forever.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes: https://github.com/kubernetes/kubeadm/issues/377
**Special notes for your reviewer**:
**Release note**:
```release-note
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @pipejakob
cc @justinsb @kris-nova @lukemarsden as well as you wanted this feature :)
Automatic merge from submit-queue (batch tested with PRs 50832, 51119, 51636, 48921, 51712)
kubeadm: Add support for using an external CA whose key is never stored in the cluster
We allow a kubeadm user to use an external CA by checking whether ca.key is missing and, if so, skipping the cert checks and kubeconfig generation. We also pass an empty --cluster-signing-key-file="" argument to the kube-controller-manager so that the CSR signer doesn't start.
**What this PR does / why we need it**:
This PR allows the kubeadm certs phase and kubeconfig phase to be skipped if the ca.key is missing but all other certs are present.
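A minimal sketch of the detection logic described above; the paths and the helper name are illustrative assumptions, not the actual kubeadm helpers:
```go
// externalca.go: sketch of detecting "external CA" mode, i.e. ca.crt present
// but ca.key intentionally absent, which triggers skipping the certs and
// kubeconfig phases.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// usingExternalCA returns true if the CA certificate exists but its key does not.
func usingExternalCA(pkiDir string) (bool, error) {
	if _, err := os.Stat(filepath.Join(pkiDir, "ca.crt")); err != nil {
		return false, fmt.Errorf("ca.crt is required: %v", err)
	}
	if _, err := os.Stat(filepath.Join(pkiDir, "ca.key")); os.IsNotExist(err) {
		return true, nil // external CA: the key never touches this machine
	}
	return false, nil
}

func main() {
	external, err := usingExternalCA("/etc/kubernetes/pki")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if external {
		fmt.Println("ca.key not found: assuming an external CA, skipping certs and kubeconfig generation")
	}
}
```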
**Which issue this PR fixes** :
Fixes kubernetes/kubeadm/issues/280
**Special notes for your reviewer**:
@luxas @mikedanese @fabriziopandini
**Release note**:
```release-note
kubeadm: Add support for using an external CA whose key is never stored in the cluster
```
Automatic merge from submit-queue
kubeadm: Cut unnecessary kubectl dependency
**What this PR does / why we need it**:
Removes unnecessary dep
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cli-pr-reviews
Automatic merge from submit-queue
kubeadm: preflight check for enabled swap
**What this PR does / why we need it**:
Recent versions of the kubelet require special flags if run
on a system with swap enabled. Thus, remind the user to either
disable swap or add the appropriate flag to the kubelet settings.
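A minimal sketch of such a preflight check (the helper name and the warning text are assumptions):
```go
// swapcheck.go: sketch of a preflight check that warns when swap is enabled,
// based on /proc/swaps (header line only == swap disabled).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// swapIsEnabled reports whether /proc/swaps lists any active swap device.
func swapIsEnabled() (bool, error) {
	f, err := os.Open("/proc/swaps")
	if err != nil {
		return false, err
	}
	defer f.Close()

	lines := 0
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if strings.TrimSpace(scanner.Text()) != "" {
			lines++
		}
	}
	return lines > 1, scanner.Err()
}

func main() {
	enabled, err := swapIsEnabled()
	if err != nil {
		fmt.Fprintf(os.Stderr, "could not check swap: %v\n", err)
		os.Exit(1)
	}
	if enabled {
		fmt.Println("[preflight] WARNING: swap is enabled; disable swap or pass the appropriate flag to the kubelet")
	}
}
```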
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
kubeadm: Add node-cidr-mask-size to pass to kube-controller-manager for IPv6
Due to the increased size of subnets with IPv6, the node-cidr-mask-size needs to be passed to kube-controller-manager: for IPv4 it will be set to 24 as it was previously, and for IPv6 it will be set to 64.
**What this PR does / why we need it**:
If the user specifies --pod-network-cidr with kubeadm init, the kube-controller-manager manifest gets the "--allocate-node-cidrs" and "--cluster-cidr" flags set. The --node-cidr-mask-size flag was not set, and currently defaults to 24, which is fine for IPv4 but not appropriate for IPv6. This change passes a value for node-cidr-mask-size to the controller-manager: it detects whether the pod network CIDR is IPv4 or IPv6 and sets --node-cidr-mask-size to 24 for IPv4 as before, and to 64 for IPv6.
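A minimal sketch of the selection logic; the 24/64 values match the description above, while the function name and example CIDRs are assumptions:
```go
// cidrmask.go: picks --node-cidr-mask-size from the pod network CIDR's family.
package main

import (
	"fmt"
	"net"
)

// nodeCIDRMaskSize returns 24 for an IPv4 pod network CIDR and 64 for IPv6.
func nodeCIDRMaskSize(podCIDR string) (int, error) {
	ip, _, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return 0, err
	}
	if ip.To4() != nil {
		return 24, nil
	}
	return 64, nil
}

func main() {
	for _, cidr := range []string{"10.244.0.0/16", "fd00:1234::/48"} {
		size, err := nodeCIDRMaskSize(cidr)
		if err != nil {
			fmt.Println("invalid CIDR:", err)
			continue
		}
		fmt.Printf("--cluster-cidr=%s --node-cidr-mask-size=%d\n", cidr, size)
	}
}
```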
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50469
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 49861, 50933, 51380, 50688, 51305)
Add configurable groups to bootstrap tokens.
**What this PR does / why we need it**:
This change adds support for authenticating bootstrap tokens into a configurable set of extra groups in addition to `system:bootstrappers`. Previously, bootstrap tokens could only ever authenticate to the `system:bootstrappers` group.
Groups are specified as a comma-separated list in the `auth-extra-groups` key of the `bootstrap.kubernetes.io/token` Secret, and must begin with the prefix `system:bootstrappers:` (and match a validation regex that checks against our normal convention). Whether or not any extra groups are configured, `system:bootstrappers` will still be added.
This also adds a `--groups` flag for `kubeadm token create`, which sets the `auth-extra-groups` key on the resulting Secret. The default is to not set the key.
`kubeadm token list` is also updated to include an `EXTRA GROUPS` output column.
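As a rough illustration of the naming convention (the regex and helper below are assumptions, not the exact validation rule):
```go
// extragroups.go: sketch of validating the comma-separated auth-extra-groups
// value stored in a bootstrap token Secret.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Extra groups must start with "system:bootstrappers:" and follow the usual
// group-name character convention (approximated here).
var extraGroupPattern = regexp.MustCompile(`^system:bootstrappers:[a-z0-9:-]*[a-z0-9]$`)

func validateExtraGroups(commaSeparated string) ([]string, error) {
	groups := strings.Split(commaSeparated, ",")
	for _, g := range groups {
		if !extraGroupPattern.MatchString(g) {
			return nil, fmt.Errorf("invalid bootstrap token extra group %q", g)
		}
	}
	return groups, nil
}

func main() {
	extra, err := validateExtraGroups("system:bootstrappers:kubeadm:default-node-token")
	if err != nil {
		fmt.Println(err)
		return
	}
	// system:bootstrappers is always added in addition to any extra groups.
	fmt.Println(append([]string{"system:bootstrappers"}, extra...))
}
```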
**Which issue this PR fixes**: fixes #49306
**Special notes for your reviewer**:
The use case for this is in https://github.com/kubernetes/kubernetes/issues/49306. Comments on the feature itself are probably better over there. It will be part of how HA/self-hosting kubeadm bootstraps new master nodes (post 1.8).
**Release note**:
```release-note
Add support for configurable groups for bootstrap token authentication.
```
cc @luxas @kubernetes/sig-cluster-lifecycle-api-reviews @kubernetes/sig-auth-api-reviews
/kind feature
Automatic merge from submit-queue
kubeadm: Rename FeatureFlags to FeatureGates
**What this PR does / why we need it**:
Automatic rename from `FeatureFlags` to `FeatureGates`, as I noticed that's the real name for this feature. This is for consistency in the API and generally in the code.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @fabriziopandini @jamiehannaford
Automatic merge from submit-queue (batch tested with PRs 49849, 50334, 51414)
kubeadm: Use the --enable-bootstrap-token-auth flag when possible
**What this PR does / why we need it**:
Uses the right API server flag for the right version.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/414
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @mattmoyer
Automatic merge from submit-queue
Remove null -> [] slice hack
Closes #44593
When 1.6 added protobuf storage, the storage layer lost the ability to persist slice fields with empty but non-null values.
As a workaround, we tried to convert empty slice fields to `[]`, rather than `null`. Compressing `null` -> `[]` was just as much of an API breakage as `[]` -> `null`, but was hoped to cause fewer problems in clients that don't do null checks.
Because of conversion optimizations around converting lists of objects, the `null` -> `[]` hack was discovered to only apply to individual get requests, not to lists of objects. 1.6 and 1.7 were released with this behavior, and the world didn't explode. 1.7 documented the breaking API change that `null` and `[]` should be considered equivalent, unless otherwise noted on a particular field.
This PR:
* Reverts the earlier attempt (https://github.com/kubernetes/kubernetes/pull/43422) at ensuring non-null json slice output in conversion
* Makes results of `get` consistent with the results of `list` (which helps naive clients that do deepequal comparisons of objects obtained via list/watch and get), and allows empty slice fields to be returned as `null`
```release-note
Protobuf serialization does not distinguish between `[]` and `null`.
API fields previously capable of storing and returning either `[]` or `null` via JSON API requests (for example, the Endpoints `subsets` field) can now store only `null` when created using the protobuf content-type or stored in etcd using protobuf serialization (the default in 1.6+). JSON API clients should tolerate `null` values for such fields, and treat `null` and `[]` as equivalent in meaning unless specifically documented otherwise for a particular field.
```
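For API clients, a minimal Go illustration (ours, not from this PR, with a simplified `subsets` type) of why `null` and `[]` should be treated as equivalent:
```go
// nullslice.go: a JSON `null` array unmarshals to a nil slice, `[]` to an
// empty non-nil slice; clients should treat both the same.
package main

import (
	"encoding/json"
	"fmt"
)

type Endpoints struct {
	Subsets []string `json:"subsets"`
}

func main() {
	var a, b Endpoints
	json.Unmarshal([]byte(`{"subsets": null}`), &a) // nil slice
	json.Unmarshal([]byte(`{"subsets": []}`), &b)   // empty, non-nil slice

	// A naive nil check distinguishes them...
	fmt.Println(a.Subsets == nil, b.Subsets == nil) // true false
	// ...but a length check treats them identically, which is what clients
	// should do for such fields.
	fmt.Println(len(a.Subsets) == 0, len(b.Subsets) == 0) // true true
}
```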
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)
kubeadm: Move the uploadconfig phase right in the beginning of cluster init
**What this PR does / why we need it**:
In order to be forwards-compatible, I'm moving uploadconfig to be the first thing in the chain, so that future releases can rely on the ConfigMap being present once we have a beta or higher API to rely on.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)
Add host mountpath to controller-manager for flexvolume dir
The controller manager needs access to the Flexvolume plugin directory when using the attach-detach controller interface.
This PR adds the host mount path for the default directory of Flexvolume plugins.
Fixes https://github.com/kubernetes/kubeadm/issues/410
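A rough Go sketch of what such a hostPath mount looks like when built programmatically; the directory is the usual Flexvolume default, and the helper itself is an illustrative assumption rather than the actual kubeadm manifest code:
```go
// flexvolmount.go: adds a hostPath volume and mount for the default Flexvolume
// plugin directory to a kube-controller-manager pod spec.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

const flexvolumeDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

func addFlexvolumeDirMount(pod *v1.Pod) {
	hostPathType := v1.HostPathDirectoryOrCreate
	pod.Spec.Volumes = append(pod.Spec.Volumes, v1.Volume{
		Name: "flexvolume-dir",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: flexvolumeDir, Type: &hostPathType},
		},
	})
	pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts, v1.VolumeMount{
		Name:      "flexvolume-dir",
		MountPath: flexvolumeDir,
		ReadOnly:  true,
	})
}

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{Containers: []v1.Container{{Name: "kube-controller-manager"}}}}
	addFlexvolumeDirMount(pod)
	fmt.Println(pod.Spec.Volumes[0].Name, pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}
```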
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
update related manifest files to use hostpath type
**What this PR does / why we need it**:
Per [discussion in #46597](https://github.com/kubernetes/kubernetes/pull/46597#pullrequestreview-53568947)
Depends on #46597
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes: https://github.com/kubernetes/kubeadm/issues/298
**Special notes for your reviewer**:
/cc @euank @thockin @tallclair @Random-Liu
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)
kubeadm: Add 'kubeadm upgrade plan' and 'kubeadm upgrade apply' CLI commands
**What this PR does / why we need it**:
This PR is split out from https://github.com/kubernetes/kubernetes/pull/48899 and only handles the CLI/command code. It adds no-op functions only to `phases/upgrade`.
A large chunk of this code is unit tests.
The code here should be pretty straightforward as there is no actual upgrade or business logic here.
It would be cool to get this merged soon-ish.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/14
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews PTAL
Automatic merge from submit-queue (batch tested with PRs 51244, 50559, 49770, 51194, 50901)
Fix zsh completion for kubeadm
**What this PR does / why we need it**:
kubeadm zsh completion will report an error when used after a '--flag':
```
kubeadm join --token=1 __handle_flag:25: bad math expression: operand expected at end of string
```
There is a similar bug in kubectl which has been fixed by #48553. It happens because `__kubeadm_declare` puts 'declare -A' into function scope; `__kubeadm_declare` can be removed now.
This is to port that fix here.
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
Automatic merge from submit-queue
kubeadm: Implement 'kubeadm config'
**What this PR does / why we need it**:
Implements a `kubeadm config` command for viewing the current kubeadm configuration stored as a ConfigMap in the cluster and creating that configuration for v1.7- users. kubeadm v1.8+ handles the creation of this ConfigMap at init time, but v1.7 users have to create it themselves with this command in order to be able to preserve the same config after the upgrade.
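For reference, a minimal client-go sketch (using the non-context `Get` signature of client-go from this era) of reading that ConfigMap, roughly the "view" side of the command; the `MasterConfiguration` data key is an assumption here:
```go
// viewconfig.go: reads the kubeadm configuration ConfigMap from kube-system.
package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// ConfigMap name and namespace match the init output shown earlier.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(cm.Data["MasterConfiguration"]) // assumed data key
}
```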
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/406
**Special notes for your reviewer**:
**Release note**:
```release-note
Adds a new `kubeadm config` command that lets users tell `kubeadm upgrade` what kubeadm configuration to use and lets users view the current state.
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 51039, 50512, 50546, 50965, 50467)
kubeadm: Get kube-dns based on the kubernetes version
**What this PR does / why we need it**:
Makes the kube-dns version used dependent on the kubernetes version. This is required for upgrades as we have to be able to handle one kube-dns version per branch for instance...
Currently a no-op though, as both v1.7 and v1.8 seem to use 1.14.4
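Illustrative sketch only: a version-to-version mapping function of the kind described, with both branches currently returning the same tag as noted above (the function name is ours):
```go
// dnsversion.go: maps a Kubernetes version string to a kube-dns image tag.
package main

import (
	"fmt"
	"strings"
)

func getKubeDNSVersion(k8sVersion string) string {
	switch {
	case strings.HasPrefix(k8sVersion, "v1.7."):
		return "1.14.4"
	default: // v1.8 and, until this function is updated, newer branches
		return "1.14.4"
	}
}

func main() {
	fmt.Println("kube-dns version for v1.8.0:", getKubeDNSVersion("v1.8.0"))
}
```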
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Dependency for https://github.com/kubernetes/kubernetes/pull/48899 (kubeadm upgrades)
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
@kubernetes/dns-maintainers FYI; next time you bump DNS version, please update this func instead of the constant there...
Automatic merge from submit-queue (batch tested with PRs 50967, 50505, 50706, 51033, 51028)
Clean kubelet certificates on kubeadm reset
**What this PR does / why we need it**:
After running `kubeadm init` and `kubeadm reset` a few times, the kubelet will fail to communicate with the apiserver because its certificate is signed by an unknown authority. We should clean up the kubelet certs on kubeadm reset.
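A minimal sketch of that kind of cleanup; the directory path and helper name are assumptions for illustration, not necessarily the exact paths this PR removes:
```go
// resetcerts.go: removes leftover kubelet certificates during reset so that a
// fresh `kubeadm init` issues new ones signed by the new CA.
package main

import (
	"fmt"
	"os"
)

const kubeletPKIDir = "/var/lib/kubelet/pki" // assumed location

func cleanKubeletCerts() error {
	if _, err := os.Stat(kubeletPKIDir); os.IsNotExist(err) {
		return nil // nothing to clean
	}
	fmt.Printf("[reset] Deleting contents of %q\n", kubeletPKIDir)
	return os.RemoveAll(kubeletPKIDir)
}

func main() {
	if err := cleanKubeletCerts(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```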
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48378
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50967, 50505, 50706, 51033, 51028)
kubeadm: Tell the user when a static pod is created
**What this PR does / why we need it**:
Prints a line to notify the user of the static pod creation in order to be consistent with the other phases (one line per phase and optionally per component).
Now the phase commands `controlplane all` and `etcd local` also actually output something.
Also renamed `[token]` to `[bootstraptoken]` to match the output below and `s/mode/modes/`
`kubeadm init` output now:
```console
$ ./kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[apiclient] All control plane components are healthy after 40.002026 seconds
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: cfe65e.d196614967c3ffe3
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token cfe65e.d196614967c3ffe3 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:eb3461b9b707eafc214577f36ae8c351bbc4d595ab928fc84caf1325b69cb192
$ ./kubeadm alpha phase controlplane all
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
$ ./kubeadm alpha phase etcd local
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @fabriziopandini
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)
kubeadm: Add back labels for the Static Pod control plane (attempt 2)
**What this PR does / why we need it**:
Exactly the same PR as https://github.com/kubernetes/kubernetes/pull/50174, but that PR was apparently lost in a rebase/mis-merge or something, so resending this one.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 46458, 50934, 50766, 50970, 47698)
kubeadm: Warn in preflight checks if KubernetesVersion is of a newer branch than kubeadm
**What this PR does / why we need it**:
see https://github.com/kubernetes/kubeadm/issues/307
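A minimal sketch of the sort of version-skew check this adds; the versions and the warning wording below are illustrative assumptions:
```go
// versionskew.go: warns when the requested KubernetesVersion is from a newer
// release branch than the kubeadm binary itself.
package main

import "fmt"

type version struct{ major, minor int }

// newerBranch reports whether a is from a newer minor branch than b.
func newerBranch(a, b version) bool {
	return a.major > b.major || (a.major == b.major && a.minor > b.minor)
}

func main() {
	kubeadmVersion := version{1, 8} // branch the binary was built from
	requested := version{1, 9}      // KubernetesVersion asked for at init
	if newerBranch(requested, kubeadmVersion) {
		fmt.Printf("[preflight] WARNING: Kubernetes version v%d.%d is newer than the v%d.%d branch kubeadm was built for; this may lead to unexpected behavior\n",
			requested.major, requested.minor, kubeadmVersion.major, kubeadmVersion.minor)
	}
}
```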
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubeadm/issues/307
**Special notes for your reviewer**:
**Release note**: