Commit Graph

3861 Commits (6122ff88fa9fccf1c3c3b1e3c07101998d01aed6)

Author SHA1 Message Date
Anthony Yeh c74aab649f RC/RS: Mark lookup-cache-size flags as deprecated. 2017-03-20 09:10:12 -07:00
Kubernetes Submit Queue 049b35c92a Merge pull request #43355 from luxas/kubeadm_dns_hostnet
Automatic merge from submit-queue (batch tested with PRs 43355, 42827)

kubeadm: In-cluster DNS should be used when self-hosting

**What this PR does / why we need it**:

I noticed that the master components don't use the built-in cluster DNS, which they really should in order to be able to discover other services inside the cluster (like extension API servers, e.g. the service catalog).

This is a really small change that fixes a misconfiguration that had slipped through earlier.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
@jbeda @bowei @MrHohn
2017-03-19 10:49:44 -07:00
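For context: self-hosted control-plane pods run with `hostNetwork: true`, which by default gives them the node's DNS config rather than the cluster's. A minimal sketch of the opt-in, assuming the v1.6 `ClusterFirstWithHostNet` dnsPolicy value (the pod below is illustrative, not the manifest kubeadm generates):

```sh
# Hypothetical hostNetwork pod that can still resolve in-cluster names;
# without dnsPolicy: ClusterFirstWithHostNet it would fall back to the
# node's /etc/resolv.conf and never see in-cluster services.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
  namespace: kube-system
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: probe
    image: busybox
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]
EOF
```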
Lucas Käldström b7d84d53b0
kubeadm: When self-hosting, cluster DNS should be used 2017-03-19 14:18:04 +02:00
Kubernetes Submit Queue 8532c63c50 Merge pull request #43161 from luxas/kubeadm_16_offline_version
Automatic merge from submit-queue

kubeadm: Default to v1.6.0 stable in offline scenarios beforehand

**What this PR does / why we need it**:

In offline scenarios, kubeadm will fall back to the latest well-known version.
This PR bumps that to v1.6. We can merge now, and in the small gap between the merge of this PR and the actual v1.6 release, kubeadm devs will have to explicitly set the k8s version.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
@jbeda
2017-03-19 05:16:20 -07:00
Lucas Käldström b451e08e9b
kubeadm: Default to v1.6.0 stable in offline scenarios beforehand 2017-03-15 21:01:03 +02:00
Kubernetes Submit Queue 5826b09a19 Merge pull request #42713 from luxas/kubeadm_fix_reset
Automatic merge from submit-queue (batch tested with PRs 43018, 42713)

kubeadm: Don't drain and remove the current node on kubeadm reset

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

In v1.5, `kubeadm reset` would drain your node and remove it from your cluster if you asked it to, but now in v1.6 we can't do that due to the RBAC rules we have set up.

After conversations with @liggitt, I also agree this functionality was somewhat misplaced (though still very convenient to use), so we're removing it for v1.6.

It's the system administrator's duty to drain and remove nodes from the cluster, not the nodes' responsibility.

The current behavior is therefore a bug that needs to be fixed in v1.6.

**Release note**:

```release-note
kubeadm: `kubeadm reset` won't drain and remove the current node anymore
```
@liggitt @deads2k @jbeda @dmmcquay @pires @errordeveloper
2017-03-14 15:59:20 -07:00
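Since draining and removal is now the administrator's job, the replacement workflow looks roughly like this (standard kubectl/kubeadm commands; `<node-name>` is a placeholder):

```sh
# From a machine with admin credentials: evict workloads, then remove
# the Node object from the cluster.
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <node-name>

# On the node itself: wipe the local kubeadm-installed state.
kubeadm reset
```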
Kubernetes Submit Queue 08e351acc8 Merge pull request #41429 from mikedanese/kubeadm-owners
Automatic merge from submit-queue

remove dgoodwin and add dmmcquay to kubeadm reviewers

@dgoodwin says he needs to work on other stuff right now. @dmmcquay says he wants to help with reviews.
2017-03-14 08:49:37 -07:00
Mike Danese 33d0c48313 remove dgoodwin and add dmmcquay to kubeadm reviewers 2017-03-14 05:19:25 -07:00
Joe Beda 505464d496
Dumb typo in kubeadm instructions
Signed-off-by: Joe Beda <joe.github@bedafamily.com>
2017-03-13 21:45:36 +00:00
Kubernetes Submit Queue 9d78cbad89 Merge pull request #42970 from jbeda/kubeadm-message
Automatic merge from submit-queue (batch tested with PRs 42940, 42906, 42970, 42848)

Improve kubeadm init message

Now that we are locking down the insecure port, we should give clearer instructions on how to copy out the root-owned admin.conf file, chmod it, and use it.

Signed-off-by: Joe Beda <joe.github@bedafamily.com>

```release-note
NONE
```
2017-03-13 13:22:14 -07:00
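The gist of the clearer instructions (paths are kubeadm's standard ones; the exact printed wording may differ):

```sh
# admin.conf is written root-owned under /etc/kubernetes; copy it out,
# take ownership, and point kubectl at the copy.
sudo cp /etc/kubernetes/admin.conf $HOME/admin.conf
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
kubectl get nodes
```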
Kubernetes Submit Queue 33c455271e Merge pull request #42966 from apprenda/kubeadm_beta_banner
Automatic merge from submit-queue (batch tested with PRs 42969, 42966)

kubeadm: update kubeadm banner to beta

**What this PR does / why we need it**: Updates the intro banner for kubeadm, which previously stated that kubeadm is in alpha (we are now going to beta). This also updates the tagged GitHub group (the old one no longer exists) to the sig-cluster-lifecycle-misc group.

**Special notes for your reviewer**: /cc @jbeda 

**Release note**:
```release-note
NONE
```
2017-03-12 18:08:24 -07:00
Joe Beda c15d011da3
Improve kubeadm init message
Now that we are locking down the insecure port, we should give clearer instructions on how to copy out the root-owned admin.conf file, chmod it, and use it.

Signed-off-by: Joe Beda <joe.github@bedafamily.com>
2017-03-13 00:33:58 +00:00
Derek McQuay 53818b6c84
kubeadm: remove utilerrors pkg in favor of []error 2017-03-12 16:34:27 -07:00
Derek McQuay 7249ba2872
kubeadm: fixed warning nil logging 2017-03-12 16:17:58 -07:00
Derek McQuay b0fbff659c
kubeadm: moved alpha to beta in join and init 2017-03-12 15:28:28 -07:00
Derek McQuay ab1ce8b879
kubeadm: update kubeadm banner to beta 2017-03-12 14:48:26 -07:00
Kubernetes Submit Queue cf732613e3 Merge pull request #42278 from marun/fed-api-fixture
Automatic merge from submit-queue (batch tested with PRs 42728, 42278)

[Federation] Create integration test fixture for api

This PR factors a reusable fixture for the federation api server out of the existing integration test.

Targets #40705

cc: @kubernetes/sig-federation-pr-reviews
2017-03-09 05:45:32 -08:00
Kubernetes Submit Queue eefa2ef1bb Merge pull request #42425 from apprenda/kubeadm_189_docker_version
Automatic merge from submit-queue (batch tested with PRs 42762, 42739, 42425, 42778)

kubeadm: update docker version for CE and EE

**What this PR does / why we need it**: Update regex for docker version to also capture new CE and EE versions. 

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/189

**Special notes for your reviewer**: /cc @jbeda @luxas

**Release note**:
```release-note
NONE
```
2017-03-09 02:51:40 -08:00
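To illustrate what changed: Docker moved from engine versions like `1.12.6` to the date-based CE/EE scheme like `17.03.0-ce`, so a version check has to accept both. A hedged sketch (the regex is illustrative, not the one the PR added):

```sh
# Accept legacy 1.10-1.13 engine versions as well as the new
# YY.MM.patch scheme with an edition suffix (e.g. 17.03.0-ce).
docker version --format '{{.Server.Version}}' \
  | grep -Eq '^(1\.1[0-3]\.[0-9]+|[0-9]{2}\.[0-9]{2}\.[0-9]+(-ce|-ee)?)' \
  && echo supported || echo unsupported
```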
Derek McQuay 35f07095d8
kubeadm: validators pass warnings and errors
This change allows validators to pass warnings as well as errors. This
was needed because of how support for docker 1.13+ and the new EE and CE
versions is currently being handled.
2017-03-08 14:35:26 -08:00
Maru Newby dd2a8127a5 fed: Create integration test fixture for api 2017-03-08 06:58:58 -08:00
gmarek 48d784272e Move taint eviction feature flag to feature-gates 2017-03-08 10:04:18 +01:00
Kubernetes Submit Queue 8e43f00d28 Merge pull request #42657 from luxas/kubeadm_fix_dummy
Automatic merge from submit-queue

kubeadm: Delete the dummy Deployment properly

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubeadm/issues/149

**Special notes for your reviewer**:

Earlier, the Pod created by the Deployment wasn't deleted. With this option it is.
As suggested by @deads2k, thank you!

This is a bug fix for v1.6

**Release note**:

```release-note
```
@mikedanese @jbeda @dmmcquay @pires @errordeveloper @deads2k @caesarxuchao
2017-03-08 00:33:27 -08:00
Lucas Käldström c7fc530bc7
kubeadm: Don't drain and remove the current node on kubeadm reset 2017-03-08 09:30:49 +02:00
Lucas Käldström 78fd645d12
kubeadm: Delete the dummy Deployment properly 2017-03-08 08:24:14 +02:00
Kubernetes Submit Queue 5af81b0955 Merge pull request #42173 from enisoc/controller-ref-ds
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)

DaemonSet: Respect ControllerRef

**What this PR does / why we need it**:

This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings DaemonSet into full compliance with ControllerRef. See the individual commit messages for details.

**Which issue this PR fixes**:

This ensures that DaemonSet does not fight with other controllers over control of Pods.

**Special notes for your reviewer**:

**Release note**:

```release-note
DaemonSet now respects ControllerRef to avoid fighting over Pods.
```
cc @erictune @kubernetes/sig-apps-pr-reviews
2017-03-07 20:10:28 -08:00
Kubernetes Submit Queue 5bc7387b3c Merge pull request #42169 from ncdc/pprof-trace
Automatic merge from submit-queue (batch tested with PRs 42692, 42169, 42173)

Add pprof trace support

Add support for `/debug/pprof/trace`

Can wait for master to reopen for 1.7.

cc @smarterclayton @wojtek-t @gmarek @timothysc @jeremyeder @kubernetes/sig-scalability-pr-reviews
2017-03-07 20:10:26 -08:00
Anthony Yeh e2deb1795d DaemonSet: Mark daemonset-lookup-cache-size flag as deprecated. 2017-03-07 16:42:29 -08:00
Anthony Yeh 1099811833 DaemonSet: Use ControllerRef to route watch events.
This is part of the completion of ControllerRef, as described here:

https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md#watches
2017-03-07 16:42:28 -08:00
Kubernetes Submit Queue 55d500e610 Merge pull request #42613 from pipejakob/fix-health-port
Automatic merge from submit-queue

kubeadm: Make kube-apiserver's liveness probe match its bindport.

The `kube-apiserver` liveness probe port had previously been hardcoded, so if you used `--apiserver-bind-port` to override the default port (6443), then the health check for the pod would quickly fail and kubelet would continuously kill the apiserver.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/196

**Release note**:

```release-note
kubeadm: fix kube-apiserver liveness probe port when --apiserver-bind-port given
```
2017-03-07 10:42:35 -08:00
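In practice the generated static-pod manifest's probe now follows the chosen port; a sketch of how to verify (manifest path is kubeadm's standard one, and the probe fields shown are assumptions about its shape):

```sh
kubeadm init --apiserver-bind-port 8443
# The probe in the static-pod manifest should now target 8443, roughly:
#   livenessProbe:
#     httpGet:
#       host: 127.0.0.1
#       path: /healthz
#       port: 8443
grep -A 4 livenessProbe /etc/kubernetes/manifests/kube-apiserver.yaml
```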
Kubernetes Submit Queue 2bdb22751a Merge pull request #42593 from deads2k/controller-02-disable
Automatic merge from submit-queue (batch tested with PRs 41890, 42593, 42633, 42626, 42609)

make all controllers obey the disable flags

Fixes https://github.com/kubernetes/kubernetes/issues/42592 

Some controllers weren't disable-able.  This fixes them so they obey our flags.

@ncdc
2017-03-07 08:10:41 -08:00
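For reference, the knob involved is kube-controller-manager's `--controllers` flag, which takes `*` plus `-name` exclusions; a sketch (the excluded controller names are illustrative):

```sh
# Run everything except two controllers; after this fix every named
# controller actually honors the exclusion.
kube-controller-manager --controllers='*,-bootstrapsigner,-tokencleaner'
```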
Andy Goldstein b011529d8a Add pprof trace support
Add pprof trace support and --enable-contention-profiling to those
components that don't already have it.
2017-03-07 10:10:42 -05:00
Jacob Beacham fe81169c1e kubeadm: make kube-apiserver's liveness probe match its bindport.
It had previously been hardcoded, so if you used --apiserver-bind-port
to override the default port (6443), then the health check for the pod
would quickly fail and kubelet would continuously kill the apiserver.
2017-03-06 18:11:08 -08:00
Kubernetes Submit Queue d731dc7546 Merge pull request #41826 from bowei/stub-2
Automatic merge from submit-queue (batch tested with PRs 41826, 42405)

Add stubDomains and upstreamNameservers configuration to kube-dns

```release-note
Updates the dnsmasq cache/mux layer to be managed by dnsmasq-nanny.
dnsmasq-nanny manages dnsmasq based on values from the
kube-system:kube-dns configmap:

"stubDomains": {
	"acme.local": ["1.2.3.4"]
},

is a map of domain to list of nameservers for the domain. This is used
to inject private DNS domains into the kube-dns namespace. In the above
example, any DNS requests for *.acme.local will be served by the
nameserver 1.2.3.4.

"upstreamNameservers": ["8.8.8.8", "8.8.4.4"]

is a list of upstreamNameservers to use, overriding the configuration
specified in /etc/resolv.conf.
```
2017-03-06 15:06:04 -08:00
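Assembled into the actual ConfigMap the release note describes, the configuration looks like this (values are the note's examples):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
EOF
```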
deads2k 8be9a216d4 make all controllers obey the disable flags 2017-03-06 15:58:08 -05:00
wlan0 9875620388 add external cloudprovider to clearly denote the offloading of cloudprovider tasks 2017-03-06 10:45:13 -08:00
Kubernetes Submit Queue df70b30e59 Merge pull request #40537 from gnufied/fix-multizone-pv-breakage
Automatic merge from submit-queue

Fix Multizone pv creation on GCE

When Multizone is enabled, static PV creation on GCE fails because the cloud-provider configuration is not available to admission plugins.

cc @derekwaynecarr @childsb
2017-03-05 11:16:46 -08:00
Kubernetes Submit Queue 1a94d0186f Merge pull request #42530 from andrewrynhard/self_hosted
Automatic merge from submit-queue

kubeadm: Fix the nodeSelector and scheduler mounts when using the self-hosted mode

**What this PR does / why we need it**:
The self-hosted option in `kubeadm` was broken.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42528
**Special notes for your reviewer**:

**Release note**:

```release-note
```


/cc @luxas
2017-03-04 15:53:12 -08:00
Kubernetes Submit Queue 79883dc48d Merge pull request #42070 from luxas/remove_kube_discovery
Automatic merge from submit-queue

Remove the kube-discovery binary from the tree

**What this PR does / why we need it**:

kube-discovery was a temporary solution to implementing proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md

However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
 - https://github.com/kubernetes/kubernetes/pull/36101 
 - https://github.com/kubernetes/kubernetes/pull/41281
 - https://github.com/kubernetes/kubernetes/pull/41417

Since `kube-discovery` isn't used by any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
2017-03-04 12:58:23 -08:00
Andrew Rynhard 2419d0e845 Fix self-hosted 2017-03-04 11:41:37 -08:00
Kubernetes Submit Queue 7e37b895d7 Merge pull request #41417 from luxas/kubeadm_test_token
Automatic merge from submit-queue

kubeadm: Hook up kubeadm against the BootstrapSigner

**What this PR does / why we need it**:

This PR makes kubeadm able to use the BootstrapSigner. 
It depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas: 
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443

other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C

# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION
67a96d.02405a1773564431   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo

# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C

# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```

**Release note**:

```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
2017-03-04 05:54:16 -08:00
Lucas Käldström 61a284d720
Hook up kubeadm against the BootstrapSigner/BootstrapTokenAuthenticator 2017-03-04 11:17:52 +02:00
Kubernetes Submit Queue 2d319bd406 Merge pull request #42204 from dashpole/allocatable_eviction
Automatic merge from submit-queue

Eviction Manager Enforces Allocatable Thresholds

This PR modifies the eviction manager to enforce node allocatable thresholds for memory as described in kubernetes/community#348.
This PR should be merged after #41234. 

cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-node-feature-requests @vishh 

**Why is this a bug/regression?**

Kubelet uses `oom_score_adj` to enforce QoS policies. But the `oom_score_adj` is based on overall memory requested, which means that a Burstable pod that requested a lot of memory can lead to OOM kills for Guaranteed pods, which violates QoS. Even worse, we have observed system daemons like kubelet or kube-proxy being killed by the OOM killer.
Without this PR, v1.6 will have node stability issues and regressions in `out of resource` handling, an existing GA feature.
2017-03-03 20:20:12 -08:00
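The enforcement surface is kubelet configuration; a hedged sketch of the related v1.6-era flags (values are placeholders):

```sh
# Reserve headroom for kube and system daemons and enforce the pod
# allocatable boundary, so pods hit eviction thresholds before system
# daemons get OOM-killed.
kubelet \
  --enforce-node-allocatable=pods \
  --kube-reserved=memory=1Gi \
  --system-reserved=memory=500Mi \
  --eviction-hard='memory.available<100Mi'
```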
Kubernetes Submit Queue f81a0107f0 Merge pull request #38924 from vladimirvivien/scaleio-k8s
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Dell EMC ScaleIO Volume Plugin

**What this PR does / why we need it**
This PR implements the Kubernetes volume plugin to allow pods to seamlessly access and use data stored on ScaleIO volumes.  [ScaleIO](https://www.emc.com/storage/scaleio/index.htm) is a software-based storage platform that creates a pool of distributed block storage using locally attached disks on every server.  The code for this PR supports persistent volumes using PVs, PVCs, and dynamic provisioning.

You can find examples of how to use and configure the ScaleIO Kubernetes volume plugin in [examples/volumes/scaleio/README.md](examples/volumes/scaleio/README.md).

**Special notes for your reviewer**:
To facilitate code review, commits for source code implementation are separated from other artifacts such as generated code, docs, and vendored sources.

```release-note
ScaleIO Kubernetes Volume Plugin added enabling pods to seamlessly access and use data stored on ScaleIO volumes.
```
2017-03-03 18:08:40 -08:00
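As a rough illustration of the consumption side, a pod can reference a ScaleIO volume inline; the field names follow the v1 `scaleIO` volume source, while the gateway/system/volume values are placeholders (see the linked README for authoritative examples):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sio-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    scaleIO:
      gateway: https://sio-gateway.example.com:443/api
      system: sio-system
      protectionDomain: pd01
      storagePool: sp01
      volumeName: vol-0
      fsType: xfs
      secretRef:
        name: sio-secret
EOF
```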
Kubernetes Submit Queue b432e137e6 Merge pull request #42350 from vishh/enable-qos-cgroups
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

enable cgroups tiers and node allocatable enforcement on pods by default.

```release-note
Pods are launched in a separate cgroup hierarchy than system services.
```
Depends on #41753

cc @derekwaynecarr
2017-03-03 16:44:41 -08:00
Kubernetes Submit Queue 9cc5480918 Merge pull request #41149 from sjenning/qos-memory-limits
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

kubelet: enable qos-level memory limits

```release-note
Experimental support to reserve a pod's memory request from being utilized by pods in lower QoS tiers.
```

Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314

**Note: QoS level cgroups have to be enabled for any of this to take effect.**

Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.

For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)` to reserve 50% of the Guaranteed pod's request from being used by the lower QoS tiers.

If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.

The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`

The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.

@derekwaynecarr @vishh
2017-03-03 16:44:39 -08:00
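A worked instance of that arithmetic, assuming 10Gi of NodeAllocatable (the numbers are illustrative):

```sh
# --experimental-qos-reserved="memory=50%", NodeAllocatable = 10Gi
# Guaranteed pods request 2Gi in total, Burstable pods request 1Gi:
#   Burstable  memory.limit_in_bytes = 10Gi - 50%*2Gi           = 9Gi
#   BestEffort memory.limit_in_bytes = 10Gi - 50%*2Gi - 50%*1Gi = 8.5Gi
```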
Vladimir Vivien 915a54180d Addition of ScaleIO Kubernetes Volume Plugin
This commit implements the Kubernetes volume plugin allowing pods to seamlessly access and use data stored on ScaleIO volumes.
2017-03-03 15:47:19 -05:00
Kubernetes Submit Queue 4728a0520f Merge pull request #42018 from luxas/kubeadm_cert_phase
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names

**What this PR does / why we need it**:

 - For the beta kubeadm init UX, we need this change
 - Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

This PR depends on https://github.com/kubernetes/kubernetes/pull/41897

**Release note**:

```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt
2017-03-03 09:24:46 -08:00
Seth Jennings cc50aa9dfb kubelet: enable qos-level memory request reservation 2017-03-02 15:04:13 -06:00
Kubernetes Submit Queue 102f267b6a Merge pull request #40950 from MHBauer/duplicate-defaults
Automatic merge from submit-queue

Remove defaults from string flags

- The default is printed automatically
- The string text did not match the actual default

**What this PR does / why we need it**:
Adjust the documentation for flags on `client-gen`.

**Special notes for your reviewer**:
Doc change. String text only.

**Release note**:
```release-note
NONE
```

Before:
```
client-gen  --help
Usage of ./client-gen:
      --build-tag string                       A Go build tag to use to identify files generated by this command. Should be unique. (default "ignore_autogenerated")
      --clientset-api-path string              the value of default API path.
  -n, --clientset-name string                  the name of the generated clientset package. (default "internalclientset")
      --clientset-only                         when set, client-gen only generates the clientset shell, without generating the individual typed clients
      --clientset-path string                  the generated clientset will be output to <clientset-path>/<clientset-name>. Default to "k8s.io/kubernetes/pkg/client/clientset_generated/" (default "k8s.io/kubernetes/pkg/client/clientset_generated/")
      --fake-clientset                         when set, client-gen will generate the fake clientset that can be used in tests (default true)
  -h, --go-header-file string                  File containing boilerplate header text. The string YEAR will be replaced with the current 4-digit year. (default "/Users/mhb/go/src/k8s.io/gengo/boilerplate/boilerplate.go.txt")
      --included-types-overrides stringSlice   list of group/version/type for which client should be generated. By default, client is generated for all types which have genclient=true in types.go. This overrides that. For each groupVersion in this list, only the types mentioned here will be included. The default check of genclient=true will be used for other group versions.
      --input stringSlice                      group/versions that client-gen will generate clients for. At most one version per group is allowed. Specified in the format "group1/version1,group2/version2...". Default to "api/,extensions/,autoscaling/,batch/,rbac/" (default [api/,authentication/,authorization/,autoscaling/,batch/,certificates/,extensions/,rbac/,storage/,apps/,policy/])
      --input-base string                      base path to look for the api group. Default to "k8s.io/kubernetes/pkg/apis" (default "k8s.io/kubernetes/pkg/apis")
  -i, --input-dirs stringSlice                 Comma-separated list of import paths to get input types from.
  -o, --output-base string                     Output base; defaults to $GOPATH/src/ or ./ if $GOPATH is not set. (default "/Users/mhb/go/src")
  -O, --output-file-base string                Base name (without .go suffix) for output files.
  -p, --output-package string                  Base package path.
  -t, --test                                   set this flag to generate the client code for the testdata
      --verify-only                            If true, only verify existing output, do not write anything.
```
After:
```
client-gen  --help
Usage of ./client-gen:
      --build-tag string                       A Go build tag to use to identify files generated by this command. Should be unique. (default "ignore_autogenerated")
      --clientset-api-path string              the value of default API path.
  -n, --clientset-name string                  the name of the generated clientset package. (default "internalclientset")
      --clientset-only                         when set, client-gen only generates the clientset shell, without generating the individual typed clients
      --clientset-path string                  the generated clientset will be output to <clientset-path>/<clientset-name>. (default "k8s.io/kubernetes/pkg/client/clientset_generated/")
      --fake-clientset                         when set, client-gen will generate the fake clientset that can be used in tests (default true)
  -h, --go-header-file string                  File containing boilerplate header text. The string YEAR will be replaced with the current 4-digit year. (default "/Users/mhb/go/src/k8s.io/gengo/boilerplate/boilerplate.go.txt")
      --included-types-overrides stringSlice   list of group/version/type for which client should be generated. By default, client is generated for all types which have genclient=true in types.go. This overrides that. For each groupVersion in this list, only the types mentioned here will be included. The default check of genclient=true will be used for other group versions.
      --input stringSlice                      group/versions that client-gen will generate clients for. At most one version per group is allowed. Specified in the format "group1/version1,group2/version2...". (default [api/,authentication/,authorization/,autoscaling/,batch/,certificates/,extensions/,rbac/,storage/,apps/,policy/])
      --input-base string                      base path to look for the api group. (default "k8s.io/kubernetes/pkg/apis")
  -i, --input-dirs stringSlice                 Comma-separated list of import paths to get input types from.
  -o, --output-base string                     Output base; defaults to $GOPATH/src/ or ./ if $GOPATH is not set. (default "/Users/mhb/go/src")
  -O, --output-file-base string                Base name (without .go suffix) for output files.
  -p, --output-package string                  Base package path.
  -t, --test                                   set this flag to generate the client code for the testdata
      --verify-only                            If true, only verify existing output, do not write anything.
```
2017-03-02 12:43:42 -08:00
Kubernetes Submit Queue 053458cc83 Merge pull request #41984 from enisoc/controller-ref-rc-rs
Automatic merge from submit-queue (batch tested with PRs 41984, 41682, 41924, 41928)

RC/RS: Fully Respect ControllerRef

**What this PR does / why we need it**:

This is part of the completion of the [ControllerRef](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md) proposal. It brings ReplicaSet and ReplicationController into full compliance with ControllerRef. See the individual commit messages for details.

**Which issue this PR fixes**:

Although RC/RS had partially implemented ControllerRef, they didn't use it to determine which controller to sync, or to update expectations. This could lead to instability or controllers getting stuck.

Ref: https://github.com/kubernetes/kubernetes/issues/24433

**Special notes for your reviewer**:

**Release note**:
```release-note
```
cc @erictune @kubernetes/sig-apps-pr-reviews
2017-03-02 10:51:05 -08:00