Commit Graph

5041 Commits (6a8f55736ad092a8dbdba34b37df7574bd3371f9)

Author SHA1 Message Date
Alejandro Escobar 6a8f55736a allow etcd2/3 choice when bringing up a local cluster; default to 3, but that's negotiable: it is what we will be using going forward. 2017-03-23 06:42:44 -07:00
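A usage sketch of that choice, assuming it is exposed as an environment variable consumed by `hack/local-up-cluster.sh` (the variable name is an assumption, not confirmed by this log):

```sh
# Hypothetical invocation: pick the etcd API version for a local cluster.
STORAGE_BACKEND=etcd3 hack/local-up-cluster.sh   # the default going forward
STORAGE_BACKEND=etcd2 hack/local-up-cluster.sh   # opt back into etcd2
```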
Jordan Liggitt db52b4eb04
Make kubectl replace unconditional 2017-03-22 01:09:56 -04:00
Kubernetes Submit Queue b617cabfca Merge pull request #43192 from liggitt/kubectl-test-retry
Automatic merge from submit-queue

Retry kubectl test replace on conflict

Since kubectl is doing a resource-version-constrained replace, it is subject to conflicts on a contentious resource (like a node managed by the node controller).
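A minimal sketch of the retry pattern (node name and retry budget are illustrative): re-read the object before each attempt so the replace carries a fresh `resourceVersion`.

```sh
# A 409 Conflict from a concurrent writer (e.g. the node controller) is
# resolved by re-fetching and retrying; values here are illustrative.
for attempt in 1 2 3 4 5; do
  kubectl get node 127.0.0.1 -o yaml \
    | sed 's/unschedulable: false/unschedulable: true/' \
    | kubectl replace -f - && break
  sleep 1
done
```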

Fixes #41892 (the specific flake, not the watch cache issue)
2017-03-16 08:23:21 -07:00
Jordan Liggitt 9cd791e83c
Retry kubectl test replace on conflict 2017-03-16 08:39:47 -04:00
Kubernetes Submit Queue 1357cafa71 Merge pull request #43180 from mbohlool/bugfix
Automatic merge from submit-queue (batch tested with PRs 43180, 42928)

Verify generated protobuf script should fail on staging/ changes too

fixes #35486
2017-03-15 19:58:20 -07:00
mbohlool f6e4c99548 Verify generated protobuf script should fail on staging/ changes too 2017-03-15 16:15:02 -07:00
Kubernetes Submit Queue 6d2defbc09 Merge pull request #42967 from cblecker/godep-version79
Automatic merge from submit-queue (batch tested with PRs 40964, 42967, 43091, 43115)

Update hack scripts to use godep v79 and ensure_godep_version

**What this PR does / why we need it**:
Based on #42965 and https://github.com/kubernetes/kubernetes/pull/42958#discussion_r105568318, this pins the godep version at v79, which should fix some issues when running godep in go1.8 local environments.
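A hedged sketch of what an `ensure_godep_version`-style pin can look like; the real helper lives in the hack/ library scripts and may differ in detail:

```sh
# Install godep at a pinned tag if the local binary doesn't already match.
ensure_godep_version() {
  local want="${1:-v79}"
  godep version 2>/dev/null | grep -q "godep ${want}" && return 0
  go get -d -u github.com/tools/godep
  (
    cd "$(go env GOPATH)/src/github.com/tools/godep"
    git checkout "${want}"
    go install .
  )
}
```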

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42817

**Special notes for your reviewer**:
This should likely get the v1.6 milestone so that it can be merged into master. While I'm setting a default godep version, I'm continuing to use the local pins per this comment: https://github.com/kubernetes/kubernetes/pull/42965#issuecomment-285962723 .

**Release note**:

```release-note
NONE
```

cc: @sttts
2017-03-15 16:08:25 -07:00
Christoph Blecker 4d85a54027
Change verify-godeps.sh use ensure_godep_version 2017-03-15 10:32:44 -07:00
Christoph Blecker d31a88fee7
Bump godep version to v79 2017-03-15 10:32:37 -07:00
Kubernetes Submit Queue 9e8114655f Merge pull request #40404 from tanshanshan/unit-test-scheduler4
Automatic merge from submit-queue (batch tested with PRs 40404, 43134, 43117)

Improve code coverage for scheduler/api/validation

**What this PR does / why we need it**:

Improve code coverage for scheduler/api/validation from #39559

Thanks
**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-03-15 08:27:20 -07:00
Kubernetes Submit Queue a91869a0c5 Merge pull request #42819 from MrHohn/dns-cm-scripts
Automatic merge from submit-queue (batch tested with PRs 43018, 42713, 42819)

Update startup scripts for kube-dns ConfigMap and ServiceAccount

Follow-up to #42757. This PR changes all existing startup scripts to support the default kube-dns ConfigMap and ServiceAccount.
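Roughly the kubectl equivalent of what the scripts now ensure exists (a sketch; the actual scripts template these objects into the addon manifests):

```sh
# Default objects kube-dns expects in kube-system: an (initially empty)
# ConfigMap for dynamic configuration and a dedicated ServiceAccount.
kubectl create configmap kube-dns --namespace=kube-system
kubectl create serviceaccount kube-dns --namespace=kube-system
```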

@bowei 

cc @liggitt 

**Release note**:

```release-note
NONE
```
2017-03-14 16:43:19 -07:00
Kubernetes Submit Queue 6de28fab7d Merge pull request #42942 from vishh/gpu-cont-fix
Automatic merge from submit-queue (batch tested with PRs 42942, 42935)

[Bug] Handle container restarts and avoid using runtime pod cache while allocating GPUs

Fixes #42412

**Background**
Support for multiple GPUs is an experimental feature in v1.6. 
Container restarts were handled incorrectly, which resulted in stranding of GPUs.
Kubelet was incorrectly using the runtime cache to track running pods, which can result in race conditions (as it did in other parts of kubelet). This can result in the same GPU being assigned to multiple pods.

**What does this PR do**
This PR tracks assignment of GPUs to containers and returns pre-allocated GPUs instead of (incorrectly) allocating new GPUs.
GPU manager is updated to consume a list of active pods derived from apiserver cache instead of runtime cache.
Node e2e has been extended to validate this failure scenario.

**Risk**
Minimal/None since support for GPUs is an experimental feature that is turned off by default. The code is also isolated to GPU manager in kubelet.

**Workarounds**
In the absence of this PR, users can mitigate the original issue by setting `RestartPolicyNever`  in their pods.
There is no workaround for the race condition caused by using the runtime cache, though.
Hence it is worth including this fix in v1.6.0.
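For example (a sketch; the pod name, image, and the 1.6 alpha resource name are illustrative):

```sh
# --restart=Never maps to restartPolicy: Never, so a crashed container is
# not restarted in place and cannot re-trigger GPU allocation.
kubectl run cuda-test --image=nvidia/cuda --restart=Never \
  --limits=alpha.kubernetes.io/nvidia-gpu=1 -- nvidia-smi
```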

cc @jianzhangbjz @seelam @kubernetes/sig-node-pr-reviews 

Replaces #42560
2017-03-14 10:19:17 -07:00
Kubernetes Submit Queue 5e29e1ee05 Merge pull request #42623 from liggitt/kubectl-version
Automatic merge from submit-queue

Fix v0.0.0 in kubectl built from master

Fixes https://github.com/kubernetes/kubernetes/issues/40813
2017-03-13 15:06:31 -07:00
Vishnu kannan 46708be3e8 linter fixes
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2017-03-13 10:58:26 -07:00
Kubernetes Submit Queue e1ec10f248 Merge pull request #42851 from madhusudancs/fed-down-improvements
Automatic merge from submit-queue

[Federation] Unjoin only the joined clusters while bringing down the federation control plane.

A few other minor improvements.

**Release note**:

```release-note
NONE
```
2017-03-12 16:29:37 -07:00
Madhusudan.C.S ed10bb7643 [Federation] Unjoin only the joined clusters while bringing down the federation control plane.
A few other minor improvements.
2017-03-12 13:05:26 -07:00
Dr. Stefan Schimanski f88bae8191 hack/godep-restore.sh: use godep v79 which works 2017-03-12 18:43:10 +01:00
Kubernetes Submit Queue 328e555f72 Merge pull request #41794 from shashidharatd/federation-upgrade-tests-1
Automatic merge from submit-queue (batch tested with PRs 41794, 42349, 42755, 42901, 42933)

[Federation][e2e] Add framework for upgrade test in federation

Adding a framework for federation upgrade tests. Please refer to #41791.

cc @madhusudancs @nikhiljindal @kubernetes/sig-federation-pr-reviews
2017-03-10 22:02:15 -08:00
Kubernetes Submit Queue d2d3884f83 Merge pull request #42362 from soltysh/deployment_generators
Automatic merge from submit-queue (batch tested with PRs 38805, 42362, 42862)

Fix deployment generator after introducing deployments in apps/v1beta1

This PR does two things:

1. Switches all generators to produce versioned objects, to bypass the problem of having an object in multiple versions, which otherwise results in an unstable generator (i.e. one that does not produce exactly the same object every time).
2. Introduces new generator for `apps/v1beta1` deployments.
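A usage sketch of the new generator (the generator name and image are assumptions for illustration):

```sh
# Explicitly request the apps/v1beta1 deployment generator from kubectl run.
kubectl run nginx --image=nginx --generator=deployment/apps.v1beta1
```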

@kargakis @janetkuo ptal

@kubernetes/sig-apps-pr-reviews @kubernetes/sig-cli-pr-reviews ptal

This is a followup to https://github.com/kubernetes/kubernetes/pull/39683, so I'm adding 1.6 milestone.

```release-note
Introduce new generator for apps/v1beta1 deployments
```
2017-03-10 14:01:21 -08:00
Kubernetes Submit Queue a54d493216 Merge pull request #42608 from xilabao/patch-8
Automatic merge from submit-queue (batch tested with PRs 42608, 42444)

fix typo in known-flags

ref to https://github.com/kubernetes/kubernetes/pull/41417
2017-03-10 12:50:22 -08:00
shashidharatd f2fa2f6dd6 New packages added to hack/.linted_packages 2017-03-11 01:39:56 +05:30
shashidharatd 662f0ef531 Add framework for federation upgrade tests 2017-03-11 01:39:56 +05:30
Maciej Szulik aa4390750c Introduce new generator for apps/v1beta1 deployments 2017-03-10 12:08:01 +01:00
Kubernetes Submit Queue ab6fecfa3a Merge pull request #42811 from gnufied/validation-no-probe
Automatic merge from submit-queue (batch tested with PRs 42811, 42859)

Validate PVs for mount options

We are going to move the validation into its own package and we will be calling validation for individual volume types as needed.

Fixes https://github.com/kubernetes/kubernetes/issues/42573
2017-03-09 18:47:52 -08:00
tanshanshan 6fd76dc139 fix 2017-03-10 10:44:21 +08:00
Hemant Kumar 12d6b87894 Validate PVs for mount options
We are going to move the validation into its own package
and we will be calling validation for individual volume types
as needed.
2017-03-09 18:24:37 -05:00
Kubernetes Submit Queue 4540674b04 Merge pull request #42758 from krousey/downgrades
Automatic merge from submit-queue (batch tested with PRs 42734, 42745, 42758, 42814, 42694)

Implement automated downgrade testing.

Node version cannot be higher than the master version, so we must
switch the node version first. Also, we must use the upgrade script
from the appropriate version for GCE.
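A hedged sketch of that ordering (flags per `cluster/gce/upgrade.sh`; the version variable is illustrative):

```sh
# Downgrade nodes first so they are never newer than the master.
cluster/gce/upgrade.sh -N "${TARGET_VERSION}"   # step 1: nodes down
cluster/gce/upgrade.sh -M "${TARGET_VERSION}"   # step 2: master down
```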
2017-03-09 15:06:56 -08:00
Kris cc84e0895a Implement automated downgrade testing.
Node version cannot be higher than the master version, so we must
switch the node version first. Also, we must use the upgrade script
from the appropriate version for GCE.
2017-03-09 12:45:20 -08:00
Zihong Zheng 3acff7d3ef Update startup scripts for kube-dns ConfigMap and ServiceAccount 2017-03-09 11:10:23 -08:00
Kubernetes Submit Queue 376282227f Merge pull request #42726 from sttts/sttts-unify-godep-scripts
Automatic merge from submit-queue

Ensure a fixed godep version in hack/*-godep*.sh

No godep pinning asks for trouble when godep changes behaviour once again.

Moreover, call `hack/godep-restore.sh` from `hack/update-all-staging.sh`. This was an actual bug.
2017-03-09 07:37:47 -08:00
Kubernetes Submit Queue 1a3c3be58b Merge pull request #42786 from gyliu513/feature-gates
Automatic merge from submit-queue (batch tested with PRs 42786, 42553)

Updated comments for TaintBasedEvictions.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
2017-03-09 07:37:35 -08:00
Dr. Stefan Schimanski 798d9c2ed3 Unify godep code in hack/*-godep*.sh 2017-03-09 15:03:13 +01:00
Kubernetes Submit Queue 72c94b1aa8 Merge pull request #42728 from sttts/sttts-udpate-all-dirty-checkout
Automatic merge from submit-queue

Don't try to run hack/verify-staging-* on dirty repository

When the repo is dirty after running all `update-*` scripts in `hack/update-all.sh`, the staging verify scripts still fail. This PR removes them from `hack/update-all.sh`; instead, it gives useful instructions or continues automatically with `hack/update-all-staging.sh`.
2017-03-09 05:09:44 -08:00
Dr. Stefan Schimanski c128e4686a Don't try to run hack/verify-staging-* on dirty repository 2017-03-09 13:05:31 +01:00
Guangya Liu ed28695d3e Updated comments for TaintBasedEvictions. 2017-03-09 17:06:31 +08:00
Jordan Liggitt 2d06607885
Fix v0.0.0 in kubectl built from master 2017-03-07 01:09:57 -05:00
xilabao c64f146a34 fix typo in known-flags 2017-03-06 19:06:57 -06:00
Anthony Yeh cec3899b96 Deployment: Remove Overlap and SelectorUpdate annotations.
These are not used anymore since ControllerRef now protects against
fighting between controllers with overlapping selectors.
2017-03-06 15:12:08 -08:00
Kubernetes Submit Queue 79883dc48d Merge pull request #42070 from luxas/remove_kube_discovery
Automatic merge from submit-queue

Remove the kube-discovery binary from the tree

**What this PR does / why we need it**:

kube-discovery was a temporary solution to implementing proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md

However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
 - https://github.com/kubernetes/kubernetes/pull/36101 
 - https://github.com/kubernetes/kubernetes/pull/41281
 - https://github.com/kubernetes/kubernetes/pull/41417

Since `kube-discovery` isn't used in any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
2017-03-04 12:58:23 -08:00
Kubernetes Submit Queue 7e37b895d7 Merge pull request #41417 from luxas/kubeadm_test_token
Automatic merge from submit-queue

kubeadm: Hook up kubeadm against the BootstrapSigner

**What this PR does / why we need it**:

This PR makes kubeadm able to use the BootstrapSigner. 
Depends on a few other PRs I've made, I'll rebase and fix this up after they've merged.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas: 
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443

other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C

# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phase --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION
67a96d.02405a1773564431   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo

# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C

# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```

**Release note**:

```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
2017-03-04 05:54:16 -08:00
Lucas Käldström 61a284d720
Hook up kubeadm against the BootstrapSigner/BootstrapTokenAuthenticator 2017-03-04 11:17:52 +02:00
Kubernetes Submit Queue db4fbf5958 Merge pull request #42369 from smarterclayton/get_warning
Automatic merge from submit-queue

Output of `kubectl get` is inconsistent for pods

Builds on top of fixes from #42283, only the last two commits are new. Reverts behavior of #39042 which was inconsistent and confusing.

Fixes #15853
2017-03-03 23:12:38 -08:00
Kubernetes Submit Queue f7c07a121d Merge pull request #42285 from liggitt/get-watch
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Fix error printing objects from kubectl get -w

Fixes #42276
2017-03-03 16:44:45 -08:00
Kubernetes Submit Queue 9cc5480918 Merge pull request #41149 from sjenning/qos-memory-limits
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

kubelet: enable qos-level memory limits

```release-note
Experimental support to reserve a pod's memory request from being utilized by pods in lower QoS tiers.
```

Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314

**Note: QoS level cgroups have to be enabled for any of this to take effect.**

Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.

For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)` to reserve 50% of the Guaranteed pod's request from being used by the lower QoS tiers.

If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.

The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`
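A worked example of that math (numbers illustrative): 10Gi allocatable, a 2Gi Guaranteed request, a 1Gi Burstable request, and `--experimental-qos-reserved=memory=50%`:

```sh
# All values in MiB; the arithmetic mirrors the limit formulas above.
node_allocatable=$((10 * 1024))
guaranteed_request=$((2 * 1024))
burstable_request=$((1 * 1024))
reserve_percent=50

burstable_limit=$((node_allocatable - guaranteed_request * reserve_percent / 100))
besteffort_limit=$((burstable_limit - burstable_request * reserve_percent / 100))
echo "Burstable  limit ~= ${burstable_limit}Mi"    # 9216Mi
echo "BestEffort limit ~= ${besteffort_limit}Mi"   # 8704Mi
```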

The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.

@derekwaynecarr @vishh
2017-03-03 16:44:39 -08:00
Kubernetes Submit Queue 5b8d600d72 Merge pull request #41919 from Cynerva/gkk/kubelet-auth
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Juju: Disable anonymous auth on kubelet

**What this PR does / why we need it**:

This disables anonymous authentication on kubelet when deployed via Juju.

I've also adjusted a few other TLS options for kubelet and kube-apiserver. The end result is that:
1. kube-apiserver can now authenticate with kubelet
2. kube-apiserver now verifies the integrity of kubelet
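Roughly the flag changes involved (a sketch; paths are hypothetical, flag names are the standard kubelet/kube-apiserver ones):

```sh
# kubelet: reject anonymous requests; authenticate clients against the CA.
kubelet --anonymous-auth=false --client-ca-file=/srv/kubernetes/ca.crt

# kube-apiserver: present a client certificate to the kubelet and verify
# the kubelet's serving certificate against the same CA.
kube-apiserver \
  --kubelet-client-certificate=/srv/kubernetes/client.crt \
  --kubelet-client-key=/srv/kubernetes/client.key \
  --kubelet-certificate-authority=/srv/kubernetes/ca.crt
```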

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:

https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/219

**Special notes for your reviewer**:

This is dependent on PR #41251, where the tactics changes are being merged in separately.

Some useful pages from the documentation:
* [apiserver -> kubelet](https://kubernetes.io/docs/admin/master-node-communication/#apiserver---kubelet)
* [Kubelet authentication/authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/)

**Release note**:

```release-note
Juju: Disable anonymous auth on kubelet
```
2017-03-03 16:44:37 -08:00
Kubernetes Submit Queue 4728a0520f Merge pull request #42018 from luxas/kubeadm_cert_phase
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names

**What this PR does / why we need it**:

 - For the beta kubeadm init UX, we need this change
 - Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently
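A usage sketch of the standalone phase (flag values illustrative; command name as given above):

```sh
# Generate the self-signed certificates independently of full kubeadm init.
kubeadm phase certs selfsign \
  --cert-dir=/etc/kubernetes/pki \
  --cert-altnames=kubernetes.default,10.96.0.1
```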

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

This PR depends on https://github.com/kubernetes/kubernetes/pull/41897

**Release note**:

```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt
2017-03-03 09:24:46 -08:00
Kubernetes Submit Queue 66a0311fd3 Merge pull request #42429 from kargakis/sts-observed-generation-fix
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

controller: statefulsets respect observed generation

StatefulSets do not update ObservedGeneration even though the API field is in place. This means that clients can never be sure whether the StatefulSet controller has observed the latest spec of a StatefulSet.
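A client-side check this enables (StatefulSet name illustrative):

```sh
# The controller has observed the latest spec once status.observedGeneration
# catches up to metadata.generation.
observed=$(kubectl get statefulset web -o jsonpath='{.status.observedGeneration}')
current=$(kubectl get statefulset web -o jsonpath='{.metadata.generation}')
[ "${observed}" = "${current}" ] && echo "latest spec observed"
```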

@kubernetes/sig-apps-bugs
2017-03-03 09:24:42 -08:00
Seth Jennings cc50aa9dfb kubelet: enable qos-level memory request reservation 2017-03-02 15:04:13 -06:00
Clayton Coleman 34e4337e57
Don't print the "filtered" message on generic output
Unify the various output displays and make them simpler. Don't write to
glog; only output the info to stderr at `-v 2`.
2017-03-02 15:58:25 -05:00
Clayton Coleman 4e7c10a520
Don't bypass filter on generic output
It is inconsistent and confusing (filtering is orthogonal to output),
and we don't want to regress behavior from 1.5.
2017-03-02 15:58:22 -05:00