Commit Graph

44973 Commits (6c0cd2486556f71bb9d0acdfae576600c7c633db)

Author SHA1 Message Date
Maciej Szulik 6c0cd24865 Drop NewDefultGroupVersionFramework in favor of client automatically handling proper GroupVersions 2017-03-06 12:12:38 +01:00
Maciej Szulik 7cba9d9c92 Issue 37166: remove everything from batch/v2alpha1 that is not new 2017-03-06 12:12:38 +01:00
Kubernetes Submit Queue df70b30e59 Merge pull request #40537 from gnufied/fix-multizone-pv-breakage
Automatic merge from submit-queue

Fix Multizone pv creation on GCE

When Multizone is enabled, static PV creation on GCE fails because the cloud provider configuration is not available in admission plugins.

cc @derekwaynecarr @childsb
2017-03-05 11:16:46 -08:00
Kubernetes Submit Queue 4bbf98850f Merge pull request #42500 from vishh/fix-gpu-init
Automatic merge from submit-queue

[Bug] Fix gpu initialization in Kubelet

Kubelet incorrectly fails if the `AllAlpha=true` feature gate is enabled with container runtimes other than `docker`.

Replaces #42407
2017-03-04 20:28:08 -08:00
Kubernetes Submit Queue 90a4eda96b Merge pull request #41809 from kargakis/rollout-status-fix
Automatic merge from submit-queue

kubectl: respect deployment strategy parameters for rollout status

Fixes https://github.com/kubernetes/kubernetes/issues/40496

`rollout status` now respects the strategy parameters for a RollingUpdate Deployment. This means that it will exit as soon as minimum availability is reached for a rollout (note that if you allow maximum unavailability, `rollout status` will succeed as soon as the new pods are created).
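
A minimal sketch of the check this describes, assuming plain integer counts (an illustrative helper, not the actual kubectl code):

```go
package rollout

// rolloutComplete reports whether a RollingUpdate rollout can be considered
// done under the Deployment's strategy parameters.
func rolloutComplete(replicas, updatedReplicas, availableReplicas, maxUnavailable int32) bool {
	// Minimum availability: at most maxUnavailable replicas may be
	// unavailable at any point during the rollout.
	minAvailable := replicas - maxUnavailable
	// Exit as soon as every replica is updated and minimum availability is
	// reached; full availability is not required.
	return updatedReplicas >= replicas && availableReplicas >= minAvailable
}
```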

@janetkuo @AdoHe ptal
2017-03-04 19:35:21 -08:00
Kubernetes Submit Queue 1a94d0186f Merge pull request #42530 from andrewrynhard/self_hosted
Automatic merge from submit-queue

kubeadm: Fix the nodeSelector and scheduler mounts when using the self-hosted mode

**What this PR does / why we need it**:
The self-hosted option in `kubeadm` was broken.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42528
**Special notes for your reviewer**:

**Release note**:

```release-note
```


/cc @luxas
2017-03-04 15:53:12 -08:00
Kubernetes Submit Queue 4092da38a6 Merge pull request #42127 from crassirostris/remove-fluentd-gcp-image
Automatic merge from submit-queue (batch tested with PRs 42070, 42127)

Remove fluentd-gcp image sources

This PR removes the fluentd-gcp image sources from the main kubernetes repo, moving them to `contrib`: https://github.com/kubernetes/contrib/pull/2426

Once the image is moved, it will be maintained by the Stackdriver team (@igorpeshansky, @qingling128 and @dhrupadb)

CC @ixdy @timstclair
2017-03-04 12:58:40 -08:00
Kubernetes Submit Queue 79883dc48d Merge pull request #42070 from luxas/remove_kube_discovery
Automatic merge from submit-queue

Remove the kube-discovery binary from the tree

**What this PR does / why we need it**:

kube-discovery was a temporary solution for implementing the proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md

However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
 - https://github.com/kubernetes/kubernetes/pull/36101 
 - https://github.com/kubernetes/kubernetes/pull/41281
 - https://github.com/kubernetes/kubernetes/pull/41417

Since `kube-discovery` isn't used in any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so that kubeadm <= v1.5 continues to work.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
2017-03-04 12:58:23 -08:00
Andrew Rynhard 2419d0e845 Fix self-hosted 2017-03-04 11:41:37 -08:00
Kubernetes Submit Queue b70a5b19cf Merge pull request #42519 from jbeda/fix-tokencleaner
Automatic merge from submit-queue

Small fix to the bootstrap TokenCleaner

Options were accidentally left unset, so the TokenCleaner was stuck in a retry loop. Also moved to using an explicit timer over cached values rather than relying on a short resync timeout.

```release-note
```

Putting this in the 1.6 milestone as this is clearly a bug fix in a new feature.
2017-03-04 10:42:24 -08:00
Kubernetes Submit Queue 7e37b895d7 Merge pull request #41417 from luxas/kubeadm_test_token
Automatic merge from submit-queue

kubeadm: Hook up kubeadm against the BootstrapSigner

**What this PR does / why we need it**:

This PR makes kubeadm able to use the BootstrapSigner. 
Depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas: 
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443

other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C

# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION
67a96d.02405a1773564431   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo

# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C

# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```
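
As an aside, the tokens in the transcript above follow the documented bootstrap token format `<token-id>.<token-secret>`. A small illustrative sketch of that format (an assumed helper, not kubeadm's implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// Bootstrap tokens have the form "<token-id>.<token-secret>", e.g.
// "67a96d.02405a1773564431": a 6-character public ID and a 16-character
// secret, both lowercase alphanumeric.
var bootstrapToken = regexp.MustCompile(`^([a-z0-9]{6})\.([a-z0-9]{16})$`)

func splitToken(token string) (id, secret string, err error) {
	m := bootstrapToken.FindStringSubmatch(token)
	if m == nil {
		return "", "", fmt.Errorf("token %q does not match the bootstrap token format", token)
	}
	// The ID names the token Secret; the secret part is what lets a joining
	// node verify the JWS signature on the cluster-info ConfigMap, which is
	// why the "wrong secret" attempt above fails signature verification.
	return m[1], m[2], nil
}

func main() {
	id, secret, err := splitToken("67a96d.02405a1773564431")
	fmt.Println(id, secret, err) // 67a96d 02405a1773564431 <nil>
}
```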

**Release note**:

```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
2017-03-04 05:54:16 -08:00
Kubernetes Submit Queue 2ebf6edef3 Merge pull request #41942 from csbell/fw-name
Automatic merge from submit-queue

Add ProviderUid support to Federated Ingress

This PR (along with GLBC support [here](https://github.com/kubernetes/ingress/pull/278)) is a proposed fix for #39989. The Ingress controller uses a configMap reconciliation process to ensure that all underlying ingresses agree on a unique UID. This works for all of GLBC's resources except firewalls, which need their own cluster-unique UID. This PR introduces a ProviderUid, which is maintained and synchronized cross-cluster much like the UID. We chose to derive the ProviderUid from the cluster name (via an md5 hash).
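
A minimal sketch of that derivation, assuming a hex-encoded digest (illustrative, not the federation controller's exact code):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// providerUid derives a stable, cluster-unique identifier from the cluster
// name, so all underlying ingresses can agree on one firewall UID.
func providerUid(clusterName string) string {
	sum := md5.Sum([]byte(clusterName))
	return fmt.Sprintf("%x", sum)
}

func main() {
	fmt.Println(providerUid("cluster-a"))
}
```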

Testing here is augmented to guarantee that configMaps are adequately propagated prior to Ingress creation.

```release-note
Federated Ingress over GCE no longer requires separate firewall rules to be created for each cluster to circumvent flapping firewall health checks.
```

cc @madhusudancs @quinton-hoole
2017-03-04 02:51:04 -08:00
Lucas Käldström 61a284d720 Hook up kubeadm against the BootstrapSigner/BootstrapTokenAuthenticator 2017-03-04 11:17:52 +02:00
Kubernetes Submit Queue 52f4d38069 Merge pull request #42370 from janetkuo/ds-e2e-ignore-no-schedule-taint
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

In DaemonSet e2e test, don't check nodes with NoSchedule taints

Fixes #42345 

For example, the master node has an ismaster:NoSchedule taint. We don't expect pods to be created there without a toleration.
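
A minimal sketch of the node filter this implies, assuming the core v1 types (not the actual e2e helper):

```go
package e2eutil

import v1 "k8s.io/api/core/v1"

// schedulableForDaemonSet reports whether the test should expect a
// DaemonSet pod on this node.
func schedulableForDaemonSet(node *v1.Node) bool {
	for _, taint := range node.Spec.Taints {
		// Nodes with a NoSchedule taint (e.g. the master) repel pods that
		// lack a matching toleration, so they are excluded from the check.
		if taint.Effect == v1.TaintEffectNoSchedule {
			return false
		}
	}
	return true
}
```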

cc @marun @lukaszo @kargakis @yujuhong @Random-Liu @davidopp @kubernetes/sig-apps-pr-reviews
2017-03-04 00:17:47 -08:00
Kubernetes Submit Queue ccaa1cc6bb Merge pull request #42480 from kargakis/update-log-verbosity-deployments
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

controller: reduce log verbosity for deployments

Fixes https://github.com/kubernetes/kubernetes/issues/41187

Labeling as a bug fix since I think excessive logging should be considered a bug.

@kubernetes/sig-apps-bugs
2017-03-04 00:17:45 -08:00
Kubernetes Submit Queue 204ffda1a5 Merge pull request #42414 from lukaszo/ds-taint
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

Enqueue DaemonSet sync when node taints change

Fixes #42398

 @kargakis @janetkuo @mdshuai PTAL
2017-03-04 00:17:44 -08:00
Kubernetes Submit Queue cb0728c50f Merge pull request #42457 from yujuhong/do_not_panic
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

node e2e: apparmor test should fail instead of panicking

This doesn't fix #42420, but it at least stops the test from panicking.
2017-03-04 00:17:42 -08:00
Kubernetes Submit Queue c6d9d9c5ad Merge pull request #42456 from Random-Liu/update-npd-in-kubemark
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

Update npd in kubemark since #42201 is merged.

Revert https://github.com/kubernetes/kubernetes/pull/41716.

#42201 has been merged, and #41713 is fixed. Now we can retry updating npd in kubemark.

/cc @shyamjvs @wojtek-t @dchen1107
2017-03-04 00:17:40 -08:00
Kubernetes Submit Queue a56bba263d Merge pull request #42455 from gmarek/kubemark-logs
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

Add alsologtostderr flag to hollow node

@yujuhong @wojtek-t that should solve the kubemark log issue.
2017-03-03 23:21:49 -08:00
Kubernetes Submit Queue f9ccee7714 Merge pull request #42435 from dashpole/timestamps_for_fsstats
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats

Continuation of #33121.  Credit for most of this goes to @sjenning.  I added volume fs timestamps.

**why is this a bug** 
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads. 
This PR will also help deflake a node e2e test suite.

The eviction manager currently avoids evicting pods if metrics are old.  However, timestamp data is not available for filesystem data, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.
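
A minimal sketch of the staleness guard described above, with assumed names (the eviction manager's real types differ):

```go
package eviction

import "time"

// statsFresh reports whether an observation is recent enough to base an
// eviction decision on; stale stats are ignored rather than trusted, since
// acting on them can evict pods for pressure that has already been relieved.
func statsFresh(observedAt time.Time, maxAge time.Duration) bool {
	return time.Since(observedAt) <= maxAge
}
```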

cc: @kubernetes/sig-storage-pr-reviews  @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
2017-03-03 23:21:48 -08:00
Kubernetes Submit Queue 51a3d7b663 Merge pull request #42397 from feiskyer/fix-42396
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

Kubelet: return container runtime's version instead of CRI's one

**What this PR does / why we need it**:

With CRI enabled by default, the kubelet reports the CRI version instead of the container runtime's version. This PR fixes that.

**Which issue this PR fixes** 

Fixes #42396.

**Special notes for your reviewer**:

Should also cherry-pick to 1.6 branch.

**Release note**:

```release-note
NONE
```

cc @yujuhong  @kubernetes/sig-node-bugs
2017-03-03 23:21:46 -08:00
Kubernetes Submit Queue 6675dada8d Merge pull request #42375 from nikhiljindal/controllerRequiredResources
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)

Fixing federation controllers to support the controllers flag

Fixes https://github.com/kubernetes/kubernetes/issues/42374

cc @kubernetes/sig-federation-pr-reviews
2017-03-03 23:21:40 -08:00
Kubernetes Submit Queue db4fbf5958 Merge pull request #42369 from smarterclayton/get_warning
Automatic merge from submit-queue

Output of `kubectl get` is inconsistent for pods

Builds on top of fixes from #42283; only the last two commits are new. Reverts behavior of #39042, which was inconsistent and confusing.

Fixes #15853
2017-03-03 23:12:38 -08:00
Kubernetes Submit Queue 93a3efd896 Merge pull request #42300 from caesarxuchao/fix-client-verify
Automatic merge from submit-queue

ignore base.go in client-verify

We need to cherry-pick it to 1.6 to fix #42290.
2017-03-03 21:56:48 -08:00
Joe Beda 100d4c3b1f Small fix to the bootstrap TokenCleaner
Options were accidentally left unset, so the TokenCleaner was stuck in a retry loop. Also moved to using an explicit timer over cached values rather than relying on a short resync timeout.

Signed-off-by: Joe Beda <joe.github@bedafamily.com>
2017-03-03 20:49:18 -08:00
Kubernetes Submit Queue 2d319bd406 Merge pull request #42204 from dashpole/allocatable_eviction
Automatic merge from submit-queue

Eviction Manager Enforces Allocatable Thresholds

This PR modifies the eviction manager to enforce node allocatable thresholds for memory as described in kubernetes/community#348.
This PR should be merged after #41234. 

cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-node-feature-requests @vishh 

**Why is this a bug/regression**

Kubelet uses `oom_score_adj` to enforce QoS policies. But the `oom_score_adj` is based on overall memory requested, which means that a Burstable pod that requested a lot of memory can lead to OOM kills for Guaranteed pods, which violates QoS. Even worse, we have observed system daemons like kubelet or kube-proxy being killed by the OOM killer.
Without this PR, v1.6 will have node stability issues and regressions in out-of-resource handling, an existing GA feature.
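
For context, a sketch of the QoS-based `oom_score_adj` policy referenced above; the constants follow the kubelet's documented policy, but this is illustrative, not the kubelet source:

```go
package qos

// oomScoreAdj picks the OOM killer score adjustment for a pod's containers.
func oomScoreAdj(qosClass string, memRequest, memCapacity int64) int64 {
	switch qosClass {
	case "Guaranteed":
		return -998 // effectively protected from the OOM killer
	case "BestEffort":
		return 1000 // first to be OOM-killed under memory pressure
	default:
		// Burstable: the larger the memory request relative to capacity,
		// the lower (safer) the score. This is the request-based scoring
		// that the PR description calls out as insufficient on its own.
		score := 1000 - (1000*memRequest)/memCapacity
		if score < 2 {
			score = 2
		}
		if score > 999 {
			score = 999
		}
		return score
	}
}
```
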
2017-03-03 20:20:12 -08:00
Kubernetes Submit Queue 99445553df Merge pull request #42310 from liggitt/init-container-default
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Apply custom defaults to init containers

Adds overridden defaults to init containers. They were not being defaulted the same way normal containers were.
2017-03-03 18:08:45 -08:00
Kubernetes Submit Queue b33d0fb394 Merge pull request #42391 from liggitt/patch-output
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Fix 'not patched' kubectl error

fixes #42384
2017-03-03 18:08:44 -08:00
Kubernetes Submit Queue f750721b84 Merge pull request #42367 from kow3ns/ss-flakes
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Fix StatefulSet e2e flake

**What this PR does / why we need it**:
Fixes StatefulSet e2e flake by ensuring that the StatefulSet controller has observed the unreadiness of Pods prior to attempting to exercise scale functionality.
**Which issue this PR fixes** 
fixes #41889
```release-note
NONE
```
2017-03-03 18:08:42 -08:00
Kubernetes Submit Queue f81a0107f0 Merge pull request #38924 from vladimirvivien/scaleio-k8s
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Dell EMC ScaleIO Volume Plugin

**What this PR does / why we need it**
This PR implements the Kubernetes volume plugin to allow pods to seamlessly access and use data stored on ScaleIO volumes.  [ScaleIO](https://www.emc.com/storage/scaleio/index.htm) is a software-based storage platform that creates a pool of distributed block storage using locally attached disks on every server.  The code for this PR supports persistent volumes using PVs, PVCs, and dynamic provisioning.

You can find examples of how to use and configure the ScaleIO Kubernetes volume plugin in [examples/volumes/scaleio/README.md](examples/volumes/scaleio/README.md).

**Special notes for your reviewer**:
To facilitate code review, commits for source code implementation are separated from other artifacts such as generated, docs, and vendored sources.

```release-note
ScaleIO Kubernetes Volume Plugin added enabling pods to seamlessly access and use data stored on ScaleIO volumes.
```
2017-03-03 18:08:40 -08:00
Kubernetes Submit Queue 67500b3947 Merge pull request #42443 from Random-Liu/fix-node-e2e-npd
Automatic merge from submit-queue (batch tested with PRs 42443, 38924, 42367, 42391, 42310)

Cast system uptime to time.Duration to fix cross build.

Fixes https://github.com/kubernetes/kubernetes/issues/42441.

Cast system uptime to `time.Duration` to avoid different behavior on different architectures.
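
A sketch of the conversion in question (illustrative): doing the arithmetic in `time.Duration` (an int64) rather than a platform-sized integer keeps 32-bit and 64-bit builds behaving the same.

```go
package sysinfo

import "time"

// uptimeDuration converts a raw seconds count into a time.Duration, forcing
// 64-bit arithmetic on every architecture instead of overflowing an int.
func uptimeDuration(uptimeSeconds int64) time.Duration {
	return time.Duration(uptimeSeconds) * time.Second
}
```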

@sjenning @ixdy @ncdc
2017-03-03 18:08:38 -08:00
Kubernetes Submit Queue f7c07a121d Merge pull request #42285 from liggitt/get-watch
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Fix error printing objects from kubectl get -w

Fixes #42276
2017-03-03 16:44:45 -08:00
Kubernetes Submit Queue 346c0ba993 Merge pull request #42351 from liggitt/scheduler-statefulset
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Add read permissions for statefulsets for kube-scheduler

https://github.com/kubernetes/kubernetes/issues/41708 added statefulset awareness to the scheduler. This adds the corresponding permission to the scheduler role.
2017-03-03 16:44:43 -08:00
Kubernetes Submit Queue b432e137e6 Merge pull request #42350 from vishh/enable-qos-cgroups
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

enable cgroup tiers and node allocatable enforcement on pods by default.

```release-note
Pods are launched in a separate cgroup hierarchy from system services.
```
Depends on #41753

cc @derekwaynecarr
2017-03-03 16:44:41 -08:00
Kubernetes Submit Queue 9cc5480918 Merge pull request #41149 from sjenning/qos-memory-limits
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

kubelet: enable qos-level memory limits

```release-note
Experimental support for reserving a pod's memory request so that it cannot be utilized by pods in lower QoS tiers.
```

Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314

**Note: QoS level cgroups have to be enabled for any of this to take effect.**

Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.

For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)` to reserve 50% of the Guaranteed pod's request from being used by the lower QoS tiers.

If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.

The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`
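
A minimal sketch of that arithmetic, assuming a single memory resource and a fractional reserve parsed from `--experimental-qos-reserved` (illustrative, not the kubelet implementation):

```go
package cm

// qosMemoryLimits computes the Burstable and BestEffort cgroup memory
// limits from the requests of higher QoS tiers and the reserve fraction.
func qosMemoryLimits(nodeAllocatable, guaranteedRequests, burstableRequests int64, reserve float64) (burstableLimit, bestEffortLimit int64) {
	// e.g. reserve=0.5 with a 2Gi Guaranteed request caps Burstable at
	// NodeAllocatable - 1Gi, matching the example above.
	burstableLimit = nodeAllocatable - int64(float64(guaranteedRequests)*reserve)
	// Burstable requests are in turn deducted from the BestEffort limit.
	bestEffortLimit = burstableLimit - int64(float64(burstableRequests)*reserve)
	return burstableLimit, bestEffortLimit
}
```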

The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.

@derekwaynecarr @vishh
2017-03-03 16:44:39 -08:00
Kubernetes Submit Queue 5b8d600d72 Merge pull request #41919 from Cynerva/gkk/kubelet-auth
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Juju: Disable anonymous auth on kubelet

**What this PR does / why we need it**:

This disables anonymous authentication on kubelet when deployed via Juju.

I've also adjusted a few other TLS options for kubelet and kube-apiserver. The end result is that:
1. kube-apiserver can now authenticate with kubelet
2. kube-apiserver now verifies the integrity of kubelet

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:

https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/219

**Special notes for your reviewer**:

This is dependent on PR #41251, where the tactics changes are being merged in separately.

Some useful pages from the documentation:
* [apiserver -> kubelet](https://kubernetes.io/docs/admin/master-node-communication/#apiserver---kubelet)
* [Kubelet authentication/authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/)

**Release note**:

```release-note
Juju: Disable anonymous auth on kubelet
```
2017-03-03 16:44:37 -08:00
Kubernetes Submit Queue 98eae9b222 Merge pull request #42341 from dashpole/critial_pod_test
Automatic merge from submit-queue

Critical pod test uses allocatable instead of capacity

This solves #42239.

When this test was first introduced, pods could request up to the capacity of the node.
With the addition of allocatable introduced in #41234, this is no longer the case, and pods can only use up to allocatable.
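
Illustratively, the distinction amounts to the following (assumed names; the exact reservations vary by configuration):

```go
package node

// allocatableMemory is what pods may actually request: machine capacity
// minus the resources reserved for system daemons and the kubelet, and
// the eviction threshold.
func allocatableMemory(capacity, kubeReserved, systemReserved, evictionThreshold int64) int64 {
	return capacity - kubeReserved - systemReserved - evictionThreshold
}
```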

This should be included in 1.6, as it is a bug related to a 1.6 feature.

cc @vish @yujuhong
2017-03-03 14:34:37 -08:00
Vladimir Vivien 915a54180d Addition of ScaleIO Kubernetes Volume Plugin
This commit implements the Kubernetes volume plugin allowing pods to seamlessly access and use data stored on ScaleIO volumes.
2017-03-03 15:47:19 -05:00
Vishnu kannan 038585626d fix gpu initialization
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2017-03-03 12:13:01 -08:00
Kenneth Owens 08f95aff0f Fixes e2e flake by ensuring that the StatefulSet observes mutations to
Pods prior to mutating the StatefulSet object to trigger scaling.

Add ObservedVersion check
2017-03-03 11:24:47 -08:00
Kubernetes Submit Queue a2c7eb2754 Merge pull request #42266 from wojtek-t/fix_secret_tests_in_large_clusters
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Bump test timeouts to make secret tests work in large clusters
2017-03-03 10:54:45 -08:00
Kubernetes Submit Queue 6db099fcee Merge pull request #42275 from deads2k/cli-05-restmapper
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

discovery restmapping should always prefer /v1

The core kube API (empty group, version==v1) should always be the most preferred group and resource from a rest mapper. This PR special-cases that. All the others should be ordered by discovery order, as we previously agreed.
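
A minimal sketch of that preference rule (illustrative only):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Discovery order, with the core group-version ("v1") appearing later.
	gvs := []string{"apps/v1beta1", "extensions/v1beta1", "v1", "batch/v1"}
	// Keep discovery order for everything else, but special-case "v1" so
	// the core API is always the most preferred mapping.
	sort.SliceStable(gvs, func(i, j int) bool {
		return gvs[i] == "v1" && gvs[j] != "v1"
	})
	fmt.Println(gvs) // [v1 apps/v1beta1 extensions/v1beta1 batch/v1]
}
```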

@kubernetes/sig-cli-pr-reviews @kubernetes/sig-api-machinery-pr-reviews 
@enj
2017-03-03 10:54:43 -08:00
Kubernetes Submit Queue 097755fbd9 Merge pull request #41666 from mikedanese/cvm-master
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

remove support for debian masters in GCE

Asked about this on the mailing list and no one objected.

@zmerlynn @roberthbailey 

```release-note
Remove support for debian masters in GCE kube-up.
```
2017-03-03 10:54:42 -08:00
Kubernetes Submit Queue 4932b1422c Merge pull request #42187 from smarterclayton/wrong_error_from_timeout
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Server timeout returns an incorrect error

Not a valid Status object in JSON

Part of #42163
2017-03-03 10:54:40 -08:00
Kubernetes Submit Queue e9bbfb81c1 Merge pull request #41306 from gnufied/implement-interface-bulk-volume-poll
Automatic merge from submit-queue (batch tested with PRs 41306, 42187, 41666, 42275, 42266)

Implement bulk polling of volumes

This implements bulk polling of volumes using ideas presented by Justin in https://github.com/kubernetes/kubernetes/pull/39564, but changes the implementation to use an interface so that other implementations are unaffected.
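
A hypothetical shape for such an interface (the PR's actual interface may differ):

```go
package volume

// BulkVolumeVerifier checks the attachment state of many volumes per node
// in a single provider call, instead of polling each volume individually.
// Plugins that don't implement the interface keep their existing
// per-volume polling behavior.
type BulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[string][]string) (attached map[string]map[string]bool, err error)
}
```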

cc @justinsb
2017-03-03 10:54:38 -08:00
Kubernetes Submit Queue ff9296fcad Merge pull request #35055 from ivan4th/make-downward-api-test-table-driven
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

Make Downward API test table-driven
2017-03-03 09:24:48 -08:00
Kubernetes Submit Queue 4728a0520f Merge pull request #42018 from luxas/kubeadm_cert_phase
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names

**What this PR does / why we need it**:

 - For the beta kubeadm init UX, we need this change
 - Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

This PR depends on https://github.com/kubernetes/kubernetes/pull/41897

**Release note**:

```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt
2017-03-03 09:24:46 -08:00
Kubernetes Submit Queue ec09dab13e Merge pull request #41770 from k82cn/updated_sched_name
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

Updated scheduler name for multi-scheduler.

fixes #41859
2017-03-03 09:24:44 -08:00
Kubernetes Submit Queue 66a0311fd3 Merge pull request #42429 from kargakis/sts-observed-generation-fix
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

controller: statefulsets respect observed generation

StatefulSets do not update ObservedGeneration even though the API field is in place. This means that clients can never be sure whether the StatefulSet controller has observed the latest spec of a StatefulSet.

@kubernetes/sig-apps-bugs
2017-03-03 09:24:42 -08:00
Kubernetes Submit Queue 083a00dbea Merge pull request #42365 from madhusudancs/fed-kubefed-remove-joinunjoin-kubedns
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

Remove kube-dns ConfigMap modification code from federation-{up,down}.sh scripts

PR #39338 has merged now. This shouldn't be necessary. This unbreaks federation tests.

```release-note
NONE
```
2017-03-03 09:24:40 -08:00