Commit Graph

778 Commits (a9bfae8ec3d4953a67258f1c50136b339dfcc834)

Author SHA1 Message Date
Kubernetes Submit Queue 896d2afb42 Merge pull request #44588 from dmmcquay/kubeadm_skip_token_print
Automatic merge from submit-queue (batch tested with PRs 44601, 44842, 44893, 44491, 44588)

kubeadm: add flag to skip token print out

**What this PR does / why we need it**: When `kubeadm init` is used in an automated context, it still prints the token to standard output. If that output ends up in a log file, the token is effectively leaked there and can be compromised. This PR adds a flag that lets you explicitly disable printing the token (see the usage sketch after this entry).

This is a continuation from https://github.com/kubernetes/kubernetes/pull/42823 since it had to be closed.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/160

**Special notes for your reviewer**: /cc @luxas @errordeveloper 

**Release note**:
```release-note
NONE
```
2017-04-25 12:51:41 -07:00
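A minimal usage sketch for the PR above, assuming the flag is named `--skip-token-print` (the entry itself does not name it):
```shell
# Initialize without printing the bootstrap token to stdout/logs
# (flag name assumed; not stated in the entry above).
sudo kubeadm init --skip-token-print
# The token can still be listed later on the master:
sudo kubeadm token list
```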
Kubernetes Submit Queue 6c8cb33fb3 Merge pull request #42101 from Dmitry1987/feature/hpa-upscale-downscale-delay-configurable
Automatic merge from submit-queue (batch tested with PRs 44862, 42241, 42101, 43181, 44147)

Feature/hpa upscale downscale delay configurable

**What this PR does / why we need it**:
Makes "upscale forbidden window" and "downscale forbidden window"  duration configurable in arguments of kube-controller-manager. Those are options of horizontal pod autoscaler.

**Special notes for your reviewer**:
Please have a look @DirectXMan12 , the PR as discussed in Slack.

**Release note**:
```
Make "upscale forbidden window" and "downscale forbidden window"  duration configurable in arguments of kube-controller-manager. Those are options of horizontal pod autoscaler. Right now are hardcoded 3 minutes for upscale, and 5 minutes to downscale.  But sometimes cluster administrator might want to change this for his own needs.
```
2017-04-24 19:39:42 -07:00
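A sketch of tuning the windows described above; the flag names here are assumptions, since the entry does not name them:
```shell
# Assumed kube-controller-manager flag names; the durations shown mirror the
# previously hardcoded defaults (3m upscale, 5m downscale).
kube-controller-manager \
  --horizontal-pod-autoscaler-upscale-delay=3m0s \
  --horizontal-pod-autoscaler-downscale-delay=5m0s
```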
derek mcquay d047dfbc6f kubeadm: add flag to skip token print out 2017-04-20 13:12:37 -07:00
Kubernetes Submit Queue fe44d1f5ce Merge pull request #44073 from marun/fed-e2e-config-from-secrets
Automatic merge from submit-queue (batch tested with PRs 43500, 44073)

[Federation] Add option to retrieve e2e cluster config from secrets

Previously the federation e2e setup was reading member cluster configuration from the test run's kubeconfig. This change removes that dependency in favor of reading member cluster configuration from secrets in the hosting cluster, and caches the configuration to avoid having to read it separately for each test.

cc: @kubernetes/sig-federation-pr-reviews @perotinus
2017-04-18 22:27:58 -07:00
Kubernetes Submit Queue 09e3fdbafe Merge pull request #44500 from Cynerva/gkk/cdk-1.6-support
Automatic merge from submit-queue (batch tested with PRs 43000, 44500, 44457, 44553, 44267)

Add Kubernetes 1.6 support to Juju charms

**What this PR does / why we need it**:

This adds Kubernetes 1.6 support to Juju charms.

This includes some large architectural changes in order to support multiple versions of Kubernetes with a single release of the charms. There are a few bug fixes in here as well, for issues that we discovered during testing.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Thanks to @marcoceppi, @ktsakalozos, @jacekn, @mbruzek, @tvansteenburgh for their work in this feature branch as well!

**Release note**:

```release-note
Add Kubernetes 1.6 support to Juju charms
Add metric collection to charms for autoscaling
Update kubernetes-e2e charm to fail when test suite fails
Update Juju charms to use snaps
Add registry action to the kubernetes-worker charm
Add support for kube-proxy cluster-cidr option to kubernetes-worker charm
Fix kubernetes-master charm starting services before TLS certs are saved
Fix kubernetes-worker charm failures in LXD
Fix stop hook failure on kubernetes-worker charm
Fix handling of juju kubernetes-worker.restart-needed state
Fix nagios checks in charms
```
2017-04-18 13:19:06 -07:00
Maru Newby 9a9d897d94 fed: Add option to source e2e cluster config from host cluster
Add the option to configure e2e access to member clusters from the
same secrets in the host cluster used by the federation control plane.
The default behavior will continue to be sourcing this configuration
from the e2e kubeconfig.  The optional behavior can be enabled by
passing --federation-config-from-cluster=true as an argument to
ginkgo.
2017-04-17 23:38:03 -07:00
Rye Terrell 33fee22032 add support for kube-proxy cluster-cidr option 2017-04-14 10:45:23 -05:00
Rye Terrell ca4afd8773 Update CDK charms to use snaps 2017-04-14 10:43:00 -05:00
Kubernetes Submit Queue f1c0c0a73c Merge pull request #42395 from nicksardo/gce-src-ranges
Automatic merge from submit-queue

Adding load balancer src cidrs to GCE cloudprovider

**What this PR does / why we need it**:
As of January 31st, 2018, GCP will be sending health checks and L7 traffic from two CIDRs and legacy health checks from three CIDRs. This PR moves them into the cloudprovider package and provides a flag for overriding them.

Another PR will need to address firewall rule creation for external L4 network load balancing (#40778).

**Which issue this PR fixes**
Step one of #40778
Step one of https://github.com/kubernetes/ingress/issues/197

**Release note**:
```release-note
Add flags to GCE cloud provider to override known L4/L7 proxy & health check source cidrs
```
2017-04-12 19:57:43 -07:00
Andy Goldstein 00e11566f2 Make the dockershim root directory configurable
Make the dockershim root directory configurable so things like
integration tests (e.g. in OpenShift) can run as non-root.
2017-04-12 09:06:21 -04:00
Bowei Du 091e46ef21 Update known-flags with cidr-allocator-type
I also sorted the file; it was almost sorted already, with a few exceptions.
2017-04-11 14:07:54 -07:00
Kubernetes Submit Queue 357af07718 Merge pull request #44197 from Random-Liu/dockershim-only-mode
Automatic merge from submit-queue

Add dockershim only mode

This PR adds an `experimental-dockershim` hidden flag to the kubelet to run dockershim only (see the usage sketch after this entry).

We introduce this flag mainly for the CRI validation test. In the future we should compile dockershim into a separate binary.

@yujuhong @feiskyer @xlgao-zju 
/cc @kubernetes/sig-node-pr-reviews
2017-04-09 19:27:51 -07:00
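A minimal sketch of the dockershim-only mode described above, assuming the hidden flag is passed on its own:
```shell
# Run the kubelet's dockershim only (hidden flag from the PR above);
# intended for CRI validation testing, not production nodes.
kubelet --experimental-dockershim
```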
Bobby Salamat f9d1333144 Addressed reviewers comments 2017-04-07 17:31:45 -07:00
Bobby Salamat c55e5b6b8e Add flags to known-flags 2017-04-07 17:06:23 -07:00
Random-Liu 327fc270d7 Add dockershim only mode 2017-04-07 16:43:57 -07:00
Kubernetes Submit Queue de1cee38bf Merge pull request #35284 from jsafrane/fix-class-tests
Automatic merge from submit-queue

Add test for provisioning with storage class

This PR re-introduces e2e test for dynamic provisioning with storage classes.

It adds the same test that was merged in PR #32485, with an extra patch adding the region to AWS calls. It works well on my AWS setup; however, I'm using a shared company account and can't run kube-up.sh to run the tests in the "official" way.

@zmerlynn, can you please try to run tests that led to #34961?

@justinsb, you're my AWS guru: would there be a way to introduce a fully initialized AWS cloud provider into the e2e test framework? It would simplify everything. GCE already has it there, but it's easier to initialize, I guess. See https://github.com/kubernetes/kubernetes/blob/master/test/e2e/pd.go#L486 for example; IMO tests should not talk to AWS directly.
2017-04-07 11:40:34 -07:00
Jan Safranek a327302200 e2e tests should be multizone aware
Pass the MULTIZONE=true environment variable to the e2e test framework.
2017-04-06 13:28:29 +02:00
Haoran Wang fcc73d355d Multiple scheduler leader election support 2017-04-05 22:36:13 +08:00
Kubernetes Submit Queue 20b01be016 Merge pull request #41813 from shiywang/timeout_options
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)

Be able to specify the timeout to wait for pod for kubectl logs/attach

Fixes https://github.com/kubernetes/kubernetes/issues/41786
The current flag is `get-pod-timeout`; we can discuss it if you have a better name. The default unit is seconds, and the value must be above 0 (see the usage sketch after this entry).

@soltysh @kargakis ptal, thanks
@kubernetes/sig-cli-feature-requests
2017-03-24 19:04:26 -07:00
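A sketch using the `get-pod-timeout` name from the discussion above; the merged flag name and value format may differ:
```shell
# Wait up to 30 seconds for the pod to reach a loggable/attachable state
# (flag name and duration syntax assumed from the discussion above).
kubectl logs my-pod --get-pod-timeout=30s
kubectl attach my-pod --get-pod-timeout=30s
```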
Nick Sardo baab99b823 Adding load balancer src ranges; support flag overrides 2017-03-24 16:36:19 -07:00
Kubernetes Submit Queue 0e17e5bd9c Merge pull request #38882 from fraenkel/configmap_env_file
Automatic merge from submit-queue (batch tested with PRs 41139, 41186, 38882, 37698, 42034)

create configmap from-env-file

Allow ConfigMaps to be created from Docker-based env files (see the usage sketch after this entry).

See proposal https://github.com/kubernetes/community/issues/165

**Release-note:**
```release-note
1. `create configmap` has a new option `--from-env-file` that populates a configmap from a file that follows a key=val format on each line.
2. `create secret` has a new option `--from-env-file` that populates a secret from a file that follows a key=val format on each line.
```
2017-03-24 12:33:25 -07:00
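A short sketch of `--from-env-file` as described in the release note above; `app.env` and its keys are hypothetical:
```shell
# app.env is a hypothetical Docker-style env file, one key=val per line:
#   LOG_LEVEL=debug
#   DB_HOST=db.example.com
kubectl create configmap app-config --from-env-file=app.env
kubectl create secret generic app-secret --from-env-file=app.env
```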
Dmitry1987 965dab366b make hpa upscale and downscale delay window configurable 2017-03-24 18:01:04 +00:00
shiywang 52e4be2578 Be able to specify the timeout to wait for pod for kubectl logs/attach 2017-03-14 23:00:31 +08:00
Kubernetes Submit Queue 328e555f72 Merge pull request #41794 from shashidharatd/federation-upgrade-tests-1
Automatic merge from submit-queue (batch tested with PRs 41794, 42349, 42755, 42901, 42933)

[Federation][e2e] Add framework for upgrade test in federation

Adding a framework for federation upgrade tests. Please refer to #41791.

cc @madhusudancs @nikhiljindal @kubernetes/sig-federation-pr-reviews
2017-03-10 22:02:15 -08:00
Kubernetes Submit Queue a54d493216 Merge pull request #42608 from xilabao/patch-8
Automatic merge from submit-queue (batch tested with PRs 42608, 42444)

fix typo in know-flags

ref to https://github.com/kubernetes/kubernetes/pull/41417
2017-03-10 12:50:22 -08:00
shashidharatd 662f0ef531 Add framework for federation upgrade tests 2017-03-11 01:39:56 +05:30
Kubernetes Submit Queue 4540674b04 Merge pull request #42758 from krousey/downgrades
Automatic merge from submit-queue (batch tested with PRs 42734, 42745, 42758, 42814, 42694)

Implement automated downgrade testing.

Node version cannot be higher than the master version, so we must
switch the node version first. Also, we must use the upgrade script
from the appropriate version for GCE.
2017-03-09 15:06:56 -08:00
Kris cc84e0895a Implement automated downgrade testing.
Node version cannot be higher than the master version, so we must
switch the node version first. Also, we must use the upgrade script
from the appropriate version for GCE.
2017-03-09 12:45:20 -08:00
Guangya Liu ed28695d3e Updated comments for TaintBasedEvictions. 2017-03-09 17:06:31 +08:00
Michael Fraenkel 7eb49628c6 create configmap from-env-file 2017-03-08 07:58:01 -08:00
xilabao c64f146a34 fix typo in know-flags 2017-03-06 19:06:57 -06:00
Kubernetes Submit Queue 7e37b895d7 Merge pull request #41417 from luxas/kubeadm_test_token
Automatic merge from submit-queue

kubeadm: Hook up kubeadm against the BootstrapSigner

**What this PR does / why we need it**:

This PR makes kubeadm able to use the BootstrapSigner. 
It depends on a few other PRs I've made; I'll rebase and fix this up after they've merged.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Example usage:
```console
lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm init --kubernetes-version v1.7.0-alpha.0.377-2a6414bc914d55
[sudo] password for lucas: 
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0-alpha.0.377-2a6414bc914d55
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key.
[certificates] Generated service account token signing public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.301384 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 8.072688 seconds
[apiclient] Test deployment succeeded
[token-discovery] Using token: 67a96d.02405a1773564431
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run:
export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443

other-computer $ ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.1.115:6443"
[discovery] Successfully established connection with API Server "192.168.1.115:6443"
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Wrong secret!
other-computer $ ./kubeadm join --token 67a96d.02405a1773564432 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to connect to API Server "192.168.1.115:6443": failed to verify JWS signature of received cluster info object, can't trust this API Server
^C

# Poor method to create a cluster-info KubeConfig (a KubeConfig file with no credentials), but...
$ printf "kind: Config\n$(sudo ./kubeadm alpha phas --client-name foo --server https://192.168.1.115:6443 --token foo | head -6)\n" > cluster-info.yaml
$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREl3TXpBek1Gb1hEVEkzTURJeU5qSXdNekF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmt0ClFlUVVrenlkQjhTMVM2Y2ZSQ0ZPUnNhdDk2Wi9Id3A3TGJiNkhrSFRub1dvbDhOOVpGUXNDcCtYbDNWbStTS1AKZWFLTTFZWWVDVmNFd0JXNUlWclIxMk51UzYzcjRqK1dHK2NTdjhUOFBpYUZjWXpLalRpODYvajlMYlJYNlFQWAovYmNWTzBZZDVDMVJ1cmRLK2pnRGprdTBwbUl5RDRoWHlEZE1vZk1laStPMytwRC9BeVh5anhyd0crOUFiNjNrCmV6U3BSVHZSZ1h4R2dOMGVQclhKanMwaktKKzkxY0NXZTZJWEZkQnJKbFJnQktuMy9TazRlVVdIUTg0OWJOZHgKdllFblNON1BPaitySktPVEpLMnFlUW9ua0t3WU5qUDBGbW1zNnduL0J0dWkvQW9hanhQNUR3WXdxNEk2SzcvdgplbUM4STEvdzFpSk9RS2dxQmdzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNL1JQbTYzQTJVaGhPMVljTUNqSEJlUjROOHkKUzB0Q2RNdDRvK0NHRDJKTUt5NDJpNExmQTM2L2hvb01iM2tpUkVSWTRDaENrMGZ3VHpSMHc5Q21nZHlVSTVQSApEc0dIRWdkRHpTVXgyZ3lrWDBQU04zMjRXNCt1T0t6QVRLbm5mMUdiemo4cFA2Uk9QZDdCL09VNiswckhReGY2CnJ6cDRldHhWQjdQWVE0SWg5em1KcVY1QjBuaUZrUDBSYWNDYUxFTVI1NGZNWDk1VHM0amx1VFJrVnBtT1ZGNHAKemlzMlZlZmxLY3VHYTk1ME1CRGRZR2UvbGNXN3JpTkRIUGZZLzRybXIxWG9mUGZEY0Z0ZzVsbUNMWk8wMDljWQpNdGZBdjNBK2dYWjBUeExnU1BpYkxaajYrQU9lMnBiSkxCZkxOTmN6ODJMN1JjQ3RxS01NVHdxVnd0dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

lucas@THENINJA:~/luxas/kubernetes$ sudo ./kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION
67a96d.02405a1773564431   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

# Any token with the authentication usage set works as the --tls-bootstrap-token arg here
other-computer $ ./kubeadm join --skip-preflight-checks --discovery-file cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Synced cluster-info information from the API Server so we have got the latest information
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# Delete the RoleBinding that exposes the cluster-info ConfigMap publicly. Now this ConfigMap will be private
lucas@THENINJA:~/luxas/kubernetes$ kubectl -n kube-public edit rolebindings kubeadm:bootstrap-signer-clusterinfo

# This breaks the token joining method
other-computer $ sudo ./kubeadm join --token 67a96d.02405a1773564431 192.168.1.115:6443
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.1.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
[discovery] Failed to request cluster info, will try again: [User "system:anonymous" cannot get configmaps in the namespace "kube-public". (get configmaps cluster-info)]
^C

# But we can still connect using the cluster-info file
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Could not access the cluster-info ConfigMap for refreshing the cluster-info information, but the TLS cert is valid so proceeding...
[discovery] The cluster-info ConfigMap isn't set up properly (no kubeconfig key in ConfigMap), but the TLS cert is valid so proceeding...
[bootstrap] Detected server version: v1.7.0-alpha.0.377+2a6414bc914d55
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

# What happens if the CA in the cluster-info file and the API Server's CA aren't equal?
# Generated a new CA for the cluster-info file, an invalid one for connecting to the cluster
# The new cluster-info file is here:
lucas@THENINJA:~/luxas/kubernetes$ cat cluster-info.yaml
kind: Config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01ESXlPREUyTkRBME1Wb1hEVEkzTURJeU5qRTJOREEwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3VHCmR3MXQ5Nmlkd1YrVmwxQjRVSmZWdGpNZ0NTd1poMG00bmR5Q1JCR3FIRkpMTGhIWjREM2N2ckg1Tk44UmZHS0EKb1cwVjN3Q3R2THl4UFdnZkZMbGtrdERPWnBDQ01oYzd2alYxU2FKUE9MS1BIUUtEdm1CVWFNcTdrUzN5NEg1VApMcUp3bFBUUXNVVW5YNWM5V0pzS2JIcEx6MnJZbC9Pam4veGRtd1lQa3JUTTJwSitMS0RjUkxLTEpiQjhGc2pzCnZBQTg2QURjY3phMDd0WEgxL1MzeTN0UDJMTDN0UVgvZWJIYWNPcHluYnVaNlIwdFhKeUpsTTVlOHRHMzFhWHMKQTV3cGo1d2Z1RGU1amRuTHgxNnFtbG5ueGV3OGp0bk4zSDExYUp6VlErOWlSQUZkUTN4WmN4dWdmQVM2ZndqRwo0QnJFeGpUOUFaRlVQb0VkR09NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJc0pKRFIzbWMxQ1lCd2ViSkRPNm1MdWkwTk4KS3BVdlBuazlMbWlnb2JmYVhjQWlnUlo2M1pIYTd4MXBHNGpKRG8zY3lxNWEybTAzZ245RFMrcEpKYTdpMmpXUQpaV1YvZ2ZRMEk4RGc0endXU3J0T056NHpTTXQ1cW5JZjVWRC95KzVVSmVRck1XSEVFS1VrdklSQzhuUmIvV1F2CmNRWEpiN1hMY0dtbWJyaXpDSUlDYmI4KzhmNDFUWTZnTmg5ZzduaVdGZlp2VG1jN05aMTNjQVJjajJ0UTAzeVMKbWVPcEc2REdMRENFWWYzRld0QmdleE5CcFlFYy9ydUNnUE9IcEdhelYya3JHdFFNLzI0OGQ2ZndwcVNQOGc4RgpVSHNWZWxiMExnNmgvZ3VSYlZ5SENlck5zTDBJdFFhdjlscmZmWkxQaVA5TzNLQ0pBWk9MbXhEOUhaaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.115:6443
  name: kubernetes

# Try to join an API Server with the wrong CA
other-computer $ sudo ./kubeadm join --skip-preflight-checks --discovery-file /k8s/cluster-info.yaml --tls-bootstrap-token 67a96d.02405a1773564431
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[preflight] Starting the kubelet service
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.115:6443"
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
[discovery] Failed to validate the API Server's identity, will try again: [Get https://192.168.1.115:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
^C
```

**Release note**:

```release-note
```
@jbeda @mikedanese @justinsb @pires @dmmcquay @roberthbailey @dgoodwin
2017-03-04 05:54:16 -08:00
Lucas Käldström 61a284d720 Hook up kubeadm against the BootstrapSigner/BootstrapTokenAuthenticator 2017-03-04 11:17:52 +02:00
Kubernetes Submit Queue 9cc5480918 Merge pull request #41149 from sjenning/qos-memory-limits
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

kubelet: enable qos-level memory limits

```release-note
Experimental support to reserve a pod's memory request from being utilized by pods in lower QoS tiers.
```

Enables the QoS-level memory cgroup limits described in https://github.com/kubernetes/community/pull/314

**Note: QoS level cgroups have to be enabled for any of this to take effect.**

Adds a new `--experimental-qos-reserved` flag that can be used to set the percentage of a resource to be reserved at the QoS level for pod resource requests.

For example, `--experimental-qos-reserved="memory=50%"` means that if a Guaranteed pod sets a memory request of 2Gi, the Burstable and BestEffort QoS memory cgroups will have their `memory.limit_in_bytes` set to `NodeAllocatable - (2Gi*50%)`, reserving 50% of the Guaranteed pod's request from being used by the lower QoS tiers (see the sketch after this entry).

If a Burstable pod sets a request, its reserve will be deducted from the BestEffort memory limit.

The result is that:
- Guaranteed limit matches the root cgroup and is not set by this code
- Burstable limit is `NodeAllocatable - Guaranteed reserve`
- BestEffort limit is `NodeAllocatable - Guaranteed reserve - Burstable reserve`

The only resource currently supported is `memory`; however, the code is generic enough that other resources can be added in the future.

@derekwaynecarr @vishh
2017-03-03 16:44:39 -08:00
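A sketch of the reservation described above; `--cgroups-per-qos` is an assumption about how the required QoS-level cgroups are enabled, and the arithmetic simply restates the entry's own example:
```shell
# QoS-level cgroups must be enabled for the reservation to take effect.
# With NodeAllocatable=8Gi and a Guaranteed pod requesting 2Gi, a 50% reserve
# caps the Burstable cgroup at 8Gi - 1Gi = 7Gi.
kubelet --cgroups-per-qos=true --experimental-qos-reserved="memory=50%"
```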
Kubernetes Submit Queue 5b8d600d72 Merge pull request #41919 from Cynerva/gkk/kubelet-auth
Automatic merge from submit-queue (batch tested with PRs 41919, 41149, 42350, 42351, 42285)

Juju: Disable anonymous auth on kubelet

**What this PR does / why we need it**:

This disables anonymous authentication on kubelet when deployed via Juju.

I've also adjusted a few other TLS options for kubelet and kube-apiserver. The end result is that:
1. kube-apiserver can now authenticate with kubelet
2. kube-apiserver now verifies the integrity of kubelet

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:

https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/219

**Special notes for your reviewer**:

This is dependent on PR #41251, where the tactics changes are being merged in separately.

Some useful pages from the documentation:
* [apiserver -> kubelet](https://kubernetes.io/docs/admin/master-node-communication/#apiserver---kubelet)
* [Kubelet authentication/authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/)

**Release note**:

```release-note
Juju: Disable anonymous auth on kubelet
```
2017-03-03 16:44:37 -08:00
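The charm change above ultimately maps to kubelet authentication settings; a hedged sketch of what the charm presumably configures (the CA path is hypothetical):
```shell
# Reject anonymous kubelet requests and verify clients against the cluster CA
# (an assumption about the charm's effect; the CA path is hypothetical).
kubelet --anonymous-auth=false --client-ca-file=/path/to/cluster-ca.crt
```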
Kubernetes Submit Queue 4728a0520f Merge pull request #42018 from luxas/kubeadm_cert_phase
Automatic merge from submit-queue (batch tested with PRs 42365, 42429, 41770, 42018, 35055)

kubeadm: Add --cert-dir, --cert-altnames instead of --api-external-dns-names

**What this PR does / why we need it**:

 - For the beta kubeadm init UX, we need this change
 - Also adds the `kubeadm phase certs selfsign` command that makes the phase invokable independently

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

This PR depends on https://github.com/kubernetes/kubernetes/pull/41897

**Release note**:

```release-note
```
@dmmcquay @pires @jbeda @errordeveloper @mikedanese @deads2k @liggitt
2017-03-03 09:24:46 -08:00
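A sketch of running the certs phase standalone, using the subcommand and flag names that appear in this log's kubeadm entries; the extra SAN value is illustrative:
```shell
# Generate the cluster certificates as a standalone phase
# (names taken from this log's kubeadm entries; the extra SAN is illustrative).
kubeadm phase certs selfsign \
  --cert-dir=/etc/kubernetes/pki \
  --apiserver-cert-extra-sans=kubernetes.example.com
```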
Seth Jennings cc50aa9dfb kubelet: enable qos-level memory request reservation 2017-03-02 15:04:13 -06:00
Kubernetes Submit Queue 4672314029 Merge pull request #41682 from perotinus/unpwandtokens
Automatic merge from submit-queue (batch tested with PRs 41984, 41682, 41924, 41928)

Add options to kubefed telling it to generate HTTP Basic and/or token credentials for the Federated API server

fixes #41265.

**Release notes**:
```release-note
Adds two options to kubefed, `--apiserver-enable-basic-auth` and `--apiserver-enable-token-auth`, which generate an HTTP Basic username/password and a token, respectively, for the Federated API server.
```
2017-03-02 10:51:10 -08:00
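A sketch of the kubefed options from the release note above; the federation name and host cluster context are illustrative, and other flags kubefed init may require (e.g. DNS settings) are omitted:
```shell
# Generate HTTP Basic and token credentials for the federation API server.
kubefed init myfed \
  --host-cluster-context=host-cluster \
  --apiserver-enable-basic-auth=true \
  --apiserver-enable-token-auth=true
```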
Lucas Käldström 579a743482 kubeadm: Add --cert-dir, --apiserver-cert-extra-sans, remove --api-external-dns-names and add the phase command for certs. Also use the CertificatesDir var everywhere instead of the HostPKIPath variable and fix some bugs in certs.go 2017-03-02 20:51:02 +02:00
Kubernetes Submit Queue 98ff34cc38 Merge pull request #42064 from luxas/kubeadm_beta_init_ux
Automatic merge from submit-queue (batch tested with PRs 42128, 42064, 42253, 42309, 42322)

kubeadm: Rename some flags for beta UI and fixup some logic

**What this PR does / why we need it**:

In this PR:
 - `--api-advertise-addresses` becomes `--apiserver-advertise-address`
   - The API Server's logic here is that if the address is `0.0.0.0`, it chooses the host's default interface's address. kubeadm here uses exactly the same logic. This arg is then passed to `--advertise-address`, and the API Server will advertise that one for the service VIP.
 - `--api-port` becomes `--apiserver-bind-port` for clarity

ref the meeting notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
@jbeda @dmmcquay @pires @lukemarsden @dgoodwin @mikedanese
2017-03-02 05:00:50 -08:00
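A sketch of the renamed flags above; the address and port mirror the join examples elsewhere in this log:
```shell
# Advertise a specific address and bind the API server to port 6443
# (values are illustrative, taken from other examples in this log).
kubeadm init \
  --apiserver-advertise-address=192.168.1.115 \
  --apiserver-bind-port=6443
```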
Jonathan MacMillan 3d3941c6d8 Adds support for HTTP basic and token authentication to kubefed. 2017-03-01 11:04:05 -08:00
Solly Ross d6fe1e8764 HPA Controller: Use Custom Metrics API
This commit switches over the HPA controller to use the custom metrics
API.  It also converts the HPA controller to use the generated client
in k8s.io/metrics for the resource metrics API.

In order to enable support, you must enable
`--horizontal-pod-autoscaler-use-rest-clients` on the
controller-manager, which will switch the HPA controller's MetricsClient
implementation over to use the standard rest clients for both custom
metrics and resource metrics.  This requires that at the least resource
metrics API is registered with kube-aggregator, and that the controller
manager is pointed at kube-aggregator.  For this to work, Heapster
must be serving the new-style API server (`--api-server=true`).
2017-03-01 10:21:50 -05:00
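A minimal sketch of the switch described above; all other controller-manager flags are omitted:
```shell
# Use the custom metrics and resource metrics REST clients in the HPA
# controller (requires the metrics APIs to be served behind kube-aggregator,
# per the commit message above).
kube-controller-manager --horizontal-pod-autoscaler-use-rest-clients=true
```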
Kubernetes Submit Queue ed479163fa Merge pull request #42116 from vishh/gpu-experimental-support
Automatic merge from submit-queue

Extend experimental support to multiple Nvidia GPUs

Extended from #28216

```release-note
`--experimental-nvidia-gpus` flag is **replaced** by `Accelerators` alpha feature gate along with  support for multiple Nvidia GPUs. 
To use GPUs, pass `Accelerators=true` as part of `--feature-gates` flag.
Works only with Docker runtime.
```

1. Automated testing for this PR is not possible since creation of clusters with GPUs isn't supported yet in GCP.
1. To test this PR locally, use the node e2e.
```shell
TEST_ARGS='--feature-gates=DynamicKubeletConfig=true' FOCUS=GPU SKIP="" make test-e2e-node
```

TODO:

- [x] Run manual tests
- [x] Add node e2e
- [x] Add unit tests for GPU manager (< 100% coverage)
- [ ] Add unit tests in kubelet package
2017-03-01 04:52:50 -08:00
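A sketch of enabling the alpha GPU support from the release note above on a node (Docker runtime only):
```shell
# Replaces the old --experimental-nvidia-gpus flag with a feature gate.
kubelet --feature-gates=Accelerators=true
```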
Lucas Käldström 5cbefbcbca kubeadm: Rename --api-advertise-addresses to --apiserver-advertise-address and --api-port to --apiserver-bind-port 2017-03-01 14:33:19 +02:00
Kubernetes Submit Queue 089947d996 Merge pull request #41921 from apprenda/kubeadm_join_ux_update_2
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)

kubeadm: join ux changes

**What this PR does / why we need it**: Update `kubeadm join` UX according to https://github.com/kubernetes/community/pull/381

**Which issue this PR fixes**: fixes # https://github.com/kubernetes/kubeadm/issues/176

**Special notes for your reviewer**: /cc @luxas @jbeda 

**Release note**:
```release-note
NONE
```
2017-03-01 04:09:59 -08:00
Vishnu kannan 318f4e102a adding an e2e for GPUs
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2017-02-28 13:42:08 -08:00
Derek McQuay 1d37c6be49 kubeadm: join ux changes 2017-02-28 11:06:08 -08:00
Irfan Ur Rehman b1bb51b6e8 [Federation][kubefed] Remove unnecessary flags from init and use overrides instead 2017-02-28 16:23:54 +05:30
Vishnu kannan b86882955b update flags script
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2017-02-27 21:24:45 -08:00
Vishnu Kannan cc5f5474d5 add support for node allocatable phase 2 to kubelet
Signed-off-by: Vishnu Kannan <vishnuk@google.com>
2017-02-27 21:24:44 -08:00