Commit Graph

295 Commits (2fbd47da561cb80e7f01f38da6a556b63b2579bd)

Author SHA1 Message Date
shun-miyoshi-com 4820a6eadd fix kubemark, juju, and libvirt-coreos README.md (from minion to node) 2017-10-10 06:45:15 +00:00
Beata Skiba 1d94658912 Add launching Cluster Autoscaler in Kubemark 2017-10-09 11:29:15 +02:00
Zihong Zheng 2edbf83f89 Allow kubemark to use custom network for instance creation 2017-10-06 11:31:39 -07:00
Kubernetes Submit Queue f48eccad9e Merge pull request #53053 from shyamjvs/enable-audit-logging-kubemark
Automatic merge from submit-queue (batch tested with PRs 51765, 53053, 52771, 52860, 53284). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add audit-logging, feature-gates & few admission plugins to kubemark

To make kubemark match real cluster settings. Also includes a few other settings like request-timeout, etcd-quorum, etc.

Fixes https://github.com/kubernetes/kubernetes/issues/53021
Related https://github.com/kubernetes/kubernetes/issues/51899 https://github.com/kubernetes/kubernetes/issues/44701

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @smarterclayton
2017-10-03 09:02:32 -07:00
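
For illustration, wiring such settings into the kubemark master startup would look roughly like this; the variable name and flag values below are assumptions for the sketch, not the PR's actual diff (the flags themselves are real kube-apiserver flags):

```bash
# Sketch only: kube-apiserver flags of the kind this PR adds to kubemark.
# ENABLE_APISERVER_ADVANCED_AUDIT and the values shown are illustrative.
APISERVER_FLAGS=""
if [[ "${ENABLE_APISERVER_ADVANCED_AUDIT:-false}" == "true" ]]; then
  APISERVER_FLAGS+=" --audit-log-path=/var/log/kube-apiserver-audit.log"
  APISERVER_FLAGS+=" --audit-log-maxage=0 --audit-log-maxbackup=0"
fi
APISERVER_FLAGS+=" --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"
APISERVER_FLAGS+=" --request-timeout=300s"
echo "kube-apiserver${APISERVER_FLAGS}"
```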
Hongchao Deng 39e5a56691 etcd: update version to 3.1.10 2017-10-02 12:27:46 -07:00
Shyam Jeedigunta eadce7a180 Add audit-logging, feature-gates & few admission plugins to kubemark 2017-10-02 12:13:52 +02:00
Mike Danese 4d2733d801 gce: remove compute-rw, see what breaks 2017-09-29 12:00:02 -07:00
Walter Fender d8c8b8d65b Enabling aggregator functionality on kubemark, gce
Enabling full aggregator functionality in kubemark tests.
This includes configuring it to work on GCE (we seem to assume GCE in our kubemark tests).
It also includes setting up the relevant security and auth config.
Removing unneeded reference to CA key for MHBauer.
Fixed to pull the "parsed" values for the certs.
Fix from shyamjvs.
2017-09-05 13:01:05 -07:00
Shyam Jeedigunta c483c13aee Correct logdump logic for kubemark master 2017-09-04 12:59:36 +02:00
Shyam Jeedigunta 1f6809b746 Only list hollow-node pods while trying to count them 2017-08-30 14:02:33 +02:00
Kubernetes Submit Queue c041567b5a Merge pull request #46597 from dixudx/implement_proposal_34058
Automatic merge from submit-queue (batch tested with PRs 51113, 46597, 50397, 51052, 51166)

implement proposal 34058: hostPath volume type

**What this PR does / why we need it**:
implement proposal #34058

**Which issue this PR fixes** : fixes #46549

**Special notes for your reviewer**:
cc @thockin @luxas @euank PTAL
2017-08-23 23:16:27 -07:00
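
For context, proposal 34058 adds a `type` field to hostPath volumes; a minimal sketch of its use (the pod name, image, and paths are made up for illustration):

```bash
# Illustrative pod using the hostPath `type` field; names and paths
# are placeholders, not taken from the PR.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-dir
      mountPath: /data
  volumes:
  - name: host-dir
    hostPath:
      path: /var/local/data
      type: DirectoryOrCreate  # create the host directory if absent
EOF
```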
Di Xu 6f74af94ef update e2e tests and yaml files 2017-08-23 14:05:21 +08:00
Chen Rong d23df051e1 update to rbac v1 in yaml file 2017-08-21 17:29:37 +08:00
Kubernetes Submit Queue 276bfb8cf1 Merge pull request #50506 from bskiba/external_config
Automatic merge from submit-queue (batch tested with PRs 50485, 49951, 50508, 50511, 50506)

Pass config to external Kubemark cluster in e2e tests

When cluster autoscaler is used in kubemark tests,
pass default kubeconfig as external cluster config.

@shyamjvs @gmarek 

**Release note**:
```
NONE
```
2017-08-11 20:38:01 -07:00
Jeff Grafton a7f49c906d Use buildozer to delete licenses() rules except under third_party/ 2017-08-11 09:32:39 -07:00
Beata Skiba ec73d28e1b Pass config to external Kubemark cluster in e2e tests
When cluster autoscaler is used in kubemark tests,
pass default kubeconfig as external cluster config.
2017-08-11 14:07:07 +02:00
Shyam Jeedigunta 4a57d68da0 Reduce hollow-kubelet cpu request 2017-08-09 14:30:10 +02:00
Shyam Jeedigunta 55527036b4 Fix bug in command retrying in kubemark 2017-07-24 17:18:50 +02:00
Shyam JVS f81d9ed9b9 Reduce hollow proxy mem/node 2017-07-21 03:57:03 +02:00
Shyam Jeedigunta 5f8cb3d9ff Enable logexporter mechanism to dump logs from k8s nodes to GCS directly 2017-07-12 14:39:49 +02:00
Ryan Hallisey f9354bf6b7 Add a README to the pre-existing provider 2017-07-05 09:14:58 -04:00
Ryan Hallisey 82e1d208f6 Launch kubemark with an existing Kubemark Master
In order to expand the use of kubemark, allow developers to
use kubemark with a pre-existing Kubemark master.
2017-07-05 09:14:53 -04:00
gmarek 64f6606833 Make big clusters work again after introduction of subnets 2017-06-26 21:27:04 +02:00
Ajit Kumar caff16c678 Bump up npd version to v0.4.1 2017-06-22 13:13:50 -07:00
Kubernetes Submit Queue 037330c365 Merge pull request #47470 from gyuho/kubemark-etcd
Automatic merge from submit-queue

test/kubemark/resources: configure custom etcd endpoints

We want to stress our own etcd cluster with Kubernetes
workloads, using kubemark e2e tests. This PR adds a new
environment variable 'ETCD_SERVERS' to configure custom
etcd endpoints.

/cc @xiang90 @hongchaodeng
2017-06-14 12:10:06 -07:00
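
Presumably used along these lines (the endpoint addresses are placeholders):

```bash
# Point the kubemark master at an externally managed etcd cluster via the
# new ETCD_SERVERS variable; the endpoints below are placeholders.
export ETCD_SERVERS="http://10.0.0.1:2379,http://10.0.0.2:2379"
test/kubemark/start-kubemark.sh
```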
Kubernetes Submit Queue d8983699e0 Merge pull request #47389 from ixdy/kube-addon-manager-update
Automatic merge from submit-queue (batch tested with PRs 47302, 47389, 47402, 47468, 47459)

Update to kube-addon-manager:v6.4-beta.2: kubectl v1.6.4 and refreshed base images

**What this PR does / why we need it**: refreshes base images for kube-addon-manager with fixes for CVE-2016-9841 and CVE-2016-9843.

x-ref https://github.com/kubernetes/kubernetes/issues/47386

**Special notes for your reviewer**: the updated images are not yet pushed, so tests will fail until that's done.

**Release note**:

```release-note
```

/assign @MrHohn
2017-06-13 23:37:43 -07:00
Gyu-Ho Lee ab1ebbc79c test/kubemark/resources: configure custom etcd endpoints
We want to stress our own etcd cluster with Kubernetes
workloads, using kubemark e2e tests. This PR adds a new
environment variable 'ETCD_SERVERS' to configure custom
etcd endpoints.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-13 19:08:53 -07:00
Jeff Grafton eddf98d2c8 Update to kube-addon-manager:v6.4-beta.2: new kubectl and base images 2017-06-12 19:28:23 -07:00
Jordan Liggitt 1d9855474d Enable Node authorizer and NodeRestriction admission in kubemark 2017-06-09 10:17:08 -04:00
Random-Liu 1d3979190c Bump up npd version to v0.4.0 2017-06-06 16:30:02 -07:00
Kubernetes Submit Queue c0407972e9 Merge pull request #46938 from shyamjvs/no-reprinting-gcloud-result
Automatic merge from submit-queue

Avoid double printing output of gcloud commands in kubemark

Just noticed we were unnecessarily echoing the result again.

/cc @wojtek-t
2017-06-06 04:07:55 -07:00
Kubernetes Submit Queue f842bb9987 Merge pull request #46802 from shyamjvs/npd-kernel-config
Automatic merge from submit-queue (batch tested with PRs 46972, 42829, 46799, 46802, 46844)

Add KernelDeadlock condition to hollow NPD

Ref https://github.com/kubernetes/kubernetes/issues/44701

/cc @wojtek-t @gmarek
2017-06-05 17:46:55 -07:00
Shyam Jeedigunta 163f1de5ed Avoid double printing output of gcloud commands in kubemark 2017-06-04 20:07:36 +02:00
Shyam Jeedigunta b655953e21 Enable DefaultTolerationSeconds and PodPreset admission plugins for kubemark 2017-06-04 19:52:57 +02:00
Clayton Coleman 4ce3907639 Add Initializers to all admission control paths by default 2017-06-02 22:09:04 -04:00
Shyam Jeedigunta 23ef37d9ce Add KernelDeadlock condition to hollow NPD 2017-06-01 22:23:28 +02:00
Sen Lu d237e54a24 Switch gcloud compute copy-files to scp 2017-05-30 10:19:33 -07:00
Shyam Jeedigunta 02092312bb Make kubemark scripts fail fast 2017-05-30 11:59:13 +02:00
Kubernetes Submit Queue 52337d5db6 Merge pull request #46502 from gmarek/run_kubemark_tests
Automatic merge from submit-queue

Fix kubemark/run-e2e-tests.sh

This should make most common arguments work.

cc @shyamjvs
2017-05-29 09:35:01 -07:00
gmarek 0ca6aeb95c Fix kubemark/run-e2e-tests.sh 2017-05-29 15:20:54 +02:00
Shyam Jeedigunta b72cbc074c chmod +x kubemark scripts 2017-05-26 22:03:12 +02:00
Kubernetes Submit Queue c60bc53921 Merge pull request #46434 from shyamjvs/kubemark-config-upload
Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)

Copy kubeconfig to kubemark master

This should save the effort of digging through the Jenkins agent and its container to get the kubeconfig.
Ideally we should have kubectl working directly on the kubemark master, but I'm facing some issues due to the older version of kubectl present by default on the node.

cc @wojtek-t @gmarek
2017-05-25 21:39:59 -07:00
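
The copy itself is a one-liner; a sketch assuming GCE and an instance named kubemark-master (the instance name and destination path are assumptions):

```bash
# Illustrative: push the local kubeconfig to the kubemark master so kubectl
# can eventually be used there directly. Names and paths are assumptions.
gcloud compute scp "${HOME}/.kube/config" \
  "kubemark-master:/home/kubernetes/kubemark-kubeconfig" \
  --zone="${KUBE_GCE_ZONE:-us-central1-b}"
```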
Shyam Jeedigunta 8f2b4c3b33 Copy kubeconfig to kubemark master 2017-05-25 14:55:28 +02:00
gmarek 2437cf4d59 fix typo in start-kubemark 2017-05-25 11:48:01 +02:00
Kubernetes Submit Queue 1e2105808b Merge pull request #45136 from vishh/cos-nvidia-driver-install
Automatic merge from submit-queue

Enable "kick the tires" support for Nvidia GPUs in COS

This PR provides an installation daemonset that will install Nvidia CUDA drivers on Google Container Optimized OS (COS).
User space libraries and debug utilities from the Nvidia driver installation are made available in a special directory on the host:
* `/home/kubernetes/bin/nvidia/lib` for libraries
* `/home/kubernetes/bin/nvidia/bin` for debug utilities

Containers that run CUDA applications on COS are expected to consume the libraries and debug utilities (if necessary) from the host directories using `HostPath` volumes.

Note: This solution requires updating Pod Spec across distros. This is a known issue and will be addressed in the future. Until then CUDA workloads will not be portable.

This PR updates the COS base image version to m59. This is coupled with this PR for the following reasons:
1. Driver installation requires disabling a kernel feature in COS. 
2. The kernel API for disabling this interface changed across COS versions
3. If the COS image update is not handled in this PR, then a subsequent COS image update will break GPU integration and will require an update to the installation scripts in this PR.
4. Instead of having to post `3` PRs, one each for adding the basic installer, updating COS to m59, and then updating the installer again, this PR combines all the changes to reduce review overhead and latency, and additional noise that will be created when GPU tests break.

**Try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1" cluster/kube-up.sh`
4. `kubectl create -f cluster/gce/gci/nvidia-gpus/cos-installer-daemonset.yaml`
5. Run your CUDA app in a pod.

**Another option is to run a e2e manually to try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1"`
4. `go run hack/e2e.go -- --up` 
5. `hack/ginkgo-e2e.sh --ginkgo.focus="\[Feature:GPU\]"`
The e2e will install the drivers automatically using the daemonset and then run test workloads to validate driver integration.

TODO:
- [x] Update COS image version to m59 release.
- [x] Remove sleep from the install script and add it to the daemonset
- [x] Add an e2e that will run the daemonset and run a sample CUDA app on COS clusters.
- [x] Setup a test project with necessary quota to run GPU tests against HEAD to start with https://github.com/kubernetes/test-infra/pull/2759
- [x] Update node e2e serial configs to install nvidia drivers on COS by default
2017-05-23 10:46:10 -07:00
Kubernetes Submit Queue 03ba1324cf Merge pull request #46224 from gmarek/kubemark_heapster
Automatic merge from submit-queue (batch tested with PRs 46133, 46211, 46224, 46205, 45910)

Make CPU request for heapster in kubemark scale with the number of Nodes
2017-05-22 15:50:03 -07:00
gmarek 27fc7be396 Make CPU request for heapster in kubemark scale with the number of Nodes 2017-05-22 16:20:27 +02:00
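
The idea reduces to deriving the request from NUM_NODES; a sketch with made-up constants (the base and per-node increment are not the PR's actual numbers):

```bash
# Sketch: scale heapster's CPU request with cluster size. The base (80m)
# and per-node increment (0.5m) are illustrative, not the PR's constants.
NUM_NODES="${NUM_NODES:-100}"
HEAPSTER_CPU_MILLI=$((80 + NUM_NODES / 2))
echo "heapster CPU request: ${HEAPSTER_CPU_MILLI}m"
```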
Vishnu kannan 86b5edb79a Update COS version to m59
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2017-05-20 21:17:19 -07:00
Shyam Jeedigunta 360054a75f Add script to dump kubemark master logs 2017-05-20 13:12:38 +02:00
Kubernetes Submit Queue a1c2db2fec Merge pull request #45950 from shyamjvs/revert-proxier
Automatic merge from submit-queue

Make real proxier in hollow-proxy optional (default=true)

Ref https://github.com/kubernetes/kubernetes/pull/45622
This allows using real proxier for hollow proxy, but we use the fake one by default.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-05-18 07:55:09 -07:00
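
In hollow-node terms this is just a boolean flag; a sketch where the flag name and invocation are assumptions based on the PR title:

```bash
# Assumed invocation for illustration: run a hollow-node morphed into a
# proxy, selecting the real proxier implementation explicitly.
./hollow-node --morph=proxy --use-real-proxier=true
```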
Shyam Jeedigunta 804a4f558c Make usage of real proxier in hollow-proxy optional (default=true) 2017-05-18 14:30:12 +02:00
Michael Taufen 2ee2ec5e21 Remove the deprecated --babysit-daemons kubelet flag 2017-05-17 09:08:57 -07:00
Shyam Jeedigunta 87cde074f8 Minor fix in run-gcloud-compute-with-retries output piping 2017-05-15 13:39:10 +02:00
Shyam Jeedigunta 0f1d5e6e36 Remove kubemark.sh as we don't use pod IP from it anymore 2017-05-12 13:47:13 +02:00
Kubernetes Submit Queue e939019900 Merge pull request #45604 from shyamjvs/start-km-master-fix
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)

Minor bug fix in start-kubemark-master script

cc @wojtek-t @gmarek
2017-05-10 21:34:41 -07:00
Shyam Jeedigunta 1078e9580c Minor bug fix in start-kubemark-master script 2017-05-10 19:51:14 +02:00
Shyam Jeedigunta 1fc831e0ec Fix bug in hollow-node deletion in stop-kubemark script 2017-05-10 12:57:43 +02:00
Shyam Jeedigunta 0759289dcf Stream output of run-gcloud-compute-with-retries to stdout in realtime 2017-05-09 13:44:48 +02:00
Shyam Jeedigunta 2e800eef20 Fix add-metadata command for kubemark master 2017-05-08 20:44:20 +02:00
Shyam Jeedigunta efc84378b8 Fix gcloud retries cmd to rightly capture return code 2017-05-08 19:34:26 +02:00
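
These retry-wrapper fixes together suggest a function shaped roughly like this (a sketch, not the actual run-gcloud-compute-with-retries implementation):

```bash
# Sketch of a retry wrapper that streams gcloud output in real time while
# still capturing gcloud's own exit code (not tee's) via PIPESTATUS.
run-gcloud-compute-with-retries() {
  local attempt rc
  for attempt in 1 2 3; do
    gcloud compute "$@" 2>&1 | tee -a "${GCLOUD_LOG:-/tmp/gcloud.log}"
    rc="${PIPESTATUS[0]}"
    [[ "${rc}" == "0" ]] && return 0
    echo "Attempt ${attempt} failed (rc=${rc}); retrying..." >&2
    sleep $(( attempt * 5 ))
  done
  return "${rc}"
}
```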
Shyam Jeedigunta 395d3bf3b4 Move hollow-node's initContainer from annotation to field 2017-05-04 11:41:33 +02:00
Dan Williams b3705b6e35 hack/cluster: consolidate cluster/ utils to hack/lib/util.sh
Per Clayton's suggestion, move stuff from cluster/lib/util.sh to
hack/lib/util.sh.  Also consolidate ensure-temp-dir and use the
hack/lib/util.sh implementation rather than cluster/common.sh.
2017-03-30 22:34:46 -05:00
Kubernetes Submit Queue 4f606b9c8d Merge pull request #42820 from MrHohn/addon-kubemark-v6.4-beta.1
Automatic merge from submit-queue (batch tested with PRs 42672, 42770, 42818, 42820, 40849)

kubemark test: Bump addon-manager to v6.4-beta.1

Follow up PR of #42760. This PR bumps addon-manager to v6.4-beta.1 for kubemark test.

**Release note**:

```release-note
NONE
```
2017-03-25 14:27:27 -07:00
Piotr Szczesniak 69fd7aafd0 Bumped Heapster to v1.3.0 2017-03-17 15:45:52 +01:00
Random-Liu c4b3fd4e63 Update npd to the official v0.3.0 release. 2017-03-15 14:26:12 -07:00
Zihong Zheng 34b8d008ec kubemark test: Bump addon-manager to v6.4-beta.1 2017-03-09 10:13:07 -08:00
Kubernetes Submit Queue c6d9d9c5ad Merge pull request #42456 from Random-Liu/update-npd-in-kubemark
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

Update npd in kubemark since #42201 is merged.

Revert https://github.com/kubernetes/kubernetes/pull/41716.

#42201 has been merged, and #41713 is fixed. Now we could retry update npd in kubemark.

/cc @shyamjvs @wojtek-t @dchen1107
2017-03-04 00:17:40 -08:00
Random-Liu 3f30532b0f Update npd in kubemark since #42201 is merged. 2017-03-02 16:29:24 -08:00
gmarek 30b9490d66 Add alsologtostderr flag to hollow node 2017-03-03 01:29:02 +01:00
Kubernetes Submit Queue db5e85af5f Merge pull request #41980 from shyamjvs/one-more-time
Automatic merge from submit-queue (batch tested with PRs 41980, 42192, 42223, 41822, 42048)

Modified kubemark startup scripts to restore master on reboot

Fixes #41735 

As discussed in the issue, modified the scripts to satisfy the conditions of restoring the master env, running non-idempotent operations only the first time, and persisting important data like pki/auth files on a PD.
Also attached `start-kubemark-master.sh` as startup-script metadata to master instance (on GCE) so that it is called automatically on each boot.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-03-02 00:59:13 -08:00
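
On GCE, attaching the script so it runs on every boot is a single metadata command; a sketch (the instance name and script path are assumptions):

```bash
# Illustrative: make start-kubemark-master.sh run on each boot by attaching
# it as GCE startup-script metadata. Instance name and path are assumptions.
gcloud compute instances add-metadata "kubemark-master" \
  --zone="${KUBE_GCE_ZONE:-us-central1-b}" \
  --metadata-from-file startup-script=test/kubemark/resources/start-kubemark-master.sh
```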
Shyam JVS ab78b20bc1 Make kubemark hollow node logging verbosity configurable 2017-03-01 20:24:30 +01:00
Kubernetes Submit Queue 32d59cbb2f Merge pull request #42201 from shyamjvs/inotify-limit
Automatic merge from submit-queue (batch tested with PRs 42316, 41618, 42201, 42113, 42191)

[Kubemark] Add init container in hollow node for setting inotify limit of node to 200

Fixes #41713 

Along with adding the init container, I also changed the manifest to YAML, as otherwise the entire init container annotation would have to be on a single line (with escaped characters), since JSON doesn't allow multi-line strings.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @Random-Liu
2017-03-01 07:48:20 -08:00
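
The init container described here boils down to a privileged sysctl call before the hollow-node containers start; a sketch of the relevant manifest fragment (the surrounding pod spec is omitted and field placement is illustrative):

```bash
# Sketch of the manifest fragment: a privileged init container that raises
# the node's inotify limit to 200 before the hollow-node containers start.
cat <<'EOF'
initContainers:
- name: init-inotify-limit
  image: busybox
  command: ["sysctl", "-w", "fs.inotify.max_user_instances=200"]
  securityContext:
    privileged: true
EOF
```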
Kubernetes Submit Queue 1a35155025 Merge pull request #41973 from wojtek-t/build_non_alpha_3_0_17_etcd_image
Automatic merge from submit-queue (batch tested with PRs 42162, 41973, 42015, 42115, 41923)

Release 3.0.17 etcd image
2017-02-28 22:05:59 -08:00
Shyam Jeedigunta 4574900634 Modified kubemark startup scripts to restore master on reboots 2017-02-28 19:51:00 +01:00
Kubernetes Submit Queue dac0296f0b Merge pull request #42093 from liggitt/avoid-fake-node-names
Automatic merge from submit-queue (batch tested with PRs 40746, 41699, 42108, 42174, 42093)

Avoid fake node names in user info

Node usernames should follow the format `system:node:<node-name>`,
but if we don't know the node name, it's worse to put a fake one in.

In the future, we plan to have a dedicated node authorizer, which would
start rejecting requests from a user with a bogus node name like this.

The right approach is to either mint correct credentials per node, or use node bootstrapping so it requests a correct client certificate itself.
2017-02-28 07:51:33 -08:00
Shyam JVS 75e602ca28 Convert hollow-node manifest to yaml and add init container for setting inotify limit 2017-02-28 00:53:36 +01:00
Wojciech Tyczynski 74266e0dc0 Release 3.0.17 etcd image 2017-02-27 16:23:44 +01:00
Kubernetes Submit Queue 61a2bd64a2 Merge pull request #42054 from fejta/kubemark
Automatic merge from submit-queue (batch tested with PRs 41962, 42055, 42062, 42019, 42054)

Update flag to --check-version-skew instead of --check_version_skew

https://github.com/kubernetes/test-infra/issues/2012

Also add a `--` to send the flags to kubetest without triggering a warning.
2017-02-27 00:17:00 -08:00
Jordan Liggitt 34ac0dc302 Avoid fake node names in user info 2017-02-25 02:09:55 -05:00
Zihong Zheng 64ba52ae71 Bumps addon-manager to v6.4-alpha.3 and updates template files 2017-02-24 16:52:31 -08:00
Erick Fejta db5a355336 Update flag to --check-version-skew instead of --check_version_skew 2017-02-24 07:49:55 -08:00
Kubernetes Submit Queue ac293b857c Merge pull request #41858 from shyamjvs/npd-logs
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)

[Kubemark] Fixed hollow-npd container command to log to file

Fixes #41802 

cc @wojtek-t @gmarek @Random-Liu
2017-02-23 07:54:40 -08:00
Wojciech Tyczynski b70e392161 Update clusters to use 3.0.17 etcd 2017-02-23 10:08:50 +01:00
Kubernetes Submit Queue fe34705f8a Merge pull request #41587 from MrHohn/addon-manager-fix-hpa
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)

Update kubectl in addon-manager to use HPA in autoscaling/v1

Addon-manager is broken since HPA objects were removed from extensions api group.

Came across the logs from [the latest addon-manager on Jenkins](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/4290/artifacts/bootstrap-e2e-master/kube-addon-manager.log):
```
INFO: == Entering periodical apply loop at 2017-02-16T17:33:37+0000 ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:38+0000. 2 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:46+0000. 1 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:53+0000. 0 tries remaining. ==
WRN: == Kubernetes addon update completed with errors at 2017-02-16T17:33:58+0000 ==
```

Notice that this commit (f66679a4e9), which came in two weeks ago, removed HorizontalPodAutoscaler from extensions/v1beta1.

Addon-manager is now only partially functional: it can successfully create and update addons, but fails to prune objects, which means upgrade tests may mostly fail.

Pushed another version of addon-manager with kubectl v1.6.0-alpha.2 ([released 2 days ago](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0-alpha.2)) as a fix, including the images below:
- gcr.io/google-containers/kube-addon-manager:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-amd64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-s390x:v6.4-alpha.2

@mikedanese 

cc @wojtek-t @shyamjvs
2017-02-22 08:12:46 -08:00
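
The failing periodic apply in the log above is, stripped of its wrapper, this invocation; running it by hand against an affected cluster reproduces the pruning error:

```bash
# The addon-manager's periodic apply, as seen in the log above; runnable
# by hand to reproduce the HPA pruning failure on an affected cluster.
kubectl apply --namespace=kube-system -f /etc/kubernetes/addons \
  --prune=true -l kubernetes.io/cluster-service=true --recursive
```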
Wojciech Tyczynski 6d303d3329 Increase cpu for kubeproxy in kubemark in large clusters 2017-02-22 08:44:34 +01:00
Shyam Jeedigunta f40b5eed5d [Kubemark] Fixed hollow-npd container command to log to file 2017-02-22 02:38:38 +01:00
Kubernetes Submit Queue 70c9eebd21 Merge pull request #41739 from shyamjvs/hollow-node-logs
Automatic merge from submit-queue (batch tested with PRs 41706, 39063, 41330, 41739, 41576)

[Kubemark] Add option to log hollow-node logs

Ref https://github.com/kubernetes/kubernetes/issues/41613

Added an option to log kubemark hollow-node logs which includes kubelet, kubeproxy and npd logs for each hollow-node.
Setting the env var `ENABLE_HOLLOW_NODE_LOGS=true` should now enable logging for tests.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @yujuhong @Random-Liu
2017-02-21 02:24:43 -08:00
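
Opting in is just the environment variable named above:

```bash
# As described above: enable collection of per-hollow-node kubelet,
# kubeproxy, and npd logs for a kubemark test run.
export ENABLE_HOLLOW_NODE_LOGS=true
test/kubemark/run-e2e-tests.sh
```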
Zihong Zheng 2c8e89820a Update kubectl in addon-manager to use HPA in autoscaling/v1 instead of extensions/v1beta1 2017-02-20 10:49:10 -08:00
Kubernetes Submit Queue 5fb6b91faf Merge pull request #41751 from shyamjvs/fix-kubemark-default-suite
Automatic merge from submit-queue

Fix kubemark default e2e test suite's name

Seems like the suite "[Feature:performance]" doesn't trigger tests anymore. Changed it to "[Feature:Performance]" in kubemark run-e2e-tests.sh.

cc @wojtek-t @gmarek
2017-02-20 09:27:22 -08:00
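
The corrected focus regex can be exercised directly; a sketch using the standard e2e runner rather than the kubemark script's internals:

```bash
# Run the performance suite with the corrected (capitalized) focus tag.
hack/ginkgo-e2e.sh --ginkgo.focus="\[Feature:Performance\]"
```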
Shyam Jeedigunta 7802c82671 Fix kubemark default e2e test suite's name 2017-02-20 16:08:28 +01:00
Shyam Jeedigunta ed0ab3cd8e [Kubemark] Add option to log hollow-node logs 2017-02-20 11:52:49 +01:00
Wojciech Tyczynski 4426156aa6 More resources for hollowproxy in large kubemarks 2017-02-20 09:26:17 +01:00
Random-Liu 47fc1d684d Revert the npd change in kubemark. 2017-02-19 04:14:30 -08:00
Random-Liu cd194bd9cc Fix kubemark hollow-npd. 2017-02-18 21:01:56 -08:00
Random-Liu d40c0a7099 Add standalone npd on GCI. 2017-02-17 16:18:08 -08:00
Shyam Jeedigunta 3c9a8a3b68 Modify kubemark run-e2e-tests.sh to run the right command based on whether it's in a docker or normal environment 2017-02-17 00:51:48 +01:00
Shyam Jeedigunta 4e43de4fc2 Bump addon-manager version to v6.4-alpha.1 in kubemark 2017-02-15 20:11:31 +01:00
Jordan Liggitt d69a75d50f Mount kubeconfig file into kube-scheduler in kubemark 2017-02-15 10:03:57 -05:00
Kubernetes Submit Queue 5cc2f73bc9 Merge pull request #41134 from shyamjvs/refactor-final-blow
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)

Refactored kubemark code into provider-specific and provider-independent parts [Part-3]

Fixes #38967
Applying final part of the changes in PR #39033 (which refactored kubemark code completely). The changes included in this PR are:

- Removed `test/kubemark/common.sh` and moved relevant parts of its code to the right places in start-kubemark/stop-kubemark scripts.
- Added DOCKER_REGISTRY, PROJECT, KUBEMARK_IMAGE_MAKE_TARGET variables to `/test/kubemark/cloud-provider-config.sh` to make the kubemark image push location variable wrt provider.
- Removed get-real-pod-for-hollow-node.sh as it doesn't seem to do anything useful.

@kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-02-15 05:58:15 -08:00
Kubernetes Submit Queue e4a4fe4a89 Merge pull request #41285 from liggitt/kube-scheduler-role
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)

Secure kube-scheduler

This PR:
* Adds a bootstrap `system:kube-scheduler` clusterrole
* Adds a bootstrap clusterrolebinding to the `system:kube-scheduler` user
* Sets up a kubeconfig for kube-scheduler on GCE (following the controller-manager pattern)
* Switches kube-scheduler to running with kubeconfig against secured port (salt changes, beware)
* Removes superuser permissions from kube-scheduler in local-up-cluster.sh
* Adds detailed RBAC deny logging

```release-note
On kube-up.sh clusters on GCE, kube-scheduler now contacts the API on the secured port.
```
2017-02-15 03:25:10 -08:00
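
The secured setup this PR describes amounts to pointing kube-scheduler at a kubeconfig instead of the insecure localhost port; a sketch (the path follows GCE conventions and is an assumption):

```bash
# Sketch: kube-scheduler contacting the apiserver through a kubeconfig
# (secured port) rather than insecure localhost; the path is illustrative.
kube-scheduler \
  --kubeconfig=/etc/srv/kubernetes/kube-scheduler/kubeconfig \
  --leader-elect=true
```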