Commit Graph

120 Commits (8c28d3f63c360d43d00c80091c6aac7f19afc259)

Author SHA1 Message Date
Shyam Jeedigunta 02092312bb Make kubemark scripts fail fast 2017-05-30 11:59:13 +02:00
Shyam Jeedigunta b72cbc074c chmod +x kubemark scripts 2017-05-26 22:03:12 +02:00
gmarek 27fc7be396 Make CPU request for heapster in kubemark scale with the number of Nodes 2017-05-22 16:20:27 +02:00
Kubernetes Submit Queue a1c2db2fec Merge pull request #45950 from shyamjvs/revert-proxier
Automatic merge from submit-queue

Make real proxier in hollow-proxy optional (default=true)

Ref https://github.com/kubernetes/kubernetes/pull/45622
This allows using a real proxier for the hollow proxy, but the fake one is used by default.
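
A minimal usage sketch, assuming the toggle is surfaced to the kubemark startup script as an env var (the name `USE_REAL_PROXIER` is an assumption, not confirmed by this PR):

```bash
# Sketch only: the env var name is an assumption.
# Toggle whether hollow-proxy runs the real proxier logic or the fake one:
USE_REAL_PROXIER=true ./test/kubemark/start-kubemark.sh   # real proxier
USE_REAL_PROXIER=false ./test/kubemark/start-kubemark.sh  # fake proxier
```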

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-05-18 07:55:09 -07:00
Shyam Jeedigunta 804a4f558c Make usage of real proxier in hollow-proxy optional (default=true) 2017-05-18 14:30:12 +02:00
Michael Taufen 2ee2ec5e21 Remove the deprecated --babysit-daemons kubelet flag 2017-05-17 09:08:57 -07:00
Shyam Jeedigunta 0f1d5e6e36 Remove kubemark.sh as we don't use pod IP from it anymore 2017-05-12 13:47:13 +02:00
Shyam Jeedigunta 1078e9580c Minor bug fix in start-kubemark-master script 2017-05-10 19:51:14 +02:00
Shyam Jeedigunta 395d3bf3b4 Move hollow-node's initContainer from annotation to field 2017-05-04 11:41:33 +02:00
Kubernetes Submit Queue 4f606b9c8d Merge pull request #42820 from MrHohn/addon-kubemark-v6.4-beta.1
Automatic merge from submit-queue (batch tested with PRs 42672, 42770, 42818, 42820, 40849)

kubemark test: Bump addon-manager to v6.4-beta.1

Follow-up to #42760. This PR bumps addon-manager to v6.4-beta.1 for the kubemark test.

**Release note**:

```release-note
NONE
```
2017-03-25 14:27:27 -07:00
Piotr Szczesniak 69fd7aafd0 Bumped Heapster to v1.3.0 2017-03-17 15:45:52 +01:00
Random-Liu c4b3fd4e63 Update npd to the official v0.3.0 release. 2017-03-15 14:26:12 -07:00
Zihong Zheng 34b8d008ec kubemark test: Bump addon-manager to v6.4-beta.1 2017-03-09 10:13:07 -08:00
Kubernetes Submit Queue c6d9d9c5ad Merge pull request #42456 from Random-Liu/update-npd-in-kubemark
Automatic merge from submit-queue (batch tested with PRs 42456, 42457, 42414, 42480, 42370)

Update npd in kubemark since #42201 is merged.

Revert https://github.com/kubernetes/kubernetes/pull/41716.

#42201 has been merged, and #41713 is fixed. Now we can retry updating npd in kubemark.

/cc @shyamjvs @wojtek-t @dchen1107
2017-03-04 00:17:40 -08:00
Random-Liu 3f30532b0f Update npd in kubemark since #42201 is merged. 2017-03-02 16:29:24 -08:00
gmarek 30b9490d66 Add alsologtostderr flag to hollow node 2017-03-03 01:29:02 +01:00
Kubernetes Submit Queue db5e85af5f Merge pull request #41980 from shyamjvs/one-more-time
Automatic merge from submit-queue (batch tested with PRs 41980, 42192, 42223, 41822, 42048)

Modified kubemark startup scripts to restore master on reboot

Fixes #41735 

As discussed in the issue, modified the scripts to satisfy the conditions of restoring the master env, running non-idempotent operations only on first boot, and persisting important data like pki/auth files on a PD.
Also attached `start-kubemark-master.sh` as startup-script metadata to the master instance (on GCE) so that it is called automatically on each boot.
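
For reference, attaching the script as startup-script metadata on GCE looks roughly like this (a sketch; the instance name and zone are placeholders, and the script path follows this repo's test/kubemark/resources/ layout):

```bash
# Re-run start-kubemark-master.sh automatically on every boot of the master VM.
gcloud compute instances add-metadata kubemark-master \
  --zone us-central1-b \
  --metadata-from-file startup-script=test/kubemark/resources/start-kubemark-master.sh
```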

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-03-02 00:59:13 -08:00
Shyam JVS ab78b20bc1 Make kubemark hollow node logging verbosity configurable 2017-03-01 20:24:30 +01:00
Shyam Jeedigunta 4574900634 Modified kubemark startup scripts to restore master on reboots 2017-02-28 19:51:00 +01:00
Shyam JVS 75e602ca28 Convert hollow-node manifest to yaml and add init container for setting inotify limit 2017-02-28 00:53:36 +01:00
Zihong Zheng 64ba52ae71 Bumps addon-manager to v6.4-alpha.3 and updates template files 2017-02-24 16:52:31 -08:00
Kubernetes Submit Queue ac293b857c Merge pull request #41858 from shyamjvs/npd-logs
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)

[Kubemark] Fixed hollow-npd container command to log to file

Fixes #41802 

cc @wojtek-t @gmarek @Random-Liu
2017-02-23 07:54:40 -08:00
Kubernetes Submit Queue fe34705f8a Merge pull request #41587 from MrHohn/addon-manager-fix-hpa
Automatic merge from submit-queue (batch tested with PRs 41349, 41532, 41256, 41587, 41657)

Update kubectl in addon-manager to use HPA in autoscaling/v1

Addon-manager is broken since HPA objects were removed from the extensions API group.

Came across the logs from [the latest addon-manager on Jenkins](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/4290/artifacts/bootstrap-e2e-master/kube-addon-manager.log):
```
INFO: == Entering periodical apply loop at 2017-02-16T17:33:37+0000 ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:38+0000. 2 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:46+0000. 1 tries remaining. ==
error: error pruning namespaced object extensions/v1beta1, Kind=HorizontalPodAutoscaler: the server could not find the requested resource
WRN: == Failed to execute /usr/local/bin/kubectl  apply --namespace=kube-system -f /etc/kubernetes/addons     --prune=true -l kubernetes.io/cluster-service=true --recursive >/dev/null at 2017-02-16T17:33:53+0000. 0 tries remaining. ==
WRN: == Kubernetes addon update completed with errors at 2017-02-16T17:33:58+0000 ==
```

Notice that this commit (f66679a4e9), which removed HorizontalPodAutoscaler from extensions/v1beta1, landed two weeks ago.

Addon-manager is now only partially functional: it can still create and update addons, but fails to prune objects, which means upgrade tests will mostly fail.
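
A quick way to see the mismatch (a sketch; assumes a kubectl new enough to serve HPA from autoscaling/v1):

```bash
# HPA should now be served from autoscaling/v1, not extensions/v1beta1:
kubectl api-versions | grep autoscaling

# The addon-manager invocation that trips over the stale group
# (same command as in the log above):
kubectl apply --namespace=kube-system -f /etc/kubernetes/addons \
  --prune=true -l kubernetes.io/cluster-service=true --recursive
```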

Pushed another version of addon-manager built with kubectl v1.6.0-alpha.2 ([released 2 days ago](https://github.com/kubernetes/kubernetes/releases/tag/v1.6.0-alpha.2)) as a fix, including the images below:
- gcr.io/google-containers/kube-addon-manager:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-amd64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-arm64:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-ppc64le:v6.4-alpha.2
- gcr.io/google-containers/kube-addon-manager-s390x:v6.4-alpha.2

@mikedanese 

cc @wojtek-t @shyamjvs
2017-02-22 08:12:46 -08:00
Shyam Jeedigunta f40b5eed5d [Kubemark] Fixed hollow-npd container command to log to file 2017-02-22 02:38:38 +01:00
Kubernetes Submit Queue 70c9eebd21 Merge pull request #41739 from shyamjvs/hollow-node-logs
Automatic merge from submit-queue (batch tested with PRs 41706, 39063, 41330, 41739, 41576)

[Kubemark] Add option to log hollow-node logs

Ref https://github.com/kubernetes/kubernetes/issues/41613

Added an option to dump kubemark hollow-node logs, which include the kubelet, kube-proxy and npd logs for each hollow-node.
Setting the env var `ENABLE_HOLLOW_NODE_LOGS=true` should now enable this logging for tests.
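
For example (a sketch; assumes the variable is read by the kubemark startup script):

```bash
# Collect kubelet, kube-proxy and npd logs for each hollow-node during the run:
ENABLE_HOLLOW_NODE_LOGS=true ./test/kubemark/start-kubemark.sh
```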

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek @yujuhong @Random-Liu
2017-02-21 02:24:43 -08:00
Zihong Zheng 2c8e89820a Update kubectl in addon-manager to use HPA in autoscaling/v1 instead of extensions/v1beta1 2017-02-20 10:49:10 -08:00
Shyam Jeedigunta ed0ab3cd8e [Kubemark] Add option to log hollow-node logs 2017-02-20 11:52:49 +01:00
Wojciech Tyczynski 4426156aa6 More resources for hollowproxy in large kubemarks 2017-02-20 09:26:17 +01:00
Random-Liu 47fc1d684d Revert the npd change in kubemark. 2017-02-19 04:14:30 -08:00
Random-Liu cd194bd9cc Fix kubemark hollow-npd. 2017-02-18 21:01:56 -08:00
Random-Liu d40c0a7099 Add standalone npd on GCI. 2017-02-17 16:18:08 -08:00
Shyam Jeedigunta 4e43de4fc2 Bump addon-manager version to v6.4-alpha.1 in kubemark 2017-02-15 20:11:31 +01:00
Jordan Liggitt d69a75d50f Mount kubeconfig file into kube-scheduler in kubemark 2017-02-15 10:03:57 -05:00
Kubernetes Submit Queue 5cc2f73bc9 Merge pull request #41134 from shyamjvs/refactor-final-blow
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)

Refactored kubemark code into provider-specific and provider-independent parts [Part-3]

Fixes #38967
Applying the final part of the changes from PR #39033 (which refactored the kubemark code completely). The changes included in this PR are:

- Removed `test/kubemark/common.sh` and moved relevant parts of its code to the right places in start-kubemark/stop-kubemark scripts.
- Added DOCKER_REGISTRY, PROJECT, KUBEMARK_IMAGE_MAKE_TARGET variables to `/test/kubemark/cloud-provider-config.sh` to make the kubemark image push location configurable per provider (see the sketch after this list).
- Removed get-real-pod-for-hollow-node.sh as it doesn't seem to do anything useful.
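
A sketch of what `/test/kubemark/cloud-provider-config.sh` might look like after this change (all values are placeholders, not the committed defaults):

```bash
# test/kubemark/cloud-provider-config.sh (sketch; values are placeholders)
CLOUD_PROVIDER="${CLOUD_PROVIDER:-gce}"
DOCKER_REGISTRY="${DOCKER_REGISTRY:-gcr.io}"    # registry the kubemark image is pushed to
PROJECT="${PROJECT:-my-gcp-project}"            # project within that registry
KUBEMARK_IMAGE_MAKE_TARGET="${KUBEMARK_IMAGE_MAKE_TARGET:-push}"  # make target that builds+pushes
```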

@kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-02-15 05:58:15 -08:00
Kubernetes Submit Queue e4a4fe4a89 Merge pull request #41285 from liggitt/kube-scheduler-role
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)

Secure kube-scheduler

This PR:
* Adds a bootstrap `system:kube-scheduler` clusterrole
* Adds a bootstrap clusterrolebinding to the `system:kube-scheduler` user
* Sets up a kubeconfig for kube-scheduler on GCE (following the controller-manager pattern; see the sketch after this list)
* Switches kube-scheduler to running with kubeconfig against secured port (salt changes, beware)
* Removes superuser permissions from kube-scheduler in local-up-cluster.sh
* Adds detailed RBAC deny logging
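
A sketch of the kubeconfig setup for kube-scheduler, following the controller-manager pattern (file paths, names, and the token variable are assumptions):

```bash
# Build a dedicated kubeconfig so kube-scheduler talks to the secured port.
KUBECONFIG=/etc/srv/kubernetes/kube-scheduler/kubeconfig
kubectl config set-cluster local \
  --certificate-authority=/etc/srv/kubernetes/ca.crt \
  --server=https://127.0.0.1:443 --kubeconfig="${KUBECONFIG}"
kubectl config set-credentials kube-scheduler \
  --token="${KUBE_SCHEDULER_TOKEN}" --kubeconfig="${KUBECONFIG}"
kubectl config set-context local --cluster=local --user=kube-scheduler \
  --kubeconfig="${KUBECONFIG}"
kubectl config use-context local --kubeconfig="${KUBECONFIG}"
```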

```release-note
On kube-up.sh clusters on GCE, kube-scheduler now contacts the API on the secured port.
```
2017-02-15 03:25:10 -08:00
Jordan Liggitt cc11d7367a Switch kube-scheduler to secure API access 2017-02-15 01:05:42 -05:00
Jordan Liggitt 9e6a3496b4 Update rbac data to v1beta1 2017-02-14 00:50:31 -05:00
Shyam Jeedigunta 3ac0e22f62 Refactored kubemark code into provider-specific and provider-independent parts [Part-3] 2017-02-08 17:03:13 +01:00
Michael Taufen 982df56c52 Replace uses of --config with --pod-manifest-path 2017-02-07 14:32:37 -08:00
Kubernetes Submit Queue 3a3ca50653 Merge pull request #40619 from Random-Liu/update-kubemark-npd-version
Automatic merge from submit-queue (batch tested with PRs 40132, 39302, 40194, 40619, 40601)

Update NPD version to v0.3.0-alpha.0 in kubemark.

@wojtek-t @shyamjvs Update the NPD version in kubemark.

I just built the alpha release https://github.com/kubernetes/node-problem-detector/releases/tag/v0.3.0-alpha.0.

And the PR https://github.com/kubernetes/node-problem-detector/pull/79 is included.

However, I'm not sure whether a 1-minute period is long enough.

If it's still not long enough, we can extend it by splitting the resync and the heartbeat (sketched below):
* Every 1 minute, check whether there is any inconsistency between the apiserver and npd, and only update when there is. (1 GET/m)
* Every >2 minutes, do a forced update as a heartbeat. (<0.5 PATCH/m)
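
A bash sketch of that split, purely illustrative (NPD itself is Go; the two functions are hypothetical stubs standing in for the real GET/PATCH calls):

```bash
RESYNC_PERIOD=60       # seconds between consistency checks (1 GET/m)
HEARTBEAT_PERIOD=150   # force an update at least this often (<0.5 PATCH/m)

conditions_differ_from_apiserver() { false; }  # stub: GET node status and diff
patch_node_conditions() { :; }                 # stub: PATCH node conditions

last_heartbeat=$(date +%s)
while true; do
  sleep "${RESYNC_PERIOD}"
  now=$(date +%s)
  if conditions_differ_from_apiserver; then
    patch_node_conditions                      # resync only on inconsistency
    last_heartbeat="${now}"
  elif (( now - last_heartbeat >= HEARTBEAT_PERIOD )); then
    patch_node_conditions                      # forced heartbeat update
    last_heartbeat="${now}"
  fi
done
```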

And I can also make the sync period configurable after we finalize the sync mechanism.
2017-01-27 18:32:26 -08:00
Random-Liu e2abfb7120 Update NPD version to v0.3.0-alpha.0 in kubemark. 2017-01-27 11:16:24 -08:00
Shyam Jeedigunta c62e5214c3 Refactored kubemark code into provider-specific and provider-independent parts [Part-1] 2017-01-26 22:54:14 +01:00
Shyam Jeedigunta cad541eb0c fixing source for heapster eventer in kubemark 2017-01-25 14:16:06 +01:00
Wojciech Tyczynski fbd5c7c380 Revert "Refactored kubemark into cloud-provider independent code and GCE specific code" 2017-01-24 10:42:17 +01:00
Shyam Jeedigunta d2fadbe30f Refactored kubemark code into provider-specific and provider-independent parts 2017-01-19 15:34:13 +01:00
Kubernetes Submit Queue da7d17c8dd Merge pull request #39951 from shyamjvs/fix-kubemark-npd
Automatic merge from submit-queue (batch tested with PRs 40081, 39951)

Passing correct master address to kubemark NPD & authenticating+authorizing it with apiserver

Fixes #39245 
Fixes https://github.com/kubernetes/node-problem-detector/issues/50

Added RBAC for npd and fixed an issue with npd falling back to inClusterConfig.

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-01-19 05:01:04 -08:00
Shyam Jeedigunta cc78a3f428 Passing correct master address to kubemark NPD & authenticating+authorizing it with apiserver 2017-01-18 18:23:23 +01:00
Kubernetes Submit Queue 6dfe5c49f6 Merge pull request #38865 from vwfs/ext4_no_lazy_init
Automatic merge from submit-queue

Enable lazy initialization of ext3/ext4 filesystems

**What this PR does / why we need it**: It enables lazy inode table and journal initialization in ext3 and ext4.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #30752, fixes #30240

**Release note**:
```release-note
Enable lazy inode table and journal initialization for ext3 and ext4
```

**Special notes for your reviewer**:
This PR removes the extended options to mkfs.ext3/mkfs.ext4, so that the defaults (enabled) for lazy initialization are used.

These extended options come from a script that was historically located at */usr/share/google/safe_format_and_mount* and later ported to Go so that this dependency on the script could be removed. After some searching, I found the original script here: https://github.com/GoogleCloudPlatform/compute-image-packages/blob/legacy/google-startup-scripts/usr/share/google/safe_format_and_mount

Checking the history of this script, I found the commit [Disable lazy init of inode table and journal.](4d7346f7f5), which introduces the extended flags with this description:
```
Now that discard with guaranteed zeroing is supported by PD,
initializing them is really fast and prevents perf from being affected
when the filesystem is first mounted.
```

The problem is that this is not true for all cloud providers and all disk types, e.g. on Azure and AWS. I only tested with magnetic disks on Azure and AWS, so it may be different for SSDs on these cloud providers. The result is that this performance optimization dramatically increases the time needed to format a disk in such cases.

When mkfs.ext4 is told not to lazily initialize the inode tables and the check for guaranteed zeroing on discard fails, it falls back to a very naive implementation that simply loops and writes zeroed buffers to the disk. Performance here depends heavily on free memory, and the writes consume all of that free memory for write caching, reducing the performance of everything else in the system.

As of https://github.com/kubernetes/kubernetes/issues/30752, there is also something inside kubelet that somehow degrades all this further. It's not exactly known what it is, but I'd assume it has something to do with cgroups throttling IO or memory.

I checked the kernel code for lazy inode table initialization. The nice thing is that the kernel also does the guaranteed-zeroing-on-discard check. If zeroing is guaranteed, the kernel uses discard for the lazy initialization, which should finish in just a few seconds. If it is not guaranteed, it falls back to using *bio*s, which does not require the write cache. The result is that free memory is neither required nor touched, so performance is maximal and the system does not suffer.

As the original reason for disabling lazy init was a performance optimization, and the kernel already does this optimization by default (and in a much better way), I'd suggest completely removing these flags and relying on the kernel to do it in the best way.
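
Concretely, the change boils down to dropping the extended options (a sketch; the exact prior flag set is reconstructed from the script history, and /dev/sdb is a placeholder):

```bash
# Before: lazy init explicitly disabled via extended options.
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdb

# After this PR: no extended options, so lazy init stays enabled by default
# and the kernel picks discard or bio-based initialization as appropriate.
mkfs.ext4 /dev/sdb
```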
2017-01-18 09:09:52 -08:00
Shyam Jeedigunta 9b0d8b9747 Added RBAC for heapster in kubemark 2017-01-18 13:47:08 +01:00
Shyam Jeedigunta 491c26feca Fix RBAC role for kube-proxy in Kubemark 2017-01-17 11:39:00 +01:00
Aleksandra Malinowska 043e809b8f update heapster version to 1.3.0-beta.0 2017-01-12 13:42:31 +01:00
Shyam Jeedigunta ce8c207328 Updated kubemark with RBAC for controller-manager, kubecfg, kubelet and proxy 2017-01-06 08:54:54 +01:00
Shyam Jeedigunta ac30fb28bd Fixing 'systemd restart docker' command in kubemark master 2016-12-21 11:46:33 +01:00
Shyam Jeedigunta 7e12fd4bfd Added 'hollow'-node-problem-detector to hollow-nodes in kubemark 2016-12-20 12:04:24 +01:00
Shyam Jeedigunta 9051462497 Migrated kubemark master to GCI from Debian. 2016-12-19 13:51:56 +01:00
Alexander Block 13a2bc8afb Enable lazy initialization of ext3/ext4 filesystems 2016-12-18 11:08:51 +01:00
Shyam Jeedigunta f7ce6a7d10 On kubemark master, kubelet now runs as a supervisord process and all master components as pods 2016-12-12 13:56:07 +01:00
Wojciech Tyczynski 9439453527 Increase single logfile size in kubemark 2016-12-12 11:18:20 +01:00
Shyam Jeedigunta 06ce9ae479 Moved start-kubemark-master.sh from test/kubemark/ to test/kubemark/resources/ 2016-12-07 18:15:24 +01:00
Wojciech Tyczynski 9ccddb9b7d Revert "Add log rotation to kubemark" 2016-11-21 14:55:52 +01:00
Wojciech Tyczynski a96dd63367 Add log rotation to kubemark 2016-11-18 16:19:32 +01:00
gmarek 7439a956ef Add ServiceAccounts to Kubemark 2016-11-15 16:03:48 +01:00
Piotr Szczesniak 0f40f94dd9 Bumped Heapster to v1.2.0 2016-09-14 09:16:09 +02:00
Piotr Szczesniak 2d87deb043 Bumped Heapster to v1.2.0-beta.3 2016-09-09 11:41:48 +02:00
Wojciech Tyczynski 56008de8d6 Fix heapster in kubemark 2016-08-22 15:38:02 +02:00
k8s-merge-robot d1cc7f9e2c Merge pull request #27037 from wojtek-t/push_hollow_nodes_logs_to_kubelets
Automatic merge from submit-queue

Mount hollow-node logs to parent node hostpath
2016-06-27 22:40:52 -07:00
Piotr Szczesniak 8fff5319db Bumped Heapster to v1.1.0 2016-06-16 20:41:28 +02:00
Wojciech Tyczynski 770bd6b7a4 Mount hollow-node logs to parent node hostpath 2016-06-08 13:24:49 +02:00
Wojciech Tyczynski fe470b664b Pipe content-type variable to hollow node 2016-05-11 14:57:40 +02:00
gmarek b14809832b Add heapster to kubemark 2016-04-27 16:04:07 +02:00