Increase memory for vagrant slave nodes to 2048

With the current default (1024 MB) I am not able to spawn all of the
kube-system containers required for the e2e tests. I had problems with
heapster replicas, and it was obvious from the kube-scheduler logs that they
were Pending exactly because of insufficient memory.
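
As an illustrative aside (not part of the original report), the Pending state
and its cause can be confirmed from the kube-system namespace; the pod name
below is a placeholder:

  kubectl get pods --namespace=kube-system
  kubectl describe pod <heapster-pod> --namespace=kube-system   # Events should show FailedScheduling / insufficient memory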

To reproduce:
1. KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
2. Run any e2e test

Change-Id: I963347e86c7129607f07ce1cea8cc0b536b09b72
Dmitry Shulyak 2016-09-15 15:16:29 +03:00
parent 843d7cd24c
commit 1ec4295266
1 changed file with 1 addition and 1 deletion

Vagrantfile

@@ -111,7 +111,7 @@ end
 # When doing Salt provisioning, we copy approximately 200MB of content in /tmp before anything else happens.
 # This causes problems if anything else was in /tmp or the other directories that are bound to tmpfs device (i.e /run, etc.)
 $vm_master_mem = (ENV['KUBERNETES_MASTER_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1280).to_i
-$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1024).to_i
+$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 2048).to_i
 Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   if Vagrant.has_plugin?("vagrant-proxyconf")
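
Usage note: the defaults above can also be overridden per run through the
environment variables the Vagrantfile already reads, so editing the file is
not strictly required; the 3072 value below is only an illustrative choice,
not part of this change.

  KUBERNETES_PROVIDER=vagrant KUBERNETES_NODE_MEMORY=3072 ./cluster/kube-up.sh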