From 1ec4295266e1c4c2ffd19599a973063d4a821aa3 Mon Sep 17 00:00:00 2001
From: Dmitry Shulyak
Date: Thu, 15 Sep 2016 15:16:29 +0300
Subject: [PATCH] Increase memory for vagrant slave nodes to 2048 MB

With the current default (1024 MB) I am not able to spawn all of the
kube-system containers required for the e2e tests. I had problems with
the heapster replicas, and it was obvious from the kube-scheduler logs
that they were Pending exactly because of insufficient memory.

To reproduce:
1. KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
2. Run any e2e test

Change-Id: I963347e86c7129607f07ce1cea8cc0b536b09b72
---
 Vagrantfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Vagrantfile b/Vagrantfile
index 68de73639a..f199ee3a7c 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -111,7 +111,7 @@ end
 # When doing Salt provisioning, we copy approximately 200MB of content in /tmp before anything else happens.
 # This causes problems if anything else was in /tmp or the other directories that are bound to tmpfs device (i.e /run, etc.)
 $vm_master_mem = (ENV['KUBERNETES_MASTER_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1280).to_i
-$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1024).to_i
+$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 2048).to_i
 
 Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   if Vagrant.has_plugin?("vagrant-proxyconf")
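
Note: as the diff shows, the Vagrantfile reads KUBERNETES_NODE_MEMORY (and
the shared KUBERNETES_MEMORY) before falling back to the default, so the
node size can also be overridden per invocation without patching the file.
A minimal sketch, assuming only the env-var handling visible in the hunk
above; the 3072 figure is an arbitrary example value, not a tested
recommendation:

# Override node memory (in MB) for a single kube-up run:
KUBERNETES_NODE_MEMORY=3072 KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh

# Or raise master and node memory together via the shared variable
# (the master otherwise defaults to 1280 MB, as the context line shows):
KUBERNETES_MEMORY=2048 KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh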