Merge pull request #49347 from kubernetes/hollow-proxy-mem

Automatic merge from submit-queue

Reduce hollow proxy mem/node

As likely expected, kubemark-scale failed to even start with n1-standard-8 nodes, because roughly a third of our hollow-nodes never got scheduled due to their memory requests:

```
I0720 17:45:08.139] Found only 3325 ready hollow-nodes while waiting for 5000.
I0720 17:45:20.435] 3326 hollow-nodes are reported as 'Running'
I0720 17:45:20.442] 1675 hollow-nodes are reported as NOT 'Running'
```

If we want to experiment with smaller nodes anyway, this change is needed, though we will most likely end up OOM'ing.

Explanation for the new value:
We have 62.5 hollow-nodes per real node
=> memory available per hollow node = 30GB / 62.5 = 480MB
minus 100MB (kubelet)
minus 20MB (npd)
=> 360MB left for the proxy, which should equal 100MB + 5000*(mem/node)
=> ~52KB mem/node, rounded down to 50KB to leave some slack
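
A quick sanity-check of that arithmetic (a sketch only; the 30GB, 62.5 and 5000 figures are the assumptions from the description above, not read from the script):

```sh
# Back-of-the-envelope check of the per-node proxy memory share (assumed inputs).
awk 'BEGIN {
  per_hollow_mb   = 30000 / 62.5                            # => 480MB per hollow node
  proxy_budget_mb = per_hollow_mb - 100 - 20                # minus kubelet and npd => 360MB
  per_node_kb     = (proxy_budget_mb - 100) * 1000 / 5000   # proxy = 100MB base + per-node share
  printf "proxy budget: %dMB, per-node share: %dKB\n", proxy_budget_mb, per_node_kb
}'
# prints: proxy budget: 360MB, per-node share: 52KB  -> rounded down to 50 for slack
```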

cc @kubernetes/sig-scalability-misc
Kubernetes Submit Queue 2017-07-21 09:59:31 -07:00 committed by GitHub
commit 86b2fd380d
1 changed file with 1 addition and 1 deletion


```diff
@@ -320,7 +320,7 @@ current-context: kubemark-context")
 if [ "${NUM_NODES:-10}" -gt 1000 ]; then
   proxy_cpu=50
 fi
-proxy_mem_per_node=100
+proxy_mem_per_node=50
 proxy_mem=$((100 * 1024 + ${proxy_mem_per_node}*${NUM_NODES:-10}))
 sed -i'' -e "s/{{HOLLOW_PROXY_CPU}}/${proxy_cpu}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
 sed -i'' -e "s/{{HOLLOW_PROXY_MEM}}/${proxy_mem}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
```