k3s/pkg/registry/registrytest
k8s-merge-robot a275a045d1 Merge pull request #23914 from sky-uk/make-etcd-cache-size-configurable
Automatic merge from submit-queue

Make etcd cache size configurable

Instead of the previous hard-coded limit of 50K entries, allow users to specify a cache size appropriate to their cluster.

I'm not sure what a sensible default is here; I'm still experimenting on my own clusters. A size of 50 gives me a max footprint of about 270MB, while 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.
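
For context, a minimal Go sketch of what "configurable" means here: the cache size is read from a command-line flag and passed into the storage constructor instead of being baked in as a constant. The flag, type, and function names are illustrative assumptions, not the actual kube-apiserver wiring.

```go
// Sketch only: illustrative names, not the real kube-apiserver code.
package main

import (
	"flag"
	"fmt"
)

// storageConfig carries the knobs the storage backend needs at construction time.
type storageConfig struct {
	// deserializationCacheSize bounds how many decoded etcd objects are kept in memory.
	deserializationCacheSize int
}

func newStorage(cfg storageConfig) {
	// The cache is sized from configuration rather than a package-level constant.
	fmt.Printf("creating etcd helper with cache size %d\n", cfg.deserializationCacheSize)
}

func main() {
	// Default preserves the old 50K behaviour; operators can lower it for smaller clusters.
	size := flag.Int("deserialization-cache-size", 50000,
		"number of deserialized etcd objects to cache in memory")
	flag.Parse()

	newStorage(storageConfig{deserializationCacheSize: *size})
}
```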

There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (entries are keyed by modifiedIndex, so the stale copy of an object is not evicted when the object is updated).
- The cache isn't LRU, so there's no guarantee it stays hot; that makes its performance hard to predict, and it matters more with a smaller cache size (see the sketch after this list).
- In 1.2, etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there); I suspect that's due to the image lists in node status.
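
As a point of comparison for the first two items, here is a minimal Go sketch of a cache keyed by object key with LRU eviction, so an update replaces the stale copy and the least recently used entry is dropped at capacity. The type and method names are illustrative; this is not the Kubernetes implementation.

```go
// Sketch only: a fixed-capacity LRU cache keyed by object key rather than modifiedIndex.
package cache

import "container/list"

type entry struct {
	key string
	obj interface{}
}

// LRU is a fixed-capacity, least-recently-used cache.
// Not safe for concurrent use; a real cache would add locking.
type LRU struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element in order
}

func NewLRU(capacity int) *LRU {
	return &LRU{capacity: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Put stores obj under key, replacing any previous revision of the same key,
// so stale copies cannot accumulate the way modifiedIndex-keyed entries do.
func (c *LRU) Put(key string, obj interface{}) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).obj = obj
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key: key, obj: obj})
	if c.order.Len() > c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

// Get returns the cached object and marks it as recently used.
func (c *LRU) Get(key string) (interface{}, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).obj, true
}
```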

This is provided as a fix for #23323
2016-04-17 00:06:31 -07:00
doc.go Make copyright ownership statement generic 2015-05-01 17:49:56 -04:00
endpoint.go Switch to versioned ListOptions in server. 2015-12-21 14:23:37 +01:00
etcd.go Merge pull request #23914 from sky-uk/make-etcd-cache-size-configurable 2016-04-17 00:06:31 -07:00
node.go Switch to versioned ListOptions in server. 2015-12-21 14:23:37 +01:00
service.go fix can not export service bug 2016-03-05 11:23:50 -05:00