The diurnal controller changes the number of replicas of a replication controller based on a list of times and replica counts. It is meant to be run under a replication controller.
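A minimal sketch of the core scheduling decision, assuming the schedule is
a list of (time-of-day, replica count) pairs; the types and names here are
illustrative, not the controller's actual code:

```
package main

import (
	"fmt"
	"sort"
	"time"
)

// entry pairs a time of day (offset from midnight) with a replica count.
type entry struct {
	offset   time.Duration
	replicas int
}

// desiredReplicas returns the count of the latest entry whose time of day
// has passed; before the first entry, yesterday's last entry still applies.
func desiredReplicas(schedule []entry, now time.Time) int {
	if len(schedule) == 0 {
		return 0
	}
	sort.Slice(schedule, func(i, j int) bool {
		return schedule[i].offset < schedule[j].offset
	})
	sinceMidnight := now.Sub(now.Truncate(24 * time.Hour))
	current := schedule[len(schedule)-1].replicas // wraps around from yesterday
	for _, e := range schedule {
		if e.offset <= sinceMidnight {
			current = e.replicas
		}
	}
	return current
}

func main() {
	schedule := []entry{
		{offset: 6 * time.Hour, replicas: 10}, // scale up at 06:00 UTC
		{offset: 22 * time.Hour, replicas: 2}, // scale down at 22:00 UTC
	}
	fmt.Println(desiredReplicas(schedule, time.Now().UTC()))
}
```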
Until Docker learns parent mount namespace customization, the container will
always have the root mount ns as a parent, not that of the km minion. Hence,
the kubelet (which lives in the km minion's mount ns) will create mounts that
cannot be seen by the Docker containers.
This feature can be enabled again once Docker learns to explicitly set the
parent mount ns, analogous to the parent cgroup.
The minion server will
- launch the proxy and executor,
- relaunch them when they terminate uncleanly,
- rotate their logs.
It is a replacement for a full-blown init process like s6, which is not
necessary in this case.
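A minimal sketch of the relaunch behavior in Go, assuming each child is
started via os/exec; the `km proxy`/`km executor` invocations are
placeholders, not the actual command lines:

```
package main

import (
	"log"
	"os/exec"
	"time"
)

// supervise runs the given command and restarts it whenever it terminates
// uncleanly, sleeping briefly to avoid a tight crash loop.
func supervise(name string, args ...string) {
	for {
		cmd := exec.Command(name, args...)
		if err := cmd.Run(); err != nil {
			log.Printf("%s terminated uncleanly: %v; relaunching", name, err)
			time.Sleep(time.Second)
			continue
		}
		log.Printf("%s exited cleanly", name)
		return
	}
}

func main() {
	go supervise("km", "proxy") // placeholder invocation
	supervise("km", "executor") // placeholder invocation
}
```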
Previously, NodeName in the pod spec was used to detect whether a pod had
already been scheduled. Hence, pods with a fixed, pre-set NodeName were never
scheduled by the k8sm-scheduler, leading e.g. to a failing e2e intra-pod test.
Fixes mesosphere/kubernetes-mesos#388
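A minimal sketch of the changed check, with a pared-down pod type; the
binding annotation key is an assumption, not necessarily the one the
scheduler actually uses:

```
package main

import "fmt"

// pod is a pared-down stand-in for the Kubernetes pod object.
type pod struct {
	NodeName    string
	Annotations map[string]string
}

// isScheduled reports whether the k8sm-scheduler has already bound the pod.
// Checking a scheduler-owned annotation instead of NodeName means that a
// user-supplied, pre-set NodeName no longer makes a pod look scheduled.
func isScheduled(p pod) bool {
	_, bound := p.Annotations["k8s.mesosphere.io/bindingHost"] // assumed key
	return bound
}

func main() {
	preset := pod{NodeName: "node-1"} // NodeName fixed in the pod spec
	fmt.Println(isScheduled(preset))  // false: the pod still gets scheduled
}
```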
This patch
- sets limits (0.25 cpu, 64 MB) on containers which are not limited in the
  pod spec (these limits are also passed to the kubelet such that it uses
  them for the docker run limits),
- sums up the container resource limits for cpu and memory inside a pod,
- compares the sums to the offered resources,
- puts the sums into the Mesos TaskInfo such that Mesos does the accounting
  for the pod,
- parses the static pod spec and adds up the resources,
- sets the executor resources to 0.25 cpu, 64 MB plus the static pod
  resources,
- sets the cgroups in the kubelet for system containers, resource containers
  and docker to the cgroup that Mesos assigned to the executor,
- adds the scheduler parameters --default-container-cpu-limit and
  --default-container-mem-limit.
The containers themselves are resource-limited via the Docker resource limits
which the kubelet applies when launching them (see the accounting sketch
below).
Fixes mesosphere/kubernetes-mesos#68 and mesosphere/kubernetes-mesos#304
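A minimal sketch of the per-pod accounting described above, with limits as
plain floats; all names, and all numbers besides the 0.25 cpu / 64 MB
defaults, are illustrative:

```
package main

import "fmt"

const (
	defaultCPU = 0.25 // cpus, applied to containers without a cpu limit
	defaultMem = 64.0 // MB, applied to containers without a memory limit
)

// container holds the limits from the pod spec; 0 means "not limited".
type container struct {
	cpu, mem float64
}

// podLimits sums the container limits, substituting the defaults for
// unlimited containers; the sums are what ends up in the Mesos TaskInfo
// and what is compared against the offered resources.
func podLimits(containers []container) (cpu, mem float64) {
	for _, c := range containers {
		if c.cpu == 0 {
			c.cpu = defaultCPU
		}
		if c.mem == 0 {
			c.mem = defaultMem
		}
		cpu += c.cpu
		mem += c.mem
	}
	return cpu, mem
}

// fits reports whether the pod's summed limits fit into an offer.
func fits(offerCPU, offerMem float64, containers []container) bool {
	cpu, mem := podLimits(containers)
	return cpu <= offerCPU && mem <= offerMem
}

func main() {
	pod := []container{{cpu: 0.5, mem: 128}, {}} // second container unlimited
	fmt.Println(podLimits(pod))      // 0.75 192
	fmt.Println(fits(1.0, 256, pod)) // true
}
```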
This should ensure all load balancers get deleted even if a reordering of
watch events causes us to strand one after its service has been deleted,
because the sync will notice that the service controller's cache has a
service in it that no longer exists in the apiserver.
It could still leak if the controller manager is killed between the moment
it strands a load balancer and the next sync, but this should improve things.
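A minimal sketch of such a sync pass, assuming the controller's cache and
the apiserver listing are reduced to sets of service names; the function
names are illustrative:

```
package main

import "fmt"

// syncLoadBalancers deletes the load balancer of every cached service that
// no longer exists in the apiserver, catching load balancers stranded by
// reordered watch events.
func syncLoadBalancers(cached map[string]bool, listServices func() map[string]bool,
	deleteLB func(name string) error) {
	live := listServices()
	for name := range cached {
		if !live[name] {
			if err := deleteLB(name); err != nil {
				fmt.Printf("deleting load balancer for %s failed: %v\n", name, err)
				continue // left in the cache, retried on the next sync
			}
			delete(cached, name)
		}
	}
}

func main() {
	cache := map[string]bool{"svc-a": true, "svc-b": true} // svc-b was deleted
	apiserver := func() map[string]bool { return map[string]bool{"svc-a": true} }
	syncLoadBalancers(cache, apiserver, func(name string) error {
		fmt.Println("deleting stranded load balancer for", name)
		return nil
	})
}
```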
Creating a cluster from scratch takes about 7 minutes. But if you just
rebuild the binaries and want to update them, you don't want to rerun the
entire provisioning. There is an Ansible tag 'binary-update' which will do
exactly that. Now one can run
```
ANSIBLE_TAGS=binary-update vagrant provision
```
And it will push the new binaries.