- move assigned slave to T.Spec.AssignedSlave
- only create the BindingHost annotation in prepareTaskForLaunch
- recover the assigned slave from annotation and write it back to the T.Spec field
Before this patch the annotation was used to store the assigned slave. But due
to the cloning of tasks in the registry, this value was never persisted in the
registry.
This patch adds it to the Spec of a task and only creates the annotation
last-minute before launching.
Without this patch, pods which fail before binding stay in the registry but are
never rescheduled again. The reason: the BindingHost annotation exists neither
in the registry nor on the apiserver (compare the reconcilePod function).
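A minimal sketch of the new flow, in Go. Only Spec.AssignedSlave, the BindingHost
annotation, and prepareTaskForLaunch come from this patch; the key string and the
trimmed-down types are assumptions:

```go
package main

// Hypothetical annotation key; the real constant may differ.
const BindingHostKey = "k8s.mesosphere.io/bindingHost"

type Pod struct{ Annotations map[string]string }

type Spec struct {
	AssignedSlave string // persisted with the task in the registry
}

type Task struct {
	Spec Spec
	Pod  Pod
}

// prepareTaskForLaunch creates the BindingHost annotation last-minute,
// deriving it from the persisted spec field.
func prepareTaskForLaunch(t *Task) {
	if t.Pod.Annotations == nil {
		t.Pod.Annotations = map[string]string{}
	}
	t.Pod.Annotations[BindingHostKey] = t.Spec.AssignedSlave
}

// recoverAssignedSlave writes the annotation value back into the spec
// field when a task is rebuilt from a bound pod.
func recoverAssignedSlave(t *Task) {
	t.Spec.AssignedSlave = t.Pod.Annotations[BindingHostKey]
}

func main() {
	t := &Task{Spec: Spec{AssignedSlave: "slave-1"}}
	prepareTaskForLaunch(t)
	recoverAssignedSlave(t)
}
```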
pflag can handle IP addresses, so use the pflag code instead of doing it
ourselves. This means our code just uses net.IP and we avoid all the
useless casting back and forth!
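For illustration, this is roughly what the pflag-based flag definition looks
like (the flag name and default are made up):

```go
package main

import (
	"fmt"
	"net"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("example", pflag.ExitOnError)

	// pflag parses and validates the IP for us; no string<->net.IP casting.
	addr := fs.IP("address", net.ParseIP("127.0.0.1"), "IP address to bind to")

	fs.Parse([]string{"--address=10.0.0.2"})
	fmt.Println(*addr) // a net.IP, ready to use
}
```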
The EndpointPort struct only stores one port: the port which is used
to connect to the container from outside. In the case of the Mesos
endpoint controller this is the host port. The container port is not part
of the endpoint structure at all.
A number of e2e tests need the container port information to validate correct
endpoint creation. Therefore this patch annotates the Endpoint struct with a
number of annotations mapping "<HostIP>:<HostPort>" to "<ContainerPort>". In a
follow-up commit these annotations are used to validate endpoints in a Mesos
setup.
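A minimal sketch of how one such annotation entry could be built (the helper
name is made up; only the "<HostIP>:<HostPort>" to "<ContainerPort>" layout
comes from this patch):

```go
package main

import (
	"fmt"
	"strconv"
)

// annotateEndpointPort records the container port behind a host endpoint,
// keyed by "<HostIP>:<HostPort>" as described above.
func annotateEndpointPort(ann map[string]string, hostIP string, hostPort, containerPort int) {
	ann[fmt.Sprintf("%s:%d", hostIP, hostPort)] = strconv.Itoa(containerPort)
}

func main() {
	ann := map[string]string{}
	annotateEndpointPort(ann, "10.0.0.5", 31000, 8080)
	fmt.Println(ann) // map[10.0.0.5:31000:8080]
}
```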
Until Docker learns parent mount namespace customization the container will
always have the root ns as a parent, not the one of the km minion. Hence, the
kubelet (which lives in the km minion mount ns) will create mounts that cannot
be seen by the Docker containers.
This feature can be enabled again when Docker learns to explicitly set the
parent mount ns, in analogy to the parent cgroup.
The minion server will
- launch the proxy and executor
- relaunch them when they terminate uncleanly
- logrotate their logs.
It is a replacement for a full-blown init process like s6, which is not
necessary in this case.
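A minimal sketch of the relaunch loop (command names, backoff, and log handling
are illustrative, not the actual minion server code):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// supervise restarts a child process whenever it exits uncleanly,
// with a small delay to avoid a tight crash loop.
func supervise(name string, args ...string) {
	for {
		cmd := exec.Command(name, args...)
		// In the real minion server, stdout/stderr would be piped
		// through a logrotate-style writer.
		if err := cmd.Run(); err != nil {
			log.Printf("%s terminated uncleanly: %v; relaunching", name, err)
			time.Sleep(5 * time.Second)
			continue
		}
		return // clean exit: do not relaunch
	}
}

func main() {
	go supervise("km", "proxy")
	supervise("km", "executor")
}
```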
Before this patch, the NodeName field of the pod spec was used to decide whether
a pod had already been scheduled. Hence, pods with a fixed, pre-set NodeName were
never scheduled by the k8sm-scheduler, leading e.g. to a failing e2e intra-pod
test.
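An illustrative sketch of the distinction, assuming for illustration that the
scheduler instead keys off a binding annotation like the BindingHost one from
the first patch above (types trimmed down, key name hypothetical):

```go
package main

import "fmt"

// Minimal stand-ins for the real API types.
type PodSpec struct{ NodeName string }
type Pod struct {
	Spec        PodSpec
	Annotations map[string]string
}

const BindingHostKey = "k8s.mesosphere.io/bindingHost" // hypothetical key

// Pre-patch: a fixed, pre-set NodeName made a pod look "already scheduled",
// so the k8sm-scheduler never picked it up.
func scheduledPrePatch(p *Pod) bool { return p.Spec.NodeName != "" }

// Post-patch sketch: only a binding recorded by the scheduler itself counts.
func scheduledPostPatch(p *Pod) bool {
	_, ok := p.Annotations[BindingHostKey]
	return ok
}

func main() {
	p := &Pod{Spec: PodSpec{NodeName: "node-1"}} // NodeName pre-set by the user
	fmt.Println(scheduledPrePatch(p))  // true: pod is wrongly skipped
	fmt.Println(scheduledPostPatch(p)) // false: pod still gets scheduled
}
```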
Fixes mesosphere/kubernetes-mesos#388
This patch
- sets limits (0.25 cpu, 64 MB) on containers which are not limited in the pod spec
  (these are also passed to the kubelet such that it uses them for the Docker
  run limits)
- sums up the container resource limits for cpu and memory inside a pod,
- compares the sums to the offered resources
- puts the sums into the Mesos TaskInfo such that Mesos does the accounting
for the pod.
- parses the static pod spec and adds up the resources
- sets the executor resources to 0.25 cpu, 64 MB plus the static pod resources
- sets the cgroups in the kubelet for system containers, resource containers
and docker to the one of the executor that Mesos assigned
- adds scheduler parameters --default-container-cpu-limit and
--default-container-mem-limit.
The containers themselves are resource-limited by the Docker resource limits
which the kubelet applies when launching them; the accounting is sketched below.
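A minimal sketch of the accounting (the defaults come from this patch; the
types are simplified stand-ins for the real pod spec):

```go
package main

import "fmt"

const (
	defaultContainerCPU = 0.25 // cpus, from --default-container-cpu-limit
	defaultContainerMem = 64.0 // MB, from --default-container-mem-limit
)

type Container struct {
	CPULimit float64 // 0 means "not limited in the pod spec"
	MemLimit float64 // MB, 0 means unlimited
}

// podResources sums the container limits, substituting the defaults for
// unlimited containers, so the total can be matched against a Mesos offer
// and written into the TaskInfo.
func podResources(containers []Container) (cpu, mem float64) {
	for _, c := range containers {
		if c.CPULimit == 0 {
			c.CPULimit = defaultContainerCPU
		}
		if c.MemLimit == 0 {
			c.MemLimit = defaultContainerMem
		}
		cpu += c.CPULimit
		mem += c.MemLimit
	}
	return cpu, mem
}

func main() {
	cpu, mem := podResources([]Container{{CPULimit: 1}, {}})
	fmt.Printf("cpus=%.2f mem=%.0fMB\n", cpu, mem) // cpus=1.25 mem=128MB
}
```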
Fixes mesosphere/kubernetes-mesos#68 and mesosphere/kubernetes-mesos#304
This should ensure all load balancers get deleted even if a reordering of
watch events causes us to strand one after its service has been deleted,
because the sync will notice that the service controller's cache has a
service in it that no longer exists in the apiserver.
It could still leak in the case that the controller manager is killed
between when it leaks something and the sync runs, but this should
improve things.
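A minimal sketch of that sync, with made-up stand-ins for the service
controller's cache and the cloud load balancer API:

```go
package main

import "fmt"

// syncStranded reconciles the service controller's cache against the
// apiserver and deletes load balancers stranded by reordered watch events.
func syncStranded(cache map[string]bool, live map[string]bool, deleteLB func(string) error) {
	for name := range cache {
		if !live[name] {
			// The cached service no longer exists in the apiserver:
			// its load balancer may have leaked; delete it.
			if err := deleteLB(name); err == nil {
				delete(cache, name)
			}
		}
	}
}

func main() {
	cache := map[string]bool{"svc-a": true, "svc-b": true}
	live := map[string]bool{"svc-a": true} // svc-b was deleted from the apiserver
	syncStranded(cache, live, func(name string) error {
		fmt.Println("deleting load balancer for", name)
		return nil
	})
}
```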
- n.node used n.lock as the underlying locker. The service loop initially
  locked it, and the Notify function tried to lock it again before calling
  n.node.Signal, leading to a deadlock.
- the goroutine calling ChangeMaster was not synchronized with the Notify
  method. The former triggered change events that the latter never saw when
  the former started up faster than Notify. Hence, not a single event was
  noticed and not a single start/stop call of the slow service was triggered.
This patch replaces the n.node condition object with a simple channel n.changed.
The service loop watches it.
Updating the notified private variables is still protected by n.lock against
races, but independently of the n.changed channel. Hence, the deadlock is gone.
Moreover, the startup of the Notify loop is synchronized with the goroutine
which changes the master. Hence, the Notify loop will see the master changes.
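A minimal sketch of the channel pattern described above (the notifier is
heavily simplified; only n.lock, n.changed, and Notify correspond to names in
this patch):

```go
package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	lock    sync.Mutex    // still protects the notified variables
	master  string        // a "notified private variable"
	changed chan struct{} // replaces the n.node condition object
}

// Notify records a master change and signals the service loop without
// holding the lock across the channel send, so no deadlock is possible.
func (n *notifier) Notify(master string) {
	n.lock.Lock()
	n.master = master
	n.lock.Unlock()

	select {
	case n.changed <- struct{}{}:
	default: // a notification is already pending; coalesce
	}
}

func main() {
	n := &notifier{changed: make(chan struct{}, 1)}
	done := make(chan struct{})
	go func() { // the service loop watches n.changed
		<-n.changed
		n.lock.Lock()
		fmt.Println("master changed to", n.master)
		n.lock.Unlock()
		close(done)
	}()
	n.Notify("master-2")
	<-done
}
```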
Fixes #10776
- Offers were reused and led to unexpected declining by the scheduler because
the reused offer did not get a new expiration time.
- Pod scheduling and offer creation were not synchronized. When scheduling
  happened after the aging of offers, the first issue was triggered. Because
  the mesos driver's DeclineOffer was not mocked, this led to a test error.
Depending on timing, the mesos scheduler might call DeclineOffer: the default
TTL of an offer in the mesos scheduler is 5 seconds. If the tests run longer,
the old, unused offers are declined, leading to a mock error.
Probably fixes GoogleCloudPlatform/kubernetes#10795
The file source was created even when no static pods were configured. In this
case it was never marked as seen. As a consequence, the kubelet's syncPods
function never deleted pods because it was too cautious due to an unseen pod
source, leading to leaked pods.
The TestExecutorFrameworkMessage test sends a "task-lost:foo" message to the
executor in order to mark a pod as lost. For that the pod must be running first.
Otherwise, the executor code will send "TASK_FAILED" status updates, not "TASK_LOST".
Before this patch there was no synchronization between the pod startup and the
test case. Moreover, in order to start up a task, a working apiserver URL must
be passed to the executor, which was not the case either.
Fixes mesosphere/kubernetes-mesos#351
- the mesos scheduler gets a --static-pods-config parameter with a directory of
  pod specs. They are zipped and sent over to newly started mesos executors;
  the zipping is sketched below.
- the mesos executor receives the zipped static pod config via ExecutorInfo.Data
  and starts up the pods via the kubelet FileSource mechanism.
- both sides - the scheduler and the executor - are fully unit tested
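A minimal sketch of the scheduler-side zipping, using the standard archive/zip
package (the directory path and helper name are made up):

```go
package main

import (
	"archive/zip"
	"bytes"
	"io"
	"os"
	"path/filepath"
)

// zipDir zips every file below dir into a byte slice, suitable for
// shipping to newly started executors in ExecutorInfo.Data.
func zipDir(dir string) ([]byte, error) {
	var buf bytes.Buffer
	w := zip.NewWriter(&buf)
	err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path) // store paths relative to dir
		if err != nil {
			return err
		}
		f, err := w.Create(rel)
		if err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(f, src)
		return err
	})
	if err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := zipDir("/etc/static-pods") // the --static-pods-config directory
	if err != nil {
		panic(err)
	}
	_ = data // would be placed into ExecutorInfo.Data
}
```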