Change all references to the container ID in pkg/kubelet/... to the
strong type defined in pkg/kubelet/container: ContainerID
The motivation for this change is to make the format of the ID
unambiguous, specifically whether or not it includes the runtime
prefix (e.g. "docker://").
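For illustration, a minimal sketch of such a strong ID type (the exact fields and helpers in pkg/kubelet/container may differ):

    package main

    import (
        "fmt"
        "strings"
    )

    // ContainerID sketch: keep the runtime prefix ("docker") separate from the
    // runtime-specific ID so readers never have to guess whether a plain string
    // already includes the prefix.
    type ContainerID struct {
        Type string // runtime name, e.g. "docker"
        ID   string // runtime-specific container ID
    }

    // ParseContainerID splits a prefixed ID such as "docker://abc123".
    func ParseContainerID(s string) (ContainerID, error) {
        parts := strings.SplitN(s, "://", 2)
        if len(parts) != 2 {
            return ContainerID{}, fmt.Errorf("invalid container ID %q", s)
        }
        return ContainerID{Type: parts[0], ID: parts[1]}, nil
    }

    // String reassembles the unambiguous, prefixed form.
    func (c ContainerID) String() string {
        return c.Type + "://" + c.ID
    }

    func main() {
        id, err := ParseContainerID("docker://abc123")
        fmt.Println(id.Type, id.ID, id.String(), err)
    }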
Each container with a readiness probe has an individual goroutine which
handles periodic probing for that container. The results are cached and
written to the status.Manager in the pod sync path.
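A rough sketch of the per-container worker pattern, with illustrative names (probeWorker, resultCache) rather than the actual prober API:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // resultCache is an illustrative cache of readiness results keyed by
    // container ID; the real prober result cache has a richer interface.
    type resultCache struct {
        mu      sync.RWMutex
        results map[string]bool
    }

    func (c *resultCache) set(id string, ready bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.results[id] = ready
    }

    func (c *resultCache) get(id string) (bool, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        ready, ok := c.results[id]
        return ready, ok
    }

    // probeWorker is the per-container goroutine: probe on a fixed period and
    // record the latest result until stopCh is closed.
    func probeWorker(id string, period time.Duration, probe func() bool, cache *resultCache, stopCh <-chan struct{}) {
        ticker := time.NewTicker(period)
        defer ticker.Stop()
        for {
            select {
            case <-stopCh:
                return
            case <-ticker.C:
                cache.set(id, probe())
            }
        }
    }

    func main() {
        cache := &resultCache{results: map[string]bool{}}
        stop := make(chan struct{})
        go probeWorker("docker://abc123", 100*time.Millisecond, func() bool { return true }, cache, stop)
        time.Sleep(300 * time.Millisecond)
        close(stop)
        ready, ok := cache.get("docker://abc123")
        fmt.Println(ready, ok)
    }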
The container runtime interface currently doesn't support GetContainers, and
no test should be using fakeRuntime.ContainerList. Remove it to prevent
accidental use.
Increase the supported controls on pod logging. Add validation to pod
log options. Ensure the Kubelet is using a consistent, structured way to
process pod log arguments.
Add the ?sinceSeconds=<durationInSeconds>, ?sinceTime=<RFC3339>, ?timestamps=<bool>,
?tailLines=<number>, and ?limitBytes=<number> query parameters.
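A hedged sketch of parsing these parameters into a structured options type; the real PodLogOptions API type and its validation rules may differ:

    package main

    import (
        "fmt"
        "net/url"
        "strconv"
        "time"
    )

    // podLogOptions is a hypothetical structured form of the supported log
    // parameters; field names mirror the query parameters, not necessarily
    // the real API type.
    type podLogOptions struct {
        SinceSeconds *int64
        SinceTime    *time.Time
        Timestamps   bool
        TailLines    *int64
        LimitBytes   *int64
    }

    // parseInt validates a non-negative integer query value.
    func parseInt(name, v string) (*int64, error) {
        n, err := strconv.ParseInt(v, 10, 64)
        if err != nil || n < 0 {
            return nil, fmt.Errorf("invalid %s %q", name, v)
        }
        return &n, nil
    }

    // parsePodLogOptions converts raw query values into structured, validated
    // options so the rest of the log path never touches raw strings.
    func parsePodLogOptions(q url.Values) (*podLogOptions, error) {
        opts := &podLogOptions{}
        var err error
        if v := q.Get("sinceSeconds"); v != "" {
            if opts.SinceSeconds, err = parseInt("sinceSeconds", v); err != nil {
                return nil, err
            }
        }
        if v := q.Get("sinceTime"); v != "" {
            t, err := time.Parse(time.RFC3339, v)
            if err != nil {
                return nil, fmt.Errorf("invalid sinceTime %q", v)
            }
            opts.SinceTime = &t
        }
        if v := q.Get("timestamps"); v != "" {
            if opts.Timestamps, err = strconv.ParseBool(v); err != nil {
                return nil, fmt.Errorf("invalid timestamps %q", v)
            }
        }
        if v := q.Get("tailLines"); v != "" {
            if opts.TailLines, err = parseInt("tailLines", v); err != nil {
                return nil, err
            }
        }
        if v := q.Get("limitBytes"); v != "" {
            if opts.LimitBytes, err = parseInt("limitBytes", v); err != nil {
                return nil, err
            }
        }
        return opts, nil
    }

    func main() {
        q, _ := url.ParseQuery("sinceSeconds=300&timestamps=true&tailLines=100")
        opts, err := parsePodLogOptions(q)
        fmt.Println(*opts.SinceSeconds, opts.Timestamps, *opts.TailLines, err)
    }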
In many cases clients may wish to view not-ready addresses for endpoints
in order to determine set membership before a pod becomes ready. For instance,
a pod that uses the service endpoints to connect to other pods under
the same service may not want to signal ready before it has
contacted at least a minimal number of other pods.
This is backwards compatible with old servers and clients. There is
an additional cost in the size of endpoints before services ramp up, which
adds minor CPU and memory use for services that have a significant
number of pods which have not yet become ready.
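For illustration only, a client-side sketch with local stand-in types (not the actual Kubernetes API structs) showing how not-ready addresses would be folded into the membership set:

    package main

    import "fmt"

    // Minimal local stand-ins for the Endpoints API shapes; the real types
    // live in the Kubernetes API package and may differ in detail.
    type endpointAddress struct{ IP string }

    type endpointSubset struct {
        Addresses         []endpointAddress // addresses backed by ready pods
        NotReadyAddresses []endpointAddress // pods that exist but are not ready yet
    }

    // allMembers returns every pod address in the set, ready or not, which is
    // what a peer-discovery client would use before the pods report ready.
    func allMembers(subsets []endpointSubset) []string {
        var ips []string
        for _, s := range subsets {
            for _, a := range s.Addresses {
                ips = append(ips, a.IP)
            }
            for _, a := range s.NotReadyAddresses {
                ips = append(ips, a.IP)
            }
        }
        return ips
    }

    func main() {
        subsets := []endpointSubset{{
            Addresses:         []endpointAddress{{IP: "10.0.0.1"}},
            NotReadyAddresses: []endpointAddress{{IP: "10.0.0.2"}, {IP: "10.0.0.3"}},
        }}
        fmt.Println(allMembers(subsets)) // [10.0.0.1 10.0.0.2 10.0.0.3]
    }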
1. Make the reason field of StatusReport objects in the kubelet CamelCase.
2. Add a Message field to ContainerStateWaiting to describe detail about the
   Reason (a small sketch follows this list).
3. Make the reason field of Events in the kubelet CamelCase.
4. Update swagger, deep-copy, and so on.
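A toy stand-in for the ContainerStateWaiting shape described in item 2; the reason string in the example is illustrative:

    package main

    import "fmt"

    // containerStateWaiting is a local stand-in for the API type; the idea is
    // a machine-readable CamelCase Reason plus a human-readable Message.
    type containerStateWaiting struct {
        Reason  string // CamelCase, machine-readable, e.g. "PullImageError"
        Message string // free-form detail explaining the Reason
    }

    func main() {
        w := containerStateWaiting{
            Reason:  "PullImageError",
            Message: "image \"example.com/app:v1\" could not be pulled: connection refused",
        }
        fmt.Printf("%s: %s\n", w.Reason, w.Message)
    }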
Since it takes a while (1-2 minutes) for the kubelet to pull a big image
(>500MB), just showing "Pending" for the pod status is not very helpful.
This commit introduces a "pulling" event and emits it before the
kubelet starts to pull an image.
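A simplified sketch of the ordering this introduces, using a hypothetical recorder interface; the exact event reasons and call sites in the kubelet may differ:

    package main

    import "fmt"

    // eventRecorder is a hypothetical stand-in for the kubelet's event recorder.
    type eventRecorder interface {
        Event(object, reason, message string)
    }

    type stdoutRecorder struct{}

    func (stdoutRecorder) Event(object, reason, message string) {
        fmt.Printf("event for %s: %s: %s\n", object, reason, message)
    }

    // pullImage sketches the ordering: emit a "pulling" event first, then do
    // the (possibly slow) pull, then report the outcome.
    func pullImage(rec eventRecorder, pod, image string, pull func(string) error) error {
        rec.Event(pod, "Pulling", fmt.Sprintf("Pulling image %q", image))
        if err := pull(image); err != nil {
            rec.Event(pod, "Failed", fmt.Sprintf("Failed to pull image %q: %v", image, err))
            return err
        }
        rec.Event(pod, "Pulled", fmt.Sprintf("Successfully pulled image %q", image))
        return nil
    }

    func main() {
        _ = pullImage(stdoutRecorder{}, "default/nginx", "nginx:latest", func(string) error { return nil })
    }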
/runningpods returns a list of pods currently running on the kubelet. The list
is composed by examining the container runtime, and may differ from the set of
desired pods known to the kubelet.
This is useful for tests to verify that pods are indeed deleted on nodes.
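A rough sketch of what such a debug handler looks like, with simplified stand-in types rather than the kubelet's actual server plumbing:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
    )

    // runtimePod is a minimal stand-in for a pod as reported by the container
    // runtime; field names are illustrative.
    type runtimePod struct {
        Namespace string `json:"namespace"`
        Name      string `json:"name"`
    }

    // podLister abstracts "ask the container runtime what is actually running",
    // as opposed to the set of pods the kubelet wants to run.
    type podLister interface {
        GetRunningPods() ([]runtimePod, error)
    }

    // handleRunningPods reports whatever the runtime says is running, which
    // may lag behind or differ from desired state.
    func handleRunningPods(lister podLister) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            pods, err := lister.GetRunningPods()
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(pods)
        }
    }

    type fakeLister struct{}

    func (fakeLister) GetRunningPods() ([]runtimePod, error) {
        return []runtimePod{{Namespace: "default", Name: "nginx"}}, nil
    }

    func main() {
        srv := httptest.NewServer(handleRunningPods(fakeLister{}))
        defer srv.Close()
        resp, _ := http.Get(srv.URL + "/runningpods")
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // [{"namespace":"default","name":"nginx"}]
    }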
This commit wires together the graceful delete option for pods
on the Kubelet. When a pod is deleted on the API server, a
grace period is calculated that is based on the
Pod.Spec.TerminationGracePeriodInSeconds, the user's provided grace
period, or a default. The grace period can only shrink once set.
The value provided by the user (or the default) is set onto the pod's
metadata as DeletionGracePeriodSeconds.
When the Kubelet sees a pod with DeletionTimestamp set, it uses the
value of ObjectMeta.DeletionGracePeriodSeconds as the grace period
sent to Docker. When updating status, if the pod has DeletionTimestamp
set and all containers are terminated, the Kubelet will update the
status one last time and then invoke Delete(pod, grace: 0) to
clean up the pod immediately.
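A rough sketch of the grace period selection described above, assuming the precedence is user-provided value, then the pod spec, then a default; helper names are hypothetical:

    package main

    import "fmt"

    // defaultGracePeriodSeconds is an assumed default for the sketch.
    const defaultGracePeriodSeconds int64 = 30

    // effectiveGracePeriod sketches the selection order: prefer the caller's
    // requested value, then the pod spec's termination grace period, then a
    // default; a grace period already recorded on the object can only shrink,
    // never grow.
    func effectiveGracePeriod(requested, specTermination, current *int64) int64 {
        var grace int64
        switch {
        case requested != nil:
            grace = *requested
        case specTermination != nil:
            grace = *specTermination
        default:
            grace = defaultGracePeriodSeconds
        }
        if current != nil && *current < grace {
            grace = *current
        }
        return grace
    }

    func main() {
        spec, user, already := int64(60), int64(10), int64(5)
        fmt.Println(effectiveGracePeriod(nil, &spec, nil))        // 60: from the pod spec
        fmt.Println(effectiveGracePeriod(&user, &spec, nil))      // 10: user override
        fmt.Println(effectiveGracePeriod(&user, &spec, &already)) // 5: can only shrink
    }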
* Start using FakeRuntime to replace FakeDockerClient in unit tests.
* Move and adapt docker-specific tests (e.g. creating/deleting infra
containers) to manager_test.go in dockertools.
If a pod worker sees stale pods from the runtime cache which were retrieved
before their last sync finished, it may think that the pods were not started
correctly, and attempt to fix that by killing/restarting containers.
There are two issues that may cause runtime cache to store stale pods:
1. The timestamp is recorded *after* getting the pods from the container
runtime. This may lead the consumer to think the pods are newer than they
actually are.
2. The cache updates are triggered by many goroutines (pod workers, and the
updating thread). There is no mechanism to enforce that the cache is
only ever updated with newer pods.
This change fixes the above two issues by making sure the timestamp is
always recorded before getting pods from the container runtime, and by
updating the cached pods only if the timestamp is newer.
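A minimal sketch of that fix with simplified types (the kubelet's real runtime cache interface differs):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type pod struct{ Name string }

    // runtimeCache sketches the fix: remember when the snapshot was *started*,
    // and refuse to overwrite the cache with a snapshot taken earlier than the
    // one already stored.
    type runtimeCache struct {
        mu        sync.Mutex
        pods      []pod
        cacheTime time.Time
    }

    // update stores pods only if they were fetched at a timestamp no older
    // than what is already cached.
    func (c *runtimeCache) update(pods []pod, fetchedAt time.Time) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if fetchedAt.Before(c.cacheTime) {
            return // stale snapshot; keep the newer cached pods
        }
        c.pods, c.cacheTime = pods, fetchedAt
    }

    // refresh records the timestamp *before* asking the runtime for pods, so
    // the cache never claims the pods are newer than they really are.
    func (c *runtimeCache) refresh(getPods func() []pod) {
        start := time.Now()
        pods := getPods()
        c.update(pods, start)
    }

    func main() {
        c := &runtimeCache{}
        c.refresh(func() []pod { return []pod{{Name: "nginx"}} })
        // A concurrent, older snapshot arriving late must not clobber the cache.
        c.update(nil, time.Now().Add(-time.Minute))
        fmt.Println(len(c.pods)) // 1
    }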