Each container with a readiness probe has an individual goroutine that
handles periodic probing for that container. The results are cached and
written to the status.Manager in the pod sync path.
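A minimal sketch of that pattern, with hypothetical names (resultCache, worker), assuming one goroutine per probed container writing into a shared cache that the sync path reads:

    package prober

    import (
        "sync"
        "time"
    )

    // resultCache holds the latest probe result per container ID.
    type resultCache struct {
        mu      sync.RWMutex
        results map[string]bool
    }

    func newResultCache() *resultCache {
        return &resultCache{results: map[string]bool{}}
    }

    func (c *resultCache) set(id string, ready bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.results[id] = ready
    }

    func (c *resultCache) get(id string) (bool, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        ready, ok := c.results[id]
        return ready, ok
    }

    // worker runs in its own goroutine, probing one container periodically
    // until stop is closed and caching each result for the pod sync path.
    func worker(id string, period time.Duration, probe func() bool, cache *resultCache, stop <-chan struct{}) {
        tick := time.NewTicker(period)
        defer tick.Stop()
        for {
            select {
            case <-stop:
                return
            case <-tick.C:
                cache.set(id, probe())
            }
        }
    }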
The network configuration error message produced while setting the Kubelet
status was being written to the "reasons" slice. Write this message to the
"messages" slice instead.
Also remove the "reasons" slice entirely, since it is not used anywhere.
Now that the kubelet has switched to incremental updates, it has complete
information about the pod update type (create, update, sync). This change pipes
this information to pod workers so that they don't have to derive the type
again.
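A rough sketch of the shape of that plumbing, with hypothetical names (SyncPodType, podWorkers.UpdatePod); the real signatures may differ:

    package kubelet

    // SyncPodType records why a pod is being synced, as known by the
    // incremental-update path.
    type SyncPodType int

    const (
        SyncPodCreate SyncPodType = iota
        SyncPodUpdate
        SyncPodSync
    )

    type Pod struct{ Name string }

    type podWorkers struct {
        syncPodFn func(pod *Pod, updateType SyncPodType) error
    }

    // UpdatePod forwards the already-known update type to the per-pod worker
    // instead of letting the worker re-derive it.
    func (p *podWorkers) UpdatePod(pod *Pod, updateType SyncPodType) error {
        return p.syncPodFn(pod, updateType)
    }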
Correct the port-forward data copying logic so that the server closes its
half of the data stream when socat exits, and the client closes its half
of the data stream when it finishes writing.
Modify the client to wait for both copies (client->server,
server->client) to finish before it unblocks.
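A sketch of those copy semantics, assuming a stream type with a CloseWrite half-close (the names here are illustrative, not the actual kubelet types):

    package portforward

    import (
        "io"
        "sync"
    )

    // halfCloser is any stream that can close just its write side.
    type halfCloser interface {
        io.ReadWriter
        CloseWrite() error
    }

    func forward(local io.ReadWriter, remote halfCloser) {
        var wg sync.WaitGroup
        wg.Add(2)

        // client -> server: close our write half once local input is exhausted,
        // so the server knows no more data is coming.
        go func() {
            defer wg.Done()
            io.Copy(remote, local)
            remote.CloseWrite()
        }()

        // server -> client: returns when the server closes its half
        // (e.g. when socat exits).
        go func() {
            defer wg.Done()
            io.Copy(local, remote)
        }()

        wg.Wait() // unblock only after both copies finish
    }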
Fix race condition in the Kubelet's handling of incoming port forward
streams. Have the client generate a connectionID header to be used to
associate the error and data streams for a single connection, instead of
assuming that streams n and n+1 go together. Attempt to generate a
pseudo connectionID in the server in the event the connectionID header
isn't present (older clients); this is a best-effort approach that only
really works with one connection at a time, whereas multiple concurrent
connections will only work reliably with a newer client that generates a
connectionID.
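A hedged sketch of how the client might label the paired streams; the header names here are illustrative rather than the actual constants:

    package portforward

    import (
        "net/http"
        "strconv"
    )

    const connectionIDHeader = "connectionID" // illustrative name

    // newStreamHeaders builds headers for the error and data streams of one
    // connection so the server can pair them without relying on stream ordering.
    func newStreamHeaders(connID int) (errHeaders, dataHeaders http.Header) {
        errHeaders = http.Header{}
        errHeaders.Set(connectionIDHeader, strconv.Itoa(connID))
        errHeaders.Set("streamType", "error")

        dataHeaders = http.Header{}
        dataHeaders.Set(connectionIDHeader, strconv.Itoa(connID))
        dataHeaders.Set("streamType", "data")
        return errHeaders, dataHeaders
    }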
Previously, GetPodStatus() would return an error if the pod had never been
created. However, we never saw the sync loop fail, because at the
beginning of the loop the pod is created if it is not found.
This works fine except for a pod that keeps crashing, because the above
logic keeps restarting it as if it had never been created.
This PR fixes the bug.
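For illustration only, the problematic shape was roughly the following (names are hypothetical): any status-lookup error is treated as "the pod was never created", so a crash-looping pod is restarted from scratch on every sync:

    package kubelet

    // syncPod sketches the buggy pattern: an error from the status lookup is
    // taken to mean the pod does not exist yet, so it is (re)created, even if
    // the pod exists but keeps crashing.
    func syncPod(getStatus func() (string, error), create func() error) error {
        status, err := getStatus()
        if err != nil {
            // Crash-looping pods also land here and get restarted as if new.
            return create()
        }
        _ = status
        // ... normal sync driven by the observed status ...
        return nil
    }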
Add an experimental network plugin implementation named "cni" that
uses the Container Networking Interface (CNI) specification for
configuring networking for pods.
https://github.com/appc/cni/blob/master/SPEC.md
The container runtime interface currently doesn't support GetContainers, and no
test should be using fakeRuntime.ContainerList. Remove it to prevent accidental
use.
Increase the supported controls on pod logging. Add validation of pod
log options. Ensure the Kubelet uses a consistent, structured way to
process pod log arguments.
Add the sinceSeconds=<durationInSeconds>, sinceTime=<RFC3339>, timestamps=<bool>,
tailLines=<number>, and limitBytes=<number> query parameters.
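For example, a hedged sketch of building such a query (the path is illustrative):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        q := url.Values{}
        q.Set("sinceSeconds", "300")   // only logs from the last 5 minutes
        q.Set("timestamps", "true")    // prefix each line with its timestamp
        q.Set("tailLines", "100")      // return at most the last 100 lines
        q.Set("limitBytes", "1048576") // cap the response at 1 MiB

        fmt.Println("/containerLogs/<namespace>/<pod>/<container>?" + q.Encode())
    }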
Add a HostIPC field to the Pod Spec to create containers that share
the host's IPC namespace.
This feature must be explicitly enabled in the apiserver using the
host-ipc-sources option.
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
The appc spec isn't currently clear about whether both fields are required, and before rkt v0.8.1, if either field
was nil it would result in a crash. Currently rkt ignores isolators that don't have both fields set, so
I think this is a reasonable approach to making sure isolators are actually used.
kubelet will log errors when the cloudprovider API is unavailable, but
will continue to persist node updates using last-known addresses.
Without this change, all nodes are marked as NotReady, and eventually
controller-manager evicts all pods.
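A small sketch of that fallback with hypothetical helpers: on a cloud provider error, log it and keep the addresses the kubelet already knows:

    package kubelet

    import "log"

    // nodeAddresses returns fresh addresses from the cloud provider when it can,
    // and otherwise logs the error and falls back to the last-known addresses so
    // the node status update can still be persisted.
    func nodeAddresses(fromCloud func() ([]string, error), lastKnown []string) []string {
        addrs, err := fromCloud()
        if err != nil {
            log.Printf("couldn't get node addresses from cloud provider: %v", err)
            return lastKnown
        }
        return addrs
    }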
In many cases clients may wish to view not-ready addresses for endpoints
in order to determine set membership before a pod is ready. For instance,
a pod that uses the service endpoints to connect to other pods under
the same service, but does not want to signal ready before it has
contacted at least a minimal number of other pods.
This is backwards compatible with old servers and clients. There is
an additional cost in size of endpoints before services ramp up, which
will add minor CPU and memory use for services that have a significant
number of pods which have not become ready.
The methods for registering a node and syncing node status to the apiserver
have grown large enough that it makes sense for them to live in a separate
place. This change adds a nodeManager to handle such interaction with the
apiserver.
The kubelet relies on the source annotation to distinguish pods from different sources.
It should always annotate the source when receiving a pod to avoid any
confusion. This commit fixes that and also makes sure we don't overwrite the
first-seen time.
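A sketch of the intended behavior with illustrative annotation keys (not the real constants):

    package config

    import "time"

    const (
        sourceAnnotation    = "kubelet.example/config.source" // illustrative key
        firstSeenAnnotation = "kubelet.example/config.seen"   // illustrative key
    )

    // annotate always records the pod's source, but only sets the first-seen
    // time if it hasn't been recorded already, so it is never overwritten.
    func annotate(annotations map[string]string, source string, now time.Time) {
        annotations[sourceAnnotation] = source
        if _, ok := annotations[firstSeenAnnotation]; !ok {
            annotations[firstSeenAnnotation] = now.Format(time.RFC3339)
        }
    }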
This refactor is in preparation for moving more state handling to the
status manager. It will become the canonical cache for the latest
information on running containers and probe status, as part of the
prober refactoring.
1. Make the reason field of StatusReport objects in the kubelet CamelCase.
2. Add a Message field to ContainerStateWaiting to describe details about the Reason.
3. Make the reason field of Events in the kubelet CamelCase.
4. Update swagger, deep-copy, and so on.
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Both GetNode and the cache.ListWatch ListFunc in the
kubelet package call List unnecessarily.
GetNodeInfo is sufficient for GetNode and makes looping
through a list of nodes to check for a matching name
unnecessary.
Resolves #13476
1. Add EventRecordQps and EventBurst parameters to the kubelet.
2. If EventRecordQps and EventBurst are set, rate limit events in the kubelet
with an independent rate limiter configured from those values (see the sketch below).
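A minimal sketch of that behavior, using golang.org/x/time/rate as a stand-in for the kubelet's own rate limiter:

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        eventRecordQps := 5.0 // events per second
        eventBurst := 10      // burst size

        limiter := rate.NewLimiter(rate.Limit(eventRecordQps), eventBurst)

        for i := 0; i < 20; i++ {
            if limiter.Allow() {
                fmt.Println("record event", i)
            } else {
                fmt.Println("drop event", i) // over the configured limit
            }
        }
    }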
If stdin is noninteractive, the io.Copy from stdin to remoteStdin will
unblock when it finishes reading from stdin. In this case, make sure to
close remoteStdin so the server knows the client won't be sending any
more data. This ensures that the remote process terminates. For example:
echo foo | kubectl exec -i <pod> -- cat
Without this change, the `cat` process never terminates and `kubectl
exec` hangs.
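A sketch of the fix with illustrative types; the essential piece is closing the remote stdin stream once the local copy returns:

    package exec

    import "io"

    // streamStdin copies local stdin to the remote stdin stream and closes the
    // remote side when the copy finishes, so the remote process (e.g. `cat`)
    // sees EOF and can exit instead of hanging.
    func streamStdin(localStdin io.Reader, remoteStdin io.WriteCloser) {
        go func() {
            io.Copy(remoteStdin, localStdin)
            remoteStdin.Close()
        }()
    }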
Fix interactive exec sessions hanging after you type 'exit'.
Add e2e test to cover noninteractive stdin: `echo a | kubectl exec -i <pod>
cat`
Add e2e test to cover pseudo-interactive stdin: `kubectl exec -i <pod> bash`
Prep for sending multiple data frames over multiple streams in the remote command
test, which is more likely to find flakes (requires a bump of spdystream
once an issue with the frame worker queues not being fully drained when
a GOAWAY frame is received has been fixed).