Not all clients and systems can support SPDY protocols. This commit adds
support for two new websocket protocols: one to stream logs from a pod,
and the other to tunnel exec over a websocket.
Browser support for chunked encoding is still poor, and web consoles
that wish to show pod logs may need to make compromises to display the
output. The /pods/<name>/log endpoint now supports websocket upgrade to
the 'binary.k8s.io' subprotocol, which sends chunks of logs as binary to
the client. Messages are written as logs are streamed from the container
daemon, so flushing should be unaffected.
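As an illustration, a minimal client sketch. The gorilla/websocket library, the URL, and the follow parameter are assumptions here; only the 'binary.k8s.io' subprotocol and the binary-chunk framing come from this change.

```go
package main

import (
	"fmt"

	"github.com/gorilla/websocket"
)

func main() {
	// Negotiate the 'binary.k8s.io' subprotocol described above.
	dialer := websocket.Dialer{Subprotocols: []string{"binary.k8s.io"}}
	conn, _, err := dialer.Dial(
		"wss://apiserver.example/api/v1/namespaces/default/pods/mypod/log?follow=true",
		nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	for {
		// Each binary message is one chunk of log output, written as
		// the container daemon streams it.
		_, chunk, err := conn.ReadMessage()
		if err != nil {
			return // stream closed by the server
		}
		fmt.Print(string(chunk))
	}
}
```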
Browsers do not support raw communication over SPDY, and
some languages lack libraries for it and HTTP/2. The Kubelet supports
upgrade to WebSocket instead of SPDY, and will multiplex STDOUT/IN/ERR
over websockets by prepending each binary message with a single byte
representing the channel (0 for IN, 1 for OUT, and 2 for ERR). Because
framing on WebSockets suffers from head-of-line blocking, clients and
other server code should ensure that no particular stream blocks. An
alternative subprotocol 'base64.channel.k8s.io' base64 encodes the body
and uses '0'-'9' to represent the channel for ease of use in browsers.
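A sketch of how a client might demultiplex these frames. The base subprotocol name 'channel.k8s.io' (implied by the alternative's name), the package, and the helper names are assumptions; the framing itself is as described above.

```go
package wsdemo

import "encoding/base64"

// Channel byte values for 'channel.k8s.io', per the framing above.
const (
	stdinChannel  = 0
	stdoutChannel = 1
	stderrChannel = 2
)

// demux splits a binary message into its channel byte and payload.
func demux(msg []byte) (channel byte, payload []byte) {
	if len(msg) == 0 {
		return 0, nil
	}
	return msg[0], msg[1:]
}

// demuxBase64 handles 'base64.channel.k8s.io', where the channel is an
// ASCII digit ('0'-'9') and the payload is base64 encoded.
func demuxBase64(msg []byte) (channel byte, payload []byte, err error) {
	if len(msg) == 0 {
		return 0, nil, nil
	}
	payload, err = base64.StdEncoding.DecodeString(string(msg[1:]))
	return msg[0] - '0', payload, err
}
```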
Change all references to the container ID in pkg/kubelet/... to the
strong type defined in pkg/kubelet/container: ContainerID
The motivation for this change is to make the format of the ID
unambiguous, specifically whether or not it includes the runtime
prefix (e.g. "docker://").
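For reference, a minimal sketch of the strong type's shape, assuming it separates the runtime prefix from the raw ID; the exact fields may differ.

```go
package container

import "fmt"

// ContainerID splits "docker://abc123" into its runtime prefix and
// runtime-specific identifier, so the format is never ambiguous.
type ContainerID struct {
	Type string // runtime prefix, e.g. "docker"
	ID   string // runtime-specific container identifier
}

// String reconstructs the unambiguous "<type>://<id>" form.
func (c ContainerID) String() string {
	return fmt.Sprintf("%s://%s", c.Type, c.ID)
}
```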
This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 to just pulling
`pause` and `fluentd-elasticsearch` through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
The current implementation considers a source seen when it receives a SET at
kubelet/config/config.go. However, the main kubelet sync loop may not have
received the pod update from the source via the channel. This change ensures
that the kubelet considers all sources ready only after the sync loop has
seen updates from every source.
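A rough sketch of the new readiness tracking, with illustrative names.

```go
package config

import "sync"

// sourcesSeen tracks which configured sources the sync loop has
// actually observed, rather than which ones config has received a
// SET from.
type sourcesSeen struct {
	mu         sync.Mutex
	seen       map[string]bool
	configured []string
}

// markSeen is called from the sync loop when an update from source
// arrives over the channel.
func (s *sourcesSeen) markSeen(source string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.seen[source] = true
}

// allReady reports whether the sync loop has seen every configured source.
func (s *sourcesSeen) allReady() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, src := range s.configured {
		if !s.seen[src] {
			return false
		}
	}
	return true
}
```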
Each container with a readiness probe has an individual goroutine that
handles periodic probing for that container. The results are cached and
written to the status.Manager in the pod sync path.
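A sketch of the per-container worker, assuming illustrative probe and cache types.

```go
package prober

import (
	"sync"
	"time"
)

// resultCache holds the latest readiness result per container.
type resultCache struct {
	mu      sync.Mutex
	results map[string]bool
}

func (c *resultCache) set(id string, ready bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.results[id] = ready
}

// probeWorker runs in its own goroutine for one container, probing on
// a fixed period and caching the result for the pod sync path to read.
func probeWorker(containerID string, period time.Duration,
	probe func() bool, cache *resultCache, stop <-chan struct{}) {
	t := time.NewTicker(period)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			cache.set(containerID, probe())
		}
	}
}
```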
The network configuration error message produced while setting Kubelet
status was being written to the "reasons" slice. Write this message to
the "messages" slice instead.
Also remove the "reasons" slice entirely, since it is not used anywhere.
Now that the kubelet has switched to incremental updates, it has complete
information about the pod update type (create, update, sync). This change
pipes that information to the pod workers so that they don't have to derive
the type again.
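A sketch of the piped update type; the constant names are illustrative.

```go
package kubetypes

// SyncPodType is an explicit update type handed to pod workers,
// instead of having each worker re-derive it.
type SyncPodType int

const (
	SyncPodCreate SyncPodType = iota // pod seen for the first time
	SyncPodUpdate                    // spec of an existing pod changed
	SyncPodSync                      // periodic resync, no spec change
)
```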
Correct port-forward data copying logic so that the server closes its
half of the data stream when socat exits, and the client closes its half
of the data stream when it finishes writing.
Modify the client to wait for both copies (client->server,
server->client) to finish before it unblocks.
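A sketch of the client-side copy logic described above, with illustrative names.

```go
package portforward

import (
	"io"
	"sync"
)

// copyBoth mirrors the client behavior described above: each copy
// closes its write half when done, and the caller stays blocked until
// both directions finish.
func copyBoth(local, remote io.ReadWriteCloser) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		io.Copy(remote, local) // client -> server
		remote.Close()         // done writing our half of the stream
	}()
	go func() {
		defer wg.Done()
		io.Copy(local, remote) // server -> client
		local.Close()
	}()
	wg.Wait() // unblock only after both copies have finished
}
```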
Fix race condition in the Kubelet's handling of incoming port forward
streams. Have the client generate a connectionID header to be used to
associate the error and data streams for a single connection, instead of
assuming that streams n and n+1 go together. Attempt to generate a
pseudo connectionID in the server in the event the connectionID header
isn't present (older clients); this is a best-effort approach that only
really works with 1 connection at a time, whereas multiple concurrent
connections will only work reliably with a newer client that is
generating connectionID.
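A sketch of the client side, assuming an httpstream-style connection; the header and stream type names are illustrative.

```go
package portforward

import (
	"io"
	"net/http"
)

// streamConn is a minimal stand-in for an httpstream-style connection.
type streamConn interface {
	CreateStream(headers http.Header) (io.ReadWriteCloser, error)
}

// newStreamPair tags both streams of one port-forward connection with
// a shared connectionID so the server can pair them, instead of
// assuming that streams n and n+1 belong together.
func newStreamPair(conn streamConn, connectionID string) (data, errs io.ReadWriteCloser, err error) {
	errHeaders := http.Header{}
	errHeaders.Set("connectionID", connectionID)
	errHeaders.Set("streamType", "error")
	if errs, err = conn.CreateStream(errHeaders); err != nil {
		return nil, nil, err
	}
	dataHeaders := http.Header{}
	dataHeaders.Set("connectionID", connectionID)
	dataHeaders.Set("streamType", "data")
	if data, err = conn.CreateStream(dataHeaders); err != nil {
		return nil, nil, err
	}
	return data, errs, nil
}
```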
Previously, GetPodStatus() would return an error if the pod had never
been created. We never saw the sync loop fail, though, because at the
beginning of the loop the pod is created if it is not found.
This works fine except for pods that keep crashing, since the above
logic keeps restarting them as if they had never been created.
This PR fixes the bug.
Add an experimental network plugin implementation named "cni" that
uses the Container Networking Interface (CNI) specification for
configuring networking for pods.
https://github.com/appc/cni/blob/master/SPEC.md
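Per the CNI spec linked above, a plugin binary receives its parameters as CNI_* environment variables and the network config JSON on stdin. A minimal invocation sketch follows; the plugin path, IDs, and config are illustrative.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Invoke a CNI plugin binary: parameters in CNI_* environment
	// variables, network configuration on stdin.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=abc123",
		"CNI_NETNS=/var/run/netns/abc123",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = strings.NewReader(`{"name": "mynet", "type": "bridge"}`)
	out, err := cmd.Output() // plugin prints its result as JSON on stdout
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out)
}
```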
The container runtime interface currently doesn't support GetContainers, and
no test should be using fakeRuntime.ContainerList. Remove it to prevent
accidental use.
Increase the supported controls on pod logging. Add validation of pod
log options. Ensure the Kubelet is using a consistent, structured way to
process pod log arguments.
Add ?sinceSeconds=<durationInSeconds>, ?sinceTime=<RFC3339>, ?timestamps=<bool>,
?tailLines=<number>, and ?limitBytes=<number>
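For example, a sketch of building a log request with the new options; the path and values are illustrative.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	v := url.Values{}
	v.Set("sinceSeconds", "300") // last 5 minutes; use sinceSeconds or sinceTime, not both
	v.Set("timestamps", "true")  // prefix each line with a timestamp
	v.Set("tailLines", "100")    // at most the last 100 lines
	v.Set("limitBytes", "65536") // stop after 64 KiB of output
	fmt.Println("/pods/mypod/log?" + v.Encode())
}
```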