This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 to switching just
`pause` and `fluentd-elasticsearch` to come through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
The current implementation considers a source seen when it receives a SET at
kubelet/config/config.go. However, the main kubelet sync loop may not have
received the pod update from the source via the channel. This change ensures
that the kubelet considers all sources ready only after the sync loop
has seen all the sources.
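A minimal sketch of the bookkeeping, using illustrative names (PodUpdate, seenSources) rather than the actual kubelet types:

```go
package sketch

// PodUpdate is a simplified stand-in for the kubelet's config update.
type PodUpdate struct {
	Source string // e.g. "api", "file", "http"
}

// seenSources tracks which config sources the sync loop itself has
// observed, as opposed to which sources have merely delivered a SET
// into kubelet/config/config.go.
type seenSources map[string]bool

// record marks a source as seen when the sync loop pulls one of its
// updates off the channel.
func (s seenSources) record(u PodUpdate) { s[u.Source] = true }

// allReady reports whether the sync loop has seen every expected source.
func (s seenSources) allReady(expected []string) bool {
	for _, src := range expected {
		if !s[src] {
			return false
		}
	}
	return true
}
```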
Each container with a readiness probe has an individual goroutine that
handles periodic probing for that container. The results are cached, and
written to the status.Manager in the pod sync path.
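A rough sketch of the per-container worker, with hypothetical names (probeWorker, resultCache); results are only cached here and written to the status manager later, in the pod sync path:

```go
package sketch

import (
	"sync"
	"time"
)

// Result is the outcome of a single readiness probe.
type Result bool

// resultCache holds the latest readiness result per container ID.
type resultCache struct {
	mu      sync.RWMutex
	results map[string]Result
}

func (c *resultCache) set(id string, r Result) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.results[id] = r
}

// probeWorker runs in its own goroutine, probing one container
// periodically until stop is closed.
func probeWorker(id string, period time.Duration, probe func() Result,
	cache *resultCache, stop <-chan struct{}) {
	tick := time.NewTicker(period)
	defer tick.Stop()
	for {
		select {
		case <-stop:
			return
		case <-tick.C:
			cache.set(id, probe())
		}
	}
}
```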
The network configuration error message was being written to the
"reasons" slice while setting the kubelet status. Write this message to
the "messages" slice instead.
Also remove the "reasons" slice entirely, since it is not used anywhere.
Now that the kubelet has switched to incremental updates, it has
complete information about the pod update type (create, update, sync).
This change pipes that information through to the pod workers so that
they don't have to derive the type again.
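A minimal sketch of the plumbing; SyncPodType and its constants are placeholders, not the kubelet's actual identifiers:

```go
package sketch

// Pod is a placeholder for the kubelet's pod object.
type Pod struct{ Name string }

// SyncPodType records why a pod worker is being invoked.
type SyncPodType int

const (
	SyncPodCreate SyncPodType = iota // first time the kubelet sees this pod
	SyncPodUpdate                    // the pod spec changed
	SyncPodSync                      // periodic resync, no spec change
)

// syncPod receives the update type from the sync loop, so the worker no
// longer has to re-derive it by inspecting its own state.
func syncPod(pod *Pod, updateType SyncPodType) error {
	switch updateType {
	case SyncPodCreate:
		// first-seen bookkeeping, volume setup, etc.
	case SyncPodUpdate:
		// reconcile running containers against the new spec.
	case SyncPodSync:
		// verify the pod still matches the desired state.
	}
	return nil
}
```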
Correct port-forward data copying logic so that the server closes its
half of the data stream when socat exits, and the client closes its half
of the data stream when it finishes writing.
Modify the client to wait for both copies (client->server,
server->client) to finish before it unblocks.
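A client-side sketch of the half-close-and-wait pattern, assuming plain net.Conn streams (the real implementation runs over multiplexed streams):

```go
package sketch

import (
	"io"
	"net"
	"sync"
)

// proxy copies data in both directions and returns only after both
// copies have finished, mirroring the client-side fix.
func proxy(local, remote net.Conn) {
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		io.Copy(remote, local) // client -> server; returns at local EOF
		// Done writing: close our half so the server sees EOF, while
		// still letting the server->client copy drain.
		if c, ok := remote.(*net.TCPConn); ok {
			c.CloseWrite()
		}
	}()

	go func() {
		defer wg.Done()
		io.Copy(local, remote) // server -> client
	}()

	wg.Wait() // unblock only after both directions complete
}
```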
Fix race condition in the Kubelet's handling of incoming port forward
streams. Have the client generate a connectionID header to be used to
associate the error and data streams for a single connection, instead of
assuming that streams n and n+1 go together. Attempt to generate a
pseudo connectionID in the server in the event the connectionID header
isn't present (older clients); this is a best-effort approach that only
really works with 1 connection at a time, whereas multiple concurrent
connections will only work reliably with a newer client that is
generating connectionID.
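A sketch of the stream pairing with stand-in types (stream, connPairs); the real code pairs SPDY streams by their request headers:

```go
package sketch

import "sync"

// stream is a stand-in for an incoming stream plus its headers.
type stream struct {
	headers map[string]string
	kind    string // "data" or "error"
}

// connPairs accumulates streams until both halves of a connection have
// arrived, keyed by the client-generated connectionID header.
type connPairs struct {
	mu    sync.Mutex
	pairs map[string][2]*stream // connectionID -> {data, error}
}

// add files a stream under its connectionID and reports whether the
// data/error pair is now complete. Older clients omit the header, in
// which case the server falls back to a pseudo ID, which is only
// reliable for one connection at a time.
func (c *connPairs) add(s *stream) (complete bool) {
	id := s.headers["connectionID"]
	c.mu.Lock()
	defer c.mu.Unlock()
	p := c.pairs[id]
	if s.kind == "data" {
		p[0] = s
	} else {
		p[1] = s
	}
	c.pairs[id] = p
	return p[0] != nil && p[1] != nil
}
```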
Previously, GetPodStatus() would return an error if the pod had never
been created. We never saw the sync loop fail because of this, since at
the beginning of the loop the pod is created if it is not found.
This works fine except for a pod that keeps crashing: the above logic
keeps restarting the pod as if it had never been created.
This PR fixes the bug.
Add an experimental network plugin implementation named "cni" that
uses the Container Networking Interface (CNI) specification for
configuring networking for pods.
https://github.com/appc/cni/blob/master/SPEC.md
The container runtime interface currently doesn't support GetContainers,
and no test should be using fakeRuntime.ContainerList. Remove it to
prevent accidental use.
Increase the supported controls on pod logging. Add validation to pod
log options. Ensure the kubelet uses a consistent, structured way to
process pod log arguments.
Add ?sinceSeconds=<durationInSeconds>, ?sinceTime=<RFC3339>,
?timestamps=<bool>, ?tailLines=<number>, and ?limitBytes=<number>
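A sketch of the options and the kind of validation involved; the field names mirror the query parameters, but the exact rules below are assumptions, not the canonical API validation:

```go
package sketch

import (
	"errors"
	"time"
)

// PodLogOptions mirrors the new query parameters.
type PodLogOptions struct {
	SinceSeconds *int64     // ?sinceSeconds=<durationInSeconds>
	SinceTime    *time.Time // ?sinceTime=<RFC3339>
	Timestamps   bool       // ?timestamps=<bool>
	TailLines    *int64     // ?tailLines=<number>
	LimitBytes   *int64     // ?limitBytes=<number>
}

// validate rejects option combinations the kubelet cannot honor.
func (o *PodLogOptions) validate() error {
	if o.SinceSeconds != nil && o.SinceTime != nil {
		return errors.New("at most one of sinceSeconds and sinceTime may be set")
	}
	if o.SinceSeconds != nil && *o.SinceSeconds < 1 {
		return errors.New("sinceSeconds must be a positive integer")
	}
	if o.TailLines != nil && *o.TailLines < 0 {
		return errors.New("tailLines must be non-negative")
	}
	if o.LimitBytes != nil && *o.LimitBytes < 1 {
		return errors.New("limitBytes must be a positive integer")
	}
	return nil
}
```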
Add a HostIPC field to the Pod Spec so containers can share the
host's IPC namespace.
This feature must be explicitly enabled in the apiserver using the
host-ipc-sources option.
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
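A sketch of the new HostIPC field; the surrounding struct and JSON tag here are illustrative rather than the exact API placement:

```go
package sketch

// SecurityContext is a stand-in for the pod-level security settings.
// When HostIPC is true, the pod's containers share the host's IPC
// namespace; the apiserver only admits such pods from sources listed
// in its host-ipc-sources option.
type SecurityContext struct {
	HostIPC bool `json:"hostIPC,omitempty"`
}
```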
The appc spec isn't currently clear about whether both fields are
required, and before rkt v0.8.1, if either field was nil it would result
in a crash. Currently rkt ignores isolators that don't have both fields
set, so I think this is a reasonable approach to making sure isolators
are actually used.
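A minimal sketch of the guard, with stand-in field names (Request/Limit) for the two isolator fields:

```go
package sketch

// isolator stands in for an appc resource isolator whose request or
// limit may be absent in a manifest.
type isolator struct {
	Request *string
	Limit   *string
}

// usable keeps only isolators with both fields set, matching rkt's
// post-v0.8.1 behavior of ignoring incomplete isolators instead of
// crashing on a nil field.
func usable(isolators []isolator) []isolator {
	var out []isolator
	for _, iso := range isolators {
		if iso.Request == nil || iso.Limit == nil {
			continue // incomplete isolator: skip rather than crash
		}
		out = append(out, iso)
	}
	return out
}
```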
kubelet will log errors when the cloud provider API is unavailable, but
will continue to persist node updates using the last-known addresses.
Without this change, all nodes are marked NotReady, and eventually the
controller-manager evicts all pods.
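A sketch of the fallback, assuming hypothetical helper names (nodeAddresses, setNodeAddresses):

```go
package sketch

import "log"

// nodeAddresses is a placeholder for the cloud-provider address lookup.
var nodeAddresses func(name string) ([]string, error)

// setNodeAddresses logs a cloud-provider failure but keeps the node's
// last-known addresses, so the status update still succeeds and the
// node does not flip to NotReady.
func setNodeAddresses(name string, lastKnown []string) []string {
	addrs, err := nodeAddresses(name)
	if err != nil {
		log.Printf("failed to get node addresses from cloud provider: %v", err)
		return lastKnown
	}
	return addrs
}
```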
In many cases clients may wish to view not-ready addresses for endpoints
in order to determine set membership before a pod is ready. For
instance, a pod that uses the service endpoints to connect to other pods
under the same service, but does not want to signal ready before it has
contacted at least a minimal number of other pods.
This is backwards compatible with old servers and clients. There is
an additional cost in size of endpoints before services ramp up, which
will add minor CPU and memory use for services that have a significant
number of pods which have not become ready.
The methods for registering a node and syncing node status to the apiserver
have grown large enough that it makes sense for them to live in a separate
place. This change adds a nodeManager to handle such interaction with the
apiserver.
kubelet relies on the source annotation to distinguish pods from
different sources. It should always annotate the source when receiving a
pod to avoid any confusion. This commit fixes that and also makes sure
we don't overwrite the first-seen time.
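A minimal sketch of the stamping logic; the annotation keys shown are assumptions for illustration:

```go
package sketch

// Illustrative annotation keys; the kubelet defines its own constants.
const (
	sourceKey    = "kubernetes.io/config.source"
	firstSeenKey = "kubernetes.io/config.seen"
)

// annotate stamps the source on every incoming pod and preserves the
// first-seen time if one was already recorded.
func annotate(annotations map[string]string, source, now string) {
	annotations[sourceKey] = source
	if _, ok := annotations[firstSeenKey]; !ok {
		annotations[firstSeenKey] = now // set only on first sight
	}
}
```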
This refactor is in preparation for moving more state handling to the
status manager. It will become the canonical cache for the latest
information on running containers and probe status, as part of the
prober refactoring.
1. Make the reason field of StatusReport objects in the kubelet CamelCase.
2. Add a Message field to ContainerStateWaiting to describe details about the Reason.
3. Make the reason field of Events in the kubelet CamelCase.
4. Update swagger, deep-copy, and so on.
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Both GetNode and the cache.ListWatch ListFunc in the
kubelet package call List unnecessarily.
GetNodeInfo is sufficient for GetNode and makes looping
through a list of nodes to check for a matching name
unnecessary.
Resolves #13476
1. Add EventRecordQps and EventBurst parameters to the kubelet.
2. If EventRecordQps and EventBurst are set, rate limit events in the
kubelet with an independent rate limiter configured accordingly.
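A sketch of the idea using a generic token-bucket limiter (golang.org/x/time/rate here; the kubelet uses its own rate-limiter utility):

```go
package sketch

import "golang.org/x/time/rate"

// newEventLimiter builds a token-bucket limiter from the new kubelet
// parameters: a sustained QPS plus a burst allowance.
func newEventLimiter(qps float64, burst int) *rate.Limiter {
	return rate.NewLimiter(rate.Limit(qps), burst)
}

// recordEvent emits an event only if the limiter has tokens left;
// events beyond the configured rate are dropped.
func recordEvent(l *rate.Limiter, emit func()) {
	if l.Allow() {
		emit()
	}
}
```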
If stdin is noninteractive, the io.Copy from stdin to remoteStdin will
unblock when it finishes reading from stdin. In this case, make sure to
close remoteStdin so the server knows the client won't be sending any
more data. This ensures that the remote process terminates. For example:
echo foo | kubectl exec -i <pod> -- cat
Without this change, the `cat` process never terminates and `kubectl
exec` hangs.
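A sketch of the client-side fix, assuming remoteStdin is the write half of the remote process's stdin stream:

```go
package sketch

import (
	"io"
	"os"
)

// streamStdin copies local stdin to the remote process. With
// noninteractive stdin (e.g. `echo foo | kubectl exec -i ...`), the
// copy returns at EOF; closing remoteStdin then tells the server no
// more input is coming, so the remote `cat` can terminate.
func streamStdin(remoteStdin io.WriteCloser) {
	defer remoteStdin.Close() // the fix: propagate EOF to the server
	io.Copy(remoteStdin, os.Stdin)
}
```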
Fix interactive exec sessions hanging after you type 'exit'.
Add an e2e test to cover noninteractive stdin: `echo a | kubectl exec -i <pod> cat`
Add an e2e test to cover pseudo-interactive stdin: `kubectl exec -i <pod> bash`
Prep for sending multiple data frames over multiple streams in the
remote command test, which is more likely to find flakes (requires a
bump of spdystream once an issue where the frame worker queues are not
fully drained when a goaway frame is received is fixed).
PR #13293 added a safety check to not remove a pod directory if the
child volumes directory is not empty. This logic is faulty because the
kubelet may have directory structures like
podUID/volumes/volumeKind/volumeName. E.g.,
`056db95d-50ee-11e5-a2e4-42010af0ba1d/volumes/kubernetes.io~empty-dir/default-token-al3r2`
This change fixes that by properly listing all volumes under a pod.
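A sketch of the corrected listing, walking the two-level layout (listVolumes is a hypothetical helper name):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// listVolumes walks podDir/volumes/<volumeKind>/<volumeName> and returns
// every volume directory, instead of assuming volumes/ has flat children.
func listVolumes(podDir string) ([]string, error) {
	var volumes []string
	volumesDir := filepath.Join(podDir, "volumes")
	kinds, err := os.ReadDir(volumesDir)
	if err != nil {
		return nil, err
	}
	for _, kind := range kinds {
		names, err := os.ReadDir(filepath.Join(volumesDir, kind.Name()))
		if err != nil {
			return nil, err
		}
		for _, name := range names {
			volumes = append(volumes,
				filepath.Join(volumesDir, kind.Name(), name.Name()))
		}
	}
	return volumes, nil
}
```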
Before, the kubelet performed global cleanup tasks on every iteration.
After PR #13003, the kubelet performs the tasks on every sync interval
(10 seconds).
This PR decouples the housekeeping period from the sync interval to
ensure that the kubelet cleans up promptly, while not too often (no more
than once every minimum housekeeping period).
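A sketch of the decoupled loop, with illustrative names (syncLoop, minimumHousekeepingPeriod):

```go
package sketch

import "time"

// syncLoop lets pod updates drive syncing while a separate ticker
// drives cleanup, so housekeeping happens promptly but no more often
// than the minimum housekeeping period.
func syncLoop(updates <-chan struct{}, minimumHousekeepingPeriod time.Duration,
	cleanup func()) {
	housekeeping := time.NewTicker(minimumHousekeepingPeriod)
	defer housekeeping.Stop()
	for {
		select {
		case <-updates:
			// handle the pod update; no global cleanup here
		case <-housekeeping.C:
			cleanup() // decoupled from the sync interval
		}
	}
}
```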
When the pod annotations are updated in the apiserver, update the pod.
Annotations may be used to convey attributes that are required for
pod execution, such as networking parameters.
The health check is no longer needed, since health checks are no longer
run by the master but are instead reported by the kubelet; a host with
an incorrect host name will not be updated and will show 'NotReady'
status.
Signed-off-by: Sami Wagiaalla <swagiaal@redhat.com>
Allow the user to specify the resolver configuration file that is used
to determine the default DNS parameters. This defaults to the system's
/etc/resolv.conf.
Currently, whenever there is any update, the kubelet forces all pod
workers to sync again, causing resource contention and hence performance
degradation.
This commit flips kubelet to use incremental updates (as opposed to snapshots).
This allows us to know what pods have changed and send updates to those pod
workers only. The `SyncPods` function has been replaced with individual
handlers, each handling an operation (ADD, REMOVE, UPDATE). Pod workers are
still triggered periodically, and kubelet performs periodic cleanup as well.
This commit also spawns a new goroutine solely responsible for killing
pods. This is necessary because pod killing could now hold up the sync
loop for an indefinitely long time, since users can define the graceful
termination period in the container spec.
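A sketch of the per-operation dispatch that replaces SyncPods (type and handler names here are illustrative):

```go
package sketch

// Operation identifies what a source did to a set of pods.
type Operation int

const (
	ADD Operation = iota
	UPDATE
	REMOVE
)

// PodUpdate carries only the pods that changed, not a full snapshot.
type PodUpdate struct {
	Op   Operation
	Pods []string // simplified: pod names only
}

// Handler replaces the old monolithic SyncPods with per-operation hooks.
type Handler interface {
	HandlePodAdditions(pods []string)
	HandlePodUpdates(pods []string)
	HandlePodDeletions(pods []string)
}

// dispatch routes an incremental update to the matching handler, so
// only the affected pod workers are triggered.
func dispatch(h Handler, u PodUpdate) {
	switch u.Op {
	case ADD:
		h.HandlePodAdditions(u.Pods)
	case UPDATE:
		h.HandlePodUpdates(u.Pods)
	case REMOVE:
		h.HandlePodDeletions(u.Pods)
	}
}
```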