Automatic merge from submit-queue (batch tested with PRs 60435, 60334, 60458, 59301, 60125). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
dockershim: don't check pod IP in StopPodSandbox
We're about to tear the container down, so there's no point in checking the pod IP.
It also suppresses an annoying error message caused by the kubelet making multiple
parallel StopPodSandbox calls for the same sandbox:
docker_sandbox.go:355] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "docker-registry-1-deploy_default": Unexpected command output nsenter: cannot open /proc/22646/ns/net: No such file or directory
1) A first StopPodSandbox() request triggered by SyncLoop(PLEG) for
a ContainerDied event calls into TearDownPod() and thus the network
plugin. Until this completes, networkReady=true for the
sandbox.
2) A second StopPodSandbox() request triggered by SyncLoop(REMOVE)
calls PodSandboxStatus() and calls into the network plugin to read
the IP address because networkReady=true.
3) The first request exits the network plugin, sets networkReady=false,
and calls StopContainer() on the sandbox. This destroys the network
namespace.
4) The second request finally gets around to running nsenter but
the network namespace is already destroyed. It returns an error
which is logged by getIP().
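For illustration, a minimal sketch of the behavior after this change, using simplified stand-in types rather than the actual dockershim code:
```go
package sketch

// Sketch only: sandboxService and networkPlugin are simplified stand-ins,
// not the real dockershim types.
type networkPlugin interface {
    TearDownPod(namespace, name, containerID string) error
    GetPodNetworkStatus(namespace, name, containerID string) (ip string, err error)
}

type sandboxService struct {
    plugin networkPlugin
}

// StopPodSandbox no longer asks the plugin for the pod IP before teardown:
// the namespace is about to go away, so the lookup is useless and can race
// with a concurrent teardown, producing the nsenter error shown above.
func (s *sandboxService) StopPodSandbox(namespace, name, containerID string) error {
    // NOTE: no GetPodNetworkStatus call here any more.
    if err := s.plugin.TearDownPod(namespace, name, containerID); err != nil {
        return err
    }
    // ... stop the sandbox container itself ...
    return nil
}
```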
```release-note
NONE
```
@yujuhong @freehan
Automatic merge from submit-queue (batch tested with PRs 57824, 58806, 59410, 59280). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
2nd try at using a vanity GCR name
The 2nd commit here is the changes relative to the reverted PR. Please focus review attention on that.
This is the 2nd attempt. The previous try (#57573) was reverted while we
figured out the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest. To publish
an image, push to staging-k8s.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of google-owned projects (working on it), then we just
do a one-time sync, and change the google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
xref https://github.com/kubernetes/release/issues/281
TL;DR:
* The new `staging-k8s.gcr.io` is where we push images. It is literally an alias to `gcr.io/google_containers` (the existing repo) and is hosted in the US.
* The contents of `staging-k8s.gcr.io` are automatically synced to `{asia,eu,us}-k8s.gcr.io`.
* The new `k8s.gcr.io` will be a read-only alias to whichever regional repo is closest to you.
* In the future, images will be promoted from `staging` to regional "prod" more explicitly and auditably.
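For illustration only (the image name below is an example, not something introduced by this PR), the change is just a registry-prefix swap in image references:
```go
package sketch

// Example only: the same image, addressed via the legacy and the vanity name.
const (
    legacyPauseImage = "gcr.io/google_containers/pause-amd64:3.0"
    vanityPauseImage = "k8s.gcr.io/pause-amd64:3.0"
)
```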
```release-note
Use "k8s.gcr.io" for pulling container images rather than "gcr.io/google_containers". Images are already synced, so this should not impact anyone materially.
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
This also incorporates the version string into the package name so
that incompatible versions will fail to connect.
Arbitrary choices:
- The proto3 package name is runtime.v1alpha2. The proto compiler
normally translates this to a go package of "runtime_v1alpha2", but
I renamed it to "v1alpha2" for consistency with existing packages.
- kubelet/apis/cri is used as "internalapi". I left it alone and put the
public "runtimeapi" in kubelet/apis/cri/runtime.
We let dockershim implement the kubelet's internal (CRI) API as an
intermediary step before transitioning fully to communicate using gRPC.
Now that the kubelet has been communicating with the runtime over gRPC for
multiple releases, we can safely retire the extra interface in
dockershim.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
dockershim: bump the minimum supported docker version to 1.11
Drop the 1.10 compatibility code.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50457, 55558, 53483, 55731, 52842). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
improve the logic setting cgroupparent in RunPodSandbox
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The logic for setting the cgroup parent is too confusing!
The old logic is:
1. set CgroupParent correctly
2. reset CgroupParent incorrectly
3. set CgroupParent again (refer to #42055)
The logic is too convoluted, and it is easy for people to fall into this trap.
We only need to set it in one place.
kubernetes/pkg/kubelet/dockershim/docker_sandbox.go
```go
func (ds *dockerService) makeSandboxDockerConfig(c *runtimeapi.PodSandboxConfig, image string) (*dockertypes.ContainerCreateConfig, error) {
    ....
    // Apply linux-specific options.
    if lc := c.GetLinux(); lc != nil {
        if err := ds.applySandboxLinuxOptions(hc, lc, createConfig, image, securityOptSep); err != nil {
            return nil, err
        }
    }
    // Apply resource options.
    setSandboxResources(hc) // <-- resets the CgroupParent incorrectly
    // Apply cgroupsParent derived from the sandbox config.
    if lc := c.GetLinux(); lc != nil {
        // Apply Cgroup options.
        cgroupParent, err := ds.GenerateExpectedCgroupParent(lc.CgroupParent)
        if err != nil {
            return nil, fmt.Errorf("failed to generate cgroup parent in expected syntax for container %q: %v", c.Metadata.Name, err)
        }
        hc.CgroupParent = cgroupParent
    }
```
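A rough sketch of the simplification, with simplified types standing in for the dockershim ones: derive and assign the cgroup parent in exactly one place, after resource options are applied, so nothing can clobber it afterwards.
```go
package sketch

import "fmt"

type hostConfig struct {
    CgroupParent string
}

type sandboxConfig struct {
    Name         string
    CgroupParent string
}

// applySandboxResources sets resource limits but deliberately leaves
// CgroupParent alone; the parent is decided in exactly one place below.
func applySandboxResources(hc *hostConfig) {
    // ... cpu shares, memory limits, oom score adjustments, etc. ...
}

func makeSandboxHostConfig(c *sandboxConfig, generateParent func(string) (string, error)) (*hostConfig, error) {
    hc := &hostConfig{}
    applySandboxResources(hc)

    // The single place where CgroupParent is derived and assigned.
    parent, err := generateParent(c.CgroupParent)
    if err != nil {
        return nil, fmt.Errorf("failed to generate cgroup parent for %q: %v", c.Name, err)
    }
    hc.CgroupParent = parent
    return hc, nil
}
```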
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55050, 53464, 54936, 55028, 54928). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
kubelet: dockershim: remove orphaned checkpoint files
Fixes https://github.com/kubernetes/kubernetes/issues/55070
Currently, `ListPodSandbox()` returns a combined list of sandboxes populated from both the runtime and the dockershim checkpoint files. However the sandboxes in the checkpoint files might not exist anymore.
The kubelet sees the sandbox returned by `ListPodSandbox()`, determines that it shouldn't be running, and calls `StopPodSandbox()` on it. This generates an error when `StopContainer()` is called because the container does not exist, yet the checkpoint file is not cleaned up. This leads to subsequent calls to `StopPodSandbox()` that fail in the same way each time.
This PR removes the checkpoint file if `StopContainer()` fails because the container is not found.
The only other place `RemoveCheckpoint()` is called, apart from when the checkpoint is corrupt, is from `RemoveSandbox()`. If the container does not exist, what `RemoveSandbox()` would have done has effectively been done already, so this is just cleanup.
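A minimal sketch of that cleanup path (identifiers such as checkpointStore and isContainerNotFound are illustrative, not the real dockershim names):
```go
package sketch

type checkpointStore interface {
    RemoveCheckpoint(podSandboxID string) error
}

type containerRuntime interface {
    StopContainer(id string, timeoutSeconds int64) error
}

// stopSandboxAndCleanup stops the sandbox container and, if the runtime
// reports that it no longer exists, removes the now-orphaned checkpoint so GC
// stops retrying a StopPodSandbox that can never succeed.
func stopSandboxAndCleanup(rt containerRuntime, cp checkpointStore,
    podSandboxID string, isContainerNotFound func(error) bool) error {

    err := rt.StopContainer(podSandboxID, 0)
    if err != nil && isContainerNotFound(err) {
        // The container is already gone; the checkpoint is all that is left.
        return cp.RemoveCheckpoint(podSandboxID)
    }
    return err
}
```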
@derekwaynecarr @eparis @freehan @dcbw
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
dockershim: fine-tune network-ready handling on sandbox teardown and removal
If sandbox teardown results in an error, GC will periodically attempt
to remove the sandbox again. Until the sandbox is removed, pod
sandbox status calls will attempt to enter the pod's namespace and
retrieve the pod IP, but the first teardown attempt may have already
removed the network namespace, resulting in a pointless log error
message that the network namespace doesn't exist, or that nsenter
can't find eth0.
The network-ready mechanism originally attempted to suppress those
messages by ensuring that pod sandbox status skipped network checks
when networking was already torn down, but unfortunately the ready
value was cleared too early.
Also, don't tear down the pod network again if the first teardown
succeeded.
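A sketch of the per-sandbox readiness bookkeeping described above (a map guarded by a mutex; the real dockershim fields may differ in detail):
```go
package sketch

import "sync"

// networkReadyTracker records whether pod networking is currently set up for
// each sandbox, so status calls can skip IP lookups once teardown succeeds.
type networkReadyTracker struct {
    mu    sync.Mutex
    ready map[string]bool // sandbox ID -> network ready
}

func (t *networkReadyTracker) set(sandboxID string, ready bool) {
    t.mu.Lock()
    defer t.mu.Unlock()
    if t.ready == nil {
        t.ready = make(map[string]bool)
    }
    t.ready[sandboxID] = ready
}

func (t *networkReadyTracker) isReady(sandboxID string) bool {
    t.mu.Lock()
    defer t.mu.Unlock()
    return t.ready[sandboxID]
}

// tearDown flips the flag to "not ready" only after TearDownPod succeeds, so
// a failed teardown is retried rather than skipped, and a sandbox that is
// already torn down is not torn down a second time.
func tearDown(t *networkReadyTracker, sandboxID string, tearDownPod func() error) error {
    if !t.isReady(sandboxID) {
        return nil // network already gone; nothing to do
    }
    if err := tearDownPod(); err != nil {
        return err
    }
    t.set(sandboxID, false)
    return nil
}
```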
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
To ensure kubelet doesn't attempt network teardown on HostNetwork
containers that no longer exist but are still checkpointed, make
sure we preserve the HostNetwork property in checkpoints. If
the checkpoint indicates the container was a HostNetwork one,
don't tear down the network since that would fail anyway.
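A sketch of the checkpoint shape this implies, with illustrative field names rather than the exact dockershim checkpoint struct:
```go
package sketch

// PodSandboxCheckpoint is a simplified stand-in for the dockershim checkpoint.
type PodSandboxCheckpoint struct {
    Name        string
    Namespace   string
    HostNetwork bool // preserved so teardown can be skipped once the container is gone
}

// shouldTearDownNetwork reports whether a checkpointed-but-missing sandbox
// still needs network teardown. Host-network sandboxes never had their own
// pod network namespace, so tearing one down could only fail.
func shouldTearDownNetwork(cp *PodSandboxCheckpoint) bool {
    return cp != nil && !cp.HostNetwork
}
```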
Related: https://github.com/kubernetes/kubernetes/issues/44307#issuecomment-299548609
GenericPLEG's 1s relist() loop races against pod network setup. It
may be called after the infra container has started but before
network setup is done, since PLEG and the runtime's SyncPod() run
in different goroutines.
Track network setup status and don't bother trying to read the pod's
IP address if networking is not yet ready.
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1434950
Mar 22 12:18:17 ip-172-31-43-89 atomic-openshift-node: E0322
12:18:17.651013 25624 docker_manager.go:378] NetworkPlugin
cni failed on the status hook for pod 'pausepods22' - Unexpected
command output Device "eth0" does not exist.
Automatic merge from submit-queue (batch tested with PRs 44326, 45768)
[CRI] Forcibly remove container
Forcibly remove running containers in `RemoveContainer`, since we should forcibly remove the running containers in `RemovePodSandbox`. See [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/api/v1alpha1/runtime/api.proto#L35).
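As a sketch, forcible removal against the Docker API looks like the following (shown with the upstream Docker Go client for illustration; dockershim goes through its own client wrapper):
```go
package sketch

import (
    "context"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

// removeContainerForcibly removes a container even if it is still running.
func removeContainerForcibly(ctx context.Context, cli *client.Client, containerID string) error {
    return cli.ContainerRemove(ctx, containerID, types.ContainerRemoveOptions{
        Force:         true, // stop-and-remove in one call, even for running containers
        RemoveVolumes: true, // also clean up anonymous volumes (assumption, not mandated by this PR)
    })
}
```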
cc @feiskyer @Random-Liu
Signed-off-by: Xianglin Gao <xlgao@zju.edu.cn>
Automatic merge from submit-queue (batch tested with PRs 42025, 44169, 43940)
[CRI] Remove all containers in the sandbox
Remove all containers in the sandbox when we remove the sandbox.
/cc @feiskyer @Random-Liu
Signed-off-by: Xianglin Gao <xlgao@zju.edu.cn>
PR #29378 introduces ClusterFirstWithHostNet, but Docker doesn't support
setting DNS options together with host networking. This commit rewrites
resolv.conf the same way dockertools does.
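A standalone sketch of what the rewrite amounts to (not the actual dockershim helper): write the cluster DNS settings straight into the sandbox's resolv.conf.
```go
package sketch

import (
    "fmt"
    "os"
    "strings"
)

// rewriteResolvConf overwrites resolvPath with the cluster DNS settings, which
// is what ClusterFirstWithHostNet needs when the runtime cannot combine DNS
// options with host networking.
func rewriteResolvConf(resolvPath string, nameservers, searches, options []string) error {
    var b strings.Builder
    for _, ns := range nameservers {
        fmt.Fprintf(&b, "nameserver %s\n", ns)
    }
    if len(searches) > 0 {
        fmt.Fprintf(&b, "search %s\n", strings.Join(searches, " "))
    }
    if len(options) > 0 {
        fmt.Fprintf(&b, "options %s\n", strings.Join(options, " "))
    }
    return os.WriteFile(resolvPath, []byte(b.String()), 0644)
}
```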
In ListPodSandbox(), we
1. List all sandbox docker containers
2. List all sandbox checkpoints. If the checkpoint does not have a
corresponding container in (1), we return a partial result based on
the checkpoint.
The problem is that new PodSandboxes can be created between step (1) and
(2). In those cases, we will see the checkpoints, but not the sandbox
containers. This leads to strange behavior because the partial result
from the checkpoint does not include some critical information. For
example, the creation timestamp would be zero, and that would cause the
kubelet's garbage collector to immediately remove the sandbox.
This change fixes that by getting the list of checkpoints before listing
all the containers (since in RunPodSandbox we create them in the reverse
order).
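A sketch of the reordering with illustrative types, assuming (as described above) that RunPodSandbox creates the sandbox container before its checkpoint: snapshot the checkpoints first, then list containers, so a sandbox created in between can only show up as a container without a checkpoint, never the other way around.
```go
package sketch

type sandbox struct{ id string }

type lister interface {
    ListCheckpointIDs() ([]string, error)
    ListSandboxContainers() ([]sandbox, error)
}

// listPodSandboxes lists checkpoints before containers. Any checkpoint seen
// here already has a container, and that container will appear in the later
// listing; only genuinely orphaned checkpoints end up without a match.
func listPodSandboxes(l lister) (containers []sandbox, orphanedCheckpoints []string, err error) {
    checkpointIDs, err := l.ListCheckpointIDs()
    if err != nil {
        return nil, nil, err
    }
    containers, err = l.ListSandboxContainers()
    if err != nil {
        return nil, nil, err
    }

    seen := make(map[string]bool, len(containers))
    for _, c := range containers {
        seen[c.id] = true
    }
    for _, id := range checkpointIDs {
        if !seen[id] {
            // Reported from checkpoint data alone (partial result).
            orphanedCheckpoints = append(orphanedCheckpoints, id)
        }
    }
    return containers, orphanedCheckpoints, nil
}
```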
Automatic merge from submit-queue (batch tested with PRs 41980, 42192, 42223, 41822, 42048)
CRI: Make dockershim better implement CRI.
When thinking about CRI Validation test, I found that `PodSandboxStatus.Linux.Namespaces.Options.HostPid` and `PodSandboxStatus.Linux.Namespaces.Options.HostIpc` are not populated. Although they are not used by kuberuntime now, we should populate them to conform to CRI.
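A sketch of the mapping this implies, using stand-in types for the generated CRI structs and Docker's string-valued namespace modes:
```go
package sketch

// namespaceOptions mirrors the shape of the CRI NamespaceOption message
// (boolean host-namespace flags); it is a stand-in, not the generated type.
type namespaceOptions struct {
    HostNetwork bool
    HostPid     bool
    HostIpc     bool
}

// namespaceOptionsFromDocker derives the options from the sandbox container's
// HostConfig modes as Docker reports them ("host" means shared with the host).
func namespaceOptionsFromDocker(networkMode, pidMode, ipcMode string) namespaceOptions {
    return namespaceOptions{
        HostNetwork: networkMode == "host",
        HostPid:     pidMode == "host",
        HostIpc:     ipcMode == "host",
    }
}
```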
/cc @yujuhong @feiskyer