Automatic merge from submit-queue (batch tested with PRs 45467, 48091, 48033, 48498)
Allow kubenet with IPv6
When running kubenet with IPv6, there is a panic because there
is IPv4-specific code in the Event function.
With this change, Event will support both IPv4 and IPv6.
**What this PR does / why we need it**:
This PR allows kubenet to use IPv6. Currently there is a panic in kubenet_linux.go
because of IPv4-specific code in the Event function.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48089
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
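For illustration, here is a minimal sketch of the version-agnostic approach, assuming the convention that the gateway is the first address of the pod CIDR; `gatewayForCIDR` is a hypothetical name, not the actual function in kubenet_linux.go. The IPv4-only trap is calling `To4()` unconditionally: it returns nil for an IPv6 address, which is the kind of assumption that caused the panic.
```go
package main

import (
	"fmt"
	"net"
)

// gatewayForCIDR derives the first usable address of a pod CIDR without
// assuming IPv4. Hypothetical helper for illustration only.
func gatewayForCIDR(podCIDR string) (net.IP, error) {
	_, cidr, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return nil, err
	}
	base := cidr.IP
	// Use the 4-byte form only when the address really is IPv4;
	// To4() returns nil for IPv6, so it must not be used blindly.
	if v4 := base.To4(); v4 != nil {
		base = v4
	}
	gw := make(net.IP, len(base))
	copy(gw, base)
	gw[len(gw)-1]++ // first address after the network address
	return gw, nil
}

func main() {
	for _, c := range []string{"10.20.30.0/24", "2001:db8::/64"} {
		gw, err := gatewayForCIDR(c)
		fmt.Println(c, "->", gw, err)
	}
}
```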
Since v1.5 and the removal of --configure-cbr0:
0800df74ab "Remove the legacy networking mode --configure-cbr0"
kubelet hasn't done any shaping operations internally. They
have all been delegated to network plugins like kubenet or
external CNI plugins. But some shaping code was still left
in kubelet, so remove it now that it's unused.
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)
kubelet/network: report but tolerate errors returned from GetNetNS()
Runtimes should never return "" with a nil error, since network plugin
drivers need to treat the netns differently in different cases. So return
an error when we can't get the netns, and fix up the plugins to do the
right thing: we don't need a netns on pod network teardown, but we do
need one for pod status checks and for network setup.
@kubernetes/rh-networking @kubernetes/sig-network-bugs @DirectXMan12
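A minimal sketch of this contract, using hypothetical names rather than kubelet's actual interfaces: GetNetNS surfaces an error instead of returning "" with nil, and each caller decides how strict to be.
```go
package network

import (
	"fmt"
	"log"
)

// Host is an illustrative stand-in for the kubelet's network host; the
// names here are hypothetical, not the real kubelet API.
type Host interface {
	// GetNetNS returns the netns path or an error; it must never
	// return "" together with a nil error.
	GetNetNS(containerID string) (string, error)
}

// TearDownPod reports but tolerates a missing netns: teardown does not
// need one.
func TearDownPod(host Host, id string) error {
	netnsPath, err := host.GetNetNS(id)
	if err != nil {
		log.Printf("teardown of pod %s: could not get netns (continuing): %v", id, err)
		netnsPath = ""
	}
	return tearDown(id, netnsPath)
}

// SetUpPod fails hard: setup (like status checks) requires a netns.
func SetUpPod(host Host, id string) error {
	netnsPath, err := host.GetNetNS(id)
	if err != nil {
		return fmt.Errorf("cannot set up pod %s: %v", id, err)
	}
	return setUp(id, netnsPath)
}

func tearDown(id, netnsPath string) error { return nil } // stub
func setUp(id, netnsPath string) error    { return nil } // stub
```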
**Reason for this change**:
CNI recently introduced a configuration list feature. This allows
plugin chaining, and it also supports varied plugin versions. A sketch
of such a list follows.
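A configuration list chains several plugins under one network name. This sketch assumes the `libcni` package from github.com/containernetworking/cni; the network name and plugin parameters are illustrative, not a recommended configuration.
```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

// A chained configuration list: the bridge plugin provides connectivity
// and the portmap plugin layers host-port mappings on top of it.
const confList = `{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	list, err := libcni.ConfListFromBytes([]byte(confList))
	if err != nil {
		panic(err)
	}
	fmt.Println("network:", list.Name, "cniVersion:", list.CNIVersion)
	for _, p := range list.Plugins {
		fmt.Println("  chained plugin:", p.Network.Type)
	}
}
```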
This fixes the race in rktnetes that happens when pod B invokes
'kubenet.SetUpPod()' before another pod A actually becomes running.
The second 'kubenet.SetUpPod()' call does not pick up pod A and thus
overwrites the host-port iptables rules, breaking pod A.
This PR fixes the case by listing all 'active pods' (all non-exited
pods) instead of only running pods, as sketched below.
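A minimal sketch of the "active pods" filter, with illustrative types rather than kubelet's actual ones:
```go
package kubenet

// podStatus is an illustrative stand-in for kubelet's pod bookkeeping.
type podStatus struct {
	ID    string
	State string // e.g. "pending", "running", "exited"
}

// activePods returns every non-exited pod. Filtering on "not exited"
// rather than "running" means a concurrent SetUpPod call still sees pods
// that are starting up, so their host-port iptables rules are preserved.
func activePods(all []podStatus) []podStatus {
	var out []podStatus
	for _, p := range all {
		if p.State != "exited" {
			out = append(out, p)
		}
	}
	return out
}
```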
MTU selection is difficult, and may be impossible if a transport such
as IPsec is in use. So we allow the MTU to be specified with the
--network-plugin-mtu flag, and we pass this value down into the network
provider.
Currently implemented by kubenet.
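A minimal sketch of the selection order; `effectiveMTU` is a hypothetical helper, not kubelet's actual code: prefer the operator-supplied --network-plugin-mtu value, otherwise probe a local interface.
```go
package kubenet

import "net"

// effectiveMTU picks the MTU to hand to the network provider: the flag
// value when set, else the MTU of the named interface.
func effectiveMTU(flagMTU int, ifaceName string) int {
	if flagMTU > 0 {
		return flagMTU // honor --network-plugin-mtu
	}
	iface, err := net.InterfaceByName(ifaceName)
	if err != nil {
		// Conservative fallback when the interface cannot be
		// probed; 1460 is illustrative, not a kubelet default.
		return 1460
	}
	return iface.MTU
}
```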