We should use:
nsenter --net=netnsPath -- -F some_command
instead of:
nsenter -n netnsPath -- -F some_command
Because "nsenter -n netnsPath" get an error output:
# nsenter -n /proc/67197/ns/net ip addr
nsenter: neither filename nor target pid supplied for ns/net
If we really want to use -n, we have to attach the path directly to the flag:
# sudo nsenter -n/proc/67197/ns/net ip addr
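For reference, a minimal Go sketch of how such a command can be built with the unambiguous `--net=` form (the `runInNetns` helper and the use of `os/exec` here are illustrative, not the exact kubelet code):
```go
package main

import (
	"fmt"
	"os/exec"
)

// runInNetns runs a command inside the network namespace at netnsPath.
// Using the long "--net=<path>" form avoids the ambiguity of "-n <path>",
// which nsenter parses as "-n" (no argument) followed by a stray path.
func runInNetns(netnsPath string, args ...string) ([]byte, error) {
	nsenterArgs := append([]string{fmt.Sprintf("--net=%s", netnsPath), "--"}, args...)
	return exec.Command("nsenter", nsenterArgs...).CombinedOutput()
}

func main() {
	out, err := runInNetns("/proc/67197/ns/net", "ip", "addr")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Printf("%s", out)
}
```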
When CNI is set to kubenet, kubelet should hold the host port sockets
so that other applications on the node can no longer listen on or bind
to those ports. However, the sockets were being closed accidentally,
because kubelet forgot to reconcile the protocol format before comparing.
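A minimal sketch of the kind of normalization involved (type and field names are illustrative): API protocols are upper-case ("TCP", "UDP") while other code paths may carry lower-case strings, so both sides should be compared case-insensitively before deciding whether a held socket may be closed.
```go
package main

import (
	"fmt"
	"strings"
)

// hostPort identifies a held host port; field names are illustrative.
type hostPort struct {
	port     int
	protocol string
}

// samePort compares two host ports after normalizing the protocol case,
// so "TCP" from the API and "tcp" from an iptables/runtime path match.
func samePort(a, b hostPort) bool {
	return a.port == b.port && strings.EqualFold(a.protocol, b.protocol)
}

func main() {
	fmt.Println(samePort(hostPort{8080, "TCP"}, hostPort{8080, "tcp"})) // true
}
```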
This fixes a race in rktnetes that happens when pod B invokes
'kubenet.SetUpPod()' before another pod A actually starts running.
The second 'kubenet.SetUpPod()' call does not pick up pod A and thus
overwrites the host port iptables rules, breaking pod A.
This PR fixes the case by listing all 'active pods' (all non-exited
pods) instead of only running pods.
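A rough sketch of the intended selection, with simplified stand-ins for the kubelet's own pod bookkeeping: instead of asking the runtime for running pods only, consider every pod that has not yet exited, which includes pods still being set up.
```go
package main

import "fmt"

// pod is a simplified stand-in for the kubelet's pod record.
type pod struct {
	name   string
	exited bool
}

// activePods returns every pod that has not exited, which includes pods
// that are still being set up and are not yet reported as "running".
func activePods(all []pod) []pod {
	var active []pod
	for _, p := range all {
		if !p.exited {
			active = append(active, p)
		}
	}
	return active
}

func main() {
	pods := []pod{{"a", false}, {"b", false}, {"done", true}}
	fmt.Println(len(activePods(pods))) // 2: both "a" and "b" keep their hostport rules
}
```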
Automatic merge from submit-queue
Add flag to set CNI bin dir, and use it on gci nodes
**What this PR does / why we need it**:
When using `kube-up` on GCE, following #31023 which moved the workers from debian to gci, CNI just isn't working. The root cause is basically as discussed in #28563: one flag (`--network-plugin-dir`) means two different things, and the `configure-helper` script uses it for the wrong purpose.
This PR adds a new flag `--cni-bin-dir`, then uses it to configure CNI as desired.
As discussed in #28563, I have also added a flag `--cni-conf-dir` so users can be explicit.
**Which issue this PR fixes**: fixes #28563
**Special notes for your reviewer**:
I left the old flag largely alone for backwards-compatibility, with the exception that I stop setting the default when CNI is in use. The value of `"/usr/libexec/kubernetes/kubelet-plugins/net/exec/"` is unlikely to be what is wanted there.
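For illustration, the new options boil down to two plain string flags; a minimal sketch of equivalent flag registration using Go's standard flag package (the default values shown here are assumptions, not necessarily the kubelet's):
```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// --cni-bin-dir: where the kubelet searches for CNI plugin binaries.
	cniBinDir := flag.String("cni-bin-dir", "/opt/cni/bin",
		"The full path of the directory in which to search for CNI plugin binaries.")
	// --cni-conf-dir: where the kubelet searches for CNI network configuration files.
	cniConfDir := flag.String("cni-conf-dir", "/etc/cni/net.d",
		"The full path of the directory in which to search for CNI config files.")
	flag.Parse()
	fmt.Println(*cniBinDir, *cniConfDir)
}
```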
**Release note**:
```release-note
Added new kubelet flags `--cni-bin-dir` and `--cni-conf-dir` to specify where CNI files are located.
Fixed CNI configuration on GCI platform when using CNI.
```
Automatic merge from submit-queue
Kubelet: add KillPod for new runtime API
This PR adds an implementation of KillPod for the new runtime API.
CC @yujuhong @Random-Liu @kubernetes/sig-node @kubernetes/sig-rktnetes
MTU selection is difficult, and may be impossible if a transport such
as IPsec is in use. So we allow the MTU to be specified with the
network-plugin-mtu flag, and we pass this down into the network
provider.
Currently implemented by kubenet.
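Kubenet generates the CNI bridge plugin's network configuration itself, so honoring the flag is mostly a matter of writing the chosen value into that config; a simplified sketch (the JSON template is an approximation of kubenet's generated config, not a verbatim copy):
```go
package main

import "fmt"

// bridgeNetConf renders a simplified CNI bridge config; when mtu is 0 the
// field is omitted and the plugin picks its own default.
func bridgeNetConf(podCIDR string, mtu int) string {
	mtuField := ""
	if mtu > 0 {
		mtuField = fmt.Sprintf(`"mtu": %d,`, mtu)
	}
	return fmt.Sprintf(`{
  "cniVersion": "0.1.0",
  "name": "kubenet",
  "type": "bridge",
  "bridge": "cbr0",
  "isGateway": true,
  %s
  "ipam": {"type": "host-local", "subnet": "%s"}
}`, mtuField, podCIDR)
}

func main() {
	fmt.Println(bridgeNetConf("10.0.0.0/24", 1460))
}
```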
Automatic merge from submit-queue
Use the CNI bridge plugin to set hairpin mode
Following up this part of #23711:
> I'd like to wait until containernetworking/cni#175 lands and then just pass the request through to CNI.
The code here just
* passes the required setting down from kubenet to CNI
* disables `DockerManager` from doing hairpin-veth, if kubenet is in use
Note: to test this you need a very recent version of the CNI `bridge` plugin; the one brought in by #28799 should be OK.
Also relates to https://github.com/kubernetes/kubernetes/issues/19766#issuecomment-232722864
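Since kubenet owns the bridge config in this arrangement, the kubelet's hairpin setting simply becomes the bridge plugin's `hairpinMode` field, and the Docker-level hairpin-veth step is skipped; a hedged sketch of the idea (the constant and helper names are illustrative, not the exact kubelet symbols):
```go
package main

import "fmt"

const hairpinVeth = "hairpin-veth" // kubelet --hairpin-mode value (illustrative)

// bridgeHairpin translates the kubelet hairpin mode into the boolean that
// the CNI bridge plugin understands via its "hairpinMode" config field.
func bridgeHairpin(kubeletHairpinMode string) bool {
	return kubeletHairpinMode == hairpinVeth
}

func main() {
	// When kubenet is the network plugin, the veth hairpin flag is set by
	// the CNI bridge plugin, so the Docker manager must not also set it.
	fmt.Printf(`{"type": "bridge", "hairpinMode": %v}`+"\n", bridgeHairpin(hairpinVeth))
}
```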
Automatic merge from submit-queue
kubenet: Fix host port for rktnetes.
Because a rkt pod only starts running after plugin.SetUpPod() is called,
getRunningPods() does not return the newly created pod, which leaves
the hostport iptables rules missing for the new pod.
cc @dcbw @freehan
A follow up fix for https://github.com/kubernetes/kubernetes/pull/27878#issuecomment-227898936
Use the generic runtime method to get the netns path. Also
move reading the container IP address into cni (based on kubenet)
instead of having it in the Docker manager code. Both old and new
methods use nsenter and /sbin/ip and should be functionally
equivalent.
Automatic merge from submit-queue
Sets IgnoreUnknown=1 in CNI_ARGS
```release-note
release-note-none
```
Kubernetes uses CNI_ARGS to pass the pod namespace, name and infra container
id to the CNI network plugin. CNI logic will throw an error
if these args are not known to it, unless the user specifies
IgnoreUnknown as part of CNI_ARGS. This PR sets IgnoreUnknown=1
to prevent the CNI logic from erroring and blocking pod setup.
https://github.com/appc/cni/pull/158
https://github.com/appc/cni/issues/126
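Conceptually the change just prepends one more key/value pair to the arguments handed to the plugin; a simplified sketch that renders the CNI_ARGS environment string directly (the kubelet actually goes through libcni rather than building the string by hand):
```go
package main

import (
	"fmt"
	"strings"
)

// cniArgs renders the CNI_ARGS environment variable. IgnoreUnknown=1 tells
// the plugin not to fail on keys it does not recognize, such as the
// Kubernetes-specific pod namespace, pod name, and infra container ID.
func cniArgs(podNamespace, podName, infraContainerID string) string {
	kvs := [][2]string{
		{"IgnoreUnknown", "1"},
		{"K8S_POD_NAMESPACE", podNamespace},
		{"K8S_POD_NAME", podName},
		{"K8S_POD_INFRA_CONTAINER_ID", infraContainerID},
	}
	parts := make([]string, 0, len(kvs))
	for _, kv := range kvs {
		parts = append(parts, kv[0]+"="+kv[1])
	}
	return strings.Join(parts, ";")
}

func main() {
	fmt.Println("CNI_ARGS=" + cniArgs("default", "nginx", "abc123"))
}
```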
Automatic merge from submit-queue
Various kubenet fixes (panics and bugs and cidrs, oh my)
This PR fixes the following issues:
1. Corrects an inverse error-check that prevented `shaper.Reset` from ever being called with a correct ip address
2. Fix an issue where `parseCIDR` would fail after a kubelet restart due to an IP being stored instead of a CIDR being stored in the cache.
3. Fix an issue where kubenet could panic in TearDownPod if it was called before SetUpPod (e.g. after a kubelet restart). Because of bug 1, this didn't happen except in rare situations (see 2 for why such a rare situation might happen)
This adds a test, but more would definitely be useful.
The commits are also granular enough I could split this up more if desired.
I'm also not super-familiar with this code, so review and feedback would be welcome.
Testing done:
```
$ cat examples/egress/egress.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: egress
  name: egress-output
  annotations: {"kubernetes.io/ingress-bandwidth": "300k"}
spec:
  restartPolicy: Never
  containers:
  - name: egress
    image: busybox
    command: ["sh", "-c", "sleep 60"]
$ cat kubelet.log
...
Running: tc filter add dev cbr0 protocol ip parent 1:0 prio 1 u32 match ip dst 10.0.0.5/32 flowid 1:1
# setup
...
Running: tc filter del dev cbr0 parent 1: proto ip prio 1 handle 800::800 u32
# teardown
```
I also did various other bits of manual testing and logging to hunt down the panic and other issues, but I don't have anything to paste for that.
cc @dcbw @kubernetes/sig-network
Automatic merge from submit-queue
rkt: Pass through podIP
This is needed for the /etc/hosts mount and the downward API to work.
Furthermore, this is required for the reported `PodStatus` to be
correct.
The `Status` bit mostly worked prior to #25062, and this restores that
functionality in addition to the new functionality.
In retrospect, the regression in status is large enough the prior PR should have included at least some of this; my bad for not realizing the full implications there.
#25902 is needed for the downward API stuff, but either merge order is fine as neither will break badly by itself.
cc @yifan-gu @dcbw
The length of an IP can be 4 or 16 bytes, and even a 16-byte value can
hold a valid IPv4 address. This check is the more correct way to handle
that, and it also provides more granular error messages.
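A standalone illustration of the point with Go's net package: checking len(ip) == 4 misses 16-byte slices that still hold IPv4 addresses, whereas To4() handles both encodings.
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	ip := net.ParseIP("10.0.0.5") // ParseIP returns a 16-byte slice here
	fmt.Println(len(ip) == 4)     // false, even though this is an IPv4 address
	fmt.Println(ip.To4() != nil)  // true: To4 recognizes IPv4 in either length
}
```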
Teardown can run before Setup when the kubelet is restarted... in that
case the shaper was nil, and calling it resulted in a panic.
This fixes that by ensuring the shaper is always set... +1 level of
indirection and all that.
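That extra level of indirection is essentially a lazy accessor; a minimal sketch of the shape of the fix (the shaper interface and tcShaper type below stand in for the kubelet's real bandwidth package):
```go
package main

import "fmt"

// shaper is a stand-in for the kubelet's traffic-shaper interface.
type shaper interface {
	Reset(cidr string) error
}

type tcShaper struct{ iface string }

func (t *tcShaper) Reset(cidr string) error {
	fmt.Println("tc reset on", t.iface, "for", cidr)
	return nil
}

type kubenetPlugin struct {
	bandwidthShaper shaper
}

// shaper lazily creates the traffic shaper, so TearDownPod can safely use it
// even if it runs before SetUpPod (for example after a kubelet restart).
func (p *kubenetPlugin) shaper() shaper {
	if p.bandwidthShaper == nil {
		p.bandwidthShaper = &tcShaper{iface: "cbr0"}
	}
	return p.bandwidthShaper
}

func main() {
	p := &kubenetPlugin{}
	_ = p.shaper().Reset("10.0.0.5/32") // no nil-pointer panic
}
```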
Before this change, the podCIDRs map contained both CIDRs and IPs
depending on which code path entered a container into it.
Specifically, SetUpPod would enter a CIDR while GetPodNetworkStatus
would enter an IP.
This normalizes both of them to always enter just IP addresses.
This also removes the now-redundant CIDR parsing that was used to get
the IP before.
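For illustration, normalizing on insert is a one-liner with net.ParseCIDR, which already separates the address from its network:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// SetUpPod-style input: a CIDR string from the CNI result.
	ip, _, err := net.ParseCIDR("10.0.0.5/24")
	if err != nil {
		panic(err)
	}
	// Store only the IP, matching what GetPodNetworkStatus inserts.
	fmt.Println(ip.String()) // 10.0.0.5
}
```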
When the IP isn't in the internal map, GetPodNetworkStatus() needs
to call the execer to run the 'nsenter' program. That means the execer
needs to be non-nil, which it wasn't before.
Automatic merge from submit-queue
Make IsQualifiedName return error strings
Part of the larger validation PR, broken out for easier review and merge.
@lavalamp FYI, but I know you're swamped, too.
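A small usage sketch of the new shape of the helper: it returns concrete error strings rather than a bare bool. (The import path shown is where the helper lives in apimachinery today; at the time of this PR it lived under the main repo's util/validation package.)
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// Each returned string is a ready-to-surface validation error message.
	for _, msg := range validation.IsQualifiedName("example.com/Invalid Name") {
		fmt.Println(msg)
	}
}
```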
Automatic merge from submit-queue
kubenet try to retrieve ip inside pod net namespace
Kubenet currently stores the IPs of pods inside a map. Kubelet gets the pod IP from kubenet during syncPod. If kubelet restarts, all pods on the node lose their IPs in podStatus. This PR adds logic to retrieve the pod IP from the pod netns.
cc: @yujuhong
Automatic merge from submit-queue
kubenet: fix up CNI bridge TX queue length if needed
CNI's bridge plugin mis-handles the TxQLen when creating the bridge,
leading to a zero-length TX queue. This doesn't typically cause
problems (since virtual interfaces don't have hard queue limits)
but when adding traffic shaping, some qdiscs pull their packet
limits from the TX queue length, leading to a packet limit of 0
in some cases. Until we can depend on a new enough version of
CNI, fix up the TX queue length internally.
Closes: https://github.com/kubernetes/kubernetes/issues/25092
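The workaround boils down to reading the bridge's TX queue length after the plugin creates it and restoring a sane value when it is zero; a sketch using the vishvananda/netlink package (the 1000-packet default is the usual Linux value, assumed here rather than taken from the PR):
```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// fixTxQueueLen restores a non-zero TX queue length on the bridge so that
// qdiscs added later for traffic shaping do not inherit a packet limit of 0.
func fixTxQueueLen(bridgeName string) error {
	link, err := netlink.LinkByName(bridgeName)
	if err != nil {
		return err
	}
	if link.Attrs().TxQLen > 0 {
		return nil // already sane, nothing to do
	}
	return netlink.LinkSetTxQLen(link, 1000)
}

func main() {
	if err := fixTxQueueLen("cbr0"); err != nil {
		fmt.Println("could not fix TX queue length:", err)
	}
}
```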
Automatic merge from submit-queue
kubenet: Load bridge netfilter module in Init().
This lets kubenet load the bridge netfilter module and set bridge-nf-call-iptables=1.
Fixes #24018
Follow-up PRs would be appreciated to also load the module in the bridge plugin binary itself. Ref https://github.com/kubernetes/kubernetes/issues/24018#issuecomment-207682514
cc @kubernetes/sig-node @sjpotter @euank
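Roughly, the initialization step is a modprobe followed by a sysctl write; a simplified sketch using plain os/exec and the proc filesystem rather than the kubelet's own exec and sysctl utilities:
```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads the bridge netfilter module and turns on
// bridge-nf-call-iptables so bridged pod traffic is seen by iptables.
func ensureBridgeNetfilter() error {
	// Best effort: the module may be built in or already loaded.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
	}
	return os.WriteFile("/proc/sys/net/bridge/bridge-nf-call-iptables", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("could not enable bridge-nf-call-iptables:", err)
	}
}
```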
Allow network plugins to declare that they handle shaping so that
Kubernetes does not. This will first be used by openshift-sdn, which
handles shaping through OVS but currently triggers a warning when
kubelet notices the bandwidth annotations.
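One plausible shape for such a declaration, with hypothetical names (the real kubelet plugin interface differs): the plugin advertises a shaping capability and the kubelet skips its own shaper when it is present.
```go
package main

import "fmt"

// Hypothetical capability flag; the real kubelet uses its own constants.
const capabilityShaping = "shaping"

// networkPlugin is a trimmed-down stand-in for the kubelet's plugin interface.
type networkPlugin interface {
	Name() string
	Capabilities() map[string]bool
}

type openshiftSDN struct{}

func (openshiftSDN) Name() string { return "openshift-sdn" }
func (openshiftSDN) Capabilities() map[string]bool {
	return map[string]bool{capabilityShaping: true}
}

// shouldKubeletShape reports whether the kubelet itself must apply bandwidth
// annotations, or can leave shaping entirely to the network plugin.
func shouldKubeletShape(p networkPlugin) bool {
	return !p.Capabilities()[capabilityShaping]
}

func main() {
	fmt.Println(shouldKubeletShape(openshiftSDN{})) // false: OVS handles shaping
}
```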
Automatic merge from submit-queue
Make kubelet use an arch-specific pause image depending on GOARCH
Related to: #22876, #22683 and #15140
@ixdy @pwittrock @brendandburns @mikedanese @yujuhong @thockin @zmerlynn
bridge-nf-call-iptables appears to only be relevant when the containers are
attached to a Linux bridge, which is usually the case with default Kubernetes
setups, docker, and flannel. That ensures that the container traffic is
actually subject to the iptables rules since it traverses a Linux bridge
and bridged traffic is only subject to iptables when bridge-nf-call-iptables=1.
But with other networking solutions (like openshift-sdn) that don't use Linux
bridges, bridge-nf-call-iptables may not be relevant, because iptables is
invoked at other points not involving a Linux bridge.
The decision to set bridge-nf-call-iptables should be influenced by networking
plugins, so push the responsibility out to them. If no network plugin is
specified, fall back to the existing bridge-nf-call-iptables=1 behavior.
This commit builds on previous work and creates an independent
worker for every liveness probe. Liveness probes behave largely the same
as readiness probes, so much of the code is shared by introducing a
probeType parameter to distinguish the type when it matters. The
circular dependency between the runtime and the prober is broken by
exposing a shared liveness ResultsManager, owned by the
kubelet. Finally, an Updates channel is introduced to the ResultsManager
so the kubelet can react to unhealthy containers immediately.
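A compressed sketch of that structure with simplified types (the real prober package has richer Manager and worker types): one worker per container per probe type, all reporting into a shared results manager whose update channel the kubelet can select on.
```go
package main

import "fmt"

// probeType distinguishes liveness from readiness where the shared worker
// code needs to behave differently.
type probeType int

const (
	readiness probeType = iota
	liveness
)

type update struct {
	containerID string
	probe       probeType
	healthy     bool
}

// resultsManager is a trimmed-down stand-in for the shared results manager
// owned by the kubelet; real workers run one goroutine per container/probe.
type resultsManager struct {
	cache   map[string]bool
	updates chan update
}

func newResultsManager() *resultsManager {
	return &resultsManager{cache: map[string]bool{}, updates: make(chan update, 16)}
}

// set records a probe result and publishes a change so the kubelet can react
// to an unhealthy container immediately instead of on the next sync.
func (m *resultsManager) set(containerID string, probe probeType, healthy bool) {
	key := fmt.Sprintf("%s/%d", containerID, probe)
	prev, seen := m.cache[key]
	if !seen || prev != healthy {
		m.cache[key] = healthy
		m.updates <- update{containerID, probe, healthy}
	}
}

func main() {
	m := newResultsManager()
	m.set("web", liveness, false) // a liveness worker reporting failure
	u := <-m.updates
	fmt.Printf("container %q probe %d healthy=%v\n", u.containerID, u.probe, u.healthy)
}
```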