Automatic merge from submit-queue
util/iptables: grab iptables locks if iptables-restore doesn't support --wait
When iptables-restore doesn't support --wait (which versions earlier than 1.6.2 don't), it may
conflict with other iptables users on the system, such as Docker, because it
doesn't acquire the iptables lock before changing iptables rules. This causes
sporadic Docker failures when starting containers.
To prevent those failures, this change essentially duplicates the iptables
locking logic inside util/iptables when we know iptables-restore doesn't
support the --wait option.
Unfortunately, iptables uses two different locking mechanisms: an abstract-socket-based
lock through 1.4.x and a /run/xtables.lock flock()-based lock from 1.6.x onward.
We have to grab both locks, because we can't tell which version of
iptables-restore is installed (iptables-restore has no --version option
before 1.6.2), and distros (like RHEL) backport the /run/xtables.lock patch
to 1.4.x versions.
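For illustration only, here is a minimal Go sketch of grabbing both locks; the helper names below are made up for this example, not the actual util/iptables functions:

```go
// Illustrative only: acquire both iptables lock flavors before running
// iptables-restore without --wait.
package main

import (
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

// grabBothIptablesLocks takes the 1.6.x-style flock() on /run/xtables.lock
// and the 1.4.x-style abstract unix socket lock ("@xtables"), returning a
// function that releases both.
func grabBothIptablesLocks(timeout time.Duration) (release func(), err error) {
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE, 0600)
	if err != nil {
		return nil, fmt.Errorf("opening /run/xtables.lock: %v", err)
	}
	deadline := time.Now().Add(timeout)

	// 1.6.x lock: non-blocking flock(), retried until the deadline.
	for syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB) != nil {
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for /run/xtables.lock")
		}
		time.Sleep(200 * time.Millisecond)
	}

	// 1.4.x lock: holding the abstract "@xtables" unix socket.
	var sock *net.UnixListener
	for {
		sock, err = net.ListenUnix("unix", &net.UnixAddr{Name: "@xtables", Net: "unix"})
		if err == nil {
			break
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for the @xtables socket lock: %v", err)
		}
		time.Sleep(200 * time.Millisecond)
	}

	return func() {
		sock.Close() // drops the 1.4.x abstract socket lock
		f.Close()    // closing the fd drops the flock()
	}, nil
}

func main() {
	release, err := grabBothIptablesLocks(5 * time.Second)
	if err != nil {
		fmt.Println("could not lock iptables:", err)
		return
	}
	defer release()
	// ... run iptables-restore here ...
	fmt.Println("both iptables locks held")
}
```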
Related: https://github.com/kubernetes/kubernetes/pull/43575
See also: https://github.com/openshift/origin/pull/13845
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@kubernetes/rh-networking @kubernetes/sig-network-misc @eparis @knobunc @danwinship @thockin @freehan
Automatic merge from submit-queue (batch tested with PRs 43575, 44672)
util/iptables: check for and use new iptables-restore 'wait' argument
iptables-restore did not previously perform any locking, meaning that
when callers (like kube-proxy) asked iptables-restore to write large
numbers of rules, the iptables-restore process might run in parallel
with other 'iptables' invocations in kubelet (hostports), docker,
and other software. This caused errors like:
"CNI request failed with status 400: 'Failed to ensure that nat chain
POSTROUTING jumps to MASQUERADE: error checking rule: exit status 4:
iptables: Resource temporarily unavailable."
or from Docker:
"Failed to allocate and map port 1095-1095: iptables failed:
iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 1095
-j DNAT --to-destination 10.1.0.2:1095 ! -i lbr0: iptables:
Resource temporarily unavailable.\n (exit status 4)"
iptables-restore "wait" functionality was added in iptables git
commit 999eaa241212d3952ddff39a99d0d55a74e3639e, which
is not yet in a release.
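As a rough sketch (not the literal util/iptables change), the detection can lean on the fact that --version and --wait landed in the same iptables-restore release, so a failed --version probe implies --wait is unavailable:

```go
// Assumption-level sketch: older iptables-restore rejects every flag,
// including --version, so a failed version probe implies no --wait support.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// restoreWaitFlag returns the extra argument to pass to iptables-restore,
// or nil when the installed binary is too old to take the lock itself.
func restoreWaitFlag() []string {
	out, err := exec.Command("iptables-restore", "--version").CombinedOutput()
	if err != nil || !strings.Contains(string(out), "iptables-restore") {
		// Pre-1.6.2 binary: the caller must do its own locking instead.
		return nil
	}
	return []string{"--wait"}
}

func main() {
	fmt.Println("iptables-restore args:", restoreWaitFlag())
}
```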
See also https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@eparis @knobunc @kubernetes/rh-networking @kubernetes/sig-network-misc @freehan @thockin @brendandburns
Automatic merge from submit-queue (batch tested with PRs 38194, 37594, 38123, 37831, 37084)
Better compat with very old iptables (e.g. CentOS 6)
Fixes a reported issue with CentOS 6's iptables 1.4.7 (ancient).
Older iptables versions expand values like 0x4000 into 0x00004000, which defeats the
fallback "check" logic.
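One way to handle this, sketched below with illustrative names, is to normalize hex constants before comparing a desired rule against the existing rules, so 0x4000 and 0x00004000 compare equal:

```go
// Hedged sketch of hex normalization for rule comparison; names are
// illustrative, not the exact util/iptables helpers.
package main

import (
	"fmt"
	"regexp"
)

// hexnumRE matches hex constants with superfluous leading zeros, e.g. 0x00004000.
var hexnumRE = regexp.MustCompile(`0x0+([0-9a-fA-F])`)

// trimHex rewrites 0x00004000 to 0x4000 so rules reported by old iptables
// builds (which zero-pad marks/masks) match the un-padded form we asked for.
func trimHex(s string) string {
	return hexnumRE.ReplaceAllString(s, "0x$1")
}

func main() {
	fmt.Println(trimHex("-A KUBE-POSTROUTING -m mark --mark 0x00004000/0x00004000 -j MASQUERADE"))
	// Output: -A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE
}
```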
Fixes #37416
- improve the restoreInternal implementation in iptables
- add SetStdin and SetStdout functions to the Cmd interface (see the sketch below)
- modify kubelet/prober and some tests to work with the Cmd interface
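A minimal sketch of what such an interface extension might look like (an illustrative mock, not the exact utilexec definitions):

```go
// Illustrative only: a Cmd abstraction with SetStdin/SetStdout so a ruleset
// can be piped to iptables-restore on stdin.
package main

import (
	"bytes"
	"fmt"
	"io"
	osexec "os/exec"
)

// Cmd is the minimal command abstraction the restore path needs.
type Cmd interface {
	SetStdin(in io.Reader)
	SetStdout(out io.Writer)
	Run() error
}

// execCmd wraps os/exec.Cmd so it satisfies the Cmd interface.
type execCmd struct{ *osexec.Cmd }

func (c *execCmd) SetStdin(in io.Reader)   { c.Stdin = in }
func (c *execCmd) SetStdout(out io.Writer) { c.Stdout = out }

func main() {
	// Feed a trivial ruleset to `cat`, standing in for iptables-restore.
	var out bytes.Buffer
	var cmd Cmd = &execCmd{osexec.Command("cat")}
	cmd.SetStdin(bytes.NewBufferString("*nat\nCOMMIT\n"))
	cmd.SetStdout(&out)
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Print(out.String())
}
```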
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Use iptables --wait (if available) to avoid race conditions in which
util/iptables fails because it tries to modify the tables at the same
time as another process.
Also, reorganize the code a bit in preparation for checking for
another flag as well. And if semver.NewVersion() returns an error, it
means there's a bug in the code somewhere (we should only ever pass
it valid version strings), so just log that error rather than
returning it to the caller.
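A hedged sketch of that flow, with assumed helper names and github.com/coreos/go-semver standing in for the vendored semver package:

```go
// Illustrative only: decide whether plain `iptables` supports -w/--wait, and
// log (rather than return) parse errors for our own hard-coded version string.
package main

import (
	"log"
	"os/exec"
	"regexp"

	"github.com/coreos/go-semver/semver"
)

const minWaitVersion = "1.4.20" // first release with `iptables -w`

var versionRE = regexp.MustCompile(`v([0-9]+\.[0-9]+\.[0-9]+)`)

// waitFlag returns []string{"--wait"} when the installed iptables is new
// enough, and nil otherwise.
func waitFlag() []string {
	out, err := exec.Command("iptables", "--version").CombinedOutput()
	if err != nil {
		return nil
	}
	m := versionRE.FindSubmatch(out)
	if m == nil {
		return nil
	}
	installed, err := semver.NewVersion(string(m[1]))
	if err != nil {
		// Unparseable output from iptables itself; just skip --wait.
		return nil
	}
	minimum, err := semver.NewVersion(minWaitVersion)
	if err != nil {
		// We only ever pass literal version strings here, so this is a bug
		// in our code: log it rather than returning it to the caller.
		log.Printf("bad hard-coded minimum version %q: %v", minWaitVersion, err)
		return nil
	}
	if installed.LessThan(*minimum) {
		return nil
	}
	return []string{"--wait"}
}

func main() {
	log.Println("extra iptables args:", waitFlag())
}
```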