Automatic merge from submit-queue
util/iptables: grab iptables locks if iptables-restore doesn't support --wait
When iptables-restore doesn't support --wait (which versions older than
1.6.2 don't), it may conflict with other iptables users on the system, like
docker, because it doesn't acquire the iptables lock before changing
iptables rules. This causes sporadic docker failures when starting
containers.
To prevent those failures, essentially duplicate the iptables locking
logic inside util/iptables when we know iptables-restore doesn't support
the --wait option.
Unfortunately iptables uses two different locking mechanisms: one up
through 1.4.x (based on an abstract socket) and another from 1.6.x onward
(based on flock() of /run/xtables.lock). We have to grab both locks,
because we can't tell which version of iptables-restore is present:
iptables-restore doesn't have a --version option before 1.6.2, and distros
(like RHEL) backport the /run/xtables.lock patch to 1.4.x versions.
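Roughly, the dual-lock acquisition looks like the Go sketch below. This is
a minimal illustration and not the exact util/iptables code: the names
(grabIptablesLocks, iptablesLocker), the retry interval, and the timeout
handling are assumptions.

```go
package iptables

import (
	"fmt"
	"net"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

type iptablesLocker struct {
	lock16 *os.File     // flock()-style lock used by iptables >= 1.6.x
	lock14 net.Listener // abstract-socket lock used by iptables <= 1.4.x
}

func (l *iptablesLocker) Close() {
	if l.lock16 != nil {
		l.lock16.Close()
	}
	if l.lock14 != nil {
		l.lock14.Close()
	}
}

// grabIptablesLocks acquires both locks, retrying until the timeout expires,
// and returns a locker whose Close releases them again.
func grabIptablesLocks(timeout time.Duration) (*iptablesLocker, error) {
	l := &iptablesLocker{}
	deadline := time.Now().Add(timeout)

	// 1.6.x-style lock: flock() on /run/xtables.lock.
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE, 0600)
	if err != nil {
		return nil, fmt.Errorf("failed to open iptables lock file: %v", err)
	}
	l.lock16 = f
	for {
		if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
			break
		}
		if time.Now().After(deadline) {
			l.Close()
			return nil, fmt.Errorf("timed out waiting for /run/xtables.lock")
		}
		time.Sleep(200 * time.Millisecond)
	}

	// 1.4.x-style lock: bind the abstract unix socket "@xtables".
	for {
		ln, err := net.Listen("unix", "@xtables")
		if err == nil {
			l.lock14 = ln
			break
		}
		if time.Now().After(deadline) {
			l.Close()
			return nil, fmt.Errorf("timed out waiting for the @xtables lock")
		}
		time.Sleep(200 * time.Millisecond)
	}

	return l, nil
}
```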
Related: https://github.com/kubernetes/kubernetes/pull/43575
See also: https://github.com/openshift/origin/pull/13845
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@kubernetes/rh-networking @kubernetes/sig-network-misc @eparis @knobunc @danwinship @thockin @freehan
iptables-restore did not previously perform any locking, so when callers
(like kube-proxy) asked iptables-restore to write large numbers of rules,
the iptables-restore process could run in parallel with other 'iptables'
invocations from kubelet (hostports), docker, and other software. This
caused errors like:
"CNI request failed with status 400: 'Failed to ensure that nat chain
POSTROUTING jumps to MASQUERADE: error checking rule: exit status 4:
iptables: Resource temporarily unavailable."
or from Docker
"Failed to allocate and map port 1095-1095: iptables failed:
iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 1095
-j DNAT --to-destination 10.1.0.2:1095 ! -i lbr0: iptables:
Resource temporarily unavailable.\n (exit status 4)"
iptables-restore "wait" functionality was added in iptables git
commit 999eaa241212d3952ddff39a99d0d55a74e3639e but is NOT YET
in a released version of iptables.
See also https://bugzilla.redhat.com/show_bug.cgi?id=1417234
- improve the restoreInternal implementation in util/iptables
- add SetStdin and SetStdout functions to the Cmd interface (sketched below)
- modify kubelet/prober and some tests to work with the updated Cmd interface
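For reference, a hedged sketch of that Cmd interface change (the real
interface lives in the exec utility package and has more methods; this is
trimmed and illustrative):

```go
package exec

import "io"

// Cmd wraps a runnable command so it can be faked out in tests. Only the
// methods relevant here are shown.
type Cmd interface {
	// CombinedOutput runs the command and returns combined stdout/stderr.
	CombinedOutput() ([]byte, error)
	// SetStdin lets callers such as iptables-restore feed input on stdin.
	SetStdin(in io.Reader)
	// SetStdout lets callers capture or stream stdout themselves.
	SetStdout(out io.Writer)
}

// Example use from a restore-style caller (illustrative):
//
//	cmd := execer.Command("iptables-restore", "--noflush", "--counters")
//	cmd.SetStdin(bytes.NewBuffer(rules))
//	out, err := cmd.CombinedOutput()
```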
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Use iptables --wait (if available) to avoid races where util/iptables
fails because it tries to modify the tables at the same time as another
process.
Also, reorganize the code a bit in preparation for checking for another
flag as well. If semver.NewVersion() returns an error, it means there's a
bug in the code somewhere (we should only ever pass it valid version
strings), so just log that error rather than returning it to the caller.
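A hedged sketch of that version check, using the go-semver package the
message refers to (the constant name, its value, and the glog calls are
assumptions):

```go
package iptables

import (
	"github.com/coreos/go-semver/semver"
	"github.com/golang/glog"
)

// MinWaitVersion is assumed to be the first iptables release that
// understands "-w"; the constant name is illustrative.
const MinWaitVersion = "1.4.20"

// getIptablesWaitFlag returns the extra arg to pass to iptables, or nil if
// this iptables version does not support --wait.
func getIptablesWaitFlag(vstring string) []string {
	version, err := semver.NewVersion(vstring)
	if err != nil {
		// We only ever pass in versions we parsed ourselves, so an error here
		// is a bug in our code: log it rather than returning it to the caller.
		glog.Errorf("vstring (%s) is not a valid version string: %v", vstring, err)
		return nil
	}
	minVersion, err := semver.NewVersion(MinWaitVersion)
	if err != nil {
		glog.Errorf("MinWaitVersion (%s) is not a valid version string: %v", MinWaitVersion, err)
		return nil
	}
	if version.LessThan(*minVersion) {
		return nil
	}
	return []string{"-w"}
}
```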
With an older iptables binary, kube-proxy generates duplicate iptables
rules in the NAT table every few seconds. This fixes the problem by
properly unquoting and parsing older iptables-save output.
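A simplified sketch of the kind of unquoting and comparison involved; the
real parser also keeps quoted multi-word values (like --comment strings)
together, which is omitted here, and the names are illustrative:

```go
package iptables

import "strings"

// unquote strips surrounding single or double quotes from a field parsed
// out of iptables-save output (older iptables-save quotes values such as
// --comment strings; newer versions do not).
func unquote(field string) string {
	if len(field) >= 2 {
		first, last := field[0], field[len(field)-1]
		if (first == '"' && last == '"') || (first == '\'' && last == '\'') {
			return field[1 : len(field)-1]
		}
	}
	return field
}

// ruleMatches reports whether every desired arg appears in a saved rule
// line after unquoting; a sketch of the fallback comparison used when
// 'iptables -C' is unavailable.
func ruleMatches(savedLine string, args []string) bool {
	present := map[string]bool{}
	for _, field := range strings.Fields(savedLine) {
		present[unquote(field)] = true
	}
	for _, arg := range args {
		if !present[arg] {
			return false
		}
	}
	return true
}
```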
A service with a NodePort set will listen on that port on every node.
This is handy both for some load balancers (AWS ELB) and for people who
want to expose a service without using a load balancer.
After this, DNS is resolvable from the host if the DNS server is targeted
explicitly. This does NOT add the cluster DNS to the host's resolv.conf;
that is a larger problem, with distro-specific tie-ins and circular deps.
The iptables args list needs to include all fields exactly as they are
eventually spit out by iptables-save. This is because some systems do not
support the 'iptables -C' arg and so fall back on parsing iptables-save
output; if the args do not match that output, the rule will not pass the
check. For example, adding the /32 on the destination IP arg is not
strictly required for the rule to work, but omitting it causes the args
list to not match the final iptables-save output. This is fragile, and I
hope one day we can stop supporting such old iptables versions.
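As an example, an args list might be built like this hedged sketch; the
comment text, REDIRECT target, and helper name are made up, but the key
detail is including the /32 so the rule matches its iptables-save form:

```go
package iptables

// portalRuleArgs builds iptables args for a hypothetical userspace-proxy
// portal rule, written exactly the way iptables-save will print them back
// so the fallback "parse iptables-save" check can find the rule.
func portalRuleArgs(destIP, destPort, proxyPort string) []string {
	return []string{
		"-m", "comment", "--comment", "example portal rule",
		"-p", "tcp",
		// iptables-save always prints the mask on host addresses, so include
		// the /32 here even though iptables itself does not require it.
		"-d", destIP + "/32",
		"--dport", destPort,
		"-j", "REDIRECT", "--to-ports", proxyPort,
	}
}
```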