In cases where iptables hangs forever for some reason, kube-proxy becomes
unavailable. This change adds a timeout to the rule-checking operations,
which should give us a more reliable working iptables proxy.
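Roughly, the idea is to bound each rule-check invocation with a deadline. The following is a minimal Go sketch of that approach (not the actual kube-proxy code); the 5-second value and the checkRuleWithTimeout helper name are illustrative assumptions:

```go
// Sketch: bound an "iptables -C" rule check with a timeout so a hung
// iptables binary cannot block the caller forever.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func checkRuleWithTimeout(table, chain string, args ...string) (bool, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	cmdArgs := append([]string{"-t", table, "-C", chain}, args...)
	err := exec.CommandContext(ctx, "iptables", cmdArgs...).Run()
	if ctx.Err() == context.DeadlineExceeded {
		return false, fmt.Errorf("timed out checking iptables rule")
	}
	if err != nil {
		// Non-zero exit from "-C" normally means the rule does not exist.
		return false, nil
	}
	return true, nil
}

func main() {
	exists, err := checkRuleWithTimeout("nat", "POSTROUTING", "-j", "MASQUERADE")
	fmt.Println(exists, err)
}
```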
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Kube-proxy adds forward rules to ensure NodePorts work
**What this PR does / why we need it**:
Updates kube-proxy to set up proper forwarding so that NodePorts work with docker 1.13 without depending on iptables FORWARD being changed manually/externally.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39823
**Special notes for your reviewer**:
@thockin I used option number 2 that I mentioned in the #39823 issue, please let me know what you think about this change. If you are happy with the change then I can try to add tests but may need a little direction about what and where to add them.
**Release note**:
```release-note
Add iptables rules to allow Pod traffic even when default iptables policy is to reject.
```
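As a rough illustration of the forwarding change described above, the sketch below ensures an ACCEPT rule in the filter table's FORWARD chain so forwarded cluster traffic is not dropped by a default DROP policy. The exact rules, match criteria, and comment text kube-proxy installs differ; everything here is an assumption for illustration only:

```go
// Sketch: check-then-append an ACCEPT rule in the FORWARD chain.
package main

import (
	"log"
	"os/exec"
)

func ensureForwardRule(args ...string) error {
	// "-C" checks whether the rule already exists; "-A" appends it if not.
	check := append([]string{"-t", "filter", "-C", "FORWARD"}, args...)
	if exec.Command("iptables", check...).Run() == nil {
		return nil
	}
	add := append([]string{"-t", "filter", "-A", "FORWARD"}, args...)
	return exec.Command("iptables", add...).Run()
}

func main() {
	// Hypothetical rule; the real kube-proxy rules use its own chains and marks.
	err := ensureForwardRule(
		"-m", "comment", "--comment", "kubernetes forwarding rules",
		"-j", "ACCEPT",
	)
	if err != nil {
		log.Fatal(err)
	}
}
```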
There were a few places that the previous PR (https://github.com/kubernetes/kubernetes/pull/54763) missed, because that PR only covered flags of the form -w2 while some of the code used --wait=2. This changes that code to use the same global variable for the wait setting so that everything is consistent.
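A minimal sketch of the idea, assuming a single shared variable (the WaitString name below is illustrative, not necessarily the real one): every invocation prepends the same wait flag instead of some call sites hard-coding -w2 and others --wait=2.

```go
// Sketch: one source of truth for the iptables wait flag.
package main

import (
	"fmt"
	"os/exec"
)

// WaitString is the single shared wait flag used by every iptables call.
var WaitString = "-w5"

func run(args ...string) error {
	full := append([]string{WaitString}, args...)
	return exec.Command("iptables", full...).Run()
}

func main() {
	fmt.Println(run("-t", "nat", "-L", "-n"))
}
```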
For iptables save and restore operations, kube-proxy currently uses
the IPv4 versions of the iptables save and restore utilities
(iptables-save and iptables-restore, respectively). For IPv6 operation,
the IPv6 versions of these utilities need to be used
(ip6tables-save and ip6tables-restore, respectively).
Both this change and PR #48551 are needed to get Kubernetes services
to work in an IPv6-only Kubernetes cluster (along with setting
'--bind-address ::0' on the kube-proxy command line). This change
was alluded to in a discussion on services for issue #1443.
fixes #50474
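A hedged sketch of selecting the save/restore binaries by IP family; the Protocol type and helper below are illustrative, not the actual kube-proxy API:

```go
// Sketch: choose iptables-save/iptables-restore or their IPv6 counterparts.
package main

import (
	"fmt"
	"os/exec"
)

type Protocol int

const (
	ProtocolIPv4 Protocol = iota
	ProtocolIPv6
)

func saveRestoreBinaries(p Protocol) (save, restore string) {
	if p == ProtocolIPv6 {
		return "ip6tables-save", "ip6tables-restore"
	}
	return "iptables-save", "iptables-restore"
}

func main() {
	save, _ := saveRestoreBinaries(ProtocolIPv6)
	out, err := exec.Command(save, "-t", "nat").CombinedOutput()
	fmt.Println(string(out), err)
}
```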
Automatic merge from submit-queue
util/iptables: grab iptables locks if iptables-restore doesn't support --wait
When iptables-restore doesn't support --wait (which < 1.6.2 don't), it may
conflict with other iptables users on the system, like docker, because it
doesn't acquire the iptables lock before changing iptables rules. This causes
sporadic docker failures when starting containers.
To ensure those don't happen, essentially duplicate the iptables locking
logic inside util/iptables when we know iptables-restore doesn't support
the --wait option.
Unfortunately iptables uses two different locking mechanisms, one until
1.4.x (abstract socket based) and another from 1.6.x (/run/xtables.lock
flock() based). We have to grab both locks, because we don't know what
version of iptables-restore exists since iptables-restore doesn't have
a --version option before 1.6.2. Plus, distros (like RHEL) backport the
/run/xtables.lock patch to 1.4.x versions.
Related: https://github.com/kubernetes/kubernetes/pull/43575
See also: https://github.com/openshift/origin/pull/13845
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1417234
@kubernetes/rh-networking @kubernetes/sig-network-misc @eparis @knobunc @danwinship @thockin @freehan
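A simplified sketch of the dual locking described above, under the assumption that both locks are taken before running an iptables-restore that lacks --wait: the 1.6.x-style flock() on /run/xtables.lock plus the 1.4.x-style abstract unix socket. Retry loops, non-blocking attempts, and most error handling are omitted:

```go
// Sketch: grab both iptables locks (Linux only).
package main

import (
	"fmt"
	"net"
	"os"
	"syscall"
)

func grabIptablesLocks() (release func(), err error) {
	// iptables >= 1.6.x lock: flock() on /run/xtables.lock.
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE, 0600)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		f.Close()
		return nil, err
	}

	// iptables <= 1.4.x lock: bind the abstract unix socket "@xtables".
	l, err := net.Listen("unix", "@xtables")
	if err != nil {
		f.Close()
		return nil, err
	}

	return func() { l.Close(); f.Close() }, nil
}

func main() {
	release, err := grabIptablesLocks()
	if err != nil {
		fmt.Println("could not grab iptables locks:", err)
		return
	}
	defer release()
	// ... run iptables-restore while holding both locks ...
}
```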
iptables-restore did not previously perform any locking, meaning that
when callers (like kube-proxy) asked iptables-restore to write large
numbers of rules, the iptables-restore process might run in parallel
with other 'iptables' invocations in kubelet (hostports), docker,
and other software. This causes errors like:
"CNI request failed with status 400: 'Failed to ensure that nat chain
POSTROUTING jumps to MASQUERADE: error checking rule: exit status 4:
iptables: Resource temporarily unavailable."
or from Docker
"Failed to allocate and map port 1095-1095: iptables failed:
iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 1095
-j DNAT --to-destination 10.1.0.2:1095 ! -i lbr0: iptables:
Resource temporarily unavailable.\n (exit status 4)"
iptables-restore "wait" functionality was added in iptables git
commit 999eaa241212d3952ddff39a99d0d55a74e3639e but is NOT YET
in a released version of iptables.
See also https://bugzilla.redhat.com/show_bug.cgi?id=1417234
- improve the restoreInternal implementation in iptables
- add SetStdin and SetStdout functions to the Cmd interface
- modify kubelet/prober and some tests to work with the Cmd interface
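A hedged stand-in for the restore path, showing why the Cmd interface needed SetStdin: the full rule set is fed to iptables-restore over stdin. This is a simplified sketch, not the actual restoreInternal code:

```go
// Sketch: pipe a rule dump into iptables-restore via stdin.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func restore(rules []byte) error {
	cmd := exec.Command("iptables-restore", "--noflush", "--counters")
	cmd.Stdin = bytes.NewBuffer(rules) // analogous to Cmd.SetStdin
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables-restore failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	data := []byte("*nat\n-A POSTROUTING -j MASQUERADE\nCOMMIT\n")
	fmt.Println(restore(data))
}
```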
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Use iptables --wait (if available) to avoid race conditions in which
util/iptables fails because it tries to modify the tables at the same
time as another process.
Also, reorganize the code a bit in preparation for checking for
another flag. If semver.NewVersion() returns an error, it
means there's a bug in the code somewhere (we should only ever be
passing it valid version strings), so just log that error rather than
returning it to the caller.
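A minimal sketch of that version gating, assuming the -w/--wait flag first appeared in iptables 1.4.20 and using simplified parsing instead of the real semver handling:

```go
// Sketch: only pass --wait when the installed iptables understands it.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

var versionRe = regexp.MustCompile(`v([0-9]+)\.([0-9]+)\.([0-9]+)`)

// supportsWait reports whether "iptables --version" is at least 1.4.20
// (assumed here to be the first release with -w/--wait).
func supportsWait() bool {
	out, err := exec.Command("iptables", "--version").CombinedOutput()
	if err != nil {
		return false
	}
	m := versionRe.FindStringSubmatch(string(out))
	if m == nil {
		return false
	}
	var major, minor, patch int
	fmt.Sscanf(m[1]+" "+m[2]+" "+m[3], "%d %d %d", &major, &minor, &patch)
	return major > 1 || (major == 1 && (minor > 4 || (minor == 4 && patch >= 20)))
}

func main() {
	args := []string{"-t", "nat", "-L", "-n"}
	if supportsWait() {
		args = append([]string{"--wait"}, args...)
	}
	out, _ := exec.Command("iptables", args...).CombinedOutput()
	fmt.Println(string(out))
}
```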
With older iptables binaries, kube-proxy generates duplicate
iptables rules in the NAT table every few seconds.
This fixes the problem by properly unquoting and parsing
older iptables-save output.
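A hedged sketch of the parsing fix: older iptables-save wraps arguments such as comment strings in double quotes, so a naive comparison against the rule kube-proxy wants to ensure never matches and the rule gets appended again on every sync. Unquoting the saved arguments before comparing avoids the duplicates. The helpers below are illustrative only and deliberately skip escape handling:

```go
// Sketch: strip iptables-save quoting before comparing rule arguments.
package main

import (
	"fmt"
	"strings"
)

// unquoteArg removes one level of surrounding double quotes, as written by
// older iptables-save versions.
func unquoteArg(s string) string {
	if len(s) >= 2 && strings.HasPrefix(s, `"`) && strings.HasSuffix(s, `"`) {
		return s[1 : len(s)-1]
	}
	return s
}

func sameRule(saved, wanted []string) bool {
	if len(saved) != len(wanted) {
		return false
	}
	for i := range saved {
		if unquoteArg(saved[i]) != wanted[i] {
			return false
		}
	}
	return true
}

func main() {
	saved := []string{"-m", "comment", "--comment", `"kube-proxy rule"`, "-j", "ACCEPT"}
	wanted := []string{"-m", "comment", "--comment", "kube-proxy rule", "-j", "ACCEPT"}
	fmt.Println(sameRule(saved, wanted)) // true once quotes are stripped
}
```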