The following changes are proposed for the iptables proxier:
* There are three places where a string specifying IP:port is parsed
using something like this:
if index := strings.Index(e.endpoint, ":"); index != -1 {
This will fail for IPv6, since V6 addresses contain colons. Also,
the V6 address is expected to be surrounded by square brackets
(i.e. []:). Fix this by replacing the call to Index() with a
call to LastIndex() and stripping out the square brackets (see the
sketch after this list).
* The String() method for the localPort struct should put square brackets
around IPv6 addresses.
* The logging in the merge() method for proxyServiceMap should put brackets
around IPv6 addresses.
* There are several places where the filterRules destination is hardcoded to
/32. This should be /128 in the IPv6 case.
* Add IPv6 unit test cases
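A minimal sketch of the mechanical pieces above - LastIndex-based splitting, re-adding brackets when formatting, and picking /32 vs. /128 - assuming a plain "host:port" string; the helper names are illustrative, not the actual proxier functions:
```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// splitHostPort splits "1.2.3.4:80" or "[fd00::1]:80" into address and port.
// Using LastIndex means the colons inside a V6 address are not mistaken for
// the separator, and Trim strips the surrounding square brackets.
func splitHostPort(endpoint string) (string, string, bool) {
	index := strings.LastIndex(endpoint, ":")
	if index == -1 {
		return "", "", false
	}
	host := strings.Trim(endpoint[:index], "[]")
	return host, endpoint[index+1:], true
}

// maskForIP picks the single-host prefix used in filter rules:
// /32 for IPv4, /128 for IPv6.
func maskForIP(ip string) string {
	if net.ParseIP(ip).To4() != nil {
		return "/32"
	}
	return "/128"
}

func main() {
	for _, e := range []string{"10.0.0.1:8080", "[fd00:1234::2]:8080"} {
		host, port, ok := splitHostPort(e)
		if !ok {
			continue
		}
		// net.JoinHostPort re-adds the brackets for IPv6, which is what the
		// localPort String() and merge() logging changes need.
		fmt.Println(net.JoinHostPort(host, port), "dest mask:", maskForIP(host))
	}
}
```
Note that net.SplitHostPort would also handle the bracketed form directly; the sketch just mirrors the LastIndex approach the change describes.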
fixes #48550
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
Parameterize `stickyMaxAgeMinutes` for service in API
**What this PR does / why we need it**:
Currently, `stickyMaxAgeMinutes` for a session affinity type service is hard-coded to 180 min. There is a TODO comment, see
https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L205
I think the session sticky max time varies from service to service, and users may not be aware of it since it's hard-coded in all three proxier.go implementations - iptables, userspace and winuserspace.
Once we parameterize it in the API, users can set/get the value for their different services.
Perhaps we can introduce a new field `api.ClientIPAffinityConfig` in `api.ServiceSpec`.
There is an initial discussion about it in sig-network group. See,
https://groups.google.com/forum/#!topic/kubernetes-sig-network/i-LkeHrjs80
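For discussion, one possible shape for such a field is sketched below; the type and field names (`SessionAffinityConfig`, `ClientIPConfig`, `TimeoutSeconds`) are illustrative and the final API may well differ:
```go
// Illustrative only: a possible addition to the Service API, not the final design.
package api

// ClientIPConfig configures Client IP based session affinity.
type ClientIPConfig struct {
	// TimeoutSeconds is the session sticky time, replacing the hard-coded
	// 180 minutes (10800 seconds) the proxiers use today. A pointer so that
	// "unset" can fall back to the current default.
	TimeoutSeconds *int32
}

// SessionAffinityConfig would live in ServiceSpec next to SessionAffinity.
type SessionAffinityConfig struct {
	ClientIP *ClientIPConfig
}
```
With a field along these lines, each proxier would read the per-service timeout instead of the shared constant, and users could set/get it per service.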
**Which issue this PR fixes**:
fixes #49831
**Special notes for your reviewer**:
**Release note**:
```release-note
Parameterize session affinity timeout seconds in service API for Client IP based session affinity.
```
Automatic merge from submit-queue
Edge-based userspace LB in kube-proxy
@thockin @bowei - if one of you could take a look and make sure this PR doesn't break some basic kube-proxy assumptions. The similar change for the winuserspace proxy should be pretty trivial.
And we should also do that for iptables, but that requires splitting the iptables code into syncProxyRules (which, from what I know, @thockin has already started working on, so we should probably wait for that to be done).
The existing healthcheck lib was pretty complicated and was hiding some
bugs (like the count always being 1). This is a reboot of the interface
and implementation to be significantly simpler and better tested.
Adding test cases for HC updates found a bug with an update that
simultaneously removes one port and adds another. Map iteration is
randomized, so sometimes no HC would be created.
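A contrived, self-contained sketch of that failure mode (not the real healthcheck code): such an update yields both a "close" for the old port and an "open" for the new one, and if they are applied while ranging over a map, Go's randomized iteration order decides which one wins.
```go
package main

import "fmt"

func main() {
	// Two pending changes for the same service, keyed by port.
	changes := map[int]string{
		10256: "close", // old HC port being removed
		10257: "open",  // new HC port being added
	}

	listening := map[string]int{} // service -> port with an active listener

	for port, op := range changes { // iteration order is randomized
		switch op {
		case "open":
			listening["svc"] = port
		case "close":
			delete(listening, "svc") // can also wipe the just-opened listener
		}
	}

	// Sometimes prints map[svc:10257], sometimes map[] -- i.e. no HC at all.
	fmt.Println(listening)
}
```
Run it a few times and the result flips between a listener on the new port and no listener at all, which is the "sometimes no HC would be created" symptom.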
We don't need the svcPortToInfoMap. Its only purpose was to
send "valid" local endpoints (those with valid IP and >0 port) to the
health checker. But we shouldn't be sending invalid endpoints to
the health checker anyway, because it can't do anything with them.
If we exclude invalid endpoints earlier, then we don't need
flattenValidEndpoints().
And if we don't need flattenValidEndpoints() it makes no sense to have
svcPortToInfoMap store hostPortInfo, since endpointsInfo is the same
thing as hostPortInfo except with a combined host:port.
And if svcPortToInfoMap now only stores valid endpointsInfos, it is
exactly the same thing as newEndpoints.
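To make the redundancy concrete, hypothetical shapes of the two types (not the exact kube-proxy definitions) might look like this; once only valid endpoints are kept, the split host/port form carries nothing the combined one doesn't:
```go
// Hypothetical shapes, for illustration only.
package proxy

// hostPortInfo keeps the endpoint address split into host and port.
type hostPortInfo struct {
	host    string
	port    int
	isLocal bool
}

// endpointsInfo keeps the same data with the host and port combined.
type endpointsInfo struct {
	endpoint string // "host:port"
	isLocal  bool
}
```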
This changes the userspace proxy so that it cleans up its conntrack
settings when a service is removed (as the iptables proxy already
does). This could theoretically cause problems when a UDP service
is deleted and recreated quickly (with the same IP address). As
long as packets from the same UDP source IP and port were going to
the same destination IP and port, the conntrack entry would still
apply and the packets would be sent to the old destination.
This is astronomically unlikely if you did not specify the IP address
to use in the service, and even then it only happens with an "established"
UDP connection. However, in cases where a service could be "switched"
between using the iptables proxy and the userspace proxy, this case
becomes much more frequent.
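A minimal sketch of the kind of cleanup described above, assuming the conntrack CLI is available on the node and using a hypothetical helper name; the real proxy goes through its own exec and error-handling utilities, so treat this as illustrative only:
```go
package main

import (
	"fmt"
	"os/exec"
)

// clearUDPConntrackForIP deletes conntrack entries whose original destination
// is the (now removed) service IP, so established UDP flows stop being
// steered to the old destination. Hypothetical helper, not the proxy's API.
func clearUDPConntrackForIP(serviceIP string) error {
	out, err := exec.Command("conntrack", "-D", "-p", "udp", "--orig-dst", serviceIP).CombinedOutput()
	if err != nil {
		// Note: conntrack exits non-zero when no entries match; the real
		// code would tolerate that case instead of treating it as a failure.
		return fmt.Errorf("error clearing conntrack for %s: %v: %s", serviceIP, err, out)
	}
	return nil
}

func main() {
	if err := clearUDPConntrackForIP("10.96.0.53"); err != nil {
		fmt.Println(err)
	}
}
```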
This makes it more obvious that they run together and makes the upcoming
rate-limited syncs easier.
Also make the tests use ints for ports, so it's easier to see when a port is
a literal value vs. a name.