Automatic merge from submit-queue
Restore "Setting endpoints" log message
**What this PR does / why we need it**:
The "Setting endpoints" message from kube-proxy at high verbosity was
lost as part of a larger simplification in kubernetes/kubernetes#42747.
This change brings it back, simply outputting the just-constructed
addresses list.
I need this message to monitor delays in propagating endpoints changes across nodes.
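For illustration, a minimal sketch of the kind of verbose message being restored, assuming glog-style logging; the function and variable names (and the verbosity level) are hypothetical, not the exact code in kube-proxy:
```go
package proxy

import "github.com/golang/glog"

// logEndpointsUpdate is a sketch of the restored message: at high verbosity,
// log the just-constructed address list ("ip:port" strings) for a service port.
func logEndpointsUpdate(svcPortName string, addresses []string) {
	if glog.V(5) {
		glog.Infof("Setting endpoints for %q to %+v", svcPortName, addresses)
	}
}
```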
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43871, 44053)
Proxy healthchecks overhaul
The first commit is #44051
These three commits are tightly coupled, but should be reviewed one-by-one. The first adds tests for healthchecks, which exposed a bug. The second rewrites the healthcheck pkg to be much simpler and less flexible (we weren't using the flexibility anyway). The third tweaks how healthchecks are handled in the endpoints path to be more like the services path.
@MrHohn because I know you were in here for source-IP GA work.
@wojtek-t
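As a rough sketch of what a simplified per-service healthcheck amounts to (hypothetical names, not the actual pkg/proxy/healthcheck API): one HTTP responder per healthcheck nodeport that reports the local endpoint count and returns 503 when it is zero.
```go
package healthcheck

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// hcServer is a sketch of a per-service healthcheck responder: the proxier
// updates the local-endpoint count, and load balancers probe the HTTP port.
type hcServer struct {
	service        string
	localEndpoints int64
}

// SetLocalEndpoints records how many endpoints for the service live on this node.
func (s *hcServer) SetLocalEndpoints(n int) {
	atomic.StoreInt64(&s.localEndpoints, int64(n))
}

// ServeHTTP returns 200 when local endpoints exist and 503 otherwise.
func (s *hcServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	n := atomic.LoadInt64(&s.localEndpoints)
	code := http.StatusOK
	if n == 0 {
		code = http.StatusServiceUnavailable
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	fmt.Fprintf(w, `{"service": %q, "localEndpoints": %d}`, s.service, n)
}
```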
The "Setting endpoints" message from kube-proxy at high verbosity was
lost as part of a larger simplification in kubernetes/kubernetes#42747.
This change brings it back, simply outputting the just-constructed
addresses list.
Automatic merge from submit-queue
kube-proxy: filter INPUT as well as OUTPUT
We need to apply filter rules on the way in (nodeports) and out (cluster
IPs). Testing here is insufficient to have caught this - will come back
for that.
Fixes #43969
@justinsb since you have the best repro, can you test? It passes what I think is the repro.
@ethernetdan we will want this in 1.6.x
```release-note
Fix a bug where service nodeports with no backends were not being rejected when they should have been. This is not a regression vs v1.5 - it's an earlier fix that didn't go far enough.
```
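For context, the fix amounts to hooking the proxier's filter-table rules from both directions so that REJECTs for backend-less services apply to nodeport traffic arriving at the node as well as locally generated traffic to cluster IPs. A hedged sketch of the rule shape, in iptables-restore format; addresses, ports, and comments are illustrative:
```go
package main

import (
	"fmt"
	"strings"
)

// Sketch: filter-table rules rejecting a service that has no endpoints.
// Jumping to KUBE-SERVICES from both INPUT and OUTPUT is the point of the
// fix: nodeport traffic arrives via INPUT, while locally generated traffic
// to the cluster IP goes through OUTPUT.
func main() {
	rules := []string{
		"*filter",
		":KUBE-SERVICES - [0:0]",
		`-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES`,
		`-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES`,
		`-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp --dport 80 -m comment --comment "ns/svc has no endpoints" -j REJECT`,
		`-A KUBE-SERVICES -p tcp --dport 30080 -m addrtype --dst-type LOCAL -m comment --comment "ns/svc nodeport has no endpoints" -j REJECT`,
		"COMMIT",
	}
	fmt.Println(strings.Join(rules, "\n"))
}
```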
The existing healthcheck lib was pretty complicated and was hiding some
bugs (like the count always being 1). This is a reboot of the interface
and implementation to be significantly simpler and better tested.
Adding test cases for HC updates found a bug with an update that
simultaneously removes one port and adds another. Map iteration is
randomized, so sometimes no HC would be created.
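A sketch of the order-independent way to compute that delta (names and types are hypothetical): diff the old and new port maps explicitly rather than deciding inside a single map-iteration loop, so the result does not depend on Go's randomized iteration order.
```go
package healthcheck

// diffHealthcheckPorts computes which healthcheck servers to open and close
// by diffing the old and new nodeport maps. An explicit two-way diff means a
// simultaneous remove-one/add-another update cannot be masked by map
// iteration order.
func diffHealthcheckPorts(oldPorts, newPorts map[string]int) (toOpen, toClose []int) {
	for name, port := range newPorts {
		if old, ok := oldPorts[name]; !ok || old != port {
			toOpen = append(toOpen, port) // new or changed port needs a healthcheck server
		}
	}
	for name, port := range oldPorts {
		if cur, ok := newPorts[name]; !ok || cur != port {
			toClose = append(toClose, port) // removed or changed port's server must be closed
		}
	}
	return toOpen, toClose
}
```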
We don't need the svcPortToInfoMap. Its only purpose was to
send "valid" local endpoints (those with valid IP and >0 port) to the
health checker. But we shouldn't be sending invalid endpoints to
the health checker anyway, because it can't do anything with them.
If we exclude invalid endpoints earlier, then we don't need
flattenValidEndpoints().
And if we don't need flattenValidEndpoints() it makes no sense to have
svcPortToInfoMap store hostPortInfo, since endpointsInfo is the same
thing as hostPortInfo except with a combined host:port.
And if svcPortToInfoMap now only stores valid endpointsInfos, it is
exactly the same thing as newEndpoints.
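A sketch of what "exclude invalid endpoints earlier" means in practice, with hypothetical names standing in for the proxier's types:
```go
package proxy

import "net"

// endpointsInfo is an illustrative stand-in for the proxier's per-endpoint record.
type endpointsInfo struct {
	ip   string
	port int
}

// buildEndpoints keeps only endpoints with a parseable IP and a positive port,
// so downstream consumers (including the health checker) never see invalid
// entries and no separate flattenValidEndpoints() pass is needed.
func buildEndpoints(addrs []endpointsInfo) []endpointsInfo {
	valid := make([]endpointsInfo, 0, len(addrs))
	for _, ep := range addrs {
		if net.ParseIP(ep.ip) == nil || ep.port <= 0 {
			continue // drop invalid endpoints at construction time
		}
		valid = append(valid, ep)
	}
	return valid
}
```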
This changes the userspace proxy so that it cleans up its conntrack
settings when a service is removed (as the iptables proxy already
does). This could theoretically cause problems when a UDP service
is deleted and recreated quickly (with the same IP address). As
long as packets from the same UDP source IP and port were going to
the same destination IP and port, the conntrack entry would still apply and
the packets would be sent to the old destination.
This is astronomically unlikely if you did not specify the IP address
to use in the service, and even then, only happens with an "established"
UDP connection. However, in cases where a service could be "switched"
between using the iptables proxy and the userspace proxy, this case
becomes much more frequent.
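A hedged sketch of the cleanup step, assuming the conntrack CLI from conntrack-tools is available on the node; the flags are standard conntrack options, but the function name and surrounding code are illustrative:
```go
package proxy

import (
	"fmt"
	"os/exec"
	"strings"
)

// clearUDPConntrackForIP deletes conntrack entries whose original destination
// is the given (removed) service IP, so stale UDP flows stop being steered to
// the old backend. conntrack exits non-zero when nothing matched, which is
// not an error for our purposes.
func clearUDPConntrackForIP(serviceIP string) error {
	out, err := exec.Command("conntrack", "-D", "--orig-dst", serviceIP, "-p", "udp").CombinedOutput()
	if err != nil && !strings.Contains(string(out), "0 flow entries have been deleted") {
		return fmt.Errorf("error deleting conntrack entries for %s: %v (%s)", serviceIP, err, out)
	}
	return nil
}
```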
This makes it more obvious that they run together and makes the upcoming
rate-limited syncs easier.
Also make the tests use ints for ports, so it's easier to see when a port is
a literal value vs a name.
This is a weird function, but I didn't want to change any semantics
until the tests are in place. Testing exposed one bug where stale
connections of renamed ports were not marked stale.
There are other things that seem wrong here, more will follow.
Move the feature test to where we are activating the feature, rather
than where we detect locality. This is in service of better tests,
which is in service of less-frequent resyncing, which is going to
require refactoring.
Instead of copying the map (as OnServicesUpdate() used to do, a behavior that
was carried into buildServiceMap() to preserve semantics while creating
testcases), start with a new empty map and do deletion checking later, as sketched below.
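A sketch of the pattern with hypothetical types: build the new map from scratch, then diff against the previous map to find services that disappeared.
```go
package proxy

// serviceEntry is an illustrative stand-in for the proxier's per-service info.
type serviceEntry struct {
	Name      string
	ClusterIP string
}

// rebuildServiceMap builds the new map from the current Service list instead
// of mutating a copy of the old one, then finds deletions by checking which
// old keys are absent from the new map.
func rebuildServiceMap(oldMap map[string]serviceEntry, current []serviceEntry) (map[string]serviceEntry, []string) {
	newMap := make(map[string]serviceEntry, len(current))
	for _, svc := range current {
		newMap[svc.Name] = svc
	}
	var deleted []string
	for name := range oldMap {
		if _, ok := newMap[name]; !ok {
			deleted = append(deleted, name) // existed before, gone now
		}
	}
	return newMap, deleted
}
```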
The API docs say:
// ServiceTypeExternalName means a service consists of only a reference to
// an external name that kubedns or equivalent will return as a CNAME
// record, with no exposing or proxying of any pods involved.
which implies that ExternalName services should be ignored for proxy
purposes.
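Concretely, the service-map builder can skip such services up front; a minimal sketch (the constant mirrors the API value, the helper is illustrative):
```go
package proxy

// serviceTypeExternalName mirrors the API constant; such services are pure
// DNS CNAME records, so there is nothing for kube-proxy to expose or proxy.
const serviceTypeExternalName = "ExternalName"

// shouldProxy reports whether the proxier should program rules for a service
// of the given type (ClusterIP, NodePort, LoadBalancer, or ExternalName).
func shouldProxy(serviceType string) bool {
	return serviceType != serviceTypeExternalName
}
```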
Automatic merge from submit-queue (batch tested with PRs 35884, 37305, 37369, 37429, 35679)
fix misleading warning message regarding kube-proxy nodeIP initializa…
The current warning message implies that the operator should restart kube-proxy with some flag related to node IP, which can be very misleading.
Automatic merge from submit-queue
Bug fix: incoming UDP packets do not reach newly deployed services
**What this PR does / why we need it**:
Incoming UDP packets do not reach newly deployed services when an old connection's state in conntrack has not been cleared. When such a packet arrives, it does not go through the NAT table again, because it is not "the first" packet of the connection. This PR fixes the issue.
**Which issue this PR fixes**
Fixes #31983
xref https://github.com/docker/docker/issues/8795
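A sketch of the trigger condition, with illustrative types: find UDP service ports that previously had no endpoints but now do, since those are the cluster IPs whose conntrack entries need flushing so "established" flows re-enter the NAT table and reach the new backends.
```go
package proxy

// servicePortKey is an illustrative key for a service port.
type servicePortKey struct {
	ClusterIP string
	Protocol  string // "UDP" or "TCP"
}

// newlyActiveUDPServices returns cluster IPs of UDP service ports that went
// from zero endpoints to some endpoints in this update; stale conntrack
// entries for these IPs must be deleted.
func newlyActiveUDPServices(oldCounts, newCounts map[servicePortKey]int) []string {
	var staleIPs []string
	for sp, n := range newCounts {
		if sp.Protocol != "UDP" {
			continue
		}
		if n > 0 && oldCounts[sp] == 0 {
			staleIPs = append(staleIPs, sp.ClusterIP)
		}
	}
	return staleIPs
}
```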
Automatic merge from submit-queue
Curating Owners: pkg/proxy
cc @thockin
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example).
Automatic merge from submit-queue
Change stickyMaxAge from seconds to minutes, fixes issue #35677
**What this PR does / why we need it**: Increases the service sessionAffinity time from 180 seconds to 180 minutes for proxy mode iptables; the 180-second value was a bug introduced in a refactor.
**Which issue this PR fixes**: fixes #35677
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixed incorrect service sessionAffinity stickiness time in proxy mode iptables (was 180 seconds, should be 180 minutes).
```
Since there is no test for the sessionAffinity feature at the moment, I wanted to create one, but I don't know how.
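For reference, a hedged sketch of where the value ends up, assuming the iptables proxier expresses affinity as a seconds argument to the `recent` match (the rule fragment printed below is illustrative, not the exact rule):
```go
package main

import "fmt"

// Sketch: session affinity stickiness as the iptables proxier consumes it,
// i.e. a seconds value, so 180 minutes must become 10800 rather than 180.
func main() {
	const stickyMinutes = 180
	stickyMaxAgeSeconds := stickyMinutes * 60 // 10800, not 180
	fmt.Printf("... -m recent --name <affinity-chain> --rcheck --seconds %d --reap ...\n", stickyMaxAgeSeconds)
}
```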
Automatic merge from submit-queue
Add kubelet --network-plugin-mtu flag for MTU selection
* Add network-plugin-mtu option which lets us pass down an MTU to a network provider (currently processed by kubenet)
* Add a test, and thus make sysctl testable
bridge-nf-call-iptables appears to only be relevant when the containers are
attached to a Linux bridge, which is usually the case with default Kubernetes
setups, docker, and flannel. That ensures that the container traffic is
actually subject to the iptables rules since it traverses a Linux bridge
and bridged traffic is only subject to iptables when bridge-nf-call-iptables=1.
But with other networking solutions (like openshift-sdn) that don't use Linux
bridges, bridge-nf-call-iptables may not be relevant, because iptables is
invoked at other points not involving a Linux bridge.
The decision to set bridge-nf-call-iptables should be influenced by networking
plugins, so push the responsibility out to them. If no network plugin is
specified, fall back to the existing bridge-nf-call-iptables=1 behavior.
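A sketch of that fallback, with hypothetical package and function names; writing /proc directly is a simplification of the kubelet's sysctl helper:
```go
package network

import (
	"fmt"
	"io/ioutil"
)

const bridgeNFCallIPTables = "/proc/sys/net/bridge/bridge-nf-call-iptables"

// ensureBridgeNFCallIPTables applies the legacy default (=1) so traffic that
// crosses a Linux bridge is still run through iptables. Callers skip this
// when the configured network plugin does not rely on a Linux bridge.
func ensureBridgeNFCallIPTables() error {
	if err := ioutil.WriteFile(bridgeNFCallIPTables, []byte("1"), 0644); err != nil {
		return fmt.Errorf("can't set sysctl %s: %v", bridgeNFCallIPTables, err)
	}
	return nil
}
```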
This allows us to use the MARK-MASQ chain as a subroutine, rather than encoding
the mark in many places. Having a KUBE-POSTROUTING chain means we can flush
and rebuild it atomically. This makes follow-on work to change the mark
significantly easier.
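For illustration, the nat-table layout this enables looks roughly like the following sketch in iptables-restore format; the 0x4000 mark value and comments are examples, not necessarily the exact values used:
```go
package main

import (
	"bytes"
	"fmt"
)

// Sketch: KUBE-MARK-MASQ is a reusable subroutine that only sets a mark, and
// KUBE-POSTROUTING masquerades anything carrying that mark. Both chains can
// be flushed and rebuilt atomically via iptables-restore.
func main() {
	var nat bytes.Buffer
	fmt.Fprintln(&nat, "*nat")
	fmt.Fprintln(&nat, ":KUBE-MARK-MASQ - [0:0]")
	fmt.Fprintln(&nat, ":KUBE-POSTROUTING - [0:0]")
	fmt.Fprintln(&nat, "-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000")
	fmt.Fprintln(&nat, `-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING`)
	fmt.Fprintln(&nat, "-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE")
	fmt.Fprintln(&nat, "COMMIT")
	fmt.Print(nat.String())
}
```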