- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we call it where necessary
- we update the other vendored repositories that made the same
change from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() functions (see the sketch below)
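For illustration, a minimal sketch of such a test fix, assuming a test file that reads klog's verbosity flags (the package and test names are hypothetical):
```go
package proxy_test

import (
	"flag"
	"testing"

	"k8s.io/klog"
)

func init() {
	// klog no longer registers its flags on flag.CommandLine automatically,
	// so tests that depend on -v/-logtostderr must call InitFlags themselves.
	klog.InitFlags(flag.CommandLine)
}

func TestSomething(t *testing.T) {
	klog.V(4).Info("only printed when the test binary runs with -v=4")
}
```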
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
This will allow kube-proxy to be run without `privileged`, adding only the `NET_ADMIN` capability.
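As a rough sketch of what that means for a kube-proxy container spec, expressed with the k8s.io/api types (field values are illustrative, not the shipped manifest):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// kubeProxySecurityContext builds a security context with privileged mode
// disabled and only the NET_ADMIN capability added.
func kubeProxySecurityContext() *v1.SecurityContext {
	privileged := false
	return &v1.SecurityContext{
		Privileged: &privileged,
		Capabilities: &v1.Capabilities{
			Add: []v1.Capability{"NET_ADMIN"},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", kubeProxySecurityContext())
}
```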
Signed-off-by: Jess Frazelle <acidburn@microsoft.com>
The requested Service protocol is checked against the protocols supported by the GCE Internal LB, which are TCP and UDP.
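A minimal sketch of such a check (the function name and error text are hypothetical; the real validation lives in the GCE cloud provider):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// checkInternalLBProtocol rejects anything other than TCP and UDP,
// the only protocols the GCE Internal LB supports.
func checkInternalLBProtocol(p v1.Protocol) error {
	switch p {
	case v1.ProtocolTCP, v1.ProtocolUDP:
		return nil
	default:
		return fmt.Errorf("protocol %q is not supported by GCE Internal LB", p)
	}
}

func main() {
	fmt.Println(checkInternalLBProtocol(v1.ProtocolSCTP)) // rejected
}
```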
SCTP is not supported by OpenStack LBaaS. If SCTP is requested in a Service with type=LoadBalancer, the request is rejected. Comment style is also corrected.
SCTP is not allowed for LoadBalancer Services or for HostPort. Kube-proxy can be configured not to start listening on the host port for SCTP; see the new SCTPUserSpaceNode parameter.
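A hedged sketch of the LoadBalancer rejection (names and error text are illustrative, not the actual validation code):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// rejectSCTPLoadBalancer returns an error if any port of a
// type=LoadBalancer Service asks for SCTP.
func rejectSCTPLoadBalancer(svc *v1.Service) error {
	if svc.Spec.Type != v1.ServiceTypeLoadBalancer {
		return nil
	}
	for _, p := range svc.Spec.Ports {
		if p.Protocol == v1.ProtocolSCTP {
			return fmt.Errorf("SCTP is not allowed for Services of type LoadBalancer")
		}
	}
	return nil
}

func main() {
	svc := &v1.Service{}
	svc.Spec.Type = v1.ServiceTypeLoadBalancer
	svc.Spec.Ports = []v1.ServicePort{{Protocol: v1.ProtocolSCTP, Port: 80}}
	fmt.Println(rejectSCTPLoadBalancer(svc)) // rejected
}
```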
Changed the vendored github.com/nokia/sctp to github.com/ishidawataru/sctp, i.e. from now on we use the upstream version.
Fixed netexec.go compilation and various test cases.
SCTP-related conformance tests removed. Netexec's pod definition and Dockerfile are updated to expose the new SCTP port (8082).
SCTP-related e2e test cases are removed, as the e2e test systems do not support SCTP.
SCTP-related firewall config is removed from cluster/gce/util.sh. The variable name sctp_addr is corrected to sctpAddr in pkg/proxy/ipvs/proxier.go.
cluster/gce/util.sh is copied from master
Automatic merge from submit-queue (batch tested with PRs 65882, 65896, 65755, 60549, 65927). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Avoid printing some service comments in iptables rules
According to some profiles, with a large number of endpoints in the system, the comments naming the service in the relevant iptables rules may be responsible for 40% of the total iptables content.
Given that ~70% of memory usage of kube-proxy seems to be because of generated iptables rules, the overall saving may be at the level of 30% or so.
OTOH, we sacrifice a bit of iptables readability, but this PR only changes the subset of iptables rules that contribute most to the problem.
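For a sense of scale, the kind of per-endpoint rule affected, written as iptables-restore fragments inside a small Go program (chain hashes and the service name are made up):
```go
package main

import "fmt"

func main() {
	// Before: each per-endpoint rule carries a comment naming the service.
	withComment := `-A KUBE-SVC-XXXXXXXXXXXXXXXX -m comment --comment "default/my-service:http" -j KUBE-SEP-YYYYYYYYYYYYYYYY`
	// After: the same rule without the comment. Multiplied across tens of
	// thousands of endpoint rules, dropping comments shrinks the restore
	// payload substantially.
	withoutComment := `-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-YYYYYYYYYYYYYYYY`
	fmt.Println(len(withComment)-len(withoutComment), "bytes saved per rule")
}
```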
@thockin @danwinship @dcbw - thoughts?
Ref #65441
Automatic merge from submit-queue (batch tested with PRs 60430, 60115, 58052, 60355, 60116). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Make nodeport ip configurable
**What this PR does / why we need it**:
By default, kube-proxy accepts everything arriving at a NodePort without any filter. That can be a problem for nodes that have both public and private NICs, where people want to provide a service only on the private network and avoid exposing any internal service on the public IPs.
This PR makes the NodePort IPs configurable.
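A hedged sketch of the filtering idea behind the new setting (the helper name is made up; kube-proxy's real matching happens in iptables rules, not in Go at packet time):
```go
package main

import (
	"fmt"
	"net"
)

// nodePortAddressMatches reports whether ip falls inside any configured
// CIDR; with no CIDRs configured, everything matches (the old behavior).
func nodePortAddressMatches(ip net.IP, cidrs []string) bool {
	if len(cidrs) == 0 {
		return true
	}
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			continue // illustrative; real code should surface the error
		}
		if ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	cidrs := []string{"10.0.0.0/8"} // private network only
	fmt.Println(nodePortAddressMatches(net.ParseIP("10.1.2.3"), cidrs))    // true
	fmt.Println(nodePortAddressMatches(net.ParseIP("203.0.113.7"), cidrs)) // false
}
```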
**Which issue(s) this PR fixes**:
Closes: #21070
**Special notes for your reviewer**:
Design proposal see: https://github.com/kubernetes/community/pull/1547
Issue in feature repo: https://github.com/kubernetes/features/issues/539
**Release note**:
```release-note
Make NodePort IP addresses configurable
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Only run connection-rejecting rules on new connections
Kube-proxy has two iptables chains full of rules to reject incoming connections to services that don't have any endpoints. Currently these rules get tested against all incoming packets, but that's unnecessary; if a connection to a given service has already been established, then we can't have been rejecting connections to that service. By only checking the first packet in each new connection, we can get rid of a lot of unnecessary checks on incoming traffic.
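Roughly, the shape of the change, written as iptables fragments in Go strings (the exact chains and matches in the real rules differ; this is a sketch):
```go
package main

import "fmt"

func main() {
	// Before: every incoming packet traverses the reject chain.
	before := `-A INPUT -j KUBE-EXTERNAL-SERVICES`
	// After: only the first packet of each connection is checked; an
	// already-established connection cannot belong to a service that
	// has no endpoints.
	after := `-A INPUT -m conntrack --ctstate NEW -j KUBE-EXTERNAL-SERVICES`
	fmt.Println(before + "\n" + after)
}
```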
Fixes #56842
**Release note**:
```release-note
Additional changes to iptables kube-proxy backend to improve performance on clusters with very large numbers of services.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Delete stale UDP conntrack entries that use hostPort
**What this PR does / why we need it**:
This PR introduces a change to delete stale conntrack entries for UDP connections, specifically for UDP connections that use hostPort. When the pod listening on that UDP port gets updated/restarted (and gets a new IP address), these entries need to be flushed so that ongoing UDP connections can recover once the pod is back and the new iptables rules have been installed.
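A hedged sketch of the flush, shelling out to the conntrack CLI roughly the way kube-proxy's conntrack helper does (error handling trimmed; the pod IP is made up):
```go
package main

import (
	"fmt"
	"os/exec"
)

// clearUDPEntriesForIP deletes conntrack entries whose original
// destination is ip, protocol UDP. Note that conntrack exits non-zero
// when no entries match, which callers may want to tolerate.
func clearUDPEntriesForIP(ip string) error {
	out, err := exec.Command("conntrack", "-D", "--orig-dst", ip, "-p", "udp").CombinedOutput()
	if err != nil {
		return fmt.Errorf("conntrack -D failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = clearUDPEntriesForIP("10.244.1.5") // stale hostPort backend IP, made up
}
```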
**Which issue(s) this PR fixes**:
Fixes #59033
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55637, 57461, 60268, 60290, 60210). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Don't create no-op iptables rules for services with no endpoints
Currently for all services we create `-t nat -A KUBE-SERVICES` rules that match the destination IPs (ClusterIP, ExternalIP, NodePort IPs, etc) and then jump to the appropriate `KUBE-SVC-XXXXXX` chain. But if the service has no endpoints then the `KUBE-SVC-XXXXXX` chain will be empty and so nothing happens except that we wasted time (a) forcing iptables-restore to parse the match rules, and (b) forcing the kernel to test matches that aren't going to have any effect.
This PR gets rid of the match rules in this case. Which is to say, it changes things so that every incoming service packet is matched *either* by nat rules to rewrite it *or* by filter rules to ICMP reject it, but not both. (Actually, that's not quite true: there are no filter rules to reject Ingress-addressed packets, and I *think* that's a bug?)
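Illustrative pseudologic of the change, not the proxier's actual code (addresses and chain names are made up):
```go
package main

import "fmt"

// writeServiceRules emits the nat-table match/jump only when the service
// has endpoints; otherwise only the filter-table REJECT rule is emitted,
// so each incoming service packet matches one rule set, never both.
func writeServiceRules(svcChain string, hasEndpoints bool) (natRule, filterRule string) {
	if hasEndpoints {
		return "-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp --dport 80 -j " + svcChain, ""
	}
	return "", "-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp --dport 80 -j REJECT"
}

func main() {
	fmt.Println(writeServiceRules("KUBE-SVC-XXXXXXXXXXXXXXXX", false))
}
```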
I also got rid of some comments that seemed redundant.
The patch is mostly reindentation, so best viewed with `diff -w`.
Partial fix for #56842 / Related to #56164 (which it conflicts with but I'll fix that after one or the other merges).
**Release note**:
```release-note
Removed some redundant rules created by the iptables proxier, to improve performance on systems with very many services.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Split KUBE-SERVICES chain to re-shrink the INPUT chain
**What this PR does / why we need it**:
#43972 added an iptables rule "`-A INPUT -j KUBE-SERVICES`" to make NodePort ICMP rejection work. (Previously the KUBE-SERVICES chain was only run from OUTPUT, not INPUT.) #44547 extended that patch for ExternalIP rejection as well.
However, the KUBE-SERVICES chain may potentially have a very large number of ICMP reject rules for plain ClusterIP services (the ones that get run from OUTPUT), and it seems that for some reason the kernel is much more sensitive to the length of the INPUT chain than it is to the length of the OUTPUT chain. So a node that worked fine with kube 1.6 (when KUBE-SERVICES was only run from OUTPUT) might fall over with kube 1.7 (with KUBE-SERVICES being run from both INPUT and OUTPUT).
(Specifically, a node with about 5000 ClusterIP reject rules that ran fine with OpenShift 3.6 [kube 1.6] slowed almost to a complete halt with OpenShift 3.7 [kube 1.7].)
This PR fixes things by splitting out the "new" part of KUBE-SERVICES (NodePort and ExternalIP reject rules) into a separate KUBE-EXTERNAL-SERVICES chain run from INPUT, and moves KUBE-SERVICES back to being only run from OUTPUT. (So, yes, this assumes that you don't have 5000 NodePort/ExternalIP services, but, if you do, there's not much we can do, since those rules *have* to be run on the INPUT side.)
Oh, and I left in the code to clean up the "`-A INPUT -j KUBE-SERVICES`" rule even though we don't generate it any more, so it gets fixed on upgrade.
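The before/after jump layout, sketched as iptables fragments in Go strings (the real rules also carry `-m comment` markers, elided here):
```go
package main

import "fmt"

func main() {
	before := []string{
		"-A INPUT -j KUBE-SERVICES", // kube 1.7: the huge chain ran on INPUT too
		"-A OUTPUT -j KUBE-SERVICES",
	}
	after := []string{
		"-A INPUT -j KUBE-EXTERNAL-SERVICES", // only NodePort/ExternalIP rejects
		"-A OUTPUT -j KUBE-SERVICES",         // ClusterIP rejects stay on OUTPUT
	}
	fmt.Println(before, after)
}
```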
**Release note**:
```release-note
Reorganized iptables rules to fix a performance regression on clusters with thousands of services.
```
@kubernetes/sig-network-bugs @kubernetes/rh-networking
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Refactor kube-proxy service/endpoints update so that it can be consumed by different proxiers
**What this PR does / why we need it**:
There is huge duplication among the different proxiers: for example, the service/endpoints list/watch logic in the iptables, ipvs and windows kernel-mode (to be merged soon) proxiers.
The more places this logic is replicated, the harder it becomes to keep correct. We should refactor it and let the different proxiers consume the same code.
**Which issue this PR fixes**:
fixes #52464
**Special notes for your reviewer**:
* This refactor removes **500** lines from the iptables proxier, so it will save **500×N** lines in total (N = number of proxiers). People no longer need to care about the service/endpoints update logic and can focus on the proxy logic itself (a hedged sketch of the shared shape follows this list).
* I would like to do the following in follow-ups:
1. sync it to the ipvs proxier
2. sync it to the winkernel proxier
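A hedged sketch of the shared shape this refactor aims at (type and method names are illustrative, not the actual pkg/proxy API):
```go
package main

import (
	"sync"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// serviceChangeTracker accumulates Service changes between proxier sync
// loops so that every proxier (iptables, ipvs, winkernel) can consume the
// same state instead of re-implementing list/watch bookkeeping.
type serviceChangeTracker struct {
	mu      sync.Mutex
	changes map[types.NamespacedName]*v1.Service // latest state per service
}

func (t *serviceChangeTracker) update(svc *v1.Service) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.changes[types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}] = svc
}

func main() {
	t := &serviceChangeTracker{changes: map[types.NamespacedName]*v1.Service{}}
	t.update(&v1.Service{}) // proxiers would feed this from informer handlers
}
```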
**Release note**:
```release-note
Refactor kube-proxy service/endpoints update so that it can be consumed by different proxiers
```