Automatic merge from submit-queue
Add Windows support to kube-proxy
**What this PR does / why we need it**:
This is the first stab at supporting kube-proxy (userspace mode) on Windows
**Which issue this PR fixes**:
fixes #30278
**Special notes for your reviewer**:
The MVP uses `netsh portproxy` to redirect traffic from `ServiceIP:ServicePort` to a `LocalIP:LocalPort`.
For the next version, we expect to have guidance from the Microsoft Container Networking team.
**Limitations**:
The current implementation does not support DNS queries over UDP, because `netsh portproxy` currently supports only TCP. We are working with Microsoft to remediate this.
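To make the approach concrete, here is a rough sketch of how a userspace proxy could program a `netsh portproxy` rule by shelling out; the function name, example addresses, and error handling are illustrative, not the PR's actual code.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// addPortProxyRule forwards ServiceIP:ServicePort to LocalIP:LocalPort using
// "netsh interface portproxy add v4tov4". netsh portproxy is TCP-only, which
// is why UDP DNS is not covered yet.
func addPortProxyRule(serviceIP string, servicePort int, localIP string, localPort int) error {
	args := []string{
		"interface", "portproxy", "add", "v4tov4",
		fmt.Sprintf("listenaddress=%s", serviceIP),
		fmt.Sprintf("listenport=%d", servicePort),
		fmt.Sprintf("connectaddress=%s", localIP),
		fmt.Sprintf("connectport=%d", localPort),
	}
	out, err := exec.Command("netsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("netsh portproxy failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	// Example: redirect a service VIP/port to a locally proxied port.
	if err := addPortProxyRule("10.0.0.10", 80, "10.240.0.4", 10080); err != nil {
		log.Fatal(err)
	}
}
```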
cc: @brendandburns @dcbw
**Release note**:
```release-note
```
Automatic merge from submit-queue
[Kubelet] Use the custom mounter script for NFS and GlusterFS only
This patch reduces the scope of the containerized mounter to NFS and GlusterFS on GCE + GCI clusters.
This patch also enables the containerized mounter on GCI nodes.
Shepherding multiple PRs through the submit queue is painful. Hence I combined them into this PR. Please review each commit individually.
cc @jingxu97 @saad-ali
https://github.com/kubernetes/kubernetes/pull/35652 has also been reverted as part of this PR
In order to be able to use the new mounter library, this PR adds the
mounterPath flag to the kubelet, which passes the flag down to the mount
interface. If the flag is empty, mount uses the default mount path.
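As a rough illustration of the flag's effect (the type and helper names below are made up, not the kubelet's actual mounter code), the mount interface can fall back to plain mount(8) when no mounter path is given:

```go
package main

import (
	"log"
	"os/exec"
)

// execMounter shells out to either the custom mounter script (if a path was
// passed via the kubelet flag) or the default mount binary.
type execMounter struct {
	mounterPath string // value of the new kubelet flag; may be empty
}

// Mount builds a standard mount(8) invocation and runs it through the chosen binary.
func (m *execMounter) Mount(source, target, fstype string, options []string) error {
	mountCmd := "mount"
	if m.mounterPath != "" {
		mountCmd = m.mounterPath // e.g. the containerized mounter wrapper on GCI
	}
	args := []string{"-t", fstype}
	for _, o := range options {
		args = append(args, "-o", o)
	}
	args = append(args, source, target)
	return exec.Command(mountCmd, args...).Run()
}

func main() {
	m := &execMounter{mounterPath: ""} // empty: fall back to the default mount path
	if err := m.Mount("server:/export", "/mnt/nfs", "nfs", []string{"ro"}); err != nil {
		log.Fatal(err)
	}
}
```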
Automatic merge from submit-queue
Remove implicit Prometheus metrics from client
**What this PR does / why we need it**: This PR starts to cut away at dependencies that the client has.
**Release note**:
```release-note
The implicit registration of Prometheus metrics for request count and latency has been removed, and a pluggable interface was added. If you were using our client libraries in your own binaries and want these metrics, add the following to your imports in the main package: "k8s.io/pkg/client/metrics/prometheus".
```
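For example, a binary that wants these metrics back would add a side-effect import to its main package; the import path below is taken verbatim from the release note and may need adjusting for your vendoring:

```go
package main

import (
	_ "k8s.io/pkg/client/metrics/prometheus" // registers request count/latency metrics
)

func main() {
	// ... build your client and serve /metrics as before ...
}
```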
cc: @kubernetes/sig-api-machinery @kubernetes/sig-instrumentation @fgrzadkowski @wojtek-t
Automatic merge from submit-queue
kube-proxy: Propagate hostname to iptables proxier
We need to propagate the hostname (i.e., the NodeName) from kube-proxy to the iptables proxier so that kube-proxy can determine local endpoints.
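A minimal sketch of why the proxier wants the hostname; the type and the locality comparison below are illustrative and may not match the proxier's exact logic:

```go
package main

import "fmt"

// iptablesProxier is a stand-in for the real proxier; it keeps the node's
// hostname so endpoint locality can be decided per node.
type iptablesProxier struct {
	hostname string // propagated from kube-proxy (e.g. --hostname-override or os.Hostname)
}

// isLocalEndpoint reports whether an endpoint's node matches this node.
func (p *iptablesProxier) isLocalEndpoint(endpointNodeName string) bool {
	return endpointNodeName == p.hostname
}

func main() {
	p := &iptablesProxier{hostname: "node-1"}
	fmt.Println(p.isLocalEndpoint("node-1")) // true: local endpoint
	fmt.Println(p.isLocalEndpoint("node-2")) // false
}
```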
Add flags to control max connections (set to 256k vs the 64k default) and the TCP
established timeout (set to 1 day vs the 5-day default). Flags can be set to 0 to
mean "don't change it".
These are only set at startup and are not wrapped in a rectifier loop.
Tested manually.
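Assuming these map to the standard netfilter conntrack sysctls (an assumption on my part; helper names are illustrative), the startup-only application could look roughly like this:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
)

// setSysctl writes a single integer value to a /proc/sys path.
func setSysctl(path string, value int) error {
	return os.WriteFile(path, []byte(strconv.Itoa(value)), 0640)
}

// applyConntrackFlags applies both settings once at startup; a value of 0
// means "leave the kernel's current value alone".
func applyConntrackFlags(maxConnections, tcpEstablishedTimeoutSeconds int) error {
	if maxConnections > 0 {
		if err := setSysctl("/proc/sys/net/netfilter/nf_conntrack_max", maxConnections); err != nil {
			return fmt.Errorf("setting nf_conntrack_max: %v", err)
		}
	}
	if tcpEstablishedTimeoutSeconds > 0 {
		if err := setSysctl("/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established", tcpEstablishedTimeoutSeconds); err != nil {
			return fmt.Errorf("setting nf_conntrack_tcp_timeout_established: %v", err)
		}
	}
	return nil
}

func main() {
	// 256k connections, 1-day established timeout, as described above.
	if err := applyConntrackFlags(256*1024, 86400); err != nil {
		log.Fatal(err)
	}
}
```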
Default to the hardcoded values for components that had them, and to 5.0 QPS, 10 burst
for those that relied on client defaults.
It is unclear whether it would be better to just assume these are set as part of
the incoming kubeconfig. For now, they are just exposed as flags, since that is
easier to manually tweak.
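A hedged sketch of wiring such flags into the client settings; the flag names and the `clientSettings` type are illustrative, though the QPS/Burst fields mirror the knobs on the client config:

```go
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

// clientSettings mirrors the QPS/Burst knobs on the Kubernetes client config.
type clientSettings struct {
	QPS   float32
	Burst int
}

func main() {
	cfg := clientSettings{}
	flag.Float32Var(&cfg.QPS, "kube-api-qps", 5.0, "QPS to use while talking with the apiserver")
	flag.IntVar(&cfg.Burst, "kube-api-burst", 10, "Burst to allow while talking with the apiserver")
	flag.Parse()
	// These values would then be copied onto the client Config before building the client.
	fmt.Printf("qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}
```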
This changes the --legacy-userspace-proxy flag into a string flag,
--proxy-mode. If specified, the flag will be respected ('userspace' and
'iptables' being valid values). If left blank (the default), we will choose the
"best" mode. "Best" means userspace for now, unless the user adds an annotation
(net.experimental.kubernetes.io/proxy-mode) to their node, in which case we
will try to use that.
This allows people to try it on a single machine without fear of global failure
and without it getting rolled back on reboots. It is a poor-man's config blob.
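The selection logic described above boils down to something like the following sketch; the constant and function names are illustrative:

```go
package main

import "fmt"

const (
	proxyModeUserspace     = "userspace"
	proxyModeIptables      = "iptables"
	proxyModeAnnotationKey = "net.experimental.kubernetes.io/proxy-mode"
)

// chooseProxyMode picks the proxy implementation: an explicit --proxy-mode
// value wins; otherwise a per-node annotation may opt in; otherwise userspace.
func chooseProxyMode(flagValue string, nodeAnnotations map[string]string) string {
	if flagValue == proxyModeUserspace || flagValue == proxyModeIptables {
		return flagValue
	}
	if mode := nodeAnnotations[proxyModeAnnotationKey]; mode == proxyModeUserspace || mode == proxyModeIptables {
		return mode
	}
	return proxyModeUserspace // the "best" default for now
}

func main() {
	annotations := map[string]string{proxyModeAnnotationKey: "iptables"}
	fmt.Println(chooseProxyMode("", annotations))          // "iptables": node opted in
	fmt.Println(chooseProxyMode("userspace", annotations)) // "userspace": flag wins
}
```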
Check to make sure there is not an alphanumeric character immediately
before or after the 'flag'. If there is an alphanumeric character, then
this is obviously not actually the flag we care about. For example, if
the project declares a flag "valid-name" but the regex finds something
like "invalid_name", we should not match. Clearly this "invalid_name" is
not actually a wrong usage of the "valid-name" flag.
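The project's actual check lives elsewhere; purely as an illustration of the boundary rule described above, an equivalent regex could look like this:

```go
package main

import (
	"fmt"
	"regexp"
)

// flagRegexp matches flagName only when it is not immediately preceded or
// followed by an alphanumeric character.
func flagRegexp(flagName string) *regexp.Regexp {
	return regexp.MustCompile(`(^|[^A-Za-z0-9])` + regexp.QuoteMeta(flagName) + `($|[^A-Za-z0-9])`)
}

func main() {
	re := flagRegexp("valid_name")                // underscore form of the declared flag "valid-name"
	fmt.Println(re.MatchString("--valid_name=1")) // true: clean boundaries on both sides
	fmt.Println(re.MatchString("invalid_name"))   // false: "in" precedes it, so it is a different identifier
}
```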
1. Add a HostnameOverride parameter for kube-proxy, as the kubelet has.
2. Add a BirthCry event for kube-proxy.
3. Because recording the event needs an apiserver client, partly adjust the order of the code.
pflag can handle IP addresses, so use the pflag code instead of doing it
ourselves. This means our code just uses net.IP and we don't have all of
the useless casting back and forth!
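For reference, pflag's IPVar parses straight into a net.IP; the flag name and default below are illustrative:

```go
package main

import (
	"fmt"
	"net"

	flag "github.com/spf13/pflag"
)

func main() {
	var bindAddress net.IP
	// pflag parses and validates the IP for us; the value is already a net.IP.
	flag.IPVar(&bindAddress, "bind-address", net.ParseIP("0.0.0.0"),
		"The IP address for the proxy server to serve on")
	flag.Parse()
	fmt.Println("binding to", bindAddress)
}
```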
Moves the userspace code in proxy to a sub-package and adds the
ProxyProvider interface.
This is in preparation for landing an implementation of
https://github.com/GoogleCloudPlatform/kubernetes/issues/3760, which
will mostly be in another sub-package for iptables.
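A hedged sketch of what such an interface could look like; the method set and the ServiceUpdate placeholder are illustrative, not necessarily the interface as committed:

```go
package proxy

import "time"

// ServiceUpdate is a stand-in for whatever service description the
// implementations consume.
type ServiceUpdate struct {
	Name string
	Port int
}

// ProxyProvider abstracts over proxy implementations (userspace today, an
// iptables-based one later) so the kube-proxy binary can drive either.
type ProxyProvider interface {
	// OnServiceUpdate is called whenever the set of services changes.
	OnServiceUpdate(services []ServiceUpdate)
	// SyncLoop runs periodic work and blocks until the proxy is stopped.
	SyncLoop(period time.Duration)
}
```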
--master option still supported.
--kubeconfig option added to kube-proxy,
kube-scheduler, and kube-controller-manager
binaries.
Kube-proxy now always creates some kind of API source, since that is its only
kind of config. It warns if it is using a default client, which probably won't work.
Uses the clientcmd builder.
The --master flag is still supported for distros that need it.
But now the --kubeconfig flag can be used instead, or in addition,
to specify the auth info and/or the location of the master.
A subsequent PR will change salt to generate a kubeconfig,
and to make kube-proxy use it, for salt-based clouds.
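For readers wiring this up today, the same --master / --kubeconfig combination is available through client-go's clientcmd helper; the sketch below uses the modern packages rather than the code from these commits:

```go
package main

import (
	"log"

	flag "github.com/spf13/pflag"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	master := flag.String("master", "", "The address of the Kubernetes API server (overrides any value in kubeconfig)")
	kubeconfig := flag.String("kubeconfig", "", "Path to a kubeconfig containing auth info and the master location")
	flag.Parse()

	// Either flag may be empty; --master overrides the server recorded in the kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags(*master, *kubeconfig)
	if err != nil {
		log.Fatalf("building client config: %v", err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	_ = client // the proxy would use this to watch services and endpoints
}
```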
I had the kubelet die on startup and the only error was "0x401da0", which
I assume is the address of the err.Error function. The other way to fix
this, I think, would be to use err.Error(); however, that could cause
fmt.Fprintf() problems, depending on the error message used.
Now I get a nice clean error I can understand:
"cAdvisor.New() err = mountpoint for cpu not found"
It is currently impossible to use two healthz handlers on different
ports in the same process. This removes the global variables in favor
of requiring the consumer to specify all health checks up front.
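A hedged sketch of the resulting pattern; the healthCheck type and installHealthz helper are illustrative, not the actual healthz API, but they show per-mux registration with all checks supplied up front:

```go
package main

import (
	"fmt"
	"net/http"
)

// healthCheck pairs a name with a check function; checks are supplied up
// front instead of being registered in package-level globals.
type healthCheck struct {
	name  string
	check func() error
}

// installHealthz wires /healthz onto the given mux with the supplied checks.
func installHealthz(mux *http.ServeMux, checks []healthCheck) {
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		for _, c := range checks {
			if err := c.check(); err != nil {
				http.Error(w, fmt.Sprintf("%s failed: %v", c.name, err), http.StatusInternalServerError)
				return
			}
		}
		fmt.Fprint(w, "ok")
	})
}

func main() {
	// Two independent healthz servers on different ports in one process.
	muxA, muxB := http.NewServeMux(), http.NewServeMux()
	installHealthz(muxA, []healthCheck{{name: "ping", check: func() error { return nil }}})
	installHealthz(muxB, nil)
	go http.ListenAndServe(":10248", muxA)
	http.ListenAndServe(":10249", muxB)
}
```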
All the distros that use this have been updated,
or have PRs out to update them, or owners
have been asked to fix RPMs.
Removing this prevents further use of this model.
Remove now-dead code: EtcdClientOrDie.
Remove the now-dead pkg/proxy/config/etcd.go.
Remove unused imports.
RegisterHandlers was called after listening for events had already begun.
So, there was a race where the first update, carrying the initial state,
would sometimes notify an empty list of listeners.
This showed up in the services.sh e2e test as empty service and endpoint maps
after the test step that restarts the kube-proxy.
Perhaps due to timing, this doesn't show up with etcd as the source, but does
show up with the apiserver as the source. A separate PR makes the apiserver
the default source, and depends on this.
This took me several days to debug.
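A stripped-down illustration of the ordering problem and its fix (the types are made up, not the actual proxy config code): handlers must be registered before the update loop starts, or the initial state is delivered to nobody:

```go
package main

import "fmt"

// handler receives the current set of service names.
type handler func(services []string)

type serviceConfig struct {
	handlers []handler
}

// RegisterHandler must be called before Run starts delivering updates;
// otherwise the first update (the initial state) goes to nobody.
func (c *serviceConfig) RegisterHandler(h handler) {
	c.handlers = append(c.handlers, h)
}

// Run delivers every update to all registered handlers.
func (c *serviceConfig) Run(updates <-chan []string) {
	for u := range updates {
		for _, h := range c.handlers {
			h(u)
		}
	}
}

func main() {
	cfg := &serviceConfig{}
	updates := make(chan []string, 1)
	updates <- []string{"kubernetes"} // the initial state from the apiserver
	close(updates)

	// Correct order: register first, then start consuming updates.
	cfg.RegisterHandler(func(s []string) { fmt.Println("services:", s) })
	cfg.Run(updates)
}
```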
apiserver becomes kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly.
The only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works....