This changes the --legacy-userspace-proxy flag to a string flag,
--proxy-mode. If specified, the flag is respected ('userspace' and
'iptables' being valid values). If left blank (the default) we will choose
the "best" mode, which for now means userspace, UNLESS the user adds an
annotation (net.experimental.kubernetes.io/proxy-mode) to their node, in
which case we will try to use that.
This allows people to try it on a single machine without fear of global failure
and without it getting rolled back on reboots. It is a poor-man's config blob.
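For illustration only, a minimal sketch of that selection logic; the function
and parameter names here are made up, not the actual kube-proxy code:

    package example

    const proxyModeAnnotation = "net.experimental.kubernetes.io/proxy-mode"

    // chooseProxyMode picks the mode: an explicit, valid flag value wins, then a
    // valid per-node annotation, then the current "best" default (userspace).
    func chooseProxyMode(proxyModeFlag string, nodeAnnotations map[string]string) string {
        isValid := func(mode string) bool { return mode == "userspace" || mode == "iptables" }
        if isValid(proxyModeFlag) {
            return proxyModeFlag
        }
        if mode, ok := nodeAnnotations[proxyModeAnnotation]; ok && isValid(mode) {
            return mode
        }
        return "userspace"
    }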
Check to make sure there is not an alphanumeric character immediately
before or after the 'flag'. If there is an alphanumeric character then
this is obviously not actually the flag we care about. For example if
the project declares a flag "valid-name" but the regex finds something
like "invalid_name" we should not match. Clearly this "invalid_name" is
not actually a wrong usage of the "valid-name" flag.
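Roughly, the check looks like this (the helper name and exact token rules are
assumptions, not the committed code):

    package example

    import (
        "regexp"
        "unicode"
    )

    // mentionsFlag reports whether text contains the flag name as a whole token,
    // i.e. with no alphanumeric character immediately before or after the match.
    func mentionsFlag(text, flag string) bool {
        re := regexp.MustCompile(regexp.QuoteMeta(flag))
        for _, loc := range re.FindAllStringIndex(text, -1) {
            start, end := loc[0], loc[1]
            beforeOK := start == 0 || !isAlnum(rune(text[start-1]))
            afterOK := end == len(text) || !isAlnum(rune(text[end]))
            if beforeOK && afterOK {
                return true
            }
        }
        return false
    }

    func isAlnum(r rune) bool { return unicode.IsLetter(r) || unicode.IsDigit(r) }

With that rule, text containing "valid-names" is not treated as a use of
"valid-name".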
1. Add a HostnameOverride parameter for kube-proxy, as kubelet has.
2. Add a Birthcry event for kube-proxy (see the sketch after this list).
3. Because recording an event needs an apiserver client, the code order was adjusted slightly.
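As a sketch only: the snippet below uses today's client-go names, which
postdate this change, so treat the packages and calls as stand-ins. It shows
why the ordering matters: the event sink needs the apiserver client, so the
client has to exist before the birth cry can be recorded.

    package example

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/record"
    )

    // birthCry records a single "Starting kube-proxy." event against the node.
    func birthCry(client kubernetes.Interface, nodeRef *v1.ObjectReference) {
        broadcaster := record.NewBroadcaster()
        // The sink is backed by the apiserver client, hence the ordering constraint.
        broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: client.CoreV1().Events("")})
        recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "kube-proxy"})
        recorder.Eventf(nodeRef, v1.EventTypeNormal, "Starting", "Starting kube-proxy.")
    }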
pflag can handle IP addresses so use the pflag code instead of doing it
ourselves. This means our code just uses net.IP and we don't have all of
the useless casting back and forth!
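For example (flag name chosen just for illustration):

    package main

    import (
        "fmt"
        "net"

        "github.com/spf13/pflag"
    )

    func main() {
        // pflag parses and validates the flag value straight into a net.IP.
        bindAddress := pflag.IP("bind-address", net.ParseIP("0.0.0.0"), "IP address to serve on")
        pflag.Parse()
        fmt.Println("binding to", *bindAddress)
    }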
Moves the userspace code in proxy to a sub-package and adds the
ProxyProvider interface.
This is in preparation for landing an implementation of
https://github.com/GoogleCloudPlatform/kubernetes/issues/3760, which
will mostly be in another sub-package for iptables.
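A hedged sketch of the shape such an interface might take; the method names
are illustrative, not the committed definition:

    package proxy

    // Service stands in for the API's service description (illustrative only).
    type Service struct {
        Name string
        Port int
    }

    // ProxyProvider is what each backend (userspace today, iptables later) would
    // implement: react to service changes and run a periodic sync loop.
    type ProxyProvider interface {
        // OnServiceUpdate is called whenever the set of services changes.
        OnServiceUpdate(services []Service)
        // SyncLoop runs periodic work; it blocks and serves as the main loop.
        SyncLoop()
    }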
--master option still supported.
--kubeconfig option added to kube-proxy,
kube-scheduler, and kube-controller-manager
binaries.
Kube-proxy now always constructs some kind of API config
source, since that is its only kind of config.
Warn if it is using a default client, which probably won't work.
Uses the clientcmd builder.
--master flag is still supported for distros that need it.
But now the --kubeconfig flag can be used instead, or in addition,
to specify the auth info and/or the location of the master.
A subsequent PR will change salt to generate a kubeconfig,
and to make kube-proxy use it, for salt-based clouds.
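For reference, a minimal sketch of how the two flags combine, written against
today's client-go rather than the clientcmd builder as it existed for this
change, so treat the exact calls as assumptions:

    package main

    import (
        "flag"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        master := flag.String("master", "", "Address of the API server (overrides kubeconfig)")
        kubeconfig := flag.String("kubeconfig", "", "Path to a kubeconfig file with auth info")
        flag.Parse()

        // Either flag may be empty; clientcmd merges whatever was provided and
        // otherwise falls back to a default config that probably won't work.
        config, err := clientcmd.BuildConfigFromFlags(*master, *kubeconfig)
        if err != nil {
            log.Fatalf("could not build client config: %v", err)
        }
        if _, err := kubernetes.NewForConfig(config); err != nil {
            log.Fatalf("could not create client: %v", err)
        }
    }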
I had the kubelet die on startup and the only error was "0x401da0", which
I assume is the address of the err.Error function. The other way to fix
this, I think, would be to use err.Error(); however, that could cause
fmt.Fprintf() problems, depending on the error message people used.
Now I get a nice clean error I can understand:
"cAdvisor.New() err = mountpoint for cpu not found"
It currently is impossible to use two healthz handlers on different
ports in the same process. This removes the global variables in favor
of requiring the consumer to specify all health checks up front.
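A sketch of the resulting pattern (names are illustrative): the consumer hands
over all checks and gets back an independent handler, so two servers on
different ports can each carry their own set.

    package main

    import (
        "fmt"
        "net/http"
    )

    // HealthCheck is a named check supplied up front by the consumer,
    // replacing registration into package-level globals.
    type HealthCheck struct {
        Name  string
        Check func() error
    }

    // NewHealthzHandler builds a self-contained healthz handler from the given checks.
    func NewHealthzHandler(checks []HealthCheck) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            for _, c := range checks {
                if err := c.Check(); err != nil {
                    http.Error(w, fmt.Sprintf("%s failed: %v", c.Name, err), http.StatusInternalServerError)
                    return
                }
            }
            fmt.Fprint(w, "ok")
        })
    }

    func main() {
        // Two healthz endpoints in one process, each with its own checks.
        go http.ListenAndServe(":10249", NewHealthzHandler([]HealthCheck{{Name: "proxier", Check: func() error { return nil }}}))
        http.ListenAndServe(":10250", NewHealthzHandler(nil))
    }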
All the distros that use this have been updated,
or have PRs out to update them, or owners
have been asked to fix RPMs.
Removing this prevents further use of this model.
Remove now dead code: EtcdClientOrDie
Remove now dead pkg/proxy/config/etcd.go.
Remove unused imports.