Refactor `pkg/proxy/config`’s ServiceConfigHandler.OnUpdate and
EndpointsConfigHandler.OnUpdate to different method names as they have
different signatures.
This will let the new proxy
(https://github.com/GoogleCloudPlatform/kubernetes/issues/3760)
implement both interfaces.
Since we won’t need a separate load-balancer structure (load balancing
is handled in the proxy rules), we will simply handle both event types
from the same object.
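A minimal sketch of the shape this enables, with illustrative type and method names (the real handlers take the api types):

```go
package proxy

// Placeholder types standing in for the real api.Service / api.Endpoints.
type Service struct{ Name string }
type Endpoints struct{ Name string }

// With distinct method names the two handler interfaces no longer collide.
type ServiceConfigHandler interface {
	OnServiceUpdate(services []Service)
}

type EndpointsConfigHandler interface {
	OnEndpointsUpdate(endpoints []Endpoints)
}

// A single object can therefore implement both and react to both event types.
type Proxier struct{}

func (p *Proxier) OnServiceUpdate(services []Service)      { /* rebuild service rules */ }
func (p *Proxier) OnEndpointsUpdate(endpoints []Endpoints) { /* rebuild endpoint rules */ }
```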
Moves the userspace code in proxy to a sub-package and adds the
ProxyProvider interface.
This is in preparation for landing an implementation of
https://github.com/GoogleCloudPlatform/kubernetes/issues/3760, which
will mostly live in another sub-package for iptables.
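A rough sketch of what a ProxyProvider interface along these lines might expose; the method set shown here is an assumption, not the final API:

```go
package proxy

// ProxyProvider is the surface the rest of kube-proxy programs against,
// so the userspace and (future) iptables backends are interchangeable.
// Method names are illustrative.
type ProxyProvider interface {
	// Sync immediately reconciles the proxy's rules with current state.
	Sync()
	// SyncLoop periodically calls Sync and blocks until the proxy exits.
	SyncLoop()
}
```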
Not doing so breaks e2e tests and people who may be using them, even
though we will eventually want to stop supporting this now that we have
better alternatives for typical use cases (NodePort).
A service with a NodePort set will listen on that port on every node.
This is handy both for some load balancers (AWS ELB) and for people who
want to expose a service without using a load balancer.
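A sketch of the mechanics, assuming a hypothetical helper name: each node opens a listener on the node port and proxies connections to the service's endpoints.

```go
package userspace

import (
	"fmt"
	"net"
)

// openNodePort is a hypothetical helper: listening on all interfaces
// means the port is reachable via any of the node's addresses, which is
// what lets an external load balancer (e.g. AWS ELB) target every node,
// or lets users hit any node directly without a load balancer.
func openNodePort(nodePort int) (net.Listener, error) {
	return net.Listen("tcp", fmt.Sprintf(":%d", nodePort))
}
```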
Moves proxySocket out of proxier.go into a new proxysocket.go in the
proxy package in order to start separating proxy logic from
implementation and make proxier more manageable to review.
Standardize how our fakes are used so that a test case can use a
simpler mechanism for providing large, complex data sets, as well
as represent queries over time.
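One way to read "queries over time": the fake is loaded with a sequence of canned results and replays them across successive calls. This is a hypothetical sketch, not the actual fake API:

```go
package fakes

// FakeLister replays a pre-loaded sequence of results, so a test can
// both provide a large data set up front and describe how the results
// change across successive queries. (Hypothetical; real fakes may differ.)
type FakeLister struct {
	Results [][]string // one entry per expected call
	calls   int
}

func (f *FakeLister) List() []string {
	if f.calls >= len(f.Results) {
		return nil // no more scripted responses
	}
	r := f.Results[f.calls]
	f.calls++
	return r
}
```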
Instead of endpoints being a flat list, they are now a list of "subsets",
where each subset is a struct of {Addresses, Ports}. To generate the full
list of endpoints you take the union of the Cartesian products of the
subsets. This is compact in the vast majority of cases, yet still
represents named ports and corner cases (e.g. each pod has a different
port number).
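A minimal sketch of that expansion; the type and field names follow the description above rather than the exact API:

```go
package endpoints

import "fmt"

// EndpointSubset mirrors the {Addresses, Ports} shape described above.
type EndpointSubset struct {
	Addresses []string // e.g. pod IPs
	Ports     []int    // ports served by every address in this subset
}

// expand produces the flat endpoint list: the union over all subsets of
// the Cartesian product Addresses × Ports (assuming subsets don't
// overlap; otherwise the result would need deduplication).
func expand(subsets []EndpointSubset) []string {
	var all []string
	for _, ss := range subsets {
		for _, addr := range ss.Addresses {
			for _, port := range ss.Ports {
				all = append(all, fmt.Sprintf("%s:%d", addr, port))
			}
		}
	}
	return all
}
```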
This also stores subsets in a deterministic order (sorted by hash) to
avoid spurious updates and comparison problems.
This is a fully compatible change: old objects and clients will
keep working as long as they don't need the new functionality.
This is the prep for multi-port Services, which will add API to produce
endpoints in this new structure.
All the distros that use this have been updated,
or have PRs out to update them, or owners
have been asked to fix RPMs.
Removing this prevents further use of this model.
Remove now-dead code: EtcdClientOrDie.
Remove now-dead pkg/proxy/config/etcd.go.
Remove unused imports.
`gofmt -s` from Go 1.4 does not like
    for _ = range BLAH
and wants
    for range BLAH
But gofmt from Go 1.3 dies on that form:
    ./pkg/proxy/config/config.go:265:6: expected operand, found 'range'
    ./pkg/proxy/config/config.go:268:3: expected '{', found 'EOF'
So instead, rewrite the code to make both versions happy.
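One rewrite that both gofmt versions accept, assuming the loops in question only drain a channel (the actual change in config.go may differ):

```go
package config

// drain consumes everything from ch until it is closed. Equivalent to
// `for _ = range ch {}` (pre-1.4 style) and `for range ch {}` (1.4
// style), but written in a form both versions of gofmt parse.
func drain(ch <-chan struct{}) {
	for {
		if _, ok := <-ch; !ok {
			return
		}
	}
}
```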
As far as I know, nobody uses it. It was replaced by PublicIPs. If I were
being very polite I would leave it in internal, but since I am 99.99% sure
nobody uses it, I am cutting it. Let's argue about it.
It was an ABA problem where the proxy loop might see its own service as
"existing" when it had been destroyed and recreated (as in an update).
To prove this I added a counter of running ProxyLoop goroutines and check
it in tests. If I undo my main change, the tests fail. This makes
proxier_test significantly slower (3 seconds vs 0.5 seconds). Sorry.
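Roughly what such a counter looks like; the variable and function names here are illustrative:

```go
package userspace

import "sync/atomic"

// numProxyLoops counts currently running proxy loops so tests can assert
// that loops belonging to deleted (or recreated) services actually exit.
var numProxyLoops int32

func proxyLoop(stopCh <-chan struct{}) {
	atomic.AddInt32(&numProxyLoops, 1)
	defer atomic.AddInt32(&numProxyLoops, -1)
	<-stopCh // stand-in for the real accept/forward loop
}
```

A test can then compare atomic.LoadInt32(&numProxyLoops) before and after an update to detect leaked loops.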
Split SourceAPI into two subobjects.
A parallel structure for endpoints and services will allow
switching to the generic code in pkg/client/cache/reflector.go.
Rename some funcs to be more like pkg/client/cache.
After this, DNS is resolvable from the host if the DNS server is targeted
explicitly. This does NOT add the cluster DNS to the host's resolv.conf. That
is a larger problem, with distro-specific tie-ins and circular deps.
- Added a process to clean up stale session affinity records.
- Automatically set up the cloud-provided load balancer for sticky sessions if the service requires it. Note: this only works on GCE right now.
- Changed sessionAffinityMap to a map of pointers instead of structs to improve performance (see the sketch below).
- Commented out cookie and protocol in sessionAffinityDetail to avoid confusion, as they are not yet implemented.
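A sketch of the pointer-map change; type and field names are illustrative. Storing pointers lets a hit be updated in place, whereas struct values would be copied out of the map and reassigned on every touch:

```go
package userspace

import "time"

// affinityState is a stand-in for the per-client record kept in
// sessionAffinityMap.
type affinityState struct {
	endpoint string    // the endpoint this client is stuck to
	lastUsed time.Time // used to expire stale records
}

var sessionAffinityMap = map[string]*affinityState{}

func touch(clientIP string) {
	if state, ok := sessionAffinityMap[clientIP]; ok {
		state.lastUsed = time.Now() // mutate in place; no re-insert needed
	}
}
```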
Don't log an error when Accept fails because the interface (portal)
was just removed.
Don't pass around a pointer to a serviceInfo since another thread
deletes those. Instead, just check whether the service name is still in
the service map.
Delete the locking on the serviceInfo object since it is only used
by the "main" proxier thread.
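A sketch of that check, with illustrative names: the accept loop keeps only the service name and asks the proxier each iteration whether that name is still active, instead of holding a *serviceInfo another goroutine may delete:

```go
package userspace

import "sync"

// Only the fields needed for this sketch are shown.
type serviceInfo struct{ /* per-service state */ }

type Proxier struct {
	mu         sync.Mutex
	serviceMap map[string]*serviceInfo
}

// stillActive is what the accept loop calls each iteration instead of
// dereferencing a cached *serviceInfo.
func (p *Proxier) stillActive(service string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	_, exists := p.serviceMap[service]
	return exists
}
```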
A watch of the API can return an api.Status rather than the watched
object type. This code didn't handle that.
Tested with the services e2e test (in conjunction with the other PR).
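The shape of the fix, sketched with simplified stand-in types (the real code works with the api and watch packages):

```go
package config

import "fmt"

// Stand-ins for the real watch event and api types.
type Event struct {
	Type   string
	Object interface{}
}
type Status struct{ Message string }
type Service struct{ Name string }

// handleEvent shows the needed check: the object carried by a watch
// event may be a *Status (e.g. a watch error) rather than the watched
// type, so it must not be blindly type-asserted to *Service.
func handleEvent(ev Event) {
	switch obj := ev.Object.(type) {
	case *Status:
		fmt.Println("watch returned a status:", obj.Message)
	case *Service:
		fmt.Println("service event:", ev.Type, obj.Name)
	default:
		fmt.Printf("unexpected object type: %T\n", ev.Object)
	}
}
```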