Not doing so breaks e2e tests and people who may be using them,
even though we will eventually want to stop supporting this, now
that we have better alternatives for typical use cases (NodePort).
A service with a NodePort set will listen on that port, on every node.
This is handy both for some load balancers (AWS ELB) and for people
who want to expose a service without using a load balancer.
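As a rough illustration of the behavior (not the actual proxier code; the function and port here are made up), every node simply binds the node port on all interfaces, so any node IP plus the NodePort reaches the service:

```go
package main

import (
	"fmt"
	"net"
)

// openNodePort sketches the idea: each node binds the same port on the
// wildcard address, so traffic to <any-node-ip>:<nodePort> is accepted.
func openNodePort(nodePort int) (net.Listener, error) {
	// ":30080"-style address binds 0.0.0.0 on this node.
	return net.Listen("tcp", fmt.Sprintf(":%d", nodePort))
}

func main() {
	l, err := openNodePort(30080) // 30080 is an arbitrary example port
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("listening on", l.Addr())
}
```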
Moves proxySocket out of proxier.go into a new proxysocket.go in the proxy
package, in order to start separating proxy logic from implementation and
make the proxier more manageable to review.
Instead of endpoints being a flat list, it is now a list of "subsets",
where each subset is a struct of {Addresses, Ports}. To generate the flat
list of endpoints you take the union of the Cartesian products of the
subsets. This is compact in the vast majority of cases, yet still
represents named ports and corner cases (e.g. each pod has a different
port number).
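A rough sketch of the shape and the expansion, using illustrative type and field names rather than the exact API types:

```go
package main

import "fmt"

// Illustrative shapes; the real API types differ in detail.
type Subset struct {
	Addresses []string // pod IPs
	Ports     []int    // ports shared by those addresses
}

// expand takes the union of the Cartesian products of each subset's
// Addresses and Ports, yielding the flat list of host:port endpoints.
func expand(subsets []Subset) []string {
	var out []string
	for _, ss := range subsets {
		for _, addr := range ss.Addresses {
			for _, port := range ss.Ports {
				out = append(out, fmt.Sprintf("%s:%d", addr, port))
			}
		}
	}
	return out
}

func main() {
	// Common case: many pods share one port -> a single compact subset.
	common := []Subset{{Addresses: []string{"10.0.0.1", "10.0.0.2"}, Ports: []int{8080}}}
	// Corner case: each pod has a different port -> one subset per pod.
	corner := []Subset{
		{Addresses: []string{"10.0.0.1"}, Ports: []int{8080}},
		{Addresses: []string{"10.0.0.2"}, Ports: []int{9090}},
	}
	fmt.Println(expand(common)) // [10.0.0.1:8080 10.0.0.2:8080]
	fmt.Println(expand(corner)) // [10.0.0.1:8080 10.0.0.2:9090]
}
```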
This also stores subsets in a deterministic order (sorted by hash) to
avoid spurious updates and comparison problems.
This is a fully compatible change - old objects and clients will
keep working as long as they don't need the new functionality.
This is the prep for multi-port Services, which will add API to produce
endpoints in this new structure.
As far as I know, nobody uses it. It was replaced by PublicIPs. If I were
being very polite I would leave it in internal, but since I am 99.99% sure
nobody uses it, I am cutting it. Let's argue about it.
It was an ABA problem where the proxy loop might see its own service as
"existing" when it had been destroyed and recreated (as in an update).
To prove this I added a counter of running ProxyLoop goroutines and check it
in tests. If I undo my main change, the tests fail. This makes the
proxier_test significantly slower (3 seconds vs 0.5 seconds). Sorry.
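A small sketch of the kind of counter involved, assuming an atomic counter bumped at the start of each proxy loop and decremented on exit (illustrative only, not the actual test code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var numProxyLoops int32 // tests can assert this settles back to the expected count

func proxyLoop(stop <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	atomic.AddInt32(&numProxyLoops, 1)
	defer atomic.AddInt32(&numProxyLoops, -1)
	<-stop // stand-in for the real accept/serve loop
}

func main() {
	stop := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(1)
	go proxyLoop(stop, &wg)
	// A test would destroy and recreate the service here, then check that
	// no stale ProxyLoop goroutines are left running.
	close(stop)
	wg.Wait()
	fmt.Println("running loops:", atomic.LoadInt32(&numProxyLoops))
}
```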
After this, DNS is resolvable from the host, if the DNS server is targeted
explicitly. This does NOT add the cluster DNS to the host's resolv.conf. That
is a larger problem, with distro-specific tie-ins and circular deps.
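For illustration, targeting the DNS server explicitly from the host could look like the following; the cluster DNS address and the service name here are made-up examples:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	clusterDNS := "10.0.0.10:53" // hypothetical cluster DNS address
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Bypass the host's configured resolver and query the cluster DNS directly.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, clusterDNS)
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}
```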
- Added a process to clean up stale session affinity records
- Automatically set the cloud-provided load balancer for sticky sessions if the service requires it. Note: this only works on GCE right now.
- Changed sessionAffinityMap to a map of pointers instead of structs to improve performance (see the sketch below)
- Commented out cookie and protocol from sessionAffinityDetail to avoid confusion, as they are not yet implemented
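A minimal sketch of the pointer-map change, with illustrative names rather than the actual proxier types:

```go
package main

import (
	"fmt"
	"time"
)

type affinityState struct {
	clientIP string
	lastUsed time.Time
}

func main() {
	// With a map of values, each update copies the struct out and back in.
	byValue := map[string]affinityState{}
	byValue["1.2.3.4"] = affinityState{clientIP: "1.2.3.4", lastUsed: time.Now()}
	s := byValue["1.2.3.4"]
	s.lastUsed = time.Now()
	byValue["1.2.3.4"] = s // extra copy on every hit

	// With a map of pointers, the record is updated in place.
	byPointer := map[string]*affinityState{}
	byPointer["1.2.3.4"] = &affinityState{clientIP: "1.2.3.4", lastUsed: time.Now()}
	byPointer["1.2.3.4"].lastUsed = time.Now()

	fmt.Println(len(byValue), len(byPointer))
}
```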
Don't log an error when Accept failed because the interface (portal)
was just removed.
Don't pass around a pointer to a serviceInfo, since another thread
deletes those. Instead, just check whether the service name is still in
the service map.
Delete the locking on the serviceInfo object since it is only used
by the "main" proxier thread.
The iptables args list needs to include all fields as they are eventually spit
out by iptables-save. This is because some systems do not support the
'iptables -C' arg, and so fall back on parsing iptables-save output. If this
does not match, it will not pass the check. For example: adding the /32 on
the destination IP arg is not strictly required by iptables, but omitting it
causes this list to not match the final iptables-save output. This is fragile
and I hope one day we can stop supporting such old iptables versions.
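For illustration, a fully spelled-out args list might look like this (the chain name, addresses, and ports are made up):

```go
package main

import "fmt"

func main() {
	// Every field that iptables-save would print is included explicitly,
	// e.g. the /32 on the destination, so that string comparison against
	// iptables-save output succeeds on systems without 'iptables -C'.
	args := []string{
		"-A", "KUBE-PORTALS-CONTAINER", // illustrative chain name
		"-p", "tcp",
		"-d", "10.0.0.1/32", // /32 spelled out to match iptables-save
		"--dport", "80",
		"-j", "REDIRECT",
		"--to-ports", "40123",
	}
	fmt.Println(args)
}
```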
This allows the proxier to portal Public IPs even if the
createExternalLoadBalancer flag is not set.
This also fixes what appears to be a bug in the createExternalLoadBalancer path
wherein multiple PublicIPs would get truncated.