mirror of https://github.com/k3s-io/k3s
Automatic merge from submit-queue (batch tested with PRs 46550, 46663, 46816, 46820, 46460)

[GCE] Support internal load balancers

**What this PR does / why we need it**: Allows users to expose K8s services outside the K8s cluster but within their GCP network. Fixes #33483

**Important User Notes:**
- This is a beta feature. ILB could be enabled differently in the future.
- Requires nodes at version 1.7.0+ (ILB requires health checking, and a health check endpoint on kube-proxy has only just been exposed).
- This cannot be used for intra-cluster communication. Do not call the load balancer IP from a K8s node/pod.
- There is no reservation system for private IPs. You can specify an RFC 1918 address in the `loadBalancerIP` field, but it could be lost to another VM or LB if service settings are modified.
- If you're running an ingress, your existing load balancer backend service must be using BalancingMode type `RATE`, not `UTILIZATION`.
  - Option 1: With a 1.5.8+ or 1.6.4+ master, delete all your ingresses and re-create them.
  - Option 2: Migrate to a new cluster running 1.7.0. Considering that ILB requires nodes at 1.7.0, this isn't a bad idea.
  - Option 3: Possible migration opportunity, but use at your own risk. More to come later.

**Reviewer Notes**: Several files were renamed, so GitHub thinks ~2k lines have changed. Review commits one-by-one to see the actual changes.

**Release note**:
```release-note
Support creation of GCP Internal Load Balancers from Service objects
```
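To illustrate what "creation of GCP Internal Load Balancers from Service objects" looks like in practice, here is a minimal sketch of such a Service. The annotation shown is the beta mechanism used around the 1.7 timeframe; since the PR notes this is a beta feature that "could be enabled differently in the future," the exact annotation may differ in your release, and the name, selector, and IP below are hypothetical placeholders.

```yaml
# Sketch of a Service requesting a GCP Internal Load Balancer (beta).
# The annotation is the beta opt-in mechanism; it may change in later releases.
apiVersion: v1
kind: Service
metadata:
  name: internal-app                              # hypothetical name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Optional RFC 1918 address. Per the notes above, there is no reservation
  # system: this IP could be lost to another VM or LB if settings are modified.
  loadBalancerIP: 10.240.0.100
  selector:
    app: internal-app                             # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
```

Remember the caveats from the notes above: the resulting IP is reachable only from within the GCP network, and must not be called from the cluster's own nodes or pods.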
Directory contents:

- e2e
- e2e_federation
- e2e_node
- fixtures
- images
- integration
- kubemark
- list
- soak
- utils
- BUILD
- OWNERS
- test_owners.csv
- test_owners.json