Merge pull request #12480 from GoogleCloudPlatform/revert-12309-haproxy_gcr

Revert "Add multi cluster services documentation "
pull/6/head
Marek Grabowski 2015-08-10 16:43:55 +02:00
commit 72db123025
6 changed files with 8 additions and 377 deletions

View File

@@ -2,7 +2,7 @@ all: push
# 0.0 shouldn't clobber any released builds
TAG = 0.0
-PREFIX = gcr.io/google_containers/servicelb
+PREFIX = bprashanth/servicelb

server: service_loadbalancer.go
	CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' -o service_loadbalancer ./service_loadbalancer.go
@@ -11,7 +11,7 @@ container: server
	docker build -t $(PREFIX):$(TAG) .

push: container
-	gcloud docker push $(PREFIX):$(TAG)
+	docker push $(PREFIX):$(TAG)

clean:
	rm -f service_loadbalancer
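For orientation, a hedged usage sketch of this Makefile (the `TAG` and `PREFIX` values are illustrative; variables passed on the command line override the defaults above):

```shell
# Build the static binary, bake the image, and push it to the registry
# named by PREFIX. Command-line variables override the Makefile defaults.
$ make push TAG=0.1 PREFIX=example/servicelb
```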

View File

@@ -32,9 +32,11 @@ __L7 load balancing of Http services__: The load balancer controller automatical
__L4 load balancing of TCP services__: Since you need to specify ports at pod creation time (kubernetes doesn't currently support port ranges), a single loadbalancer is tied to a set of preconfigured node ports, and hence to a set of TCP services it can expose. The load balancer controller dynamically adds rules for each configured TCP service as it comes into existence. However, each "new" service (i.e. one not listed in the tcpServices section of loadbalancer.json) requires you to open up a new container-host port pair for traffic. You can achieve this by creating a new loadbalancer pod with `targetPort` set to the name of your service, and that service specified in the tcpServices map of the new loadbalancer (a sketch follows under Examples below).

### Cross-cluster loadbalancing

-On cloud providers that offer a private IP range for all instances on a network, you can set up multiple clusters in different availability zones on the same network, and loadbalance services across these zones. On GCE, for example, every instance is a member of a single network. A network performs the same function that a router does: it defines the network range and gateway IP address, handles communication between instances, and serves as a gateway between instances and other networks. On such networks the endpoints of a service in one cluster are visible in all other clusters on the same network, so you can set up an edge loadbalancer that watches the kubernetes master of another cluster for services. Such a deployment lets you fall back to a different AZ during times of duress or planned downtime (eg: database update).
+Still trying this out.

### Examples
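As a hedged sketch of the TCP flow described above (the `redis` service, its 6379 port, and the manifest name are all hypothetical, not part of this diff): you copy the loadbalancer rc, open the new container-host port pair, list the service in `--tcp-services`, and create it:

```shell
# Hypothetical: expose redis:6379 through a second loadbalancer.
# In the copied manifest, add a containerPort/hostPort 6379 entry and
# extend the controller args:
#   - --tcp-services=mysql:3306,nginxsvc:443,redis:6379
$ kubectl create -f redis-loadbalancer-rc.yaml
```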
@@ -186,95 +188,6 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;'
+--------------------+
```
#### Cross-cluster loadbalancing
First set up your 2 clusters, and a kubeconfig secret as described in the [sharing clusters example](../../examples/sharing-clusters/README.md). We will create a loadbalancer in our first cluster (US) and have it publish the services from the second cluster (EU). This is the entire modified loadbalancer manifest:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      volumes:
      # token from the eu cluster, must already exist
      # and match the name of the volume used in the container
      - name: eu-config
        secret:
          secretName: kubeconfig
      nodeSelector:
        role: loadbalancer
      containers:
      - image: gcr.io/google_containers/servicelb:0.1
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # nginx https
        - containerPort: 443
          hostPort: 8080
          protocol: TCP
        # mysql
        - containerPort: 3306
          hostPort: 3306
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP
        resources: {}
        args:
        - --tcp-services=mysql:3306,nginxsvc:443
        - --use-kubernetes-cluster-service=false
        # use-kubernetes-cluster-service=false in conjunction with the
        # kube/config will force the service-loadbalancer to watch for
        # services from the eu cluster.
        volumeMounts:
        - mountPath: /.kube
          name: eu-config
        env:
        - name: KUBECONFIG
          value: /.kube/config
```
Note that it is essentially the same as the rc.yaml checked into the service-loadbalancer directory, except that it mounts the kubeconfig secret and passes an extra KUBECONFIG environment variable pointing at it.
```cmd
$ kubectl config use-context <us-clustername>
$ kubectl create -f rc.yaml
$ kubectl get pods -o wide
service-loadbalancer-5o2p4 1/1 Running 0 13m kubernetes-minion-5jtd
$ kubectl get node kubernetes-minion-5jtd -o json | grep -i externalip -A 2
"type": "ExternalIP",
"address": "104.197.81.116"
$ curl http://104.197.81.116/nginxsvc
Europe
```
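To confirm the balancer itself is serving, one hedged check is the haproxy stats page, assuming it is what listens on the 1936 hostPort opened by the manifest and the firewall permits it (104.197.81.116 is the node ExternalIP from the output above):

```shell
# Expect the HAProxy statistics page if the stats listener is on 1936.
$ curl http://104.197.81.116:1936
```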
### Troubleshooting:
- If you can curl or netcat the endpoint from the pod (with kubectl exec) but not from the node, you have not specified the hostPort and containerPort.
- If you can hit the IPs from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network (see the GCE example below).
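On GCE, for example, opening the relevant ports looks roughly like the following sketch (the rule name and port list are illustrative; include every hostPort you expect to reach from outside):

```shell
# Allow external traffic to the hostPorts used by the loadbalancer pod.
$ gcloud compute firewall-rules create servicelb-ports \
    --allow tcp:80,tcp:8080,tcp:3306,tcp:1936 \
    --network default \
    --source-ranges 0.0.0.0/0
```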

View File

@@ -19,7 +19,7 @@ spec:
      nodeSelector:
        role: loadbalancer
      containers:
-      - image: gcr.io/google_containers/servicelb:0.1
+      - image: bprashanth/servicelb:0.0
        imagePullPolicy: Always
        livenessProbe:
          httpGet:

View File

@@ -48,7 +48,7 @@ backend {{$svc.Name}}
    balance roundrobin
    # TODO: Make the path used to access a service customizable.
    reqrep ^([^\ :]*)\ /{{$svc.Name}}[/]?(.*) \1\ /\2
-    {{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}}
+    {{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}} check
    {{end}}
{{end}}

@@ -64,6 +64,6 @@ frontend {{$svc.Name}}
backend {{$svc.Name}}
    balance roundrobin
    mode tcp
-    {{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}}
+    {{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}} check
    {{end}}
{{end}}
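The restored `check` keyword enables HAProxy's active health checks on each backend server, so dead endpoints are pulled from rotation. A rendered config can be sanity-tested without starting the proxy (the path is illustrative):

```shell
# -c parses and validates the configuration, then exits.
$ haproxy -c -f /etc/haproxy/haproxy.cfg
```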

View File

@@ -1,220 +0,0 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING" width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/examples/sharing-clusters/README.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>

--

<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Sharing Clusters
This example demonstrates how to access one kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private IP range per network (eg: GCE, GKE, AWS).
## Setup
Create a cluster in the US (you don't need to do this if you already have a running kubernetes cluster):
```shell
$ cluster/kube-up.sh
```
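A quick sanity check once the cluster is up (a minimal sketch; output omitted):

```shell
# Verify the master is reachable and the nodes have registered.
$ kubectl cluster-info
$ kubectl get nodes
```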
Before creating our second cluster, let's have a look at the kubectl config:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_us>
...
```
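The same merged view can be printed without opening the file; `kubectl config view` redacts certificate data just as in the snippet above:

```shell
$ kubectl config view
```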
Now spin up the second cluster in Europe:
```shell
$ KUBE_GCE_ZONE=europe-west1-b KUBE_GCE_INSTANCE_PREFIX=eu ./cluster/kube-up.sh
```
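Both sets of instances must land on the same GCE network for the cross-cluster routing below to work; a hedged way to verify (the instance name is illustrative):

```shell
# Both the kubernetes-* (US) and eu-* instances should be listed.
$ gcloud compute instances list
# Check which network a given instance is attached to.
$ gcloud compute instances describe eu-minion-0n61 --zone europe-west1-b | grep network
```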
Your kubectl config should contain both clusters:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.25.221
  name: <clustername_eu>
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_eu>
...
```
And kubectl get nodes should agree:
```
$ kubectl get nodes
NAME             LABELS                                  STATUS
eu-minion-0n61   kubernetes.io/hostname=eu-minion-0n61   Ready
eu-minion-79ua   kubernetes.io/hostname=eu-minion-79ua   Ready
eu-minion-7wz7   kubernetes.io/hostname=eu-minion-7wz7   Ready
eu-minion-loh2   kubernetes.io/hostname=eu-minion-loh2   Ready
eu-minion-loh2 kubernetes.io/hostname=eu-minion-loh2 Ready
$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME                     LABELS                                          STATUS
kubernetes-minion-5jtd   kubernetes.io/hostname=kubernetes-minion-5jtd   Ready
kubernetes-minion-lqfc   kubernetes.io/hostname=kubernetes-minion-lqfc   Ready
kubernetes-minion-sjra   kubernetes.io/hostname=kubernetes-minion-sjra   Ready
kubernetes-minion-wul8   kubernetes.io/hostname=kubernetes-minion-wul8   Ready
```
## Testing reachability
For this test to work we'll need to create a service in Europe:
```
$ kubectl config use-context <clustername_eu>
$ kubectl create -f /tmp/secret.json
$ kubectl create -f examples/https-nginx/nginx-app.yaml
$ kubectl exec -it my-nginx-luiln -- sh -c 'echo "Europe nginx" >> /usr/share/nginx/html/index.html'
$ kubectl get ep
NAME         ENDPOINTS
kubernetes   10.240.249.92:443
nginxsvc     10.244.0.4:80,10.244.0.4:443
```
Just to test reachability, we'll try hitting the Europe nginx from our initial US central cluster. Create a basic curl pod in the US cluster:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
    - sleep
    - "360000000"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
```
And test that you can actually reach the test nginx service across continents:
```
$ kubectl config use-context <clustername_us>
$ kubectl exec -it curlpod -- /bin/sh
[ root@curlpod:/ ]$ curl http://10.244.0.4:80
Europe nginx
```
## Granting access to the remote cluster
We will grant the US cluster access to the Europe cluster. Basically, we're going to set up a secret that allows kubectl to function in a pod running in the US cluster, just like it did on our local machine in the previous step. First create a secret with the contents of the current .kube/config:
```shell
$ kubectl config use-context <clustername_eu>
$ go run ./make_secret.go --kubeconfig=$HOME/.kube/config > /tmp/secret.json
$ kubectl config use-context <clustername_us>
$ kubectl create -f /tmp/secret.json
```
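Before launching pods that mount it, it is worth confirming that the secret landed in the US cluster (a minimal check):

```shell
$ kubectl get secrets kubeconfig
```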
Create a kubectl pod that uses the secret in the US cluster:
```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubectl-tester"
  },
  "spec": {
    "volumes": [
      {
        "name": "secret-volume",
        "secret": {
          "secretName": "kubeconfig"
        }
      }
    ],
    "containers": [
      {
        "name": "kubectl",
        "image": "bprashanth/kubectl:0.0",
        "imagePullPolicy": "Always",
        "env": [
          {
            "name": "KUBECONFIG",
            "value": "/.kube/config"
          }
        ],
        "args": [
          "proxy", "-p", "8001"
        ],
        "volumeMounts": [
          {
            "name": "secret-volume",
            "mountPath": "/.kube"
          }
        ]
      }
    ]
  }
}
```
And check that you can access the remote cluster:
```shell
$ kubectl config use-context <clustername_us>
$ kubectl exec -it kubectl-tester -- bash
kubectl-tester $ kubectl get nodes
NAME             LABELS                                  STATUS
eu-minion-0n61   kubernetes.io/hostname=eu-minion-0n61   Ready
eu-minion-79ua   kubernetes.io/hostname=eu-minion-79ua   Ready
eu-minion-7wz7   kubernetes.io/hostname=eu-minion-7wz7   Ready
eu-minion-loh2   kubernetes.io/hostname=eu-minion-loh2   Ready
```
For a more advanced example of sharing clusters, see the [service-loadbalancer](../../contrib/service-loadbalancer/README.md).

View File

@@ -1,62 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// A tiny script to help convert a given kubeconfig into a secret.
package main

import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/api/latest"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/runtime"
)

// TODO:
// Add a -o flag that writes to the specified destination file.
var (
	kubeconfig = flag.String("kubeconfig", "", "path to kubeconfig file.")
	name       = flag.String("name", "kubeconfig", "name to use in the metadata of the secret.")
	ns         = flag.String("ns", "default", "namespace of the secret.")
)

func read(file string) []byte {
	b, err := ioutil.ReadFile(file)
	if err != nil {
		log.Fatalf("Cannot read file %v, %v", file, err)
	}
	return b
}

func main() {
	flag.Parse()
	if *kubeconfig == "" {
		log.Fatalf("Need to specify --kubeconfig")
	}
	cfg := read(*kubeconfig)
	secret := &api.Secret{
		ObjectMeta: api.ObjectMeta{
			Name:      *name,
			Namespace: *ns,
		},
		Data: map[string][]byte{
			"config": cfg,
		},
	}
	// Print (not Printf) so a '%' in the encoded secret can't be
	// misread as a format directive.
	fmt.Print(runtime.EncodeOrDie(latest.Codec, secret))
}