Added namespace to Spark example.

pull/6/head
Klaus Ma 2016-04-19 00:34:47 +08:00
parent 26c99fee00
commit 326e30772a
13 changed files with 50 additions and 8 deletions


@@ -372,6 +372,7 @@ func TestExampleObjectSchemas(t *testing.T) {
      "secret-env-pod": &api.Pod{},
    },
    "../examples/spark": {
      "namespace-spark-cluster":  &api.Namespace{},
      "spark-master-controller": &api.ReplicationController{},
      "spark-master-service":    &api.Service{},
      "spark-webui":             &api.Service{},


@@ -58,7 +58,31 @@ This example assumes
For details, you can look at the Dockerfiles in the Sources section.
## Step One: Start your Master service
## Step One: Create namespace
```sh
$ kubectl create -f examples/spark/namespace-spark-cluster.yaml
```
Now list all namespaces:
```sh
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
spark-cluster name=spark-cluster Active
```
For the kubectl client to work with this namespace, we define a context and switch to it:
```sh
$ kubectl config set-context spark --namespace=spark-cluster --cluster=${CLUSTER_NAME} --user=${USER_NAME}
$ kubectl config use-context spark
```
You can find your cluster name and user name in the Kubernetes config at `~/.kube/config`.
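
To confirm that the new context is active, one quick check (the output shown simply reflects the context name chosen above) is:
```sh
$ kubectl config current-context
spark
```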
## Step Two: Start your Master service
The Master [service](../../docs/user-guide/services.md) is the master service
for a Spark cluster.
@@ -71,7 +95,7 @@ running the Spark Master service.
```console
$ kubectl create -f examples/spark/spark-master-controller.yaml
replicationcontrollers/spark-master-controller
replicationcontroller "spark-master-controller" created
```
Then, use the
@@ -81,14 +105,14 @@ Master pod.
```console
$ kubectl create -f examples/spark/spark-master-service.yaml
services/spark-master
service "spark-master" created
```
You can then create a service for the Spark Master WebUI:
```console
$ kubectl create -f examples/spark/spark-webui.yaml
services/spark-webui
service "spark-webui" created
```
### Check to see if Master is running and accessible
@@ -134,7 +158,7 @@ kubectl proxy --port=8001
At which point the UI will be available at
[http://localhost:8001/api/v1/proxy/namespaces/default/services/spark-webui/](http://localhost:8001/api/v1/proxy/namespaces/default/services/spark-webui/).
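
Note that if you created the services in the `spark-cluster` namespace as described in Step One, the proxied path uses that namespace rather than `default`; for example:
```console
$ curl http://localhost:8001/api/v1/proxy/namespaces/spark-cluster/services/spark-webui/
```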
## Step Two: Start your Spark workers
## Step Three: Start your Spark workers
The Spark workers do the heavy lifting in a Spark cluster. They
provide execution resources and data cache capabilities for your
@@ -147,6 +171,7 @@ Use the [`examples/spark/spark-worker-controller.yaml`](spark-worker-controller.
```console
$ kubectl create -f examples/spark/spark-worker-controller.yaml
replicationcontroller "spark-worker-controller" created
```
### Check to see if the workers are running
@@ -175,7 +200,7 @@ you should now see the workers in the UI as well. *Note:* The UI will have links
to worker Web UIs. The worker UI links do not work (the links will attempt to
connect to cluster IPs, which Kubernetes won't proxy automatically).
## Step Three: Start the Zeppelin UI to launch jobs on your Spark cluster
## Step Four: Start the Zeppelin UI to launch jobs on your Spark cluster
The Zeppelin UI pod can be used to launch jobs into the Spark cluster either via
a web notebook frontend or the traditional Spark command line. See
@@ -185,7 +210,7 @@ for more details.
```console
$ kubectl create -f examples/spark/zeppelin-controller.yaml
replicationcontrollers/zeppelin-controller
replicationcontroller "zeppelin-controller" created
```
Zeppelin needs the Master service to be running.
@@ -198,7 +223,7 @@ NAME READY STATUS RESTARTS AGE
zeppelin-controller-ja09s 1/1 Running 0 53s
```
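
One way to reach the Zeppelin web UI from your workstation is to port-forward to the pod; a quick sketch (substitute the pod name from your own `kubectl get pods` output):
```console
$ kubectl port-forward zeppelin-controller-ja09s 8080:8080
```
Zeppelin should then be reachable at http://localhost:8080, assuming the container serves on port 8080 as the zeppelin service definition below suggests.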
## Step Four: Do something with the cluster
## Step Five: Do something with the cluster
Now you have two choices, depending on your predilections. You can do something
graphical with the Spark cluster, or you can stay in the CLI.
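
For the CLI route, one option is to exec into the Zeppelin pod and start a Spark shell there; this sketch assumes the pod name from your own cluster and that the Zeppelin image ships the Spark shells (e.g. `pyspark`):
```console
$ kubectl exec -it zeppelin-controller-ja09s -- pyspark
```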


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: "spark-cluster"
  labels:
    name: "spark-cluster"


@@ -2,6 +2,7 @@ kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: spark-cluster
subsets:
- addresses:
  - ip: 192.168.30.104


@@ -2,6 +2,7 @@ kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-master-controller
  namespace: spark-cluster
  labels:
    component: spark-master
spec:


@@ -2,6 +2,7 @@ kind: Service
apiVersion: v1
metadata:
  name: spark-master
  namespace: spark-cluster
  labels:
    component: spark-master-service
spec:


@@ -2,6 +2,7 @@ kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-gluster-worker-controller
  namespace: spark-cluster
  labels:
    component: spark-worker
spec:


@@ -2,6 +2,7 @@ kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-master-controller
  namespace: spark-cluster
spec:
  replicas: 1
  selector:


@@ -2,6 +2,7 @@ kind: Service
apiVersion: v1
metadata:
  name: spark-master
  namespace: spark-cluster
spec:
  ports:
  - port: 7077


@@ -2,6 +2,7 @@ kind: Service
apiVersion: v1
metadata:
  name: spark-webui
  namespace: spark-cluster
spec:
  ports:
  - port: 8080


@@ -2,6 +2,7 @@ kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-worker-controller
  namespace: spark-cluster
spec:
  replicas: 2
  selector:


@@ -2,6 +2,7 @@ kind: ReplicationController
apiVersion: v1
metadata:
  name: zeppelin-controller
  namespace: spark-cluster
spec:
  replicas: 1
  selector:


@@ -2,6 +2,7 @@ kind: Service
apiVersion: v1
metadata:
  name: zeppelin
  namespace: spark-cluster
spec:
  ports:
  - port: 8080