mirror of https://github.com/k3s-io/k3s
Merge pull request #23807 from k82/k8s-23537

Automatic merge from submit-queue. Adds a namespace to the Spark example. Fixes #23537.

commit e5f237a7ff
@@ -372,6 +372,7 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"secret-env-pod": &api.Pod{},
 		},
 		"../examples/spark": {
+			"namespace-spark-cluster":  &api.Namespace{},
 			"spark-master-controller":  &api.ReplicationController{},
 			"spark-master-service":     &api.Service{},
 			"spark-webui":              &api.Service{},
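The test above maps each example manifest basename to the API object it must decode into. As a rough illustration of the same check outside Go, here is a minimal Python sketch (hypothetical helper with naive line-based parsing, not the real Kubernetes decoder):

```python
# Expected kinds for the Spark example manifests, mirroring the test table
# in TestExampleObjectSchemas (illustrative only).
expected_kinds = {
    "namespace-spark-cluster": "Namespace",
    "spark-master-controller": "ReplicationController",
    "spark-master-service": "Service",
    "spark-webui": "Service",
}

def manifest_kind(text: str) -> str:
    """Naively pull the top-level `kind:` field out of a YAML manifest."""
    for line in text.splitlines():
        if line.startswith("kind:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("manifest has no kind field")

namespace_manifest = """apiVersion: v1
kind: Namespace
metadata:
  name: spark-cluster
"""
assert manifest_kind(namespace_manifest) == expected_kinds["namespace-spark-cluster"]
```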
@@ -58,7 +58,31 @@ This example assumes
 For details, you can look at the Dockerfiles in the Sources section.
 
-## Step One: Start your Master service
+## Step One: Create a namespace
+
+```sh
+$ kubectl create -f examples/spark/namespace-spark-cluster.yaml
+```
+
+Now list all namespaces:
+
+```sh
+$ kubectl get namespaces
+NAME            LABELS               STATUS
+default         <none>               Active
+spark-cluster   name=spark-cluster   Active
+```
+
+For the kubectl client to work with the namespace, we define a context and use it:
+
+```sh
+$ kubectl config set-context spark --namespace=spark-cluster --cluster=${CLUSTER_NAME} --user=${USER_NAME}
+$ kubectl config use-context spark
+```
+
+You can view your cluster name and user name in the kubernetes config at ~/.kube/config.
+
+## Step Two: Start your Master service
 
 The Master [service](../../docs/user-guide/services.md) is the master service
 for a Spark cluster.
@@ -71,7 +95,7 @@ running the Spark Master service.
 
 ```console
 $ kubectl create -f examples/spark/spark-master-controller.yaml
-replicationcontrollers/spark-master-controller
+replicationcontroller "spark-master-controller" created
 ```
 
 Then, use the
 
@@ -81,14 +105,14 @@ Master pod.
 
 ```console
 $ kubectl create -f examples/spark/spark-master-service.yaml
-services/spark-master
+service "spark-master" created
 ```
 
 You can then create a service for the Spark Master WebUI:
 
 ```console
 $ kubectl create -f examples/spark/spark-webui.yaml
-services/spark-webui
+service "spark-webui" created
 ```
 
 ### Check to see if Master is running and accessible
@@ -134,7 +158,7 @@ kubectl proxy --port=8001
 
 At which point the UI will be available at
 [http://localhost:8001/api/v1/proxy/namespaces/default/services/spark-webui/](http://localhost:8001/api/v1/proxy/namespaces/default/services/spark-webui/).
 
-## Step Two: Start your Spark workers
+## Step Three: Start your Spark workers
 
 The Spark workers do the heavy lifting in a Spark cluster. They
 provide execution resources and data cache capabilities for your
 
@@ -147,6 +171,7 @@ Use the [`examples/spark/spark-worker-controller.yaml`](spark-worker-controller.
 
 ```console
 $ kubectl create -f examples/spark/spark-worker-controller.yaml
+replicationcontroller "spark-worker-controller" created
 ```
 
 ### Check to see if the workers are running
 
@@ -175,7 +200,7 @@ you should now see the workers in the UI as well. *Note:* The UI will have links
 to worker Web UIs. The worker UI links do not work (the links will attempt to
 connect to cluster IPs, which Kubernetes won't proxy automatically).
 
-## Step Three: Start the Zeppelin UI to launch jobs on your Spark cluster
+## Step Four: Start the Zeppelin UI to launch jobs on your Spark cluster
 
 The Zeppelin UI pod can be used to launch jobs into the Spark cluster either via
 a web notebook frontend or the traditional Spark command line. See
 
@@ -185,7 +210,7 @@ for more details.
 
 ```console
 $ kubectl create -f examples/spark/zeppelin-controller.yaml
-replicationcontrollers/zeppelin-controller
+replicationcontroller "zeppelin-controller" created
 ```
 
 Zeppelin needs the Master service to be running.
 
@@ -198,7 +223,7 @@ NAME                        READY     STATUS    RESTARTS   AGE
 zeppelin-controller-ja09s   1/1       Running   0          53s
 ```
 
-## Step Four: Do something with the cluster
+## Step Five: Do something with the cluster
 
 Now you have two choices, depending on your predilections. You can do something
 graphical with the Spark cluster, or you can stay in the CLI.
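Note that the proxy URL in the README still points at the `default` namespace, while the services now live in `spark-cluster`; the namespace segment of the proxy path is what changes. A small helper sketch (hypothetical, not part of this PR) that composes that path:

```python
def proxy_url(namespace: str, service: str, host: str = "localhost:8001") -> str:
    """Compose the apiserver proxy path for a service, following the
    v1 proxy layout used in this README (illustrative helper only)."""
    return f"http://{host}/api/v1/proxy/namespaces/{namespace}/services/{service}/"

# With the example moved into its own namespace, the WebUI would live at:
print(proxy_url("spark-cluster", "spark-webui"))
```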
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: "spark-cluster"
+  labels:
+    name: "spark-cluster"
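The new manifest is small enough to sanity-check without a cluster. A minimal sketch, assuming a naive flat key/value parse with no YAML library (the duplicate `name` keys both hold the same value here, so the collision is harmless):

```python
def parse_flat_yaml(text: str) -> dict:
    """Very naive YAML reader: collects key/value pairs at any indent
    and strips quotes. Good enough for the tiny namespace manifest
    above, nothing more."""
    out = {}
    for line in text.splitlines():
        stripped = line.strip()
        if ":" in stripped:
            key, _, value = stripped.partition(":")
            out[key.strip()] = value.strip().strip('"')
    return out

manifest = """apiVersion: v1
kind: Namespace
metadata:
  name: "spark-cluster"
  labels:
    name: "spark-cluster"
"""
doc = parse_flat_yaml(manifest)
assert doc["kind"] == "Namespace"
assert doc["name"] == "spark-cluster"
```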
@@ -2,6 +2,7 @@ kind: Endpoints
 apiVersion: v1
 metadata:
   name: glusterfs-cluster
+  namespace: spark-cluster
 subsets:
 - addresses:
   - ip: 192.168.30.104
@@ -2,6 +2,7 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-master-controller
+  namespace: spark-cluster
   labels:
     component: spark-master
 spec:
@@ -2,6 +2,7 @@ kind: Service
 apiVersion: v1
 metadata:
   name: spark-master
+  namespace: spark-cluster
   labels:
     component: spark-master-service
 spec:
@@ -2,6 +2,7 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-gluster-worker-controller
+  namespace: spark-cluster
   labels:
     component: spark-worker
 spec:
@@ -2,6 +2,7 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-master-controller
+  namespace: spark-cluster
 spec:
   replicas: 1
   selector:
@@ -2,6 +2,7 @@ kind: Service
 apiVersion: v1
 metadata:
   name: spark-master
+  namespace: spark-cluster
 spec:
   ports:
   - port: 7077
@@ -2,6 +2,7 @@ kind: Service
 apiVersion: v1
 metadata:
   name: spark-webui
+  namespace: spark-cluster
 spec:
   ports:
   - port: 8080
@@ -2,6 +2,7 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-worker-controller
+  namespace: spark-cluster
 spec:
   replicas: 2
   selector:
@@ -2,6 +2,7 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: zeppelin-controller
+  namespace: spark-cluster
 spec:
   replicas: 1
   selector:
@@ -2,6 +2,7 @@ kind: Service
 apiVersion: v1
 metadata:
   name: zeppelin
+  namespace: spark-cluster
 spec:
   ports:
   - port: 8080
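Taken together, the manifest changes all add the same line: `namespace: spark-cluster` under `metadata`. A rough Python sketch of that invariant (naive indentation-based scan, illustrative only, assuming the two-space indentation used in the manifests above):

```python
def has_namespace(manifest: str, expected: str = "spark-cluster") -> bool:
    """Return True if the manifest's metadata block declares the
    expected namespace. Naive scan: a metadata block ends at the first
    line that is not indented by two spaces."""
    in_metadata = False
    for line in manifest.splitlines():
        if line.startswith("metadata:"):
            in_metadata = True
        elif in_metadata and not line.startswith("  "):
            in_metadata = False  # left the metadata block (e.g. hit spec:)
        elif in_metadata and line.strip() == f"namespace: {expected}":
            return True
    return False

patched = """kind: Service
apiVersion: v1
metadata:
  name: spark-webui
  namespace: spark-cluster
spec:
  ports:
  - port: 8080
"""
assert has_namespace(patched)
```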