mirror of https://github.com/k3s-io/k3s · commit b4171c439e
`examples/spark/README.md`:

````diff
@@ -81,12 +81,7 @@ $ kubectl create -f examples/spark/spark-master-service.yaml
 services/spark-master
 ```
 
-Optionally, you can create a service for the Spark Master WebUI at this point as
-well. If you are running on a cloud provider that supports it, this will create
-an external load balancer and open a firewall to the Spark Master WebUI on the
-cluster. **Note:** With the existing configuration, there is **ABSOLUTELY NO**
-authentication on this WebUI. With slightly more work, it would be
-straightforward to put an `nginx` proxy in front to password protect it.
+You can then create a service for the Spark Master WebUI:
 
 ```console
 $ kubectl create -f examples/spark/spark-webui.yaml
````
````diff
@@ -125,29 +120,16 @@ Spark Command: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /opt/spark-1.5
 15/10/27 21:25:07 INFO Master: I have been elected leader! New state: ALIVE
 ```
 
-If you created the Spark WebUI and waited sufficient time for the load balancer
-to be created, the `spark-webui` service should look something like this:
+After you know the master is running, you can use the
+[cluster proxy](../../docs/user-guide/accessing-the-cluster.md#using-kubectl-proxy)
+to connect to the Spark WebUI:
 
 ```console
-$ kubectl describe services/spark-webui
-Name:                   spark-webui
-Namespace:              default
-Labels:                 <none>
-Selector:               component=spark-master
-Type:                   LoadBalancer
-IP:                     10.0.152.249
-LoadBalancer Ingress:   104.197.147.190
-Port:                   <unnamed>  8080/TCP
-NodePort:               <unnamed>  31141/TCP
-Endpoints:              10.244.1.12:8080
-Session Affinity:       None
-Events:                 [...]
+kubectl proxy --port=8001
 ```
 
-You should now be able to visit `http://104.197.147.190:8080` and see the Spark
-Master UI. *Note:* After workers connect, this UI has links to worker Web
-UIs. The worker UI links do not work (the links attempt to connect to cluster
-IPs).
+At which point the UI will be available at
+http://localhost:8001/api/v1/proxy/namespaces/default/services/spark-webui/
 
 ## Step Two: Start your Spark workers
 
````
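The proxied WebUI address above is an instance of the apiserver proxy URL pattern, in which only the namespace and service name vary. A minimal sketch of how the URL is composed (the names match this example; substitute your own, and `kubectl proxy` must be running on port 8001 for the URL to actually serve anything):

```shell
# Build the kubectl-proxy URL for a service; pattern is fixed,
# namespace and service name are the only variable parts.
NAMESPACE=default
SERVICE=spark-webui
WEBUI_URL="http://localhost:8001/api/v1/proxy/namespaces/${NAMESPACE}/services/${SERVICE}/"
echo "${WEBUI_URL}"
```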
````diff
@@ -185,6 +167,11 @@ $ kubectl logs spark-master-controller-5u0q5
 15/10/26 18:20:14 INFO Master: Registering worker 10.244.3.8:39926 with 2 cores, 6.3 GB RAM
 ```
 
+Assuming you still have the `kubectl proxy` running from the previous section,
+you should now see the workers in the UI as well. *Note:* The UI will have links
+to worker Web UIs. The worker UI links do not work (the links will attempt to
+connect to cluster IPs, which Kubernetes won't proxy automatically).
+
 ## Step Three: Start your Spark driver to launch jobs on your Spark cluster
 
 The Spark driver is used to launch jobs into Spark cluster. You can read more about it in
````
````diff
@@ -241,18 +228,14 @@ information.
 ## tl;dr
 
 ```console
-kubectl create -f examples/spark/spark-master-controller.yaml
-kubectl create -f examples/spark/spark-master-service.yaml
-kubectl create -f examples/spark/spark-webui.yaml
-kubectl create -f examples/spark/spark-worker-controller.yaml
-kubectl create -f examples/spark/spark-driver-controller.yaml
+kubectl create -f examples/spark
 ```
 
 After it's setup:
 
 ```console
 kubectl get pods # Make sure everything is running
-kubectl get services spark-webui # Get the IP of the Spark WebUI
+kubectl proxy --port=8001 # Start an application proxy, if you want to see the Spark WebUI
 kubectl get pods -lcomponent=spark-driver # Get the driver pod to interact with.
 ```
 
````
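The shortened tl;dr works because `kubectl create -f` accepts a directory and creates every manifest file found in it, so passing `examples/spark` replaces the five individual invocations. A cluster-free sketch of the idea, using a hypothetical temp directory standing in for `examples/spark`:

```shell
# Hypothetical stand-in directory; on a real checkout this would be examples/spark.
dir=$(mktemp -d)
touch "$dir/spark-master-controller.yaml" "$dir/spark-master-service.yaml" \
      "$dir/spark-webui.yaml" "$dir/spark-worker-controller.yaml" \
      "$dir/spark-driver-controller.yaml"
# `kubectl create -f "$dir"` would create each of these manifests:
count=0
for f in "$dir"/*.yaml; do
  basename "$f"
  count=$((count + 1))
done
echo "$count manifests"
```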
`examples/spark/spark-driver-controller.yaml`:

```diff
@@ -2,8 +2,6 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-driver-controller
-  labels:
-    component: spark-driver
 spec:
   replicas: 1
   selector:
```
`examples/spark/spark-master-controller.yaml`:

```diff
@@ -2,8 +2,6 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-master-controller
-  labels:
-    component: spark-master
 spec:
   replicas: 1
   selector:
@@ -19,6 +17,15 @@ spec:
         ports:
         - containerPort: 7077
         - containerPort: 8080
+        livenessProbe:
+          exec:
+            command:
+            - /opt/spark/sbin/spark-daemon.sh
+            - status
+            - org.apache.spark.deploy.master.Master
+            - '1'
+          initialDelaySeconds: 30
+          timeoutSeconds: 1
         resources:
           requests:
             cpu: 100m
```
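The new `livenessProbe` runs `spark-daemon.sh status` inside the container; the kubelet judges an exec probe purely by exit status, treating 0 as healthy and any non-zero exit as a failed probe. A minimal stand-in illustrating that convention (a hypothetical stub, not the real `spark-daemon.sh`):

```shell
# Hypothetical stub mimicking `spark-daemon.sh status <class> <instance>`:
# succeeds (exit 0) when the daemon's pid file exists, fails otherwise.
piddir=$(mktemp -d)
status() { [ -e "$piddir/spark-master.pid" ]; }

touch "$piddir/spark-master.pid"   # simulate a running master
if status; then probe=healthy; else probe=unhealthy; fi
echo "$probe"
```

With `initialDelaySeconds: 30` the kubelet waits 30 seconds after container start before running this check, giving the JVM time to come up.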
`examples/spark/spark-master-service.yaml`:

```diff
@@ -2,8 +2,6 @@ kind: Service
 apiVersion: v1
 metadata:
   name: spark-master
-  labels:
-    component: spark-master-service
 spec:
   ports:
   - port: 7077
```
`examples/spark/spark-webui.yaml`:

```diff
@@ -8,4 +8,3 @@ spec:
     targetPort: 8080
   selector:
     component: spark-master
-  type: LoadBalancer
```
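With `type: LoadBalancer` removed, the WebUI service falls back to the default `ClusterIP` type, so it is reachable only from inside the cluster or through `kubectl proxy`, matching the proxy-based workflow the README now describes. A sketch of the resulting service (reconstructed from the hunk; the fields outside it, such as `port`, are assumptions):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: spark-webui
spec:
  ports:
  - port: 8080          # assumed; only targetPort appears in the hunk
    targetPort: 8080
  selector:
    component: spark-master
  # no `type:` field -> defaults to ClusterIP
```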
`examples/spark/spark-worker-controller.yaml`:

```diff
@@ -2,8 +2,6 @@ kind: ReplicationController
 apiVersion: v1
 metadata:
   name: spark-worker-controller
-  labels:
-    component: spark-worker
 spec:
   replicas: 3
   selector:
@@ -12,13 +10,21 @@ spec:
     metadata:
       labels:
         component: spark-worker
-        uses: spark-master
     spec:
       containers:
       - name: spark-worker
        image: gcr.io/google_containers/spark-worker:1.5.1_v1
        ports:
        - containerPort: 8888
+        livenessProbe:
+          exec:
+            command:
+            - /opt/spark/sbin/spark-daemon.sh
+            - status
+            - org.apache.spark.deploy.worker.Worker
+            - '1'
+          initialDelaySeconds: 30
+          timeoutSeconds: 1
         resources:
           requests:
             cpu: 100m
```