Prometheus/promdash replication controller

pull/6/head
jayunit100 2015-05-19 18:42:58 -04:00
parent f85e6bcf74
commit 39011ae2d1
3 changed files with 79 additions and 88 deletions

contrib/prometheus/README.md

@@ -23,7 +23,7 @@ Now quickly confirm that /mnt/promdash/file.sqlite3 exists, and has a non-zero size.
```
Looks open enough :).
- 1. Now, you can start this pod, like so `kubectl create -f cluster/add-ons/prometheus/prometheusB3.yaml`. This pod will start both prometheus, the server, as well as promdash, the visualization tool. You can then configure promdash, and next time you restart the pod - you're configuration will be remain (since the promdash directory was mounted as a local docker volume).
+ 1. Now, you can start this pod, like so: `kubectl create -f contrib/prometheus/prometheus-all.json`. This ReplicationController will maintain both prometheus, the server, and promdash, the visualization tool. You can then configure promdash, and the next time you restart the pod, your configuration will remain (since the promdash directory is mounted as a local docker volume).
1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (localhost:9090) as a promdash server, and create a dashboard according to the promdash directions. (For reference, these steps boil down to a create plus a sanity check; see the sketch below.)
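A minimal sketch of that quickstart, assuming kubectl is already pointed at your cluster and ports 3000/9090 are free on the node:

```
# Start the prometheus + promdash replication controller.
kubectl create -f contrib/prometheus/prometheus-all.json

# Wait for the pod it manages to come up.
kubectl get pods -l name=kube-prometheus

# Both UIs are exposed as hostPorts, so from the node itself:
curl -s localhost:3000 > /dev/null && echo "promdash is up"
curl -s localhost:9090 > /dev/null && echo "prometheus is up"
```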
@@ -31,13 +31,13 @@ Looks open enough :).
You can launch prometheus easily by simply running:
- `kubectl create -f cluster/addons/prometheus/prometheus.yaml`
+ `kubectl create -f cluster/addons/prometheus/prometheus-all.json`
This will bind to port 9090 locally. You can browse the prometheus console at that URL.
# How it works
- This is a v1beta1 based, containerized prometheus pod, which scrapes endpoints which are readable on the KUBERNETES_RO service (the internal kubernetes service running in the default namespace, which is visible to all pods).
+ This is a v1beta3-based, containerized prometheus ReplicationController that scrapes endpoints readable on the KUBERNETES_RO service (the internal kubernetes service running in the default namespace, which is visible to all pods).
1. The KUBERNETES_RO service is already running, providing read access to the API metrics.
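To see what that means concretely, you can query the read-only service yourself from inside any pod. A rough sketch, assuming the KUBERNETES_RO_SERVICE_HOST / KUBERNETES_RO_SERVICE_PORT environment variables that kubernetes injects into pods (the exact paths are illustrative and depend on your API version):

```
# From a shell inside any pod in the default namespace:
curl http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta3/pods

# The apiserver typically exports its own metrics at the same endpoint:
curl http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/metrics
```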
@@ -68,7 +68,5 @@ at port 9090.
- We should publish this image into the kube/ namespace.
- Possibly use postgres or mysql as the promdash database.
- push gateway (https://github.com/prometheus/pushgateway) setup.
- Setup high availability via NFS
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/contrib/prometheus/README.md?pixel)]()

contrib/prometheus/prometheus-all.json

@@ -0,0 +1,76 @@
{
  "apiVersion": "v1beta3",
  "kind": "ReplicationController",
  "metadata": {
    "labels": {
      "name": "kube-prometheus"
    },
    "name": "kube-prometheus"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "name": "kube-prometheus"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "kube-prometheus"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "kube-promdash",
            "image": "prom/promdash",
            "env": [
              {
                "name": "DATABASE_URL",
                "value": "sqlite3:/promdash/file.sqlite3"
              }
            ],
            "ports": [
              {
                "containerPort": 3000,
                "hostPort": 3000,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/promdash",
                "name": "data"
              }
            ]
          },
          {
            "command": ["./run_prometheus.sh", "-t", "KUBERNETES_RO", "-d", "/var/prometheus/"],
            "image": "jayunit100/kube-prometheus",
            "name": "kube-prometheus",
            "ports": [
              {
                "containerPort": 9090,
                "hostPort": 9090,
                "protocol": "TCP"
              }
            ],
            "volumeMounts": [
              {
                "mountPath": "/var/prometheus/",
                "name": "data"
              }
            ]
          }
        ],
        "volumes": [
          {
            "hostPath": {
              "path": "/mnt/promdash"
            },
            "name": "data"
          }
        ]
      }
    }
  }
}
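Note that both containers in this controller mount the same hostPath volume, so the promdash sqlite file and the prometheus data directory both live under /mnt/promdash on whichever node runs the pod. A hedged prep sketch for that directory (the rake invocation follows promdash's own docs; treat the exact command as an assumption):

```
# On the node that will run the pod (hostPath volumes are node-local):
mkdir -p /mnt/promdash

# Let the promdash image initialize its sqlite schema, using the same
# DATABASE_URL as in the manifest above (command per promdash's docs):
docker run --rm -v /mnt/promdash:/promdash \
  -e DATABASE_URL=sqlite3:/promdash/file.sqlite3 \
  prom/promdash ./bin/rake db:migrate

# Then confirm the file exists and has a non-zero size:
ls -l /mnt/promdash/file.sqlite3
```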

@@ -1,83 +0,0 @@
apiVersion: v1beta3
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: kube-prometheus
  name: kube-prometheus
spec:
  containers:
  - capabilities: {}
    env:
    - name: DATABASE_URL
      value: sqlite3:/promdash/file.sqlite3 # see volume comment below.
    image: prom/promdash
    imagePullPolicy: IfNotPresent
    name: kube-promdash
    ports:
    - containerPort: 3000
      hostPort: 3000
      protocol: TCP
    resources: {}
    securityContext:
      capabilities: {}
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /promdash
      name: promdashdb
  - args:
    - -t
    - KUBERNETES_RO
    - -d
    - /var/prometheus/
    capabilities: {}
    image: jayunit100/kube-prometheus
    imagePullPolicy: IfNotPresent
    name: kube-prometheus
    ports:
    - containerPort: 9090
      hostPort: 9090
      protocol: TCP
    resources: {}
    securityContext:
      capabilities: {}
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/prometheus/
      name: data
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  # There are many ways to create these volumes.
  # for example, gcePersistentDisk:, glusterfs:, and so on...
  # for the shared data volume (which we may not need going forward)
  # we just use the undefined shared volume type.
  - awsElasticBlockStore: null
    emptyDir:
      medium: ""
    gcePersistentDisk: null
    gitRepo: null
    glusterfs: null
    hostPath: null
    iscsi: null
    name: data
    nfs: null
    secret: null
  # Again, many ways to create the promdash mount. We are just using local
  # disk for now. Later maybe just replace with a pure RDBMS rather than a file-based
  # sqlite db. The reason we have a volume is so that it's persistent between
  # pod restarts.
  - awsElasticBlockStore: null
    emptyDir: null
    gcePersistentDisk: null
    gitRepo: null
    glusterfs: null
    hostPath:
      path: /mnt/promdash
    iscsi: null
    name: promdashdb
    nfs: null
    secret: null
status: {}
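Since this commit replaces the standalone pod above with a ReplicationController, anyone who created the old manifest should delete that pod before creating the controller (both claim hostPorts 3000 and 9090, so they can collide on the same node). A sketch of the swap, using the names from the manifests above:

```
# Remove the old standalone pod, if it is still running:
kubectl delete pod kube-prometheus

# Create the replication controller that supersedes it:
kubectl create -f contrib/prometheus/prometheus-all.json
```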