Merge pull request #13652 from clasohm/example_download_links_raw

add raw flag for GitHub download links
Chao Xu 2015-09-09 16:19:14 -07:00
commit 183c6e2e84
15 changed files with 40 additions and 40 deletions
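For context: a relative "Download example" link in a repository Markdown file resolves to GitHub's rendered blob page, so clicking it shows HTML rather than the file itself. GitHub treats a `?raw=true` query on a blob URL as a request for the raw contents (it redirects to raw.githubusercontent.com), which makes the link behave like an actual download. Before/after, using a path taken from this diff:

    [Download example](../../examples/guestbook/frontend-controller.yaml)          -> rendered blob page
    [Download example](../../examples/guestbook/frontend-controller.yaml?raw=true) -> raw file contents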

View File

@@ -43,7 +43,7 @@ var exampleMungeTagRE = regexp.MustCompile(beginMungeTag(fmt.Sprintf("%s %s", ex
 // bar:
 // ```
 //
-// [Download example](../../examples/guestbook/frontend-controller.yaml)
+// [Download example](../../examples/guestbook/frontend-controller.yaml?raw=true)
 // <!-- END MUNGE: EXAMPLE -->
 func syncExamples(filePath string, mlines mungeLines) (mungeLines, error) {
     var err error
@@ -108,7 +108,7 @@ func exampleContent(filePath, linkPath, fileType string) (mungeLines, error) {
     // remove leading and trailing spaces and newlines
     trimmedFileContent := strings.TrimSpace(string(dat))
-    content := fmt.Sprintf("\n```%s\n%s\n```\n\n[Download example](%s)", fileType, trimmedFileContent, fileRel)
+    content := fmt.Sprintf("\n```%s\n%s\n```\n\n[Download example](%s?raw=true)", fileType, trimmedFileContent, fileRel)
     out := getMungeLines(content)
     return out, nil
 }
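A minimal, runnable sketch of the new link construction (the values are stand-ins; only the `fmt.Sprintf` format string comes from the munger change above):

```go
package main

import "fmt"

func main() {
	// Stand-in values; in the munger these come from the munge tag
	// and the example file read from disk.
	fileType := "yaml"
	trimmedFileContent := "apiVersion: v1\nkind: Pod"
	fileRel := "../../examples/guestbook/frontend-controller.yaml"

	// Same format string as exampleContent after this change: a fenced
	// code block followed by a download link with the raw flag.
	content := fmt.Sprintf("\n```%s\n%s\n```\n\n[Download example](%s?raw=true)",
		fileType, trimmedFileContent, fileRel)
	fmt.Println(content)
}
```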

View File

@@ -41,11 +41,11 @@ spec:
 		{"", ""},
 		{
 			"<!-- BEGIN MUNGE: EXAMPLE testdata/pod.yaml -->\n<!-- END MUNGE: EXAMPLE testdata/pod.yaml -->\n",
-			"<!-- BEGIN MUNGE: EXAMPLE testdata/pod.yaml -->\n\n```yaml\n" + podExample + "```\n\n[Download example](testdata/pod.yaml)\n<!-- END MUNGE: EXAMPLE testdata/pod.yaml -->\n",
+			"<!-- BEGIN MUNGE: EXAMPLE testdata/pod.yaml -->\n\n```yaml\n" + podExample + "```\n\n[Download example](testdata/pod.yaml?raw=true)\n<!-- END MUNGE: EXAMPLE testdata/pod.yaml -->\n",
 		},
 		{
 			"<!-- BEGIN MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n<!-- END MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n",
-			"<!-- BEGIN MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n\n```yaml\n" + podExample + "```\n\n[Download example](../mungedocs/testdata/pod.yaml)\n<!-- END MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n",
+			"<!-- BEGIN MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n\n```yaml\n" + podExample + "```\n\n[Download example](../mungedocs/testdata/pod.yaml?raw=true)\n<!-- END MUNGE: EXAMPLE ../mungedocs/testdata/pod.yaml -->\n",
 		},
 	}
 	repoRoot = ""

View File

@@ -98,7 +98,7 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
 }
 ```

-[Download example](namespace-dev.json)
+[Download example](namespace-dev.json?raw=true)
 <!-- END MUNGE: EXAMPLE namespace-dev.json -->

 Create the development namespace using kubectl.

View File

@@ -73,7 +73,7 @@ spec:
 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
 ```

-[Download example](../../examples/blog-logging/counter-pod.yaml)
+[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

 This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Lets create the pod in the default
@@ -192,7 +192,7 @@ spec:
 path: /var/lib/docker/containers
 ```

-[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
+[Download example](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->

 This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.

View File

@@ -108,7 +108,7 @@ spec:
 restartPolicy: Never
 ```

-[Download example](downward-api/dapi-pod.yaml)
+[Download example](downward-api/dapi-pod.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE downward-api/dapi-pod.yaml -->
@@ -178,7 +178,7 @@ spec:
 fieldPath: metadata.annotations
 ```

-[Download example](downward-api/volume/dapi-volume.yaml)
+[Download example](downward-api/volume/dapi-volume.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE downward-api/volume/dapi-volume.yaml -->

 Some more thorough examples:

View File

@@ -58,7 +58,7 @@ spec:
 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
 ```

-[Download example](../../examples/blog-logging/counter-pod.yaml)
+[Download example](../../examples/blog-logging/counter-pod.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->

 we can run the pod:

View File

@@ -64,7 +64,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](pod.yaml)
+[Download example](pod.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE pod.yaml -->

 You can see your cluster's pods:
@@ -116,7 +116,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](replication.yaml)
+[Download example](replication.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE replication.yaml -->

 To delete the replication controller (and the pods it created):

View File

@@ -165,7 +165,7 @@ spec:
 emptyDir: {}
 ```

-[Download example](pod-redis.yaml)
+[Download example](pod-redis.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE pod-redis.yaml -->

 Notes:

View File

@@ -86,7 +86,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](pod-nginx-with-label.yaml)
+[Download example](pod-nginx-with-label.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE pod-nginx-with-label.yaml -->

 Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
@@ -142,7 +142,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](replication-controller.yaml)
+[Download example](replication-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE replication-controller.yaml -->

 #### Replication Controller Management
@@ -195,7 +195,7 @@ spec:
 app: nginx
 ```

-[Download example](service.yaml)
+[Download example](service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE service.yaml -->

 #### Service Management
@@ -311,7 +311,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](pod-with-http-healthcheck.yaml)
+[Download example](pod-with-http-healthcheck.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->

 For more information about health checking, see [Container Probes](../pod-states.md#container-probes).

View File

@@ -100,7 +100,7 @@ spec:
 emptyDir: {}
 ```

-[Download example](cassandra-controller.yaml)
+[Download example](cassandra-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

 There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later)
@@ -131,7 +131,7 @@ spec:
 name: cassandra
 ```

-[Download example](cassandra-service.yaml)
+[Download example](cassandra-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-service.yaml -->

 The important thing to note here is the ```selector```. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
@@ -241,7 +241,7 @@ spec:
 emptyDir: {}
 ```

-[Download example](cassandra-controller.yaml)
+[Download example](cassandra-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

 Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.

View File

@@ -81,7 +81,7 @@ spec:
 component: rabbitmq
 ```

-[Download example](rabbitmq-service.yaml)
+[Download example](rabbitmq-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE rabbitmq-service.yaml -->

 To start the service, run:
@@ -126,7 +126,7 @@ spec:
 cpu: 100m
 ```

-[Download example](rabbitmq-controller.yaml)
+[Download example](rabbitmq-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE rabbitmq-controller.yaml -->

 Running `$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml` brings up a replication controller that ensures one pod exists which is running a RabbitMQ instance.
@@ -167,7 +167,7 @@ spec:
 cpu: 100m
 ```

-[Download example](celery-controller.yaml)
+[Download example](celery-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE celery-controller.yaml -->

 There are several things to point out here...
@@ -238,7 +238,7 @@ spec:
 type: LoadBalancer
 ```

-[Download example](flower-service.yaml)
+[Download example](flower-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE flower-service.yaml -->

 It is marked as external (LoadBalanced). However on many platforms you will have to add an explicit firewall rule to open port 5555.
@@ -279,7 +279,7 @@ spec:
 cpu: 100m
 ```

-[Download example](flower-controller.yaml)
+[Download example](flower-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE flower-controller.yaml -->

 This will bring up a new pod with Flower installed and port 5555 (Flower's default port) exposed through the service endpoint. This image uses the following command to start Flower:

View File

@@ -100,7 +100,7 @@ spec:
 - containerPort: 6379
 ```

-[Download example](redis-master-controller.yaml)
+[Download example](redis-master-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-master-controller.yaml -->

 Change to the `<kubernetes>/examples/guestbook` directory if you're not already there. Create the redis master pod in your Kubernetes cluster by running:
@@ -227,7 +227,7 @@ spec:
 name: redis-master
 ```

-[Download example](redis-master-service.yaml)
+[Download example](redis-master-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-master-service.yaml -->

 Create the service by running:
@@ -316,7 +316,7 @@ spec:
 - containerPort: 6379
 ```

-[Download example](redis-slave-controller.yaml)
+[Download example](redis-slave-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-slave-controller.yaml -->

 and create the replication controller by running:
@@ -367,7 +367,7 @@ spec:
 name: redis-slave
 ```

-[Download example](redis-slave-service.yaml)
+[Download example](redis-slave-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-slave-service.yaml -->

 This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself as we've done here to make it easy to locate them with the `kubectl get services -l "label=value"` command.
@@ -426,7 +426,7 @@ spec:
 - containerPort: 80
 ```

-[Download example](frontend-controller.yaml)
+[Download example](frontend-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE frontend-controller.yaml -->

 Using this file, you can turn up your frontend with:
@@ -539,7 +539,7 @@ spec:
 name: frontend
 ```

-[Download example](frontend-service.yaml)
+[Download example](frontend-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE frontend-service.yaml -->

 #### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)

View File

@@ -83,7 +83,7 @@ spec:
 name: hazelcast
 ```

-[Download example](hazelcast-service.yaml)
+[Download example](hazelcast-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE hazelcast-service.yaml -->

 The important thing to note here is the `selector`. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
@@ -138,7 +138,7 @@ spec:
 name: hazelcast
 ```

-[Download example](hazelcast-controller.yaml)
+[Download example](hazelcast-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->

 There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).

View File

@@ -131,7 +131,7 @@ spec:
 fsType: ext4
 ```

-[Download example](mysql.yaml)
+[Download example](mysql.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE mysql.yaml -->

 Note that we've defined a volume mount for `/var/lib/mysql`, and specified a volume that uses the persistent disk (`mysql-disk`) that you created.
@@ -186,7 +186,7 @@ spec:
 name: mysql
 ```

-[Download example](mysql-service.yaml)
+[Download example](mysql-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE mysql-service.yaml -->

 Start the service like this:
@@ -241,7 +241,7 @@ spec:
 fsType: ext4
 ```

-[Download example](wordpress.yaml)
+[Download example](wordpress.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE wordpress.yaml -->

 Create the pod:
@@ -282,7 +282,7 @@ spec:
 type: LoadBalancer
 ```

-[Download example](wordpress-service.yaml)
+[Download example](wordpress-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE wordpress-service.yaml -->

 Note the `type: LoadBalancer` setting. This will set up the wordpress service behind an external IP.

View File

@@ -98,7 +98,7 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont
 }
 ```

-[Download example](phabricator-controller.json)
+[Download example](phabricator-controller.json?raw=true)
 <!-- END MUNGE: EXAMPLE phabricator-controller.json -->

 Create the phabricator pod in your Kubernetes cluster by running:
@@ -188,7 +188,7 @@ To automate this process and make sure that a proper host is authorized even if
 }
 ```

-[Download example](authenticator-controller.json)
+[Download example](authenticator-controller.json?raw=true)
 <!-- END MUNGE: EXAMPLE authenticator-controller.json -->

 To create the pod run:
@@ -237,7 +237,7 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi
 }
 ```

-[Download example](phabricator-service.json)
+[Download example](phabricator-service.json?raw=true)
 <!-- END MUNGE: EXAMPLE phabricator-service.json -->

 To create the service run:
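To spot-check the change end to end, a small probe can confirm that a `?raw=true` link redirects to raw content. This is a sketch, not part of the PR; the URL is illustrative and the file may have moved since, but any GitHub blob link with `?raw=true` should behave the same way (Go's default HTTP client follows the redirect):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Illustrative blob URL; expect a redirect to raw.githubusercontent.com.
	url := "https://github.com/kubernetes/kubernetes/blob/master/examples/guestbook/frontend-controller.yaml?raw=true"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// resp.Request is the final request after redirects were followed.
	fmt.Println("final URL:", resp.Request.URL)
	fmt.Println("status:", resp.Status)
}
```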