diff --git a/README.md b/README.md index 2271f20623..9044018e34 100644 --- a/README.md +++ b/README.md @@ -73,7 +73,7 @@ Kubernetes documentation is organized into several categories. - in the [Kubernetes Cluster Admin Guide](docs/admin/README.md) - **Developer and API documentation** - for people who want to write programs that access the Kubernetes API, write plugins - or extensions, or modify the core Kubernete code + or extensions, or modify the core Kubernetes code - in the [Kubernetes Developer Guide](docs/devel/README.md) - see also [notes on the API](docs/api.md) - see also the [API object documentation](http://kubernetes.io/third_party/swagger-ui/), a diff --git a/cluster/addons/README.md b/cluster/addons/README.md index a433445e46..44dc4e4ebb 100644 --- a/cluster/addons/README.md +++ b/cluster/addons/README.md @@ -6,7 +6,7 @@ Kubernetes clusters. The add-ons are visible through the API (they can be listed using ```kubectl```), but manipulation of these objects is discouraged because the system will bring them back to the original state, in particular: * if an add-on is stopped, it will be restarted automatically -* if an add-on is rolling-updated (for Replication Controlers), the system will stop the new version and +* if an add-on is rolling-updated (for Replication Controllers), the system will stop the new version and start the old one again (or perform rolling update to the old version, in the future). diff --git a/cluster/addons/dns/README.md b/cluster/addons/dns/README.md index 5009b431f5..20cad42079 100644 --- a/cluster/addons/dns/README.md +++ b/cluster/addons/dns/README.md @@ -164,7 +164,7 @@ If you see that, DNS is working correctly. ## How does it work? SkyDNS depends on etcd for what to serve, but it doesn't really need all of -what etcd offers (at least not in the way we use it). For simplicty, we run +what etcd offers (at least not in the way we use it). For simplicity, we run etcd and SkyDNS together in a pod, and we do not try to link etcd instances across replicas. A helper container called [kube2sky](kube2sky/) also runs in the pod and acts a bridge between Kubernetes and SkyDNS. It finds the diff --git a/cluster/addons/dns/kube2sky/README.md b/cluster/addons/dns/kube2sky/README.md index 09867534d3..a7e6ccd6ac 100644 @@ -26,7 +26,7 @@ mutation (insertion or removal of a dns entry) before giving up and crashing. `--etcd-server`: The etcd server that is being used by skydns. -`--kube_master_url`: URL of kubernetes master. Reuired if `--kubecfg_file` is not set. +`--kube_master_url`: URL of kubernetes master. Required if `--kubecfg_file` is not set. `--kubecfg_file`: Path to kubecfg file that contains the master URL and tokens to authenticate with the master. diff --git a/cluster/juju/bundles/README.md b/cluster/juju/bundles/README.md index 6014c3de50..18799aa635 100644 --- a/cluster/juju/bundles/README.md +++ b/cluster/juju/bundles/README.md @@ -14,7 +14,7 @@ containerized applications. The [Juju](https://juju.ubuntu.com) system provides provisioning and orchestration across a variety of clouds and bare metal. A juju bundle -describes collection of services and how they interelate. `juju +describes a collection of services and how they interrelate. `juju quickstart` allows you to bootstrap a deployment environment and deploy a bundle.
@@ -136,7 +136,7 @@ configuration on it's own ## Installing the kubectl outside of kubernetes master machine -Download the Kuberentes release from: +Download the Kubernetes release from: https://github.com/GoogleCloudPlatform/kubernetes/releases and extract the release, you can then just directly use the cli binary at ./kubernetes/platforms/linux/amd64/kubectl diff --git a/cluster/saltbase/pillar/README.md b/cluster/saltbase/pillar/README.md index 33710233ad..0273c3ff26 100644 --- a/cluster/saltbase/pillar/README.md +++ b/cluster/saltbase/pillar/README.md @@ -1,6 +1,6 @@ The [SaltStack pillar](http://docs.saltstack.com/en/latest/topics/pillar/) -data is partially statically dervied from the contents of this +data is partially statically derived from the contents of this directory. The bulk of the pillars are hard to perceive from browsing this directory, though, because they are written into [cluster-params.sls](cluster-params.sls) at cluster inception. diff --git a/contrib/exec-healthz/README.md b/contrib/exec-healthz/README.md index 6cef042304..9951bb3e72 100644 --- a/contrib/exec-healthz/README.md +++ b/contrib/exec-healthz/README.md @@ -1,6 +1,6 @@ # Exec healthz server -The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncracies of container runtime exec implemetations. +The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncrasies of container runtime exec implementations. ## Examples: diff --git a/contrib/logging/fluentd-sidecar-es/README.md b/contrib/logging/fluentd-sidecar-es/README.md index 73ad9eb00e..6bde813994 100644 --- a/contrib/logging/fluentd-sidecar-es/README.md +++ b/contrib/logging/fluentd-sidecar-es/README.md @@ -1,5 +1,5 @@ # Collecting log files from within containers with Fluentd and sending them to Elasticsearch. -*Note that this only works for clusters with an Elastisearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.* +*Note that this only works for clusters with an Elasticsearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.* This directory contains the source files needed to make a Docker image that collects log files from arbitrary files within a container using [Fluentd](http://www.fluentd.org/) and sends them to the cluster's Elasticsearch service. The image is designed to be used as a sidecar container as part of a pod. diff --git a/contrib/mesos/docs/ha.md b/contrib/mesos/docs/ha.md index 055804d4d7..7cdc6ecb75 100644 --- a/contrib/mesos/docs/ha.md +++ b/contrib/mesos/docs/ha.md @@ -34,7 +34,7 @@ In this case, if there are problems launching a replacement scheduler process th ##### Command Line Arguments - `--ha` is required to enable scheduler HA and multi-scheduler leader election. -- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identicial across schedulers. +- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URIs and must be identical across schedulers.
If you have HDFS installed on your slaves then you can specify HDFS URI locations for the binaries: diff --git a/contrib/prometheus/README.md b/contrib/prometheus/README.md index a85a47d243..fdce5aef92 100644 --- a/contrib/prometheus/README.md +++ b/contrib/prometheus/README.md @@ -25,7 +25,7 @@ Looks open enough :). 1. Now, you can start this pod, like so `kubectl create -f contrib/prometheus/prometheus-all.json`. This ReplicationController will maintain both prometheus, the server, as well as promdash, the visualization tool. You can then configure promdash, and next time you restart the pod - you're configuration will be remain (since the promdash directory was mounted as a local docker volume). -1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (locahost:9090)to as a promdash server, and create a dashboard according to the promdash directions. +1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (localhost:9090) as a promdash server, and create a dashboard according to the promdash directions. ## Prometheus @@ -52,14 +52,14 @@ This is a v1 api based, containerized prometheus ReplicationController, which sc 1. Use kubectl to handle auth & proxy the kubernetes API locally, emulating the old KUBERNETES_RO service. -1. The list of services to be monitored is passed as a command line aguments in +1. The list of services to be monitored is passed as command line arguments in the yaml file. 1. The startup scripts assumes that each service T will have 2 environment variables set ```T_SERVICE_HOST``` and ```T_SERVICE_PORT``` 1. Each can be configured manually in yaml file if you want to monitor something -that is not a regular Kubernetes service. For example, you can add comma delimted +that is not a regular Kubernetes service. For example, you can add comma-delimited endpoints which can be scraped like so... ``` - -t @@ -77,7 +77,7 @@ at port 9090. # TODO - We should publish this image into the kube/ namespace. -- Possibly use postgre or mysql as a promdash database. +- Possibly use Postgres or MySQL as a promdash database. - stop using kubectl to make a local proxy faking the old RO port and build in real auth capabilities. diff --git a/contrib/service-loadbalancer/README.md b/contrib/service-loadbalancer/README.md index 836eb4d3fa..15a6fb4c04 100644 --- a/contrib/service-loadbalancer/README.md +++ b/contrib/service-loadbalancer/README.md @@ -191,7 +191,7 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;' ### Troubleshooting: - If you can curl or netcat the endpoint from the pod (with kubectl exec) and not from the node, you have not specified hostport and containerport. - If you can hit the ips from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network. -- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not runing. +- If you can't hit the ips from within the container, either haproxy or the service_loadbalancer script is not running. 1. Use ps in the pod 2. sudo restart haproxy in the pod 3.
cat /etc/haproxy/haproxy.cfg in the pod diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md index d2149e9354..fcc33b8ff2 100644 --- a/docs/admin/cluster-management.md +++ b/docs/admin/cluster-management.md @@ -35,7 +35,7 @@ Documentation for other releases can be found at This document describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's -master and worker nodes, performing node maintainence (e.g. kernel upgrades), and upgrading the Kubernetes API version of a +master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster. ## Creating and configuring a Cluster @@ -132,7 +132,7 @@ For pods with a replication controller, the pod will eventually be replaced by a For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it. -Perform maintainence work on the node. +Perform maintenance work on the node. Make the node schedulable again: diff --git a/docs/admin/etcd.md b/docs/admin/etcd.md index 00322598b6..44a1824c54 100644 --- a/docs/admin/etcd.md +++ b/docs/admin/etcd.md @@ -41,7 +41,7 @@ objects. Access Control: give *only* kube-apiserver read/write access to etcd. You do not want apiserver's etcd exposed to every node in your cluster (or worse, to the -internet at large), because access to etcd is equivilent to root in your +internet at large), because access to etcd is equivalent to root in your cluster. Data Reliability: for reasonable safety, either etcd needs to be run as a diff --git a/docs/admin/kubelet.md b/docs/admin/kubelet.md index 17a464a838..e5dfec1d2d 100644 --- a/docs/admin/kubelet.md +++ b/docs/admin/kubelet.md @@ -41,7 +41,7 @@ Documentation for other releases can be found at The kubelet is the primary "node agent" that runs on each node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs that are provided through -various echanisms (primarily through the apiserver) and ensures that the containers +various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and healthy. Other than from an PodSpec from the apiserver, there are three ways that a container diff --git a/docs/admin/service-accounts-admin.md b/docs/admin/service-accounts-admin.md index 3fe0c85af2..5dfeda013b 100644 --- a/docs/admin/service-accounts-admin.md +++ b/docs/admin/service-accounts-admin.md @@ -84,7 +84,7 @@ TokenController runs as part of controller-manager. It acts asynchronously. It: - observes serviceAccount creation and creates a corresponding Secret to allow API access. 
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets - observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed -- observes secret deleteion and removes a reference from the corresponding ServiceAccount if needed +- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed #### To create additional API tokens diff --git a/docs/devel/development.md b/docs/devel/development.md index 454632934b..2929f28103 100644 --- a/docs/devel/development.md +++ b/docs/devel/development.md @@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP git remote set-url --push upstream no_push ``` -### Commiting changes to your fork +### Committing changes to your fork ```sh git commit diff --git a/docs/getting-started-guides/coreos/azure/README.md b/docs/getting-started-guides/coreos/azure/README.md index 3e01027b3e..40ec141dc1 100644 --- a/docs/getting-started-guides/coreos/azure/README.md +++ b/docs/getting-started-guides/coreos/azure/README.md @@ -223,7 +223,7 @@ frontend-z9oxo 1/1 Running 0 41s ## Exposing the app to the outside world -There is no native Azure load-ballancer support in Kubernets 1.0, however here is how you can expose the Guestbook app to the Internet. +There is no native Azure load-balancer support in Kubernetes 1.0; however, here is how you can expose the Guestbook app to the Internet. ``` ./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf diff --git a/docs/getting-started-guides/docker-multinode.md b/docs/getting-started-guides/docker-multinode.md index 60787ac710..4ca6e1c259 100644 --- a/docs/getting-started-guides/docker-multinode.md +++ b/docs/getting-started-guides/docker-multinode.md @@ -87,7 +87,7 @@ cd kubernetes/cluster/docker-multinode `Master done!` -See [here](docker-multinode/master.md) for detailed instructions explaination. +See [here](docker-multinode/master.md) for detailed instructions. ## Adding a worker node @@ -104,7 +104,7 @@ cd kubernetes/cluster/docker-multinode `Worker done!` -See [here](docker-multinode/worker.md) for detailed instructions explaination. +See [here](docker-multinode/worker.md) for detailed instructions. ## Testing your cluster diff --git a/docs/getting-started-guides/docker.md b/docs/getting-started-guides/docker.md index c905fdc957..a8b8de890a 100644 --- a/docs/getting-started-guides/docker.md +++ b/docs/getting-started-guides/docker.md @@ -74,7 +74,7 @@ parameters as follows: ``` NOTE: The above is specifically for GRUB2. - You can check the command line parameters passed to your kenel by looking at the + You can check the command line parameters passed to your kernel by looking at the output of /proc/cmdline: ```console diff --git a/docs/getting-started-guides/fedora/fedora_ansible_config.md b/docs/getting-started-guides/fedora/fedora_ansible_config.md index 7fb3fc530b..42d2b4a626 100644 --- a/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ b/docs/getting-started-guides/fedora/fedora_ansible_config.md @@ -187,7 +187,7 @@ cd ~/kubernetes/contrib/ansible/ That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
-**Show kubernets nodes** +**Show Kubernetes nodes** Run the following on the kube-master: diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index fd9289b4eb..01e0055610 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -657,7 +657,7 @@ This pod mounts several node file system directories using the `hostPath` volum authenticate external services, such as a cloud provider. - This is not required if you do not use a cloud provider (e.g. bare-metal). - The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the - node disk. These could instead be stored on a persistend disk, such as a GCE PD, or baked into the image. + node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image. - Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template). - Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl. diff --git a/docs/proposals/apiserver_watch.md b/docs/proposals/apiserver_watch.md index 5610ccbc68..a731c7f46c 100644 --- a/docs/proposals/apiserver_watch.md +++ b/docs/proposals/apiserver_watch.md @@ -67,14 +67,14 @@ When a client sends a watch request to apiserver, instead of redirecting it to etcd, it will cause: - registering a handler to receive all new changes coming from etcd - - iteratiting though a watch window, starting at the requested resourceVersion - to the head and sending filetered changes directory to the client, blocking + - iterating through a watch window, starting at the requested resourceVersion + to the head and sending filtered changes directly to the client, blocking the above until this iteration has caught up This will be done be creating a go-routine per watcher that will be responsible for performing the above. -The following section describes the proposal in more details, analizes some +The following section describes the proposal in more detail, analyzes some corner cases and divides the whole design in more fine-grained steps. diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md index 3aabeb265b..0f9a55e1df 100644 --- a/docs/user-guide/connecting-applications.md +++ b/docs/user-guide/connecting-applications.md @@ -238,8 +238,8 @@ Address 1: 10.0.116.146 ## Securing the Service Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need: -* Self signed certificates for https (unless you already have an identitiy certificate) -* An nginx server configured to use the cretificates +* Self-signed certificates for https (unless you already have an identity certificate) +* An nginx server configured to use the certificates * A [secret](secrets.md) that makes the certificates accessible to pods You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short: diff --git a/docs/user-guide/docker-cli-to-kubectl.md b/docs/user-guide/docker-cli-to-kubectl.md index a73b409347..fcb7d970ff 100644 --- a/docs/user-guide/docker-cli-to-kubectl.md +++ b/docs/user-guide/docker-cli-to-kubectl.md @@ -214,7 +214,7 @@ $ kubectl logs -f nginx-app-zibvs ``` -Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit.
Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invokation is separate. To see the output from a prevoius run in Kubernetes, do this: +Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead, it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this: ```console diff --git a/docs/user-guide/pod-states.md b/docs/user-guide/pod-states.md index ff9e661a9a..521d7e8564 100644 --- a/docs/user-guide/pod-states.md +++ b/docs/user-guide/pod-states.md @@ -58,7 +58,7 @@ A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1 * `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0. * `TCPSocketAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open. -* `HTTPGetAction`: performs an HTTP Get againsts the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400. +* `HTTPGetAction`: performs an HTTP Get against the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400. Each probe will have one of three results: diff --git a/docs/whatisk8s.md b/docs/whatisk8s.md index e7c00bba21..5ceeeab923 100644 --- a/docs/whatisk8s.md +++ b/docs/whatisk8s.md @@ -61,7 +61,7 @@ Here are some key points: * **Application-centric management**: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. This provides the simplicity of PaaS with the flexibility of IaaS and enables you to run much more than just [12-factor apps](http://12factor.net/). * **Dev and Ops separation of concerns**: - Provides separatation of build and deployment; therefore, decoupling applications from infrastructure. + Provides separation of build and deployment; therefore, decoupling applications from infrastructure. * **Agile application creation and deployment**: Increased ease and efficiency of container image creation compared to VM image use. * **Continuous development, integration, and deployment**: diff --git a/examples/cassandra/README.md b/examples/cassandra/README.md index 6183f0937c..ff51ddd499 100644 --- a/examples/cassandra/README.md +++ b/examples/cassandra/README.md @@ -244,7 +244,7 @@ spec: [Download example](cassandra-controller.yaml) -Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the resplication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1. Create this controller: diff --git a/examples/elasticsearch/README.md b/examples/elasticsearch/README.md index b9c9ffd04f..8a03b935f2 100644 --- a/examples/elasticsearch/README.md +++ b/examples/elasticsearch/README.md @@ -40,7 +40,7 @@ with [replication controllers](../../docs/user-guide/replication-controller.md). because multicast discovery will not find the other pod IPs needed to form a cluster. This image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running in a specified [namespace](../../docs/user-guide/namespaces.md) with a given label selector. The detected instances are used to form a list of peer hosts which -are used as part of the unicast discovery mechansim for Elasticsearch. The detection +are used as part of the unicast discovery mechanism for Elasticsearch. The detection of the peer nodes is done by a program which communicates with the Kubernetes API server to get a list of matching Elasticsearch pods. To enable authenticated communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret` diff --git a/examples/guestbook-go/README.md b/examples/guestbook-go/README.md index d9a6cac1b0..f27b3bb6f0 100644 --- a/examples/guestbook-go/README.md +++ b/examples/guestbook-go/README.md @@ -280,7 +280,7 @@ You can now play with the guestbook that you just created by opening it in a bro ### Step Eight: Cleanup -After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kuberentes replication controllers and services. +After you're done playing with the guestbook, you can clean up by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services. Delete all the resources by running the following `kubectl delete -f` *`filename`* command: diff --git a/examples/hazelcast/README.md b/examples/hazelcast/README.md index ddb1d3ad61..5ae17f5c69 100644 --- a/examples/hazelcast/README.md +++ b/examples/hazelcast/README.md @@ -141,7 +141,7 @@ spec: [Download example](hazelcast-controller.yaml) -There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingle. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later). +There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45.
However, it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps a Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later). You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 cpu core. diff --git a/examples/k8petstore/README.md b/examples/k8petstore/README.md index c2b702d60e..be59840d9e 100644 --- a/examples/k8petstore/README.md +++ b/examples/k8petstore/README.md @@ -89,7 +89,7 @@ The web front end provides users an interface for watching pet store transaction To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a -shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But thats not nearly +shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But that's not nearly as fun, and its not a good test of a real world scenario where payloads scale and have lots of information content. diff --git a/examples/meteor/README.md b/examples/meteor/README.md index b56aec8fe3..8057eed5e6 100644 --- a/examples/meteor/README.md +++ b/examples/meteor/README.md @@ -141,7 +141,7 @@ your cluster. Edit [`meteor-controller.json`](meteor-controller.json) and make sure the `image:` points to the container you just pushed to the Docker Hub or GCR. -We will need to provide MongoDB a persistent Kuberetes volume to +We will need to provide MongoDB a persistent Kubernetes volume to store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for options. We're going to use Google Compute Engine persistent disks. Create the MongoDB disk by running: diff --git a/examples/openshift-origin/README.md b/examples/openshift-origin/README.md index 4073a7d7e0..72e9c32f47 100644 --- a/examples/openshift-origin/README.md +++ b/examples/openshift-origin/README.md @@ -98,7 +98,7 @@ $ cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${ The output from this command will contain a single file that has all the required information needed to connect to your Kubernetes cluster that you previously provisioned. This file should be considered sensitive, so do not share this file with untrusted parties. -We will later use this file to tell OpenShift how to bootstap its own configuration. +We will later use this file to tell OpenShift how to bootstrap its own configuration. ### Step 2: Create an External Load Balancer to Route Traffic to OpenShift