mirror of https://github.com/k3s-io/k3s
Copy edits for typos
parent 2bfa9a1f98
commit 35a5eda585
@@ -73,7 +73,7 @@ Kubernetes documentation is organized into several categories.
 - in the [Kubernetes Cluster Admin Guide](docs/admin/README.md)
 - **Developer and API documentation**
 - for people who want to write programs that access the Kubernetes API, write plugins
-or extensions, or modify the core Kubernete code
+or extensions, or modify the core Kubernetes code
 - in the [Kubernetes Developer Guide](docs/devel/README.md)
 - see also [notes on the API](docs/api.md)
 - see also the [API object documentation](http://kubernetes.io/third_party/swagger-ui/), a
@@ -6,7 +6,7 @@ Kubernetes clusters. The add-ons are visible through the API (they can be listed
 using ```kubectl```), but manipulation of these objects is discouraged because
 the system will bring them back to the original state, in particular:
 * if an add-on is stopped, it will be restarted automatically
-* if an add-on is rolling-updated (for Replication Controlers), the system will stop the new version and
+* if an add-on is rolling-updated (for Replication Controllers), the system will stop the new version and
 start the old one again (or perform rolling update to the old version, in the
 future).

@@ -164,7 +164,7 @@ If you see that, DNS is working correctly.

 ## How does it work?
 SkyDNS depends on etcd for what to serve, but it doesn't really need all of
-what etcd offers (at least not in the way we use it). For simplicty, we run
+what etcd offers (at least not in the way we use it). For simplicity, we run
 etcd and SkyDNS together in a pod, and we do not try to link etcd instances
 across replicas. A helper container called [kube2sky](kube2sky/) also runs in
 the pod and acts a bridge between Kubernetes and SkyDNS. It finds the
@@ -26,7 +26,7 @@ mutation (insertion or removal of a dns entry) before giving up and crashing.

 `--etcd-server`: The etcd server that is being used by skydns.

-`--kube_master_url`: URL of kubernetes master. Reuired if `--kubecfg_file` is not set.
+`--kube_master_url`: URL of kubernetes master. Required if `--kubecfg_file` is not set.

 `--kubecfg_file`: Path to kubecfg file that contains the master URL and tokens to authenticate with the master.

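For orientation only: a sketch of how the kube2sky bridge might be started with the flags this hunk touches. The etcd address and kubeconfig path are illustrative assumptions, not values taken from the patch.

```sh
# Authenticate to the apiserver via a kubecfg file; without it,
# --kube_master_url would be required instead.
kube2sky \
  --etcd-server=http://127.0.0.1:4001 \
  --kubecfg_file=/etc/kubernetes/kubeconfig
```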
@@ -14,7 +14,7 @@ containerized applications.

 The [Juju](https://juju.ubuntu.com) system provides provisioning and
 orchestration across a variety of clouds and bare metal. A juju bundle
-describes collection of services and how they interelate. `juju
+describes collection of services and how they interrelate. `juju
 quickstart` allows you to bootstrap a deployment environment and
 deploy a bundle.

@@ -136,7 +136,7 @@ configuration on it's own

 ## Installing the kubectl outside of kubernetes master machine

-Download the Kuberentes release from:
+Download the Kubernetes release from:
 https://github.com/GoogleCloudPlatform/kubernetes/releases and extract
 the release, you can then just directly use the cli binary at
 ./kubernetes/platforms/linux/amd64/kubectl
@@ -1,6 +1,6 @@
 The
 [SaltStack pillar](http://docs.saltstack.com/en/latest/topics/pillar/)
-data is partially statically dervied from the contents of this
+data is partially statically derived from the contents of this
 directory. The bulk of the pillars are hard to perceive from browsing
 this directory, though, because they are written into
 [cluster-params.sls](cluster-params.sls) at cluster inception.
@@ -1,6 +1,6 @@
 # Exec healthz server

-The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncracies of container runtime exec implemetations.
+The exec healthz server is a sidecar container meant to serve as a liveness-exec-over-http bridge. It isolates pods from the idiosyncrasies of container runtime exec implementations.

 ## Examples:

@@ -1,5 +1,5 @@
 # Collecting log files from within containers with Fluentd and sending them to Elasticsearch.
-*Note that this only works for clusters with an Elastisearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*
+*Note that this only works for clusters with an ElasticSearch service. If your cluster is logging to Google Cloud Logging instead (e.g. if you're using Container Engine), see [this guide](/contrib/logging/fluentd-sidecar-gcp/) instead.*

 This directory contains the source files needed to make a Docker image that collects log files from arbitrary files within a container using [Fluentd](http://www.fluentd.org/) and sends them to the cluster's Elasticsearch service.
 The image is designed to be used as a sidecar container as part of a pod.
@@ -34,7 +34,7 @@ In this case, if there are problems launching a replacement scheduler process th
 ##### Command Line Arguments

 - `--ha` is required to enable scheduler HA and multi-scheduler leader election.
-- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identicial across schedulers.
+- `--km_path` or else (`--executor_path` and `--proxy_path`) should reference non-local-file URI's and must be identical across schedulers.

 If you have HDFS installed on your slaves then you can specify HDFS URI locations for the binaries:

@@ -25,7 +25,7 @@ Looks open enough :).

 1. Now, you can start this pod, like so `kubectl create -f contrib/prometheus/prometheus-all.json`. This ReplicationController will maintain both prometheus, the server, as well as promdash, the visualization tool. You can then configure promdash, and next time you restart the pod - you're configuration will be remain (since the promdash directory was mounted as a local docker volume).

-1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (locahost:9090)to as a promdash server, and create a dashboard according to the promdash directions.
+1. Finally, you can simply access localhost:3000, which will have promdash running. Then, add the prometheus server (localhost:9090)to as a promdash server, and create a dashboard according to the promdash directions.

 ## Prometheus

@@ -52,14 +52,14 @@ This is a v1 api based, containerized prometheus ReplicationController, which sc

 1. Use kubectl to handle auth & proxy the kubernetes API locally, emulating the old KUBERNETES_RO service.

-1. The list of services to be monitored is passed as a command line aguments in
+1. The list of services to be monitored is passed as a command line arguments in
 the yaml file.

 1. The startup scripts assumes that each service T will have
 2 environment variables set ```T_SERVICE_HOST``` and ```T_SERVICE_PORT```

 1. Each can be configured manually in yaml file if you want to monitor something
-that is not a regular Kubernetes service. For example, you can add comma delimted
+that is not a regular Kubernetes service. For example, you can add comma delimited
 endpoints which can be scraped like so...
 ```
 - -t
@@ -77,7 +77,7 @@ at port 9090.
 # TODO

 - We should publish this image into the kube/ namespace.
-- Possibly use postgre or mysql as a promdash database.
+- Possibly use Postgres or mysql as a promdash database.
 - stop using kubectl to make a local proxy faking the old RO port and build in
 real auth capabilities.

@@ -191,7 +191,7 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;'
 ### Troubleshooting:
 - If you can curl or netcat the endpoint from the pod (with kubectl exec) and not from the node, you have not specified hostport and containerport.
 - If you can hit the ips from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network.
-- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not runing.
+- If you can't hit the ips from within the container, either haproxy or the service_loadbalacer script is not running.
 1. Use ps in the pod
 2. sudo restart haproxy in the pod
 3. cat /etc/haproxy/haproxy.cfg in the pod
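The three numbered checks in the hunk above are run inside the load-balancer pod; with `kubectl exec` they can be driven from outside it. A minimal sketch, assuming a hypothetical pod name:

```sh
# 1. Verify haproxy and the service_loadbalancer script are running
kubectl exec service-loadbalancer-xxxxx -- ps aux

# 2. Restart haproxy inside the pod
kubectl exec service-loadbalancer-xxxxx -- sudo restart haproxy

# 3. Inspect the generated haproxy configuration
kubectl exec service-loadbalancer-xxxxx -- cat /etc/haproxy/haproxy.cfg
```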
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

 This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
 upgrading your cluster's
-master and worker nodes, performing node maintainence (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
+master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
 running cluster.

 ## Creating and configuring a Cluster
@@ -132,7 +132,7 @@ For pods with a replication controller, the pod will eventually be replaced by a

 For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.

-Perform maintainence work on the node.
+Perform maintenance work on the node.

 Make the node schedulable again:

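For context, the surrounding workflow (drain, maintain, make schedulable again) looks roughly like this with a current kubectl. The `cordon`/`drain`/`uncordon` subcommands postdate the release this document targets, which patched `spec.unschedulable` directly, and the node name is a placeholder.

```sh
# Mark the node unschedulable and evict its pods
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets

# ... perform maintenance work on the node (e.g. a kernel upgrade) ...

# Make the node schedulable again
kubectl uncordon node-1
```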
@@ -41,7 +41,7 @@ objects.

 Access Control: give *only* kube-apiserver read/write access to etcd. You do not
 want apiserver's etcd exposed to every node in your cluster (or worse, to the
-internet at large), because access to etcd is equivilent to root in your
+internet at large), because access to etcd is equivalent to root in your
 cluster.

 Data Reliability: for reasonable safety, either etcd needs to be run as a
@@ -41,7 +41,7 @@ Documentation for other releases can be found at
 The kubelet is the primary "node agent" that runs on each
 node. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object
 that describes a pod. The kubelet takes a set of PodSpecs that are provided through
-various echanisms (primarily through the apiserver) and ensures that the containers
+various mechanisms (primarily through the apiserver) and ensures that the containers
 described in those PodSpecs are running and healthy.

 Other than from an PodSpec from the apiserver, there are three ways that a container
@@ -84,7 +84,7 @@ TokenController runs as part of controller-manager. It acts asynchronously. It:
 - observes serviceAccount creation and creates a corresponding Secret to allow API access.
 - observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
 - observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
-- observes secret deleteion and removes a reference from the corresponding ServiceAccount if needed
+- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed

 #### To create additional API tokens

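The section this hunk ends on ("To create additional API tokens") amounts to creating an annotated, empty Secret that the TokenController then populates. A minimal sketch; the Secret name and the `default` ServiceAccount are only examples.

```sh
# TokenController observes this Secret and fills in the token data.
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: default-token-extra
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
```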
@@ -87,7 +87,7 @@ Note: If you have write access to the main repository at github.com/GoogleCloudP
 git remote set-url --push upstream no_push
 ```

-### Commiting changes to your fork
+### Committing changes to your fork

 ```sh
 git commit
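For completeness, the commit-and-push flow that section goes on to describe typically looks like this; branch and remote names are illustrative.

```sh
git checkout -b myfeature     # work on a topic branch
git add -A
git commit -m "Fix typos in docs"
git push origin myfeature     # push to your fork, then open a pull request
```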
@@ -223,7 +223,7 @@ frontend-z9oxo 1/1 Running 0 41s

 ## Exposing the app to the outside world

-There is no native Azure load-ballancer support in Kubernets 1.0, however here is how you can expose the Guestbook app to the Internet.
+There is no native Azure load-balancer support in Kubernetes 1.0, however here is how you can expose the Guestbook app to the Internet.

 ```
 ./expose_guestbook_app_port.sh ./output/kube_1c1496016083b4_ssh_conf
@@ -87,7 +87,7 @@ cd kubernetes/cluster/docker-multinode

 `Master done!`

-See [here](docker-multinode/master.md) for detailed instructions explaination.
+See [here](docker-multinode/master.md) for detailed instructions explanation.

 ## Adding a worker node

@@ -104,7 +104,7 @@ cd kubernetes/cluster/docker-multinode

 `Worker done!`

-See [here](docker-multinode/worker.md) for detailed instructions explaination.
+See [here](docker-multinode/worker.md) for detailed instructions explanation.

 ## Testing your cluster

@@ -74,7 +74,7 @@ parameters as follows:
 ```

 NOTE: The above is specifically for GRUB2.
-You can check the command line parameters passed to your kenel by looking at the
+You can check the command line parameters passed to your kernel by looking at the
 output of /proc/cmdline:

 ```console
@@ -187,7 +187,7 @@ cd ~/kubernetes/contrib/ansible/

 That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.

-**Show kubernets nodes**
+**Show kubernetes nodes**

 Run the following on the kube-master:

@@ -657,7 +657,7 @@ This pod mounts several node file system directories using the `hostPath` volum
 authenticate external services, such as a cloud provider.
 - This is not required if you do not use a cloud provider (e.g. bare-metal).
 - The `/srv/kubernetes` mount allows the apiserver to read certs and credentials stored on the
-node disk. These could instead be stored on a persistend disk, such as a GCE PD, or baked into the image.
+node disk. These could instead be stored on a persistent disk, such as a GCE PD, or baked into the image.
 - Optionally, you may want to mount `/var/log` as well and redirect output there (not shown in template).
 - Do this if you prefer your logs to be accessible from the root filesystem with tools like journalctl.

@@ -67,14 +67,14 @@ When a client sends a watch request to apiserver, instead of redirecting it to
 etcd, it will cause:

 - registering a handler to receive all new changes coming from etcd
-- iteratiting though a watch window, starting at the requested resourceVersion
-to the head and sending filetered changes directory to the client, blocking
+- iterating though a watch window, starting at the requested resourceVersion
+to the head and sending filtered changes directory to the client, blocking
 the above until this iteration has caught up

 This will be done be creating a go-routine per watcher that will be responsible
 for performing the above.

-The following section describes the proposal in more details, analizes some
+The following section describes the proposal in more details, analyzes some
 corner cases and divides the whole design in more fine-grained steps.


@@ -238,8 +238,8 @@ Address 1: 10.0.116.146
 ## Securing the Service

 Till now we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:
-* Self signed certificates for https (unless you already have an identitiy certificate)
-* An nginx server configured to use the cretificates
+* Self signed certificates for https (unless you already have an identity certificate)
+* An nginx server configured to use the certificates
 * A [secret](secrets.md) that makes the certificates accessible to pods

 You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
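As a rough illustration of the first and last items in that list (the linked https-nginx example is the authoritative walkthrough; the file names and CN are placeholders, and `kubectl create secret tls` is a newer shortcut than this document's era):

```sh
# Generate a self-signed certificate/key pair for the nginx Service
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc"

# Wrap it in a secret so pods can mount the certificate and key
kubectl create secret tls nginxsecret \
  --key /tmp/nginx.key --cert /tmp/nginx.crt
```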
@@ -214,7 +214,7 @@ $ kubectl logs -f nginx-app-zibvs

 ```

-Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invokation is separate. To see the output from a prevoius run in Kubernetes, do this:
+Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:

 ```console
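The console block that follows in the original document is elided here; the behaviour described comes down to the `--previous` flag of `kubectl logs`, shown with the pod name from the surrounding example:

```sh
# Show output from the previous run of the container in this pod
kubectl logs --previous nginx-app-zibvs
```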
@@ -58,7 +58,7 @@ A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1

 * `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0.
 * `TCPSocketAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open.
-* `HTTPGetAction`: performs an HTTP Get againsts the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.
+* `HTTPGetAction`: performs an HTTP Get against the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400.

 Each probe will have one of three results:

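For reference, an `HTTPGetAction` liveness probe as it appears in a pod manifest; a minimal sketch in which the image, path, port, and timings are illustrative.

```sh
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:            # HTTPGetAction: a 2xx/3xx response means healthy
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF
```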
@@ -61,7 +61,7 @@ Here are some key points:
 * **Application-centric management**:
 Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources. This provides the simplicity of PaaS with the flexibility of IaaS and enables you to run much more than just [12-factor apps](http://12factor.net/).
 * **Dev and Ops separation of concerns**:
-Provides separatation of build and deployment; therefore, decoupling applications from infrastructure.
+Provides separation of build and deployment; therefore, decoupling applications from infrastructure.
 * **Agile application creation and deployment**:
 Increased ease and efficiency of container image creation compared to VM image use.
 * **Continuous development, integration, and deployment**:
@@ -244,7 +244,7 @@ spec:
 [Download example](cassandra-controller.yaml)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

-Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the resplication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above, it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute which contains the controller's selector query, and the ```replicas``` attribute which specifies the desired number of replicas, in this case 1.

 Create this controller:

@@ -40,7 +40,7 @@ with [replication controllers](../../docs/user-guide/replication-controller.md).
 because multicast discovery will not find the other pod IPs needed to form a cluster. This
 image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running in a specified [namespace](../../docs/user-guide/namespaces.md) with a given
 label selector. The detected instances are used to form a list of peer hosts which
-are used as part of the unicast discovery mechansim for Elasticsearch. The detection
+are used as part of the unicast discovery mechanism for Elasticsearch. The detection
 of the peer nodes is done by a program which communicates with the Kubernetes API
 server to get a list of matching Elasticsearch pods. To enable authenticated
 communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
@@ -280,7 +280,7 @@ You can now play with the guestbook that you just created by opening it in a bro

 ### Step Eight: Cleanup <a id="step-eight"></a>

-After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kuberentes replication controllers and services.
+After you're done playing with the guestbook, you can cleanup by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.

 Delete all the resources by running the following `kubectl delete -f` *`filename`* command:

@@ -141,7 +141,7 @@ spec:
 [Download example](hazelcast-controller.yaml)
 <!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->

-There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingle. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
+There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).

 You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 cpu core.

@@ -89,7 +89,7 @@ The web front end provides users an interface for watching pet store transaction

 To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a

-shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But thats not nearly
+shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But that's not nearly

 as fun, and its not a good test of a real world scenario where payloads scale and have lots of information content.

@@ -141,7 +141,7 @@ your cluster. Edit [`meteor-controller.json`](meteor-controller.json)
 and make sure the `image:` points to the container you just pushed to
 the Docker Hub or GCR.

-We will need to provide MongoDB a persistent Kuberetes volume to
+We will need to provide MongoDB a persistent Kubernetes volume to
 store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for
 options. We're going to use Google Compute Engine persistent
 disks. Create the MongoDB disk by running:
@@ -98,7 +98,7 @@ $ cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${

 The output from this command will contain a single file that has all the required information needed to connect to your Kubernetes cluster that you previously provisioned. This file should be considered sensitive, so do not share this file with untrusted parties.

-We will later use this file to tell OpenShift how to bootstap its own configuration.
+We will later use this file to tell OpenShift how to bootstrap its own configuration.

 ### Step 2: Create an External Load Balancer to Route Traffic to OpenShift
