mirror of https://github.com/k3s-io/k3s
Copy edits for spelling errors and typos
Signed-off-by: Ed Costello <epc@epcostello.com>
pull/6/head
parent 3ce7fe8310
commit 05714d416b
@@ -7,7 +7,7 @@ Documentation for previous releases is available in their respective branches:
 * [v0.18.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.18/docs)
 * [v0.17.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.17/docs)

-* The [User's guide](user-guide.md) is for anyone who wants to run programs and services on an exisiting Kubernetes cluster.
+* The [User's guide](user-guide.md) is for anyone who wants to run programs and services on an existing Kubernetes cluster.

 * The [Cluster Admin's guide](cluster-admin-guide.md) is for anyone setting up a Kubernetes cluster or administering it.
@@ -197,7 +197,7 @@ As mentioned above, you use the `kubectl cluster-info` command to retrieve the s

 #### Using web browsers to access services running on the cluster
 You may be able to put a apiserver proxy url into the address bar of a browser. However:
-- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accespt basic auth,
+- Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth,
  but your cluster may not be configured to accept basic auth.
 - Some web apps may not work, particularly those with client side javascript that construct urls in a
  way that is unaware of the proxy path prefix.
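The excerpt above is easier to follow with a concrete request. Below is a minimal sketch of reaching a service through the apiserver proxy from the command line, where a bearer token can be passed explicitly (something browsers usually cannot do). The apiserver address, token location, service name, and the v1-era proxy path are assumptions for illustration, not part of the original text.

```sh
# Look up the master and proxy URLs the cluster advertises.
kubectl cluster-info

# Call a service through the apiserver proxy, passing the token by hand.
APISERVER=https://104.197.5.247            # illustrative master address
TOKEN=$(cat ~/.kube/token)                 # hypothetical file holding a saved bearer token
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/proxy/namespaces/default/services/elasticsearch-logging/"
```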
@@ -21,7 +21,7 @@ Specific scenarios:
 - Apiserver backing storage lost
   - Results
     - apiserver should fail to come up.
-    - kubelets will not be able to reach it but will continute to run the same pods and provide the same service proxying.
+    - kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying.
     - manual recovery or recreation of apiserver state necessary before apiserver is restarted.
 - Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes
   - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver
@@ -103,11 +103,11 @@ Reasons to have multiple clusters include:
 - test clusters to canary new Kubernetes releases or other cluster software.

 ### Selecting the right number of clusters
-The selection of the number of kubernetes clusters may be a relatively static choice, only revisted occasionally.
+The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
 By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
 load and growth.

-To pick the number of clusters, first, decide which regions you need to be in to have adequete latency to all your end users, for services that will run
+To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
 on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
 be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
 Call the number of regions to be in `R`.
@@ -8,7 +8,7 @@ It assumes some familiarity with concepts in the [User Guide](user-guide.md).
 There are many different examples of how to setup a kubernetes cluster. Many of them are listed in this
 [matrix](getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*.

-Before chosing a particular guide, here are some things to consider:
+Before choosing a particular guide, here are some things to consider:
 - Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
   models are supported, but some distros are better for one case or the other.
 - Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting
@@ -38,7 +38,7 @@ The proposed solution will provide a range of options for setting up and maintai

 The building blocks of an easier solution:

-* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly idenitfy the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
+* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
 * [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate.
 * **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out of band information and automatically approves/rejects based on other external factors.
 * **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself.
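The trust-chain and client-certificate idea in the hunk above can be sketched with plain openssl: a dedicated cluster CA signs a per-node client certificate, and only that CA (not the system CAs) is trusted by the other components. Every file name and CN below is illustrative, not something the proposal prescribes.

```sh
# Create a dedicated cluster CA (the explicit trust chain).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=kube-ca" -out ca.crt

# Mint a client certificate for one kubelet, signed by that CA (used for AuthN over TLS).
openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -subj "/CN=kubelet-node1" -out kubelet.csr
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out kubelet.crt
```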
@@ -25,7 +25,7 @@ Instead of a single Timestamp, each event object [contains](https://github.com/G

 Each binary that generates events:
 * Maintains a historical record of previously generated events:
-  * Implmented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
+  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go).
  * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
   * ```event.Source.Component```
   * ```event.Source.Host```
@@ -55,7 +55,7 @@ available to subsequent expansions.
 ### Use Case: Variable expansion in command

 Users frequently need to pass the values of environment variables to a container's command.
-Currently, Kubernetes does not perform any expansion of varibles. The workaround is to invoke a
+Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a
 shell in the container's command and have the shell perform the substitution, or to write a wrapper
 script that sets up the environment and runs the command. This has a number of drawbacks:
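For readers skimming the hunk above, this is the shell-wrapper workaround it refers to, as a minimal sketch; the binary name and variable are made up for illustration.

```sh
# Without expansion support, the container command has to invoke a shell so that
# the shell (not Kubernetes) substitutes the environment variable at runtime, e.g.
#   command: ["/bin/sh", "-c", "redis-server --port $MASTER_PORT"]
# which is equivalent to running:
/bin/sh -c 'redis-server --port $MASTER_PORT'
```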
@@ -116,7 +116,7 @@ expanded, then `$(VARIABLE_NAME)` should be present in the output.

 Although the `$(var)` syntax does overlap with the `$(command)` form of command substitution
 supported by many shells, because unexpanded variables are present verbatim in the output, we
-expect this will not present a problem to many users. If there is a collision between a varible
+expect this will not present a problem to many users. If there is a collision between a variable
 name and command substitution syntax, the syntax can be escaped with the form `$$(VARIABLE_NAME)`,
 which will evaluate to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not.
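A small worked example of the escaping rule described above, assuming a hypothetical variable VAR_A=value-a is defined for the container; it is written as comments because the expansion is performed by Kubernetes, not by a shell.

```sh
#   input string          expanded result
#   "$(VAR_A)"        ->  "value-a"
#   "$$(VAR_A)"       ->  "$(VAR_A)"           # escaped: passed through for the shell/app to see
#   "$(NO_SUCH_VAR)"  ->  "$(NO_SUCH_VAR)"     # unexpandable variables appear verbatim
```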
@@ -22,13 +22,13 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo

 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:

-1. k8s admin - administers a kubernetes cluster and has access to the undelying components of the system
+1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system
 2. k8s project administrator - administrates the security of a small subset of the cluster
 3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources

 Automated process users fall into the following categories:

-1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources indepedent of the human users attached to a project
+1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project
 2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles
@@ -13,7 +13,7 @@ Processes in Pods may need to call the Kubernetes API. For example:
 They also may interact with services other than the Kubernetes API, such as:
 - an image repository, such as docker -- both when the images are pulled to start the containers, and for writing
   images in the case of pods that generate images.
-- accessing other cloud services, such as blob storage, in the context of a larged, integrated, cloud offering (hosted
+- accessing other cloud services, such as blob storage, in the context of a large, integrated, cloud offering (hosted
   or private).
 - accessing files in an NFS volume attached to the pod
@@ -22,7 +22,7 @@ The value of that label is the hash of the complete JSON representation of the``
 If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
 To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
 * ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
-* ```update-partner``` A pointer to the replicaiton controller resource that is the other half of this update (syntax ```<name>``` the namespace is assumed to be identical to the namespace of this replication controller.)
+* ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>``` the namespace is assumed to be identical to the namespace of this replication controller.)

 Recovery is achieved by issuing the same command again:
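As a rough sketch of what "issuing the same command again" looks like in practice, re-running an interrupted rolling update lets the tool read the `desired-replicas` and `update-partner` annotations and continue where it stopped. The controller names and image tag below are hypothetical.

```sh
# First attempt, interrupted part-way through (Ctrl-C, crash, lost connection, ...):
kubectl rolling-update frontend-v1 frontend-v2 --image=example/frontend:v2

# Re-issue the identical command; the annotations on the two replication controllers
# let the update resume rather than start over:
kubectl rolling-update frontend-v1 frontend-v2 --image=example/frontend:v2
```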
@@ -8,7 +8,7 @@ First and foremost: as a potential contributor, your changes and ideas are welco

 ## Code reviews

-All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligately) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours.
+All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligatorily) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours.

 Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe).
@@ -164,7 +164,7 @@ frontend-controller-oh43e 10.2.2.22 php-redis kubernetes/example-guestboo

 ## Exposing the app to the outside world

-To makes sure the app is working, you probably want to load it in the browser. For accessing the Guesbook service from the outside world, an Azure endpoint needs to be created like shown on the picture below.
+To makes sure the app is working, you probably want to load it in the browser. For accessing the Guestbook service from the outside world, an Azure endpoint needs to be created like shown on the picture below.

 ![Creating an endpoint](external_access.png)
@@ -77,7 +77,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi

 This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.

-Regardless, you need to add the following to the docker comamnd line:
+Regardless, you need to add the following to the docker command line:
 ```sh
 --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
 ```
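One hedged way to apply the flags from the hunk above on a distro that uses ```/etc/default/docker```: source the environment file flannel writes (commonly ```/run/flannel/subnet.env```, though the path can differ) and append the flags to DOCKER_OPTS. The exact file locations are assumptions, not mandated by the guide.

```sh
# Pick up FLANNEL_SUBNET and FLANNEL_MTU from the file flannel generated.
. /run/flannel/subnet.env

# Append the docker flags; DOCKER_OPTS is honoured by Debian/Ubuntu-style init scripts.
echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" >> /etc/default/docker
```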
@@ -61,7 +61,7 @@ You now need to edit the docker configuration to activate new flags. Again, thi

 This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.

-Regardless, you need to add the following to the docker comamnd line:
+Regardless, you need to add the following to the docker command line:
 ```sh
 --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
 ```
@@ -14,7 +14,7 @@ Ansible will take care of the rest of the configuration for you - configuring ne

 ## Architecture of the cluster

-A Kubernetes cluster reqiures etcd, a master, and n minions, so we will create a cluster with three hosts, for example:
+A Kubernetes cluster requires etcd, a master, and n minions, so we will create a cluster with three hosts, for example:

 ```
 fed1 (master,etcd) = 192.168.121.205
@@ -130,7 +130,7 @@ Get info on the pod:

 To test the hello app, we need to locate which minion is hosting
 the container. Better tooling for using juju to introspect container
-is in the works but for we can use `juju run` and `juju status` to find
+is in the works but we can use `juju run` and `juju status` to find
 our hello app.

 Exit out of our ssh session and run:
@@ -15,7 +15,7 @@ export LOGGING_DESTINATION=elasticsearch
 ```

 This will instantiate a [Fluentd](http://www.fluentd.org/) instance on each node which will
-collect all the Dcoker container log files. The collected logs will
+collect all the Docker container log files. The collected logs will
 be targeted at an [Elasticsearch](http://www.elasticsearch.org/) instance assumed to be running on the
 local node and accepting log information on port 9200. This can be accomplished
 by writing a pod specification and service specification to define an
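A quick, hedged sanity check of the setup described above, assuming an Elasticsearch instance really is listening on the node's port 9200:

```sh
# Should return a small JSON banner with the Elasticsearch version if the instance is up.
curl http://localhost:9200/
```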
@@ -61,7 +61,7 @@ Then the `roles ` variable defines the role of above machine in the same order,

 The `NUM_MINIONS` variable defines the total number of minions.

-The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range accordin to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range.
+The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range according to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range.

     10.0.0.0 - 10.255.255.255 (10/8 prefix)
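A minimal sketch of how the two variables named above might be set in the guide's config file; the node count and the 192.168.3.0/24 range are illustrative values chosen from the RFC 1918 space and must not overlap your own private network.

```sh
# cluster/ubuntu/config-default.sh (illustrative values)
export NUM_MINIONS=${NUM_MINIONS:-3}            # total number of minions
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24  # service IP range, picked from RFC 1918 space
```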
@@ -114,7 +114,7 @@ Also you can run kubernetes [guest-example](https://github.com/GoogleCloudPlatfo

 #### IV. Deploy addons

-After the previous parts, you will have a working k8s cluster, this part will teach you how to deploy addones like dns onto the existing cluster.
+After the previous parts, you will have a working k8s cluster, this part will teach you how to deploy addons like dns onto the existing cluster.

 The configuration of dns is configured in cluster/ubuntu/config-default.sh.
@@ -150,7 +150,7 @@ After some time, you can use `$ kubectl get pods` to see the dns pod is running

 Generally, what this approach did is quite simple:

-1. Download and copy binaries and configuration files to proper dirctories on every node
+1. Download and copy binaries and configuration files to proper directories on every node

 2. Configure `etcd` using IPs based on input from user
@@ -4,7 +4,7 @@ All objects in the Kubernetes REST API are unambiguously identified by a Name an
 For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).

 ## Names
-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restructions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
+Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.

 ## UIDs
 UID are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
@@ -15,7 +15,7 @@ your image.
 ## Using a Private Registry
 Private registries may require keys to read images from them.
 Credentials can be provided in several ways:
-  - Using Google Container Registy
+  - Using Google Container Registry
    - Per-cluster
      - automatically configured on GCE/GKE
      - all pods can read the project's private registry
@@ -90,7 +90,7 @@ The kube-controller-manager has several options.
 The period for syncing nodes from cloudprovider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster.

 **--pod-eviction-timeout**=5m0s
-The grace peroid for deleting pods on failed nodes.
+The grace period for deleting pods on failed nodes.

 **--port**=10252
 The port that the controller-manager's http service runs on
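For context on the flags in the hunk above, a hedged example invocation; only the two documented flags are shown, with their default values, and any other flags a real deployment needs (for example the master address) are omitted.

```sh
# Start the controller manager with an explicit eviction grace period and HTTP port.
kube-controller-manager --pod-eviction-timeout=5m0s --port=10252
```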
@@ -118,7 +118,7 @@ The kube\-controller\-manager has several options.

 .PP
 \fB\-\-pod\-eviction\-timeout\fP=5m0s
-The grace peroid for deleting pods on failed nodes.
+The grace period for deleting pods on failed nodes.

 .PP
 \fB\-\-port\fP=10252
@@ -4,9 +4,9 @@ This document serves as a proposal for high availability of the scheduler and co
 ## Design Options
 For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en)

-1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.
+1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.

-2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the apprach that this proposal will leverage.
+2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the approach that this proposal will leverage.

 3. Active-Active (Load Balanced): Clients can simply load-balance across any number of servers that are currently running. Their general availability can be continuously updated, or published, such that load balancing only occurs across active participants. This aspect of HA is outside of the scope of *this* proposal because there is already a partial implementation in the apiserver.
@@ -16,7 +16,7 @@ Implementation References:
 * [etcd](https://groups.google.com/forum/#!topic/etcd-dev/EbAa4fjypb4)
 * [initialPOC](https://github.com/rrati/etcd-ha)

-In HA, the apiserver will provide an api for sets of replicated clients to do master election: acquire the lease, renew the lease, and release the lease. This api is component agnostic, so a client will need to provide the component type and the lease duration when attemping to become master. The lease duration should be tuned per component. The apiserver will attempt to create a key in etcd based on the component type that contains the client's hostname/ip and port information. This key will be created with a ttl from the lease duration provided in the request. Failure to create this key means there is already a master of that component type, and the error from etcd will propigate to the client. Successfully creating the key means the client making the request is the master. Only the current master can renew the lease. When renewing the lease, the apiserver will update the existing key with a new ttl. The location in etcd for the HA keys is TBD.
+In HA, the apiserver will provide an api for sets of replicated clients to do master election: acquire the lease, renew the lease, and release the lease. This api is component agnostic, so a client will need to provide the component type and the lease duration when attempting to become master. The lease duration should be tuned per component. The apiserver will attempt to create a key in etcd based on the component type that contains the client's hostname/ip and port information. This key will be created with a ttl from the lease duration provided in the request. Failure to create this key means there is already a master of that component type, and the error from etcd will propagate to the client. Successfully creating the key means the client making the request is the master. Only the current master can renew the lease. When renewing the lease, the apiserver will update the existing key with a new ttl. The location in etcd for the HA keys is TBD.

 The first component to request leadership will become the master. All other components of that type will fail until the current leader releases the lease, or fails to renew the lease within the expiration time. On startup, all components should attempt to become master. The component that succeeds becomes the master, and should perform all functions of that component. The components that fail to become the master should not perform any tasks and sleep for their lease duration and then attempt to become the master again. A clean shutdown of the leader will cause a release of the lease and a new master will be elected.
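The TTL-key mechanics the proposal builds on can be sketched directly against the etcd v2 CLI, bypassing the proposed apiserver API; the key path is made up (the text itself says the real location in etcd is TBD) and the 30-second TTL and address are arbitrary.

```sh
# Acquire the lease: creating the key only succeeds if no other master holds it.
etcdctl mk --ttl 30 /ha/scheduler "10.0.0.1:10251"

# Renew the lease: only the current master updates the key, refreshing its TTL.
etcdctl update --ttl 30 /ha/scheduler "10.0.0.1:10251"
```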