mirror of https://github.com/k3s-io/k3s
Fix capitalization of Kubernetes in the documentation.
parent 7536db6d53
commit acd1bed70e
@@ -40,8 +40,8 @@ Documentation for other releases can be found at
 a Kubernetes cluster or administering it.
 * The [Developer guide](devel/README.md) is for anyone wanting to write
-programs that access the kubernetes API, write plugins or extensions, or
-modify the core code of kubernetes.
+programs that access the Kubernetes API, write plugins or extensions, or
+modify the core code of Kubernetes.
 * The [Kubectl Command Line Interface](user-guide/kubectl/kubectl.md) is a detailed reference on
 the `kubectl` CLI.

@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 # Configuring APIserver ports
-This document describes what ports the kubernetes apiserver
+This document describes what ports the Kubernetes apiserver
 may serve on and how to reach them. The audience is
 cluster administrators who want to customize their cluster
 or understand the details.

@@ -44,7 +44,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
 ## Ports and IPs Served On
-The Kubernetes API is served by the Kubernetes APIServer process. Typically,
+The Kubernetes API is served by the Kubernetes apiserver process. Typically,
 there is one of these running on a single kubernetes-master node.
 By default the Kubernetes APIserver serves HTTP on 2 ports:

@@ -69,7 +69,7 @@ with a value of `Basic BASE64ENCODEDUSER:PASSWORD`.
 We plan for the Kubernetes API server to issue tokens
 after the user has been (re)authenticated by a *bedrock* authentication
 provider external to Kubernetes. We plan to make it easy to develop modules
-that interface between kubernetes and a bedrock authentication provider (e.g.
+that interface between Kubernetes and a bedrock authentication provider (e.g.
 github.com, google.com, enterprise directory, kerberos, etc.)
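The `Basic BASE64ENCODEDUSER:PASSWORD` header referenced in the hunk above can be exercised with a plain `curl` call. A minimal sketch; the secure port number (6443) and the `/api` path are assumptions about a typical apiserver of that era, not values taken from this diff:

```sh
# Encode user:password and send it as an HTTP basic auth header to the apiserver.
AUTH=$(echo -n "admin:secret" | base64)
curl -k -H "Authorization: Basic ${AUTH}" https://<master-ip>:6443/api
```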
@@ -75,7 +75,7 @@ Root causes:
 - Network partition within cluster, or between cluster and users
 - Crashes in Kubernetes software
 - Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
-- Operator error, e.g. misconfigured kubernetes software or application software
+- Operator error, e.g. misconfigured Kubernetes software or application software
 Specific scenarios:
 - Apiserver VM shutdown or apiserver crashing

@@ -127,7 +127,7 @@ Mitigations:
 - Action: Snapshot apiserver PDs/EBS-volumes periodically
 - Mitigates: Apiserver backing storage lost
 - Mitigates: Some cases of operator error
-- Mitigates: Some cases of kubernetes software fault
+- Mitigates: Some cases of Kubernetes software fault
 - Action: use replication controller and services in front of pods
 - Mitigates: Node shutdown
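For the "snapshot apiserver PDs/EBS-volumes periodically" action above, a GCE persistent disk can be snapshotted from the command line. A sketch only; the disk name, zone, and snapshot name are placeholders, and the `gcloud` invocation is an assumption rather than something this diff prescribes:

```sh
# Snapshot the master's persistent disk (names and zone are hypothetical).
gcloud compute disks snapshot kubernetes-master-pd \
    --zone us-central1-b \
    --snapshot-names master-pd-$(date +%Y%m%d)
```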
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 # DNS Integration with Kubernetes
-As of kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
 If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
 configured to tell individual containers to use the DNS Service's IP to resolve DNS names.

@@ -42,7 +42,7 @@ assigned a DNS name. By default, a client Pod's DNS search list will
 include the Pod's own namespace and the cluster's default domain. This is best
 illustrated by example:
-Assume a Service named `foo` in the kubernetes namespace `bar`. A Pod running
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
 in namespace `bar` can look up this service by simply doing a DNS query for
 `foo`. A Pod running in namespace `quux` can look up this service by doing a
 DNS query for `foo.bar`.
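The `foo`/`foo.bar` lookup described above can be checked from inside a running pod. A sketch under assumed names: the pods `mypod` and `otherpod` are hypothetical, and `nslookup` must exist in their images:

```sh
# From a pod in namespace `bar`, the short name resolves:
kubectl exec mypod --namespace=bar -- nslookup foo
# From a pod in namespace `quux`, qualify the name with the Service's namespace:
kubectl exec otherpod --namespace=quux -- nslookup foo.bar
```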
@@ -53,14 +53,14 @@ supports forward lookups (A records) and service lookups (SRV records).
 ## How it Works
 The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
-and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process
-watches the kubernetes master for changes in Services, and then writes the
+and a Kubernetes-to-skydns bridge called kube2sky. The kube2sky process
+watches the Kubernetes master for changes in Services, and then writes the
 information to etcd, which skydns reads. This etcd instance is not linked to
-any other etcd clusters that might exist, including the kubernetes master.
+any other etcd clusters that might exist, including the Kubernetes master.
 ## Issues
-The skydns service is reachable directly from kubernetes nodes (outside
+The skydns service is reachable directly from Kubernetes nodes (outside
 of any container) and DNS resolution works if the skydns service is targeted
 explicitly. However, nodes are not configured to use the cluster DNS service or
 to search the cluster's DNS domain by default. This may be resolved at a later

@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
-The kubernetes API server validates and configures data
+The Kubernetes API server validates and configures data
 for the api objects which include pods, services, replicationcontrollers, and
 others. The API Server services REST operations and provides the frontend to the
 cluster's shared state through which all other components interact.
@@ -80,7 +80,7 @@ cluster's shared state through which all other components interact.
 --kubelet_port=0: Kubelet port
 --kubelet_timeout=0: Timeout for kubelet operations
 --long-running-request-regexp="(/|^)((watch|proxy)(/|$)|(logs|portforward|exec)/?$)": A regular expression matching long running requests which should be excluded from maximum inflight request handling.
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-requests-inflight=400: The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
 --min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
 --old-etcd-prefix="": The previous prefix for all resource paths in etcd, if any.

@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
-The kubernetes controller manager is a daemon that embeds
+The Kubernetes controller manager is a daemon that embeds
 the core control loops shipped with Kubernetes. In applications of robotics and
 automation, a control loop is a non-terminating loop that regulates the state of
 the system. In Kubernetes, a controller is a control loop that watches the shared

@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
-The kubernetes network proxy runs on each node. This
+The Kubernetes network proxy runs on each node. This
 reflects services as defined in the Kubernetes API on each node and can do simple
 TCP,UDP stream forwarding or round robin TCP,UDP forwarding across a set of backends.
 Service cluster ips and ports are currently found through Docker-links-compatible

@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
-The kubernetes scheduler is a policy-rich, topology-aware,
+The Kubernetes scheduler is a policy-rich, topology-aware,
 workload-specific function that significantly impacts availability, performance,
 and capacity. The scheduler needs to take into account individual and collective
 resource requirements, quality of service requirements, hardware/software/policy

@@ -91,7 +91,7 @@ HTTP server: The kubelet can also listen for HTTP and respond to a simple API
 --kubeconfig=: Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api-servers flag).
 --low-diskspace-threshold-mb=0: The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. Default: 256
 --manifest-url="": URL for accessing the container manifest
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-pods=40: Number of Pods that can run on this Kubelet.
 --maximum-dead-containers=0: Maximum number of old instances of a containers to retain globally. Each container takes up some disk space. Default: 100.
 --maximum-dead-containers-per-container=0: Maximum number of old instances of a container to retain per container. Each container takes up some disk space. Default: 2.
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 # Considerations for running multiple Kubernetes clusters
-You may want to set up multiple kubernetes clusters, both to
+You may want to set up multiple Kubernetes clusters, both to
 have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
 This document describes some of the issues to consider when making a decision about doing so.

@@ -67,7 +67,7 @@ Reasons to have multiple clusters include:
 ## Selecting the right number of clusters
-The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
+The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally.
 By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
 load and growth.

@@ -125,7 +125,7 @@ number of pods that can be scheduled onto the node.
 ### Node Info
-General information about the node, for instance kernel version, kubernetes version
+General information about the node, for instance kernel version, Kubernetes version
 (kubelet version, kube-proxy version), docker version (if used), OS name.
 The information is gathered by Kubelet from the node.

@@ -231,7 +231,7 @@ Normally, nodes register themselves and report their capacity when creating the
 you are doing [manual node administration](#manual-node-administration), then you need to set node
 capacity when adding a node.
-The kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
+The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
 checks that the sum of the limits of containers on the node is no greater than than the node capacity. It
 includes all containers started by kubelet, but not containers started directly by docker, nor
 processes not in containers.

@@ -63,7 +63,7 @@ Neither contention nor changes to quota will affect already-running pods.
 ## Enabling Resource Quota
-Resource Quota support is enabled by default for many kubernetes distributions. It is
+Resource Quota support is enabled by default for many Kubernetes distributions. It is
 enabled when the apiserver `--admission_control=` flag has `ResourceQuota` as
 one of its arguments.
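A sketch of the flag form described above; only the admission-control flag is shown and the rest of the apiserver's flags are omitted:

```sh
# Enable the ResourceQuota admission controller on the apiserver (other flags omitted).
kube-apiserver --admission_control=ResourceQuota
```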
@@ -95,7 +95,7 @@ Key | Value
 ------------- | -------------
 `api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
 `cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
-`cloud` | (Optional) Which IaaS platform is used to host kubernetes, *gce*, *azure*, *aws*, *vagrant*
+`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
 `etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
 `hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
 `node_ip` | (Optional) The IP address to use to address this node

@@ -103,7 +103,7 @@ Key | Value
 `network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
 `networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
 `publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
-`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
+`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
 These keys may be leveraged by the Salt sls files to branch behavior.
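For illustration, a grains file combining a few of the keys from the table above; the file path and the specific values are assumptions, not something this diff specifies:

```sh
# Write Salt grains for a master machine (values are made up).
cat <<EOF >/etc/salt/minion.d/grains.conf
grains:
  roles:
    - kubernetes-master
  cloud: gce
  etcd_servers: 10.240.0.2
EOF
```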
@@ -200,7 +200,7 @@ Namespaces versus userAccount vs Labels:
 Goals for K8s authentication:
 - Include a built-in authentication system with no configuration required to use in single-user mode, and little configuration required to add several user accounts, and no https proxy required.
-- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users.
+- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The Kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users.
 - For organizations whose security requirements only allow FIPS compliant implementations (e.g. apache) for authentication.
 - So the proxy can terminate SSL, and isolate the CA-signed certificate from less trusted, higher-touch APIserver.
 - For organizations that already have existing SaaS web services (e.g. storage, VMs) and want a common authentication portal.

@@ -36,7 +36,7 @@ Documentation for other releases can be found at
 ## Overview
-The term "clustering" refers to the process of having all members of the kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address.
+The term "clustering" refers to the process of having all members of the Kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address.
 Once a cluster is established, the following is true:

@@ -94,7 +94,7 @@ script that sets up the environment and runs the command. This has a number of
 1. Solutions that require a shell are unfriendly to images that do not contain a shell
 2. Wrapper scripts make it harder to use images as base images
-3. Wrapper scripts increase coupling to kubernetes
+3. Wrapper scripts increase coupling to Kubernetes
 Users should be able to do the 80% case of variable expansion in command without writing a wrapper
 script or adding a shell invocation to their containers' commands.

@@ -81,7 +81,7 @@ Goals of this design:
 the kubelet implement some reserved behaviors based on the types of secrets the service account
 consumes:
 1. Use credentials for a docker registry to pull the pod's docker image
-2. Present kubernetes auth token to the pod or transparently decorate traffic between the pod
+2. Present Kubernetes auth token to the pod or transparently decorate traffic between the pod
 and master service
 4. As a user, I want to be able to indicate that a secret expires and for that secret's value to
 be rotated once it expires, so that the system can help me follow good practices
@@ -112,7 +112,7 @@ other system components to take action based on the secret's type.
 #### Example: service account consumes auth token secret
 As an example, the service account proposal discusses service accounts consuming secrets which
-contain kubernetes auth tokens. When a Kubelet starts a pod associated with a service account
+contain Kubernetes auth tokens. When a Kubelet starts a pod associated with a service account
 which consumes this type of secret, the Kubelet may take a number of actions:
 1. Expose the secret in a `.kubernetes_auth` file in a well-known location in the container's

@@ -55,14 +55,14 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo
 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:
-1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system
+1. k8s admin - administers a Kubernetes cluster and has access to the underlying components of the system
 2. k8s project administrator - administrates the security of a small subset of the cluster
-3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources
+3. k8s developer - launches pods on a Kubernetes cluster and consumes cluster resources
 Automated process users fall into the following categories:
 1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project
-2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles
+2. k8s infrastructure user - the user that Kubernetes infrastructure components use to perform cluster functions with clearly defined roles
 ### Description of roles
@@ -76,7 +76,7 @@ type ServiceAccount struct {
 ```
 The name ServiceAccount is chosen because it is widely used already (e.g. by Kerberos and LDAP)
-to refer to this type of account. Note that it has no relation to kubernetes Service objects.
+to refer to this type of account. Note that it has no relation to Kubernetes Service objects.
 The ServiceAccount object does not include any information that could not be defined separately:
 - username can be defined however users are defined.

@@ -90,12 +90,12 @@ These features are explained later.
 ### Names
-From the standpoint of the Kubernetes API, a `user` is any principal which can authenticate to kubernetes API.
+From the standpoint of the Kubernetes API, a `user` is any principal which can authenticate to Kubernetes API.
 This includes a human running `kubectl` on her desktop and a container in a Pod on a Node making API calls.
-There is already a notion of a username in kubernetes, which is populated into a request context after authentication.
+There is already a notion of a username in Kubernetes, which is populated into a request context after authentication.
 However, there is no API object representing a user. While this may evolve, it is expected that in mature installations,
-the canonical storage of user identifiers will be handled by a system external to kubernetes.
+the canonical storage of user identifiers will be handled by a system external to Kubernetes.
 Kubernetes does not dictate how to divide up the space of user identifier strings. User names can be
 simple Unix-style short usernames, (e.g. `alice`), or may be qualified to allow for federated identity (

@@ -104,7 +104,7 @@ accounts (e.g. `alice@example.com` vs `build-service-account-a3b7f0@foo-namespac
 but Kubernetes does not require this.
 Kubernetes also does not require that there be a distinction between human and Pod users. It will be possible
-to setup a cluster where Alice the human talks to the kubernetes API as username `alice` and starts pods that
+to setup a cluster where Alice the human talks to the Kubernetes API as username `alice` and starts pods that
 also talk to the API as user `alice` and write files to NFS as user `alice`. But, this is not recommended.
 Instead, it is recommended that Pods and Humans have distinct identities, and reference implementations will

@@ -153,7 +153,7 @@ get a `Secret` which allows them to authenticate to the Kubernetes APIserver as
 policy that is desired can be applied to them.
 A higher level workflow is needed to coordinate creation of serviceAccounts, secrets and relevant policy objects.
-Users are free to extend kubernetes to put this business logic wherever is convenient for them, though the
+Users are free to extend Kubernetes to put this business logic wherever is convenient for them, though the
 Service Account Finalizer is one place where this can happen (see below).
 ### Kubelet
@@ -34,7 +34,7 @@ Documentation for other releases can be found at
 # Kubernetes Developer Guide
 The developer guide is for anyone wanting to either write code which directly accesses the
-kubernetes API, or to contribute directly to the kubernetes project.
+Kubernetes API, or to contribute directly to the Kubernetes project.
 It assumes some familiarity with concepts in the [User Guide](../user-guide/README.md) and the [Cluster Admin
 Guide](../admin/README.md).

@@ -35,8 +35,8 @@ API Conventions
 Updated: 4/16/2015
-*This document is oriented at users who want a deeper understanding of the kubernetes
-API structure, and developers wanting to extend the kubernetes API. An introduction to
+*This document is oriented at users who want a deeper understanding of the Kubernetes
+API structure, and developers wanting to extend the Kubernetes API. An introduction to
 using resources with kubectl can be found in (working_with_resources.md).*
 **Table of Contents**

@@ -31,7 +31,7 @@ Documentation for other releases can be found at
 <!-- END MUNGE: UNVERSIONED_WARNING -->
-## kubernetes API client libraries
+## Kubernetes API client libraries
 ### Supported
@@ -56,7 +56,7 @@ Below, we outline one of the more common git workflows that core developers use.
 ### Clone your fork
-The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`.
+The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put Kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`.
 ```sh
 mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/

@@ -207,7 +207,7 @@ godep go test ./...
 If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet:
 ```console
-$ cd kubernetes # step into kubernetes' directory.
+$ cd kubernetes # step into the kubernetes directory.
 $ cd pkg/kubelet
 $ godep go test
 # some output from unit tests
@@ -66,7 +66,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
 - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your
 own repo.
 - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md).
-- State the binary version of kubernetes that you tested clearly in your Guide doc.
+- State the binary version of Kubernetes that you tested clearly in your Guide doc.
 - Setup a cluster and run the [conformance test](development.md#conformance-testing) against it, and report the
 results in your PR.
 - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer

@@ -84,7 +84,7 @@ You can download and install the latest Kubernetes release from [this page](http
 The script above will start (by default) a single master VM along with 4 worker VMs. You
 can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
-### Adding the kubernetes command line tools to PATH
+### Adding the Kubernetes command line tools to PATH
 The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
 You will use it to look at your new cluster and bring up example apps.
@@ -46,9 +46,9 @@ You need two machines with CentOS installed on them.
 This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
-This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional Kubernetes configuration requirements should be obvious.
-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
+The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
 **System Information:**

@@ -70,7 +70,7 @@ baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
 gpgcheck=0
 ```
-* Install kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
+* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
 ```sh
 yum -y install --enablerepo=virt7-testing kubernetes
@@ -123,7 +123,7 @@ systemctl disable iptables-services firewalld
 systemctl stop iptables-services firewalld
 ```
-**Configure the kubernetes services on the master.**
+**Configure the Kubernetes services on the master.**
 * Edit /etc/kubernetes/apiserver to appear as such:

@@ -157,7 +157,7 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 done
 ```
-**Configure the kubernetes services on the node.**
+**Configure the Kubernetes services on the node.**
 ***We need to configure the kubelet and start the kubelet and proxy***
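On the node, the master-side systemd loop visible in the hunk header can be mirrored for the node services this guide names (kubelet, proxy, docker). A sketch, not a command taken from this diff:

```sh
# Enable and start the node services (mirrors the master-side loop shown above).
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```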
@@ -258,7 +258,7 @@ These are based on the work found here: [master.yml](cloud-configs/master.yaml),
 To make the setup work, you need to replace a few placeholders:
 - Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
-- Replace `<MASTER_SERVER_IP>` with the kubernetes master ip address (e.g. 10.20.30.40)
+- Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
 - If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
 - If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
 - Add your own SSH public key(s) to the cloud config at the end
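One way to substitute the placeholders listed above in the cloud-config; the file name and IPs are taken from the examples in the text, but treat the exact invocation as a sketch:

```sh
# Replace the PXE and master IP placeholders in the master cloud-config.
sed -i \
    -e 's/<PXE_SERVER_IP>/10.20.30.242/g' \
    -e 's/<MASTER_SERVER_IP>/10.20.30.40/g' \
    cloud-configs/master.yaml
```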
@@ -56,7 +56,7 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.
 ## Overview
-This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
+This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
 and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
 times to create larger clusters.

@@ -41,7 +41,7 @@ We will assume that the IP address of this node is `${NODE_IP}` and you have the
 For each worker node, there are three steps:
 * [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
-* [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
+* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
 * [Add the worker to the cluster](#add-the-node-to-the-cluster)
 ### Set up Flanneld on the worker node

@@ -30,7 +30,7 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 <!-- END MUNGE: UNVERSIONED_WARNING -->
-Running kubernetes locally via Docker
+Running Kubernetes locally via Docker
 -------------------------------------
 **Table of Contents**
@@ -47,7 +47,7 @@ Running kubernetes locally via Docker
 ### Overview
-The following instructions show you how to set up a simple, single node kubernetes cluster using Docker.
+The following instructions show you how to set up a simple, single node Kubernetes cluster using Docker.
 Here's a diagram of what the final result will look like:
 ![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)

@@ -80,7 +80,7 @@ docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2
 ### Test it out
-At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl
+At this point you should have a running Kubernetes cluster. You can test this by downloading the kubectl
 binary
 ([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl))
 ([linux](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl))

@@ -105,7 +105,7 @@ NAME LABELS STATUS
 127.0.0.1 <none> Ready
 ```
-If you are running different kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
+If you are running different Kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.
 ### Run an application
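Putting the `-s` flag mentioned above together with the node listing shown in the hunk header:

```sh
# Point kubectl explicitly at the local single-node cluster.
kubectl -s http://localhost:8080 get nodes
```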
@@ -30,10 +30,10 @@ Documentation for other releases can be found at
 <!-- END STRIP_FOR_RELEASE -->
 <!-- END MUNGE: UNVERSIONED_WARNING -->
-Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
+Configuring Kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
 -------------------------------------------------------------------------------------------------------
-Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
+Configuring Kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
 **Table of Contents**

@@ -73,7 +73,7 @@ If not
 yum install -y ansible git python-netaddr
 ```
-**Now clone down the kubernetes repository**
+**Now clone down the Kubernetes repository**
 ```sh
 git clone https://github.com/GoogleCloudPlatform/kubernetes.git

@@ -134,7 +134,7 @@ edit: ~/kubernetes/contrib/ansible/group_vars/all.yml
 **Configure the IP addresses used for services**
-Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
+Each Kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
 ```yaml
 kube_service_addresses: 10.254.0.0/16
@@ -167,7 +167,7 @@ dns_setup: true
 **Tell ansible to get to work!**
-This will finally setup your whole kubernetes cluster for you.
+This will finally setup your whole Kubernetes cluster for you.
 ```sh
 cd ~/kubernetes/contrib/ansible/

@@ -177,7 +177,7 @@ cd ~/kubernetes/contrib/ansible/
 ## Testing and using your new cluster
-That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
+That's all there is to it. It's really that easy. At this point you should have a functioning Kubernetes cluster.
 **Show kubernets nodes**
@@ -46,9 +46,9 @@ Getting started on [Fedora](http://fedoraproject.org)
 This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes. Although the additional Kubernetes configuration requirements should be obvious.
-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
+The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
 **System Information:**

@@ -61,7 +61,7 @@ fed-node = 192.168.121.65
 **Prepare the hosts:**
-* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
+* Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
 * The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
 * If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
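The "yum command below" that the bullets refer to lies outside this hunk; a sketch of what it looks like, assuming the standard Fedora updates-testing repository and the package names mentioned above:

```sh
# Install the kubernetes package on fed-master and fed-node; etcd only on fed-master.
yum -y install --enablerepo=updates-testing kubernetes
yum -y install etcd   # fed-master only
```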
@@ -105,7 +105,7 @@ systemctl disable iptables-services firewalld
 systemctl stop iptables-services firewalld
 ```
-**Configure the kubernetes services on the master.**
+**Configure the Kubernetes services on the master.**
 * Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.

@@ -141,7 +141,7 @@ done
 * Addition of nodes:
-* Create following node.json file on kubernetes master node:
+* Create following node.json file on Kubernetes master node:
 ```json
 {

@@ -157,7 +157,7 @@ done
 }
 ```
-Now create a node object internally in your kubernetes cluster by running:
+Now create a node object internally in your Kubernetes cluster by running:
 ```console
 $ kubectl create -f ./node.json
@@ -170,10 +170,10 @@ fed-node name=fed-node-label Unknown
 Please note that in the above, it only creates a representation for the node
 _fed-node_ internally. It does not provision the actual _fed-node_. Also, it
 is assumed that _fed-node_ (as specified in `name`) can be resolved and is
-reachable from kubernetes master node. This guide will discuss how to provision
-a kubernetes node (fed-node) below.
+reachable from Kubernetes master node. This guide will discuss how to provision
+a Kubernetes node (fed-node) below.
-**Configure the kubernetes services on the node.**
+**Configure the Kubernetes services on the node.**
 ***We need to configure the kubelet on the node.***

@@ -181,7 +181,7 @@ a kubernetes node (fed-node) below.
 ```sh
 ###
-# kubernetes kubelet (node) config
+# Kubernetes kubelet (node) config
 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS="--address=0.0.0.0"
@@ -216,7 +216,7 @@ fed-node name=fed-node-label Ready
 * Deletion of nodes:
-To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
+To delete _fed-node_ from your Kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
 ```sh
 kubectl delete -f ./node.json

@@ -43,7 +43,7 @@ Kubernetes multiple nodes cluster with flannel on Fedora
 ## Introduction
-This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
+This document describes how to deploy Kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes. Make sure that all nodes have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the Kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes are running docker, kube-proxy and kubelet services. Now install flannel on Kubernetes nodes. flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network.
 ## Prerequisites

@@ -51,7 +51,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
 ## Master Setup
-**Perform following commands on the kubernetes master**
+**Perform following commands on the Kubernetes master**
 * Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
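The JSON contents themselves fall outside this hunk; the following is a sketch of a typical `flannel-config.json` with a vxlan backend, where the `18.16.0.0/16` network is only an assumption chosen to match the `18.16.29.0/24` subnets shown later in this guide:

```sh
# Write the flannel network config and load it into etcd under the key
# queried later in this guide (/coreos.com/network/config).
cat <<EOF > flannel-config.json
{
    "Network": "18.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan"
    }
}
EOF
etcdctl set /coreos.com/network/config < flannel-config.json
```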
@@ -82,7 +82,7 @@ etcdctl get /coreos.com/network/config
 ## Node Setup
-**Perform following commands on all kubernetes nodes**
+**Perform following commands on all Kubernetes nodes**
 * Edit the flannel configuration file /etc/sysconfig/flanneld as follows:

@@ -127,7 +127,7 @@ systemctl start docker
 ## **Test the cluster and flannel configuration**
-* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this:
+* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each Kubernetes node out of the IP range configured above. A working output should look like this:
 ```console
 # ip -4 a|grep inet

@@ -172,7 +172,7 @@ FLANNEL_MTU=1450
 FLANNEL_IPMASQ=false
 ```
-* At this point, we have etcd running on the kubernetes master, and flannel / docker running on kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
+* At this point, we have etcd running on the Kubernetes master, and flannel / docker running on Kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly.
 * Issue the following commands on any 2 nodes:
@@ -211,7 +211,7 @@ PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
 64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms
 ```
-* Now kubernetes multi-node cluster is set up with overlay networking set up by flannel.
+* Now Kubernetes multi-node cluster is set up with overlay networking set up by flannel.
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -38,7 +38,7 @@ Getting started on Google Compute Engine
 - [Before you start](#before-you-start)
 - [Prerequisites](#prerequisites)
 - [Starting a cluster](#starting-a-cluster)
-- [Installing the kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
+- [Installing the Kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation)
 - [Getting started with your cluster](#getting-started-with-your-cluster)
 - [Inspect your cluster](#inspect-your-cluster)
 - [Run some examples](#run-some-examples)
@@ -109,7 +109,7 @@ The next few steps will show you:
 1. how to delete the cluster
 1. how to start clusters with non-default options (like larger clusters)
-### Installing the kubernetes command line tools on your workstation
+### Installing the Kubernetes command line tools on your workstation
 The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
 The next step is to make sure the `kubectl` tool is in your path.
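One way to get `kubectl` onto the path from that `kubernetes` directory; the `platforms/linux/amd64` subdirectory is an assumption about how the release unpacks, so adjust it for your platform:

```sh
# Add the bundled kubectl binary to PATH and sanity-check it.
export PATH=$PATH:$PWD/kubernetes/platforms/linux/amd64
kubectl version
```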
@@ -103,7 +103,7 @@ the required predependencies to get started with Juju, additionally it will
 launch a curses based configuration utility allowing you to select your cloud
 provider and enter the proper access credentials.
-Next it will deploy the kubernetes master, etcd, 2 nodes with flannel based
+Next it will deploy the Kubernetes master, etcd, 2 nodes with flannel based
 Software Defined Networking.

@@ -129,7 +129,7 @@ You can use `juju ssh` to access any of the units:
 ## Run some containers!
-`kubectl` is available on the kubernetes master node. We'll ssh in to
+`kubectl` is available on the Kubernetes master node. We'll ssh in to
 launch some containers, but one could use kubectl locally setting
 KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`.
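Following the text above, the simplest path is to ssh into the master unit and drive `kubectl` from there; the local `KUBERNETES_MASTER` alternative requires looking up the unit's address yourself:

```sh
# Shell onto the master unit, then run kubectl in that session.
juju ssh kubernetes-master/0
kubectl get pods
```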
@@ -103,7 +103,7 @@ $ usermod -a -G libvirtd $USER
 #### ² Qemu will run with a specific user. It must have access to the VMs drives
-All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
+All the disk drive resources needed by the VM (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.
 As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.

@@ -128,7 +128,7 @@ setfacl -m g:kvm:--x ~
 ### Setup
-By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
+By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
 To start your local cluster, open a shell and run:

@@ -143,7 +143,7 @@ The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster
 The `NUM_MINIONS` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
-The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are:
+The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
 * `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
 * `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
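Tying the three environment variables above together; the `cluster/kube-up.sh` entry point is an assumption here, since the guide's own start command sits outside this hunk:

```sh
# Start a libvirt-coreos cluster with 2 nodes, pushing locally built binaries.
export KUBERNETES_PROVIDER=libvirt-coreos
export NUM_MINIONS=2
export KUBE_PUSH=local
cluster/kube-up.sh
```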
@@ -160,7 +160,7 @@ $ virsh -c qemu:///system list
 18 kubernetes_minion-03 running
 ```
-You can check that the kubernetes cluster is working with:
+You can check that the Kubernetes cluster is working with:
 ```console
 $ kubectl get nodes

@@ -60,7 +60,7 @@ Not running Linux? Consider running Linux in a local virtual machine with [Vagra
 At least [Docker](https://docs.docker.com/installation/#installation)
 1.3+. Ensure the Docker daemon is running and can be contacted (try `docker
-ps`). Some of the kubernetes components need to run as root, which normally
+ps`). Some of the Kubernetes components need to run as root, which normally
 works fine with docker.
 #### etcd

@@ -73,7 +73,7 @@ You need [go](https://golang.org/doc/install) at least 1.3+ in your path, please
 ### Starting the cluster
-In a separate tab of your terminal, run the following (since one needs sudo access to start/stop kubernetes daemons, it is easier to run the entire script as root):
+In a separate tab of your terminal, run the following (since one needs sudo access to start/stop Kubernetes daemons, it is easier to run the entire script as root):
 ```sh
 cd kubernetes
@@ -108,7 +108,7 @@ cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
 exit
 ## end wait
-## introspect kubernetes!
+## introspect Kubernetes!
 cluster/kubectl.sh get pods
 cluster/kubectl.sh get services
 cluster/kubectl.sh get replicationcontrollers

@@ -118,7 +118,7 @@ cluster/kubectl.sh get replicationcontrollers
 ### Running a user defined pod
 Note the difference between a [container](../user-guide/containers.md)
-and a [pod](../user-guide/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
 However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
 You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
@@ -157,7 +157,7 @@ hack/local-up-cluster.sh
 #### kubectl claims to start a container but `get pods` and `docker ps` don't show it.
-One or more of the kubernetes daemons might've crashed. Tail the logs of each in /tmp.
+One or more of the Kubernetes daemons might've crashed. Tail the logs of each in /tmp.
 #### The pods fail to connect to the services by host names
@@ -46,12 +46,12 @@ oVirt is a virtual datacenter manager that delivers powerful management of multi
 ## oVirt Cloud Provider Deployment
-The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your kubernetes cluster.
-At the moment there are no community-supported or pre-loaded VM images including kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes kubernetes may work as well.
+The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
+At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
-It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to kubernetes.
+It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
-Once the kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
+Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
 [import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
 [install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines

@@ -67,13 +67,13 @@ The oVirt Cloud Provider requires access to the oVirt REST-API to gather the pro
 username = admin@internal
 password = admin
-In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to kubernetes:
+In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
 [filters]
 # Search query used to find nodes
 vms = tag=kubernetes
-In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to kubernetes.
+In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
 The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
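A sketch of what that wiring typically looks like; the generic `--cloud-provider`/`--cloud-config` flags and the file path are assumptions here, not values carried by this diff:

```sh
# Point the controller manager at the oVirt cloud provider configuration (remaining flags omitted).
kube-controller-manager --cloud-provider=ovirt --cloud-config=/etc/kubernetes/ovirt-cloud.conf
```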
@ -81,7 +81,7 @@ The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
|
|||
|
||||
## oVirt Cloud Provider Screencast
|
||||
|
||||
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster.
|
||||
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
|
||||
|
||||
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
|
||||
|
||||
|
|
|
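For reference, a hedged sketch of how that config is typically handed to the controller manager (the file path here is an assumption, not taken from the guide):

```console
$ kube-controller-manager --cloud-provider=ovirt --cloud-config=/etc/kubernetes/ovirt-cloud.conf
```

The `--cloud-provider` and `--cloud-config` flags are the usual mechanism for pointing kube-controller-manager at a cloud provider configuration; the rest of its flags stay as they were.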
@@ -67,11 +67,11 @@ The current cluster design is inspired by:

 - To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run the `bash hack/dev-build-and-up.sh`
 - Note: The get.k8s.io install method is not working yet for our scripts.
-* To install the latest released version of kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`
+* To install the latest released version of Kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash`

 ## Build

-1. The kubernetes binaries will be built via the common build scripts in `build/`.
+1. The Kubernetes binaries will be built via the common build scripts in `build/`.
 2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
 2. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
 3. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/nodes when booted.
@@ -136,7 +136,7 @@ accomplished in two ways:

   - Harder to setup from scratch.
   - Google Compute Engine ([GCE](gce.md)) and [AWS](aws.md) guides use this approach.
   - Need to make the Pod IPs routable by programming routers, switches, etc.
-  - Can be configured external to kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
+  - Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
   - Generally highest performance.
 - Create an Overlay network
   - Easier to setup
@@ -241,7 +241,7 @@ For etcd, you can:

 - Build your own image
   - You can do: `cd kubernetes/cluster/images/etcd; make`

-We recommend that you use the etcd version which is provided in the kubernetes binary distribution. The kubernetes binaries in the release
+We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
 were tested extensively with this version of etcd and not with any other version.
 The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.
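A quick way to look up that recommended version from a source checkout (a sketch, assuming you are in the repository root):

```console
$ grep ETCD_VERSION cluster/images/etcd/Makefile
```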
@@ -353,7 +353,7 @@ guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and

 ## Configuring and Installing Base Software on Nodes

-This section discusses how to configure machines to be kubernetes nodes.
+This section discusses how to configure machines to be Kubernetes nodes.

 You should run three daemons on every node:
 - docker or rkt
@@ -37,13 +37,13 @@ Kubernetes Deployment On Bare-metal Ubuntu Nodes

 - [Prerequisites](#prerequisites)
 - [Starting a Cluster](#starting-a-cluster)
   - [Make *kubernetes* , *etcd* and *flanneld* binaries](#make-kubernetes--etcd-and-flanneld-binaries)
-  - [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
+  - [Configure and start the Kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
   - [Deploy addons](#deploy-addons)
   - [Trouble Shooting](#trouble-shooting)

 ## Introduction

-This document describes how to deploy kubernetes on ubuntu nodes, including 1 kubernetes master and 3 kubernetes nodes, and people using this approach can scale to **any number of nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merged into this document.
+This document describes how to deploy Kubernetes on ubuntu nodes, including 1 Kubernetes master and 3 Kubernetes nodes, and people using this approach can scale to **any number of nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merged into this document.

 [Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
@@ -64,7 +64,7 @@ This document describes how to deploy kubernetes on ubuntu nodes, including 1 ku

 #### Make *kubernetes* , *etcd* and *flanneld* binaries

-First clone the kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
+First clone the Kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`
 then `$ cd kubernetes/cluster/ubuntu`.

 Then run `$ ./build.sh`, this will download all the needed binaries into `./binaries`.
@@ -75,7 +75,7 @@ Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `ku

 > We used flannel here because we want to use an overlay network, but please remember it is not the only choice, and it is also not a necessary dependence of k8s. Actually you can just build up a k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like; we just choose flannel here as an example.

-#### Configure and start the kubernetes cluster
+#### Configure and start the Kubernetes cluster

 An example cluster is listed as below:
@@ -105,7 +105,7 @@ Then the `roles ` variable defines the role of above machine in the same order,

 The `NUM_MINIONS` variable defines the total number of nodes.

-The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS providers may reserve private ips. You can use one of the three private network ranges below, according to rfc1918. Besides, you'd better not choose one that conflicts with your own private network range.
+The `SERVICE_CLUSTER_IP_RANGE` variable defines the Kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS providers may reserve private ips. You can use one of the three private network ranges below, according to rfc1918. Besides, you'd better not choose one that conflicts with your own private network range.

     10.0.0.0 - 10.255.255.255 (10/8 prefix)
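As a loose sketch of what those settings might look like in `cluster/ubuntu/config-default.sh` (the `nodes` variable name, the `vcap@...` addresses, and the role syntax are assumptions for illustration, not values prescribed by this guide):

```console
$ cat cluster/ubuntu/config-default.sh
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"  # ssh-reachable machines
export roles="ai i i"                            # one role per machine, in the same order as nodes
export NUM_MINIONS=${NUM_MINIONS:-3}             # total number of nodes
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24   # must not clash with your own networks
```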
@@ -148,7 +148,7 @@ NAME LABELS STATUS
 10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
 ```

-Also you can run the kubernetes [guestbook example](../../examples/guestbook/) to build a redis backend cluster on the k8s.
+Also you can run the Kubernetes [guestbook example](../../examples/guestbook/) to build a redis backend cluster on the k8s.

 #### Deploy addons
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 ## Getting started with Vagrant

-Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
+Running Kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).

 **Table of Contents**
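As a sketch of the usual quick start with this provider (assuming a source checkout; the guide itself covers the details):

```console
$ export KUBERNETES_PROVIDER=vagrant
$ ./cluster/kube-up.sh
```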
@@ -354,7 +354,7 @@ time.

 This is closely related to location affinity above, and also discussed
 there. The basic idea is that some controller, logically outside of
-the basic kubernetes control plane of the clusters in question, needs
+the basic Kubernetes control plane of the clusters in question, needs
 to be able to:

 1. Receive "global" resource creation requests.
@@ -33,13 +33,13 @@ Documentation for other releases can be found at

 # High Availability of Scheduling and Controller Components in Kubernetes

-This document serves as a proposal for high availability of the scheduler and controller components in kubernetes. This proposal is intended to provide a simple High Availability api for kubernetes components with the potential to extend to services running on kubernetes. Those services would be subject to their own constraints.
+This document serves as a proposal for high availability of the scheduler and controller components in Kubernetes. This proposal is intended to provide a simple High Availability api for Kubernetes components with the potential to extend to services running on Kubernetes. Those services would be subject to their own constraints.

 ## Design Options

 For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en)

-1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.
+1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the standby daemon to take over exactly where the failed component had left off. This would be an ideal solution for Kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.

 2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the approach that this proposal will leverage.
@@ -44,7 +44,7 @@ Documentation for other releases can be found at

 <!-- END MUNGE: GENERATED_TOC -->

-The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](../../docs/admin/README.md). The [Developer Guide](../../docs/devel/README.md) is for anyone wanting to either write code which directly accesses the kubernetes API, or to contribute directly to the kubernetes project.
+The user guide is intended for anyone who wants to run programs and services on an existing Kubernetes cluster. Setup and administration of a Kubernetes cluster is described in the [Cluster Admin Guide](../../docs/admin/README.md). The [Developer Guide](../../docs/devel/README.md) is for anyone wanting to either write code which directly accesses the Kubernetes API, or to contribute directly to the Kubernetes project.

 Please ensure you have completed the [prerequisites for running examples from the user guide](prereqs.md).
@@ -60,7 +60,7 @@ Documentation for other releases can be found at

 ### Accessing for the first time with kubectl

 When accessing the Kubernetes API for the first time, we suggest using the
-kubernetes CLI, `kubectl`.
+Kubernetes CLI, `kubectl`.

 To access a cluster, you need to know the location of the cluster and have credentials
 to access it. Typically, this is automatically set-up when you work through
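A quick, hedged way to check which cluster location and credentials `kubectl` already knows about (output varies per cluster):

```console
$ kubectl config view
```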
@@ -172,7 +172,7 @@ at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

 From within a pod the recommended ways to connect to API are:
 - run a kubectl proxy as one of the containers in the pod, or as a background
   process within a container. This proxies the
-  kubernetes API to the localhost interface of the pod, so that other processes
+  Kubernetes API to the localhost interface of the pod, so that other processes
   in any container of the pod can access it. See this [example of using kubectl proxy
   in a pod](../../examples/kubectl-container/).
 - use the Go client library, and create a client using the `client.NewInCluster()` factory.
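A minimal sketch of the first option (the port number is an arbitrary choice):

```console
$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/api/
```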
@@ -183,7 +183,7 @@ In each case, the credentials of the pod are used to communicate securely with t

 ## Accessing services running on the cluster

 The previous section was about connecting to the Kubernetes API server. This section is about
-connecting to other services running on a Kubernetes cluster. In kubernetes, the
+connecting to other services running on a Kubernetes cluster. In Kubernetes, the
 [nodes](../admin/node.md), [pods](pods.md) and [services](services.md) all have
 their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
 routable, so they will not be reachable from a machine outside the cluster,
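One hedged way to see which cluster services are already exposed through the master (the exact list depends on the cluster's add-ons):

```console
$ kubectl cluster-info
```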
@@ -280,10 +280,10 @@ The redirect capabilities have been deprecated and removed. Please use a proxy

 ## So Many Proxies

-There are several different proxies you may encounter when using kubernetes:
+There are several different proxies you may encounter when using Kubernetes:
 1. The [kubectl proxy](#directly-accessing-the-rest-api):
     - runs on a user's desktop or in a pod
-    - proxies from a localhost address to the kubernetes apiserver
+    - proxies from a localhost address to the Kubernetes apiserver
     - client to proxy uses HTTP
     - proxy to apiserver uses HTTPS
     - locates apiserver
@@ -308,7 +308,7 @@ There are several different proxies you may encounter when using kubernetes:
     - acts as load balancer if there are several apiservers.
 1. Cloud Load Balancers on external services:
     - are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
-    - are created automatically when the kubernetes service has type `LoadBalancer`
+    - are created automatically when the Kubernetes service has type `LoadBalancer`
     - use UDP/TCP only
     - implementation varies by cloud provider.
@@ -103,7 +103,7 @@ spec:

 ## How Pods with Resource Limits are Scheduled

-When a pod is created, the kubernetes scheduler selects a node for the pod to
+When a pod is created, the Kubernetes scheduler selects a node for the pod to
 run on. Each node has a maximum capacity for each of the resource types: the
 amount of CPU and memory it can provide for pods. The scheduler ensures that,
 for each resource type (CPU and memory), the sum of the resource limits of the
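To see the per-node capacity the scheduler works against, one option is the following sketch (it dumps every node; add a node name to narrow it down):

```console
$ kubectl describe nodes
```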
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 # kubectl for docker users

-In this doc, we introduce the kubernetes command line for interacting with the api to docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains the kubectl equivalent.
+In this doc, we introduce the Kubernetes command line for interacting with the api to docker-cli users. The tool, kubectl, is designed to be familiar to docker-cli users but there are a few necessary differences. Each section of this doc highlights a docker subcommand and explains the kubectl equivalent.

 **Table of Contents**
 <!-- BEGIN MUNGE: GENERATED_TOC -->
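As a flavor of that mapping (a sketch; the names are placeholders and the flags shown are only the common ones):

```console
$ docker ps                      # docker: list running containers
$ kubectl get pods               # kubectl: list pods

$ docker logs -f <container>     # docker: stream a container's logs
$ kubectl logs -f <pod>          # kubectl: stream a pod's logs
```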
@@ -163,7 +163,7 @@ $ kubectl logs -f nginx-app-zibvs
 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
 ```

-Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in kubernetes, do this:
+Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:

 ```console
 $ kubectl logs --previous nginx-app-zibvs
@@ -37,7 +37,7 @@ It is sometimes useful for a container to have information about itself, but we
 want to be careful not to over-couple containers to Kubernetes. The downward
 API allows containers to consume information about themselves or the system and
 expose that information how they want it, without necessarily coupling to the
-kubernetes client or REST API.
+Kubernetes client or REST API.

 An example of this is a "legacy" app that is already written assuming
 that a particular environment variable will hold a unique identifier. While it
@@ -38,7 +38,7 @@ services on top of both. Accessing the frontend pod will return
 environment information about itself, and a backend pod that it has
 accessed through the service. The goal is to illuminate the
 environment metadata available to running containers inside the
-Kubernetes cluster. The documentation for the kubernetes environment
+Kubernetes cluster. The documentation for the Kubernetes environment
 is [here](../../../docs/user-guide/container-environment.md).

 ![Diagram](diagram.png)
@@ -102,7 +102,7 @@ First the frontend pod's information is printed. The pod name and
 [namespace](../../../docs/design/namespaces.md) are retrieved from the
 [Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is the name of
 an environment variable set in the [pod
-definition](show-rc.yaml). Then, the dynamic kubernetes environment
+definition](show-rc.yaml). Then, the dynamic Kubernetes environment
 variables are scanned and printed. These are used to find the backend
 service, named `backend-srv`. Finally, the frontend pod queries the
 backend service and prints the information returned. Again the backend
@@ -35,7 +35,7 @@ Documentation for other releases can be found at

 Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).

-You create your Docker image and push it to a registry before referring to it in a kubernetes pod.
+You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.

 The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
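A minimal sketch of that build-and-push step (registry host, image name, and tag are placeholders):

```console
$ docker build -t registry.example.com/myapp:v1 .
$ docker push registry.example.com/myapp:v1
```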
@@ -267,7 +267,7 @@ common use cases and suggested solutions.

 - may be hosted on the [Docker Hub](https://hub.docker.com/account/signup/), or elsewhere.
 - manually configure .dockercfg on each node as described above
 - Or, run an internal private registry behind your firewall with open read access.
-  - no kubernetes configuration required
+  - no Kubernetes configuration required
 - Or, when on GCE/GKE, use the project's Google Container Registry.
   - will work better with cluster autoscaling than manual node configuration
 - Or, on a cluster where changing the node configuration is inconvenient, use `imagePullSecrets`.
@@ -68,7 +68,7 @@ These are just examples; you are free to develop your own conventions.

 ## Syntax and character set

 _Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
-If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for kubernetes core components.
+If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for Kubernetes core components.

 Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
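A hedged illustration of a prefixed key in practice (the pod name, prefix, and value are placeholders):

```console
$ kubectl label pods my-nginx-pod example.com/release=stable
$ kubectl get pods -l example.com/release=stable
```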
@@ -115,7 +115,7 @@ running in containers. The guide [Collecting log files within containers with Fl

 ## Known issues

-Kubernetes does log rotation for kubernetes components and docker containers. The command `kubectl logs` currently only reads the latest logs, not all historical ones.
+Kubernetes does log rotation for Kubernetes components and docker containers. The command `kubectl logs` currently only reads the latest logs, not all historical ones.

 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -59,13 +59,13 @@ The Kubelet acts as a bridge between the Kubernetes master and the nodes. It man

 ### InfluxDB and Grafana

-A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy to use API to write and fetch time series data. Heapster is setup to use this storage backend by default on most kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service which is how Heapster discovers it.
+A Grafana setup with InfluxDB is a very popular combination for monitoring in the open source world. InfluxDB exposes an easy to use API to write and fetch time series data. Heapster is setup to use this storage backend by default on most Kubernetes clusters. A detailed setup guide can be found [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/influxdb.md). InfluxDB and Grafana run in Pods. The pod exposes itself as a Kubernetes service which is how Heapster discovers it.

 The Grafana container serves Grafana’s UI which provides an easy to configure dashboard interface. The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can easily be customized and expanded. Take a look at the storage schema for InfluxDB [here](https://github.com/GoogleCloudPlatform/heapster/blob/master/docs/storage-schema.md#metrics).

-Here is a video showing how to monitor a kubernetes cluster using heapster, InfluxDB and Grafana:
+Here is a video showing how to monitor a Kubernetes cluster using heapster, InfluxDB and Grafana:

-[![How to monitor a kubernetes cluster using heapster, InfluxDB and Grafana](http://img.youtube.com/vi/SZgqjMrxo3g/0.jpg)](http://www.youtube.com/watch?v=SZgqjMrxo3g)
+[![How to monitor a Kubernetes cluster using heapster, InfluxDB and Grafana](http://img.youtube.com/vi/SZgqjMrxo3g/0.jpg)](http://www.youtube.com/watch?v=SZgqjMrxo3g)

 Here is a snapshot of the default Kubernetes Grafana dashboard that shows the CPU and Memory usage of the entire cluster, individual pods and containers:
@@ -37,7 +37,7 @@ This example shows how to assign a [pod](../pods.md) to a specific [node](../../

 ### Step Zero: Prerequisites

-This example assumes that you have a basic understanding of kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#documentation).
+This example assumes that you have a basic understanding of Kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#documentation).

 ### Step One: Attach label to the node
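A sketch of what that step typically looks like (the node name and label are placeholders):

```console
$ kubectl get nodes
$ kubectl label nodes <node-name> disktype=ssd
```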
@@ -522,7 +522,7 @@ Pod level](#use-case-two-containers).
   run a pod which exposes the secret.
 If multiple replicas of etcd are run, then the secrets will be shared between them.
 By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
-- It is not possible currently to control which users of a kubernetes cluster can
+- It is not possible currently to control which users of a Kubernetes cluster can
   access a secret. Support for this is planned.
 - Currently, anyone with root on any node can read any secret from the apiserver,
   by impersonating the kubelet. It is a planned feature to only send secrets to
@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 # Sharing Cluster Access

-Client access to a running kubernetes cluster can be shared by copying
+Client access to a running Kubernetes cluster can be shared by copying
 the `kubectl` client config bundle ([.kubeconfig](kubeconfig-file.md)).
 This config bundle lives in `$HOME/.kube/config`, and is generated
 by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
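A minimal sketch of that copy (remote user and host are placeholders; merge by hand if the destination already has a config):

```console
$ scp $HOME/.kube/config <user>@<remote-host>:~/.kube/config
```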
@@ -55,7 +55,7 @@ The Kubernetes UI can be used to introspect your current cluster, such as checki

 ### Node Resource Usage

 After accessing Kubernetes UI, you'll see a homepage dynamically listing out all nodes in your current cluster, with related information including internal IP addresses, CPU usage, memory usage, and file systems usage.
-![kubernetes UI home page](k8s-ui-overview.png)
+![Kubernetes UI home page](k8s-ui-overview.png)

 ### Dashboard Views
@@ -64,18 +64,18 @@ Click on the "Views" button in the top-right of the page to see other views avai

 #### Explore View

 The "Explore" view allows you to see the pods, replication controllers, and services in the current cluster easily.
-![kubernetes UI Explore View](k8s-ui-explore.png)
+![Kubernetes UI Explore View](k8s-ui-explore.png)
 The "Group by" dropdown list allows you to group these resources by a number of factors, such as type, name, host, etc.
-![kubernetes UI Explore View - Group by](k8s-ui-explore-groupby.png)
+![Kubernetes UI Explore View - Group by](k8s-ui-explore-groupby.png)
 You can also create filters by clicking on the down triangle of any listed resource instances and choosing which filters you want to add.
-![kubernetes UI Explore View - Filter](k8s-ui-explore-filter.png)
+![Kubernetes UI Explore View - Filter](k8s-ui-explore-filter.png)
 To see more details of each resource instance, simply click on it.
-![kubernetes UI - Pod](k8s-ui-explore-poddetail.png)
+![Kubernetes UI - Pod](k8s-ui-explore-poddetail.png)

 ### Other Views

 Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply list information about each type of resource. You can also click on any instance for more details.
-![kubernetes UI - Nodes](k8s-ui-nodes.png)
+![Kubernetes UI - Nodes](k8s-ui-nodes.png)

 ## More Information
@@ -220,7 +220,7 @@ For more information, see [Services](../services.md).

 ## Health Checking

-When I write code it never crashes, right? Sadly the [kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues) indicates otherwise...
+When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues) indicates otherwise...

 Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
 and repair of your application. That way, a system, outside of your application itself, is responsible for monitoring the
@@ -36,7 +36,7 @@ Documentation for other releases can be found at

 *This document is aimed at users who have worked through some of the examples,
 and who want to learn more about using kubectl to manage resources such
 as pods and services. Users who want to access the REST API directly,
-and developers who want to extend the kubernetes API should
+and developers who want to extend the Kubernetes API should
 refer to the [api conventions](../devel/api-conventions.md) and
 the [api document](../api.md).*