mirror of https://github.com/k3s-io/k3s
commit f4c2a05eea
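The hunks below can also be inspected locally with plain git; a minimal sketch, assuming the mirror is cloned and the abbreviated hash resolves in that checkout:

```sh
# Show the full diff of this commit (hash taken from the header above).
git show f4c2a05eea

# Or list only the files it touches.
git show --stat f4c2a05eea
```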
@@ -1,6 +1,6 @@
 # Contributing guidelines

-Want to hack on kubernetes? Yay!
+Want to hack on Kubernetes? Yay!

 ## Developer Guide

@@ -45,20 +45,20 @@ All Docker names are suffixed with a hash derived from the file path (to allow c

 ## Proxy Settings

-If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for kubernetes build, the following environment variables should be defined.
+If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for Kubernetes build, the following environment variables should be defined.

 ```
 export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
 export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
 ```

-Optionally, you can specify addresses of no proxy for kubernetes build, for example
+Optionally, you can specify addresses of no proxy for Kubernetes build, for example

 ```
 export KUBERNETES_NO_PROXY=127.0.0.1
 ```

-If you are using sudo to make kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.
+If you are using sudo to make Kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.

 ## Really Remote Docker Engine

@@ -1,6 +1,6 @@
 # Cluster Configuration

-##### Deprecation Notice: This directory has entered maintainence mode and will not be accepting new providers. Please submit new automation deployments to [kube-deploy](https://github.com/kubernetes/kube-deploy). Deployments in this directory will continue to be maintained and supported at their current level of support.
+##### Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to [kube-deploy](https://github.com/kubernetes/kube-deploy). Deployments in this directory will continue to be maintained and supported at their current level of support.

 The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.

@@ -1,6 +1,6 @@
 # Python image

 The python image here is used by OS distros that don't have python installed to
-run python scripts to parse the yaml files in the addon updator script.
+run python scripts to parse the yaml files in the addon updater script.

 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/python-image/README.md?pixel)]()
@@ -91,7 +91,7 @@ spec:
 This tells Kubernetes that you want to use storage, and the `PersistentVolume`
 you created before will be bound to this claim (unless you have other
 `PersistentVolumes` in which case those might get bound instead). This claim
-gives you the rigth to use this storage until you release the claim.
+gives you the right to use this storage until you release the claim.

 ## Run the registry

@@ -24,7 +24,7 @@ deploy a bundle.

 You will need to
 [install the Juju client](https://jujucharms.com/get-started) and
-`juju-quickstart` as pre-requisites. To deploy the bundle use
+`juju-quickstart` as prerequisites. To deploy the bundle use
 `juju-quickstart` which runs on Mac OS (`brew install
 juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).

@@ -191,7 +191,7 @@ Send us pull requests! We'll send you a cookie if they include tests and docs.
 The charms and bundles are in the [kubernetes](https://github.com/kubernetes/kubernetes)
 repository in github.

-- [kubernetes-master charm on Github](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes-master)
+- [kubernetes-master charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes-master)
 - [kubernetes charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes)


@@ -6,7 +6,7 @@ It basically does the following:
 - iterate over all files in the given doc root.
 - for each file split it into a slice (mungeLines) of lines (mungeLine)
 - a mungeline has metadata about each line typically determined by a 'fast' regex.
-- metadata contains things like 'is inside a preformmatted block'
+- metadata contains things like 'is inside a preformatted block'
 - contains a markdown header
 - has a link to another file
 - etc..
@@ -113,7 +113,7 @@ Ubuntu system. The profiles can be found at `{securityfs}/apparmor/profiles`

 ## API Changes

-The intial alpha support of AppArmor will follow the pattern
+The initial alpha support of AppArmor will follow the pattern
 [used by seccomp](https://github.com/kubernetes/kubernetes/pull/25324) and specify profiles through
 annotations. Profiles can be specified per-container through pod annotations. The annotation format
 is a key matching the container, and a profile name value:
@@ -56,7 +56,7 @@ ship with all of the requirements for the node specification by default.

 **Objective**: Generate security certificates used to configure secure communication between client, master and nodes

-TODO: Enumerate ceritificates which have to be generated.
+TODO: Enumerate certificates which have to be generated.

 ## Step 3: Deploy master

@@ -245,7 +245,7 @@ discussion and may be achieved alternatively:
 **Imperative pod-level interface**
 The interface contains only CreatePod(), StartPod(), StopPod() and RemovePod().
 This implies that the runtime needs to take over container lifecycle
-manangement (i.e., enforce restart policy), lifecycle hooks, liveness checks,
+management (i.e., enforce restart policy), lifecycle hooks, liveness checks,
 etc. Kubelet will mainly be responsible for interfacing with the apiserver, and
 can potentially become a very thin daemon.
 - Pros: Lower maintenance overhead for the Kubernetes maintainers if `Docker`
@@ -86,7 +86,7 @@ To prevent re-adoption of an object during deletion the `DeletionTimestamp` will
 Necessary related work:
 * `OwnerReferences` are correctly added/deleted,
 * GarbageCollector removes dangling references,
-* Controllers don't take any meaningfull actions when `DeletionTimestamps` is set.
+* Controllers don't take any meaningful actions when `DeletionTimestamps` is set.

 # Considered alternatives

@@ -37,7 +37,7 @@ the pods matching the service pod-selector.

 ## Motivation

-The current implemention requires that the cloud loadbalancer balances traffic across all
+The current implementation requires that the cloud loadbalancer balances traffic across all
 Kubernetes worker nodes, and this traffic is then equally distributed to all the backend
 pods for that service.
 Due to the DNAT required to redirect the traffic to its ultimate destination, the return
@@ -16,7 +16,7 @@ federated servers.
 * Unblock new APIs from core kubernetes team review: A lot of new API proposals
 are currently blocked on review from the core kubernetes team. By allowing
 developers to expose their APIs as a separate server and enabling the cluster
-admin to use it without any change to the core kubernetes reporsitory, we
+admin to use it without any change to the core kubernetes repository, we
 unblock these APIs.
 * Place for staging experimental APIs: New APIs can remain in separate
 federated servers until they become stable, at which point, they can be moved
@@ -167,7 +167,7 @@ resource.

 This proposal is not enough for hosted cluster users, but allows us to improve
 that in the future.
-On a hosted kubernetes cluster, for eg on GKE - where Google manages the kubernetes
+On a hosted kubernetes cluster, for e.g. on GKE - where Google manages the kubernetes
 API server, users will have to bring up and maintain the proxy and federated servers
 themselves.
 Other system components like the various controllers, will not be aware of the
@@ -102,7 +102,7 @@ The first is accomplished in this PR, while a timeline for 2. and 3. is TDB. To
 - Put: This is a request for a lease. If the nodecontroller is allocating CIDRs we can probably just no-op.
 * `/network/reservations`: TDB, we can probably use this to accommodate node controller allocating CIDR instead of flannel requesting it

-The ick-iest part of this implementation is going to the `GET /network/leases`, i.e the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
+The ick-iest part of this implementation is going to the `GET /network/leases`, i.e. the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
 * Watch all nodes, ignore heartbeats
 * On each change, figure out the lease for the node, construct a [lease watch result](https://github.com/coreos/flannel/blob/0bf263826eab1707be5262703a8092c7d15e0be4/subnet/subnet.go#L72), and send it down the watch with the RV from the node
 * Implement a lease list that does a similar translation
@@ -52,7 +52,7 @@ The admission controller code will go in `plugin/pkg/admission/imagepolicy`.
 There will be a cache of decisions in the admission controller.

 If the apiserver cannot reach the webhook backend, it will log a warning and either admit or deny the pod.
-A flag will control whether it admits or denys on failure.
+A flag will control whether it admits or denies on failure.
 The rationale for deny is that an attacker could DoS the backend or wait for it to be down, and then sneak a
 bad pod into the system. The rationale for allow here is that, if the cluster admin also does
 after-the-fact auditing of what images were run (which we think will be common), this will catch
@@ -88,7 +88,7 @@ type JobSpec struct {
 }
 ```

-`JobStatus` structure is defined to contain informations about pods executing
+`JobStatus` structure is defined to contain information about pods executing
 specified job. The structure holds information about pods currently executing
 the job.

@@ -63,10 +63,10 @@ by the latter command.
 When clusters utilize authorization plugins access decisions are based on the
 correct configuration of an auth-N plugin, an auth-Z plugin, and client side
 credentials. Being rejected then begs several questions. Is the user's
-kubeconfig mis-configured? Is the authorization plugin setup wrong? Is the user
+kubeconfig misconfigured? Is the authorization plugin setup wrong? Is the user
 authenticating as a different user than the one they assume?

-To help `kubectl login` diagnose mis-configured credentials, responses from the
+To help `kubectl login` diagnose misconfigured credentials, responses from the
 API server to authenticated requests SHOULD include the `Authentication-Info`
 header as defined in [RFC 7615](https://tools.ietf.org/html/rfc7615). The value
 will hold name value pairs for `username` and `uid`. Since usernames and IDs
@@ -145,7 +145,7 @@ The following node conditions are defined that correspond to the specified evict
 | Node Condition | Eviction Signal | Description |
 |----------------|------------------|------------------------------------------------------------------|
 | MemoryPressure | memory.available | Available memory on the node has satisfied an eviction threshold |
-| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesytem or image filesystem has satisfied an eviction threshold |
+| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |

 The `kubelet` will continue to report node status updates at the frequency specified by
 `--node-status-update-frequency` which defaults to `10s`.
@@ -300,7 +300,7 @@ In the future, if we store logs of dead containers outside of the container itse
 Once the lifetime of containers and logs are split, kubelet can support more user friendly policies
 around log evictions. `kubelet` can delete logs of the oldest containers first.
 Since logs from the first and the most recent incarnation of a container is the most important for most applications,
-kubelet can try to preserve these logs and aggresively delete logs from other container incarnations.
+kubelet can try to preserve these logs and aggressively delete logs from other container incarnations.

 Until logs are split from container's lifetime, `kubelet` can delete dead containers to free up disk space.

@@ -46,12 +46,12 @@ For a large enterprise where computing power is the king, one may imagine the fo
 - `linux/ppc64le`: For running highly-optimized software; especially massive compute tasks
 - `windows/amd64`: For running services that are only compatible on windows; e.g. business applications written in C# .NET

-For a mid-sized business where efficency is most important, these could be combinations:
+For a mid-sized business where efficiency is most important, these could be combinations:
 - `linux/amd64`: For running most of the general-purpose computing tasks, plus tasks that require very high single-core performance.
 - `linux/arm64`: For running webservices and high-density tasks => the cluster could autoscale in a way that `linux/amd64` machines could hibernate at night in order to minimize power usage.

-For a small business or university, arm is often sufficent:
-- `linux/arm`: Draws very little power, and can run web sites and app backends efficently on Scaleway for example.
+For a small business or university, arm is often sufficient:
+- `linux/arm`: Draws very little power, and can run web sites and app backends efficiently on Scaleway for example.

 And last but not least; Raspberry Pi's should be used for [education at universities](http://kubecloud.io/) and are great for **demoing Kubernetes' features at conferences.**

@@ -514,14 +514,14 @@ Linux 0a7da80f1665 4.2.0-25-generic #30-Ubuntu SMP Mon Jan 18 12:31:50 UTC 2016

 Here a linux module called `binfmt_misc` registered the "magic numbers" in the kernel, so the kernel may detect which architecture a binary is, and prepend the call with `/usr/bin/qemu-(arm|aarch64|ppc64le)-static`. For example, `/usr/bin/qemu-arm-static` is a statically linked `amd64` binary that translates all ARM syscalls to `amd64` syscalls.

-The multiarch guys have done a great job here, you may find the source for this and other images at [Github](https://github.com/multiarch)
+The multiarch guys have done a great job here, you may find the source for this and other images at [GitHub](https://github.com/multiarch)


 ## Implementation

 ## History

-32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on Github the 31st of September 2015)
+32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on GitHub the 31st of September 2015)
 served as a way of running Kubernetes on ARM devices easily.
 The 30th of November 2015, a tracking issue about making Kubernetes run on ARM was opened: [#17981](https://github.com/kubernetes/kubernetes/issues/17981). It later shifted focus to how to make Kubernetes a more platform-independent system.

@@ -18,7 +18,7 @@ chosen networking solution.

 ## Implementation

-The implmentation in Kubernetes consists of:
+The implementation in Kubernetes consists of:
 - A v1beta1 NetworkPolicy API object
 - A structure on the `Namespace` object to control policy, to be developed as an annotation for now.

@@ -48,7 +48,7 @@ Basic ideas:

 ### Logging monitoring

-Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @bredanburns, is to compute the rate in which log files
+Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @brendandburns, is to compute the rate in which log files
 grow in e2e tests.

 Basic ideas:
@@ -70,7 +70,7 @@ Basic ideas:
 Reverse of REST call monitoring done in the API server. We need to know when a given component increases a pressure it puts on the API server. As a proxy for number of
 requests sent we can track how saturated are rate limiters. This has additional advantage of giving us data needed to fine-tune rate limiter constants.

-Because we have rate limitting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.
+Because we have rate limiting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.

 Basic ideas:
 - percentage of used non-burst limit,
@@ -383,7 +383,7 @@ The implementation goals of the first milestone are outlined below.
 - [x] Add PodContainerManagerImpl Create and Destroy methods which implements the respective PodContainerManager methods using a cgroupfs driver. #28017
 - [x] Have docker manager create container cgroups under pod level cgroups. Inject creation and deletion of pod cgroups into the pod workers. Add e2e tests to test this behaviour. #29049
 - [x] Add support for updating policy for the pod cgroups. Add e2e tests to test this behaviour. #29087
-- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before eenabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
+- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before enabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
 - [ ] Removing terminated pod's Cgroup : We need to cleanup the pod's cgroup once the pod is terminated. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29927).
 - [ ] Kubelet needs to ensure that the cgroup settings are what the kubelet expects them to be. If security is not of concern, one can assume that once kubelet applies cgroups setting successfully, the values will never change unless kubelet changes it. If security is of concern, then kubelet will have to ensure that the cgroup values meet its requirements and then continue to watch for updates to cgroups via inotify and re-apply cgroup values if necessary.
 Updating QoS limits needs to happen before pod cgroups values are updated. When pod cgroups are being deleted, QoS limits have to be updated after pod cgroup values have been updated for deletion or pod cgroups have been removed. Given that kubelet doesn't have any checkpoints and updates to QoS and pod cgroups are not atomic, kubelet needs to reconcile cgroups status whenever it restarts to ensure that the cgroups values match kubelet's expectation.
@@ -56,7 +56,7 @@ attributes.
 Some use cases require the containers in a pod to run with different security settings. As an
 example, a user may want to have a pod with two containers, one of which runs as root with the
 privileged setting, and one that runs as a non-root UID. To support use cases like this, it should
-be possible to override appropriate (ie, not intrinsically pod-level) security settings for
+be possible to override appropriate (i.e., not intrinsically pod-level) security settings for
 individual containers.

 ## Proposed Design
@@ -58,7 +58,7 @@ obtained by summing over usage of all nodes in the cluster.
 This feature is not yet specified/implemented although it seems reasonable to provide users information
 about resource usage on pod/node level.

-Since this feature has not been fully specified yet it will be not supported initally in the API although
+Since this feature has not been fully specified yet it will be not supported initially in the API although
 it will be probably possible to provide a reasonable implementation of the feature anyway.

 #### Kubernetes dashboard
@@ -67,7 +67,7 @@ it will be probably possible to provide a reasonable implementation of the featu
 in timeseries format from relatively long period of time. The aggregations should be also possible on various levels
 including replication controllers, deployments, services, etc.

-Since the use case is complicated it will not be supported initally in the API and they will query Heapster
+Since the use case is complicated it will not be supported initially in the API and they will query Heapster
 directly using some custom API there.

 ## Proposed API
@@ -303,7 +303,7 @@ in namespace `ns1` might create a job `nightly-earnings-report-3m4d3`, and later
 a job called `nightly-earnings-report-6k7ts`. This is consistent with pods, but
 does not give the user much information.

-Alternatively, we can use time as a uniqifier. For example, the same scheduledJob could
+Alternatively, we can use time as a uniquifier. For example, the same scheduledJob could
 create a job called `nightly-earnings-report-2016-May-19`.
 However, for Jobs that run more than once per day, we would need to represent
 time as well as date. Standard date formats (e.g. RFC 3339) use colons for time.
@@ -172,7 +172,7 @@ not specify all files in the object.
 The are two downside:

 * The files are symlinks pointint to the real file, and the realfile
-permissions are only set. The symlink has the clasic symlink permissions.
+permissions are only set. The symlink has the classic symlink permissions.
 This is something already present in 1.3, and it seems applications like ssh
 work just fine with that. Something worth mentioning, but doesn't seem to be
 an issue.
@@ -10,7 +10,7 @@ There are two main motivators for Template functionality in Kubernetes: Control

 Today the replication controller defines a PodTemplate which allows it to instantiate multiple pods with identical characteristics.
 This is useful but limited. Stateful applications have a need to instantiate multiple instances of a more sophisticated topology
-than just a single pod (eg they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
+than just a single pod (e.g. they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
 instances of a given Template definition. This capability would be immediately useful to the [PetSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.

 Similarly the [Service Catalog proposal](https://github.com/kubernetes/kubernetes/pull/17543) could leverage template instantiation as a mechanism for claiming service instances.
@@ -22,7 +22,7 @@ Kubernetes gives developers a platform on which to run images and many configura
 constructing a cohesive application made up of images and configuration objects is currently difficult. Applications
 require:

-* Information sharing between images (eg one image provides a DB service, another consumes it)
+* Information sharing between images (e.g. one image provides a DB service, another consumes it)
 * Configuration/tuning settings (memory sizes, queue limits)
 * Unique/customizable identifiers (service names, routes)

@@ -30,7 +30,7 @@ Application authors know which values should be tunable and what information mus
 consistent way for an application author to define that set of information so that application consumers can easily deploy
 an application and make appropriate decisions about the tunable parameters the author intended to expose.

-Furthermore, even if an application author provides consumers with a set of API object definitions (eg a set of yaml files)
+Furthermore, even if an application author provides consumers with a set of API object definitions (e.g. a set of yaml files)
 it is difficult to build a UI around those objects that would allow the deployer to modify names in one place without
 potentially breaking assumed linkages to other pieces. There is also no prescriptive way to define which configuration
 values are appropriate for a deployer to tune or what the parameters control.
@@ -40,14 +40,14 @@ values are appropriate for a deployer to tune or what the parameters control.
 ### Use cases for templates in general

 * Providing a full baked application experience in a single portable object that can be repeatably deployed in different environments.
-* eg Wordpress deployment with separate database pod/replica controller
+* e.g. Wordpress deployment with separate database pod/replica controller
 * Complex service/replication controller/volume topologies
 * Bulk object creation
 * Provide a management mechanism for deleting/uninstalling an entire set of components related to a single deployed application
 * Providing a library of predefined application definitions that users can select from
 * Enabling the creation of user interfaces that can guide an application deployer through the deployment process with descriptive help about the configuration value decisions they are making, and useful default values where appropriate
 * Exporting a set of objects in a namespace as a template so the topology can be inspected/visualized or recreated in another environment
-* Controllers that need to instantiate multiple instances of identical objects (eg PetSets).
+* Controllers that need to instantiate multiple instances of identical objects (e.g. PetSets).


 ### Use cases for parameters within templates
@@ -59,9 +59,9 @@ values are appropriate for a deployer to tune or what the parameters control.
 * Allow simple, declarative defaulting of parameter values and expose them to end users in an approachable way - a parameter
 like “MySQL table space” can be parameterized in images as an env var - the template parameters declare the parameter, give
 it a friendly name, give it a reasonable default, and informs the user what tuning options are available.
-* Customization of component names to avoid collisions and ensure matched labeling (eg replica selector value and pod label are
+* Customization of component names to avoid collisions and ensure matched labeling (e.g. replica selector value and pod label are
 user provided and in sync).
-* Customize cross-component references (eg user provides the name of a secret that already exists in their namespace, to use in
+* Customize cross-component references (e.g. user provides the name of a secret that already exists in their namespace, to use in
 a pod as a TLS cert).
 * Provide guidance to users for parameters such as default values, descriptions, and whether or not a particular parameter value
 is required or can be left blank.
@@ -410,7 +410,7 @@ The api endpoint will then:
 returned.
 5. Return the processed template object. (or List, depending on the choice made when this is implemented)

-The client can now either return the processed template to the user in a desired form (eg json or yaml), or directly iterate the
+The client can now either return the processed template to the user in a desired form (e.g. json or yaml), or directly iterate the
 api objects within the template, invoking the appropriate object creation api endpoint for each element. (If the api returns
 a List, the client would simply iterate the list to create the objects).

@@ -453,9 +453,9 @@ automatic generation of passwords.
 (mapped to use cases described above)

 * [Share passwords](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L146-L152)
-* [Simple deployment-time customization of “app” configuration via environment values](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L108-L126) (eg memory tuning, resource limits, etc)
+* [Simple deployment-time customization of “app” configuration via environment values](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L108-L126) (e.g. memory tuning, resource limits, etc)
 * [Customization of component names with referential integrity](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L199-L207)
-* [Customize cross-component references](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L78-L83) (eg user provides the name of a secret that already exists in their namespace, to use in a pod as a TLS cert)
+* [Customize cross-component references](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L78-L83) (e.g. user provides the name of a secret that already exists in their namespace, to use in a pod as a TLS cert)

 ## Requirements analysis

@@ -546,7 +546,7 @@ fields to be substituted by a parameter value use the "$(parameter)" syntax whic
 value of `parameter` should be matched to a parameter with that name, and the value of the matched parameter substituted into
 the field value.

-Other suggestions include a path/map approach in which a list of field paths (eg json path expressions) and corresponding
+Other suggestions include a path/map approach in which a list of field paths (e.g. json path expressions) and corresponding
 parameter names are provided. The substitution process would walk the map, replacing fields with the appropriate
 parameter value. This approach makes templates more fragile from the perspective of editing/refactoring as field paths
 may change, thus breaking the map. There is of course also risk of breaking references with the previous scheme, but
@@ -560,7 +560,7 @@ Openshift defines templates as a first class resource so they can be created/ret

 Openshift handles template processing via a server endpoint which consumes a template object from the client and returns the list of objects
 produced by processing the template. It is also possible to handle the entire template processing flow via the client, but this was deemed
-undesirable as it would force each client tool to reimplement template processing (eg the standard CLI tool, an eclipse plugin, a plugin for a CI system like Jenkins, etc). The assumption in this proposal is that server side template processing is the preferred implementation approach for
+undesirable as it would force each client tool to reimplement template processing (e.g. the standard CLI tool, an eclipse plugin, a plugin for a CI system like Jenkins, etc). The assumption in this proposal is that server side template processing is the preferred implementation approach for
 this reason.


@@ -140,7 +140,7 @@ We propose that:
 controller attempts to delete the provisioned volume and creates an event
 on the claim

-Existing behavior is un-changed for claims that do not specify
+Existing behavior is unchanged for claims that do not specify
 `claim.Spec.Class`.

 * **Out of tree provisioning**
@@ -123,7 +123,7 @@ log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFil

 ## Access the service

-*Don't forget* that services in Kubernetes are only acessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.
+*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.

 ```
 $ kubectl get service elasticsearch
@@ -2,7 +2,7 @@

 This example shows how to use experimental persistent volume provisioning.

-### Pre-requisites
+### Prerequisites

 This example assumes that you have an understanding of Kubernetes administration and can modify the
 scripts that launch kube-controller-manager.
@@ -1,11 +1,11 @@
 ### explorer

-Explorer is a little container for examining the runtime environment kubernetes produces for your pods.
+Explorer is a little container for examining the runtime environment Kubernetes produces for your pods.

 The intended use is to substitute gcr.io/google_containers/explorer for your intended container, and then visit it via the proxy.

 Currently, you can look at:
-* The environment variables to make sure kubernetes is doing what you expect.
+* The environment variables to make sure Kubernetes is doing what you expect.
 * The filesystem to make sure the mounted volumes and files are also what you expect.
 * Perform DNS lookups, to see how DNS works.

@@ -98,7 +98,7 @@ $ kubectl delete -f examples/guestbook/

 ### Step One: Start up the redis master

-Before continuing to the gory details, we also recommend you to read [Quick walkthrough](../../docs/user-guide/#quick-walkthrough), [Thorough walkthough](../../docs/user-guide/#thorough-walkthrough) and [Concept guide](../../docs/user-guide/#concept-guide).
+Before continuing to the gory details, we also recommend you to read [Quick walkthrough](../../docs/user-guide/#quick-walkthrough), [Thorough walkthrough](../../docs/user-guide/#thorough-walkthrough) and [Concept guide](../../docs/user-guide/#concept-guide).
 **Note**: The redis master in this example is *not* highly available. Making it highly available would be an interesting, but intricate exercise — redis doesn't actually support multi-master Deployments at this point in time, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.

 #### Define a Deployment
@@ -68,7 +68,7 @@ Examples are not:
 in the example config.
 * Only use the code highlighting types
 [supported by Rouge](https://github.com/jneen/rouge/wiki/list-of-supported-languages-and-lexers),
-as this is what Github Pages uses.
+as this is what GitHub Pages uses.
 * Commands to be copied use the `shell` syntax highlighting type, and
 do not include any kind of prompt.
 * Example output is in a separate block quote to distinguish it from
@@ -1,4 +1,4 @@
-# Mysql installation with cinder volume plugin
+# MySQL installation with cinder volume plugin

 Cinder is a Block Storage service for OpenStack. This example shows how it can be used as an attachment mounted to a pod in Kubernets.

@@ -23,7 +23,7 @@ Demonstrated Kubernetes Concepts:

 ## Quickstart

-Put your desired mysql password in a file called `password.txt` with
+Put your desired MySQL password in a file called `password.txt` with
 no trailing newline. The first `tr` command will remove the newline if
 your editor added one.

@@ -6,7 +6,7 @@ This example will create a DaemonSet which places the New Relic monitoring agent

 ### Step 0: Prerequisites

-This process will create priviliged containers which have full access to the host system for logging. Beware of the security implications of this.
+This process will create privileged containers which have full access to the host system for logging. Beware of the security implications of this.

 If you are using a Salt based KUBERNETES\_PROVIDER (**gce**, **vagrant**, **aws**), you should make sure the creation of privileged containers via the API is enabled. Check `cluster/saltbase/pillar/privilege.sls`.

@@ -168,7 +168,7 @@ You now have 10 Firefox and 10 Chrome nodes, happy Seleniuming!

 ### Debugging

-Sometimes it is neccessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
+Sometimes it is necessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.

 ```console
 kubectl port-forward --pod=POD_NAME 5900:5900
@@ -101,7 +101,7 @@ kubectl scale rc cassandra --replicas=4
 kubectl delete rc cassandra

 #
-# Create a daemonset to place a cassandra node on each kubernetes node
+# Create a DaemonSet to place a cassandra node on each kubernetes node
 #

 kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
@@ -659,7 +659,7 @@ cluster can react by re-replicating the data to other running nodes.

 `DaemonSet` is designed to place a single pod on each node in the Kubernetes
 cluster. That will give us data redundancy. Let's create a
-daemonset to start our storage cluster:
+DaemonSet to start our storage cluster:

 <!-- BEGIN MUNGE: EXAMPLE cassandra-daemonset.yaml -->

@@ -725,16 +725,16 @@ spec:
 [Download example](cassandra-daemonset.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->

-Most of this Daemonset definition is identical to the ReplicationController
+Most of this DaemonSet definition is identical to the ReplicationController
 definition above; it simply gives the daemon set a recipe to use when it creates
 new Cassandra pods, and targets all Cassandra nodes in the cluster.

 Differentiating aspects are the `nodeSelector` attribute, which allows the
-Daemonset to target a specific subset of nodes (you can label nodes just like
+DaemonSet to target a specific subset of nodes (you can label nodes just like
 other resources), and the lack of a `replicas` attribute due to the 1-to-1 node-
 pod relationship.

-Create this daemonset:
+Create this DaemonSet:

 ```console

@@ -750,7 +750,7 @@ $ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --valida

 ```

-You can see the daemonset running:
+You can see the DaemonSet running:

 ```console

@@ -793,8 +793,8 @@ UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e
 ```

 **Note**: This example had you delete the cassandra Replication Controller before
-you created the Daemonset. This is because – to keep this example simple – the
-RC and the Daemonset are using the same `app=cassandra` label (so that their pods map to the
+you created the DaemonSet. This is because – to keep this example simple – the
+RC and the DaemonSet are using the same `app=cassandra` label (so that their pods map to the
 service we created, and so that the SeedProvider can identify them).

 If we didn't delete the RC first, the two resources would conflict with
@@ -821,7 +821,7 @@ In Cassandra, a `SeedProvider` bootstraps the gossip protocol that Cassandra use
 Cassandra nodes. Seed addresses are hosts deemed as contact points. Cassandra
 instances use the seed list to find each other and learn the topology of the
 ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
-discovers Cassandra seeds IP addresses vis the Kubernetes API, those Cassandra
+discovers Cassandra seeds IP addresses via the Kubernetes API, those Cassandra
 instances are defined within the Cassandra Service.

 Refer to the custom seed provider [README](java/README.md) for further
@@ -16,7 +16,7 @@ The basic idea is this: three replication controllers with a single pod, corresp

 By defaults, there are only three pods (hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know the number of nodes must always be odd.

-When the replication controller is created, it results in the corresponding container to start, run an entrypoint script that installs the mysql system tables, set up users, and build up a list of servers that is used with the galera parameter ```wsrep_cluster_address```. This is a list of running nodes that galera uses for election of a node to obtain SST (Single State Transfer) from.
+When the replication controller is created, it results in the corresponding container to start, run an entrypoint script that installs the MySQL system tables, set up users, and build up a list of servers that is used with the galera parameter ```wsrep_cluster_address```. This is a list of running nodes that galera uses for election of a node to obtain SST (Single State Transfer) from.

 Note: Kubernetes best-practices is to pre-create the services for each controller, and the configuration files which contain the service and replication controller for each node, when created, will result in both a service and replication contrller running for the given node. An important thing to know is that it's important that initially pxc-node1.yaml be processed first and no other pxc-nodeN services that don't have corresponding replication controllers should exist. The reason for this is that if there is a node in ```wsrep_clsuter_address``` without a backing galera node there will be nothing to obtain SST from which will cause the node to shut itself down and the container in question to exit (and another soon relaunched, repeatedly).

@@ -32,7 +32,7 @@ Create the service and replication controller for the first node:

 Repeat the same previous steps for ```pxc-node2``` and ```pxc-node3```

-When complete, you should be able connect with a mysql client to the IP address
+When complete, you should be able connect with a MySQL client to the IP address
 service ```pxc-cluster``` to find a working cluster

 ### An example of creating a cluster
@@ -1,6 +1,6 @@
 ## Obsolete Config Files From Docs

-These config files were orginally from docs, but have been separated
+These config files were originally from docs, but have been separated
 and put here to be used by various tests.


@@ -1,6 +1,6 @@
 # Redis petset e2e tester

-The image in this directory is the init container for contrib/pets/redis but for one difference, it bakes a specific verfion of redis into the base image so we get deterministic test results without having to depend on a redis download server. Discussing the tradeoffs to either approach (download the version at runtime, or maintain an image per version) are outside the scope of this document.
+The image in this directory is the init container for contrib/pets/redis but for one difference, it bakes a specific version of redis into the base image so we get deterministic test results without having to depend on a redis download server. Discussing the tradeoffs to either approach (download the version at runtime, or maintain an image per version) are outside the scope of this document.

 You can execute the image locally via:
 ```
@@ -1,6 +1,6 @@
 # Zookeeper petset e2e tester

-The image in this directory is the init container for contrib/pets/zookeeper but for one difference, it bakes a specific verfion of zookeeper into the base image so we get deterministic test results without having to depend on a zookeeper download server. Discussing the tradeoffs to either approach (download the version at runtime, or maintain an image per version) are outside the scope of this document.
+The image in this directory is the init container for contrib/pets/zookeeper but for one difference, it bakes a specific version of zookeeper into the base image so we get deterministic test results without having to depend on a zookeeper download server. Discussing the tradeoffs to either approach (download the version at runtime, or maintain an image per version) are outside the scope of this document.

 You can execute the image locally via:
 ```