Job and DaemonSet documentation.

pull/6/head
Eric Tune 2015-09-15 17:29:44 -07:00
parent 2cc9ed6b8d
commit bf9e93250e
9 changed files with 562 additions and 11 deletions

18
docs/admin/daemon.yaml Normal file

@ -0,0 +1,18 @@
apiVersion: experimental/v1alpha1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
spec:
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        daemon: prom-node-exp
    spec:
      containers:
      - name: c
        image: prom/prometheus
        ports:
        - containerPort: 9090
          hostPort: 9090
          name: serverport

219
docs/admin/daemons.md Normal file

@ -0,0 +1,219 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.
<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/admin/daemons.md).
Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Daemon Sets
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Daemon Sets](#daemon-sets)
- [What is a _Daemon Set_?](#what-is-a-daemon-set)
- [Writing a DaemonSet Spec](#writing-a-daemonset-spec)
- [Required Fields](#required-fields)
- [Pod Template](#pod-template)
- [Pod Selector](#pod-selector)
- [Running Pods on Only Some Nodes](#running-pods-on-only-some-nodes)
- [How Daemon Pods are Scheduled](#how-daemon-pods-are-scheduled)
- [Communicating with DaemonSet Pods](#communicating-with-daemonset-pods)
- [Updating a DaemonSet](#updating-a-daemonset)
- [Alternatives to Daemon Set](#alternatives-to-daemon-set)
- [Init Scripts](#init-scripts)
- [Bare Pods](#bare-pods)
- [Static Pods](#static-pods)
- [Replication Controller](#replication-controller)
- [Caveats](#caveats)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _Daemon Set_?
A _Daemon Set_ ensures that all (or some) nodes run a copy of a pod. As nodes are added to the
cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage
collected. Deleting a Daemon Set will clean up the pods it created.
Some typical uses of a Daemon Set are:
- running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
- running a logs collection daemon on every node, such as `fluentd` or `logstash`.
- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
https://github.com/prometheus/node_exporter), `collectd`, New Relic agent, or Ganglia `gmond`.
In a simple case, one Daemon Set, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon,
but with different flags and/or different memory and cpu requests for different hardware types.
## Writing a DaemonSet Spec
### Required Fields
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](../user-guide/simple-yaml.md),
[here](../user-guide/configuring-containers.md), and [here](../user-guide/working-with-resources.md).
A DaemonSet also needs a [`.spec`](../devel/api-conventions.md#spec-and-status) section.
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](../user-guide/replication-controller.md#pod-template).
It has exactly the same schema as a [pod](../user-guide/pods.md), except
it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a pod, a pod template in a DaemonSet has to specify appropriate
labels (see [pod selector](#pod-selector)).
A pod template in a DaemonSet must have a [`RestartPolicy`](../user-guide/pod-states.md)
equal to `Always`, or be unspecified, which defaults to `Always`.
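For example, the `daemon.yaml` added alongside this document defines a DaemonSet named
`prometheus-node-exporter` whose template runs one pod per node (here `restartPolicy` is left
unspecified, so it defaults to `Always`):

```yaml
apiVersion: experimental/v1alpha1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
spec:
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        daemon: prom-node-exp
    spec:
      containers:
      - name: c
        image: prom/prometheus
        ports:
        - containerPort: 9090
          hostPort: 9090
          name: serverport
```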
### Pod Selector
The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
a [ReplicationController](../user-guide/replication-controller.md) or
[Job](../user-guide/jobs.md).
If the `.spec.selector` is specified, it must equal the `.spec.template.metadata.labels`. If not
specified, they default to being equal. Config where these two are specified but unequal will be rejected by the API.
Also you should not normally create any pods whose labels match this selector, either directly, via
another DaemonSet, or via another controller such as ReplicationController. Otherwise, the DaemonSet
controller will think that those pods were created by it. Kubernetes will not stop you from doing
this. One case where you might want to do this is to manually create a pod with a different value on
a node for testing.
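As a sketch, assuming the selector is written as a plain label map (as with a ReplicationController),
an explicit selector for the example above would look like:

```yaml
spec:
  selector:
    daemon: prom-node-exp   # if specified, must equal the template labels below
  template:
    metadata:
      labels:
        daemon: prom-node-exp
```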
### Running Pods on Only Some Nodes
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
selector](../user-guide/node-selection/README.md).
If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on all nodes.
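For example, to restrict the daemon to nodes carrying a hypothetical `ssd=true` label (the label is
illustrative), the pod template could include a node selector like this:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"   # hypothetical node label; only matching nodes get a daemon pod
```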
## How Daemon Pods are Scheduled
Normally, the machine that a pod runs on is selected by the Kubernetes scheduler. However, pods
created by the Daemon controller have the machine already selected (`.spec.nodeName` is specified
when the pod is created, so it is ignored by the scheduler). Therefore:
- the [`unschedulable`](node.md#manual-node-administration) field of a node is not respected
by the daemon set controller.
- the daemon set controller can create pods even when the scheduler has not been started, which can help cluster
bootstrap.
## Communicating with DaemonSet Pods
Some possible patterns for communicating with pods in a DaemonSet are:
- **Push**: Pods in the Daemon Set are configured to send updates to another service, such
as a stats database. They do not have clients.
- **NodeIP and Known Port**: Pods in the Daemon Set use a `hostPort`, so that the pods are reachable
via the node IPs. Clients know the list of node IPs somehow, and know the port by convention.
- **DNS**: Create a [headless service](../user-guide/services.md#headless-services) with the same pod selector,
and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
DNS (see the sketch after this list).
- **Service**: Create a service with the same pod selector, and use the service to reach a
daemon on a random node. (No way to reach specific node.)
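As a sketch of the **DNS** pattern, a headless service selecting the pods from the
`prometheus-node-exporter` example could look like the following (the service name is illustrative;
the port matches the one used above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prom-node-exp       # illustrative name
spec:
  clusterIP: None           # headless: DNS returns the individual pod IPs
  selector:
    daemon: prom-node-exp   # same labels as the DaemonSet's pod template
  ports:
  - name: serverport
    port: 9090
    targetPort: 9090
```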
## Updating a DaemonSet
If node labels are changed, the DaemonSet will promptly add pods to newly matching nodes and delete
pods from newly not-matching nodes.
You can modify the pods that a DaemonSet creates. However, pods do not allow all
fields to be updated. Also, the DaemonSet controller will use the original template the next
time a node (even with the same name) is created.
You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the pods
will be left on the nodes. You can then create a new DaemonSet with a different template.
The new DaemonSet with the different template will recognize all the existing pods as having
matching labels. It will not modify or delete them despite a mismatch in the pod template.
You will need to force new pod creation by deleting the pod or deleting the node.
You cannot update a DaemonSet.
Support for updating DaemonSets and controlled updating of nodes is planned.
## Alternatives to Daemon Set
### Init Scripts
It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
`init`, `upstartd`, or `systemd`). This is perfectly fine. However, there are several advantages to
running such processes via a DaemonSet:
- Ability to monitor and manage logs for daemons in the same way as applications.
- Same config language and tools (e.g. pod templates, `kubectl`) for daemons and applications.
- Future versions of Kubernetes will likely support integration between DaemonSet-created
pods and node upgrade workflows.
- Running daemons in containers with resource limits increases the isolation of daemons from app
containers. However, this can also be accomplished by running the daemons in a container but not in a pod
(e.g. started directly via Docker).
### Bare Pods
It is possible to create pods directly which specify a particular node to run on. However,
a Daemon Set replaces pods that are deleted or terminated for any reason, such as node failure or
disruptive node maintenance (for example, a kernel upgrade). For this reason, you should
use a Daemon Set rather than creating individual pods.
### Static Pods
It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
are called [static pods](static-pods.md).
Unlike DaemonSet, static pods cannot be managed with kubectl
or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
in cluster bootstrapping cases. Also, static pods may be deprecated in the future.
### Replication Controller
Daemon Sets are similar to [Replication Controllers](../user-guide/replication-controller.md) in that
they both create pods, and those pods have processes which are not expected to terminate (e.g. web servers,
storage servers).
Use a replication controller for stateless services, like frontends, where scaling up and down the
number of replicas and rolling out updates are more important than controlling exactly which host
the pod runs on. Use a Daemon Set when it is important that a copy of a pod always runs on
all or certain hosts, and when it needs to start before other pods.
## Caveats
DaemonSet is part of the experimental API group, so it is not subject to the same compatibility
guarantees as objects in the main API. It may not be enabled. Enable by setting
`--runtime-config=experimental/v1alpha1` on the apiserver.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/daemons.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->


@ -214,6 +214,10 @@ unschedulable, run this command:
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
```
Note that pods which are created by a DaemonSet controller bypass the Kubernetes scheduler,
and do not respect the unschedulable attribute on a node. The assumption is that daemons belong on
the machine even if it is being drained of applications in preparation for a reboot.
### Node capacity
The capacity of the node (number of cpus and amount of memory) is part of the node resource.

19
docs/user-guide/job.yaml Normal file

@ -0,0 +1,19 @@
apiVersion: experimental/v1alpha1
kind: Job
metadata:
  name: pi
spec:
  selector:
    app: pi
  template:
    metadata:
      name: pi
      labels:
        app: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

255
docs/user-guide/jobs.md Normal file

@ -0,0 +1,255 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.
<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/user-guide/jobs.md).
Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Jobs
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Jobs](#jobs)
- [What is a _job_?](#what-is-a-job)
- [Running an example Job](#running-an-example-job)
- [Writing a Job Spec](#writing-a-job-spec)
- [Pod Template](#pod-template)
- [Pod Selector](#pod-selector)
- [Multiple Completions](#multiple-completions)
- [Parallelism](#parallelism)
- [Handling Pod and Container Failures](#handling-pod-and-container-failures)
- [Alternatives to Job](#alternatives-to-job)
- [Bare Pods](#bare-pods)
- [Replication Controller](#replication-controller)
- [Caveats](#caveats)
- [Future work](#future-work)
<!-- END MUNGE: GENERATED_TOC -->
## What is a _job_?
A _job_ creates one or more pods and ensures that a specified number of them successfully terminate.
As pods successfully complete, the _job_ tracks the successful completions. When a specified number
of successful completions is reached, the job itself is complete. Deleting a Job will clean up the
pods it created.
A simple case is to create 1 Job object in order to reliably run one Pod to completion.
A Job can also be used to run multiple pods in parallel.
## Running an example Job
Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
<!-- BEGIN MUNGE: EXAMPLE job.yaml -->
```yaml
apiVersion: experimental/v1alpha1
kind: Job
metadata:
  name: pi
spec:
  selector:
    app: pi
  template:
    metadata:
      name: pi
      labels:
        app: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```
[Download example](job.yaml?raw=true)
<!-- END MUNGE: EXAMPLE job.yaml -->
Run the example job by downloading the example file and then running this command:
```console
$ kubectl create -f ./job.yaml
jobs/pi
```
Check on the status of the job using this command:
```console
$ kubectl describe jobs/pi
Name: pi
Namespace: default
Image(s): perl
Selector: app=pi
Parallelism: 2
Completions: 1
Labels: <none>
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
1m 1m 1 {job } SuccessfulCreate Created pod: pi-z548a
```
To view completed pods of a job, use `kubectl get pods --show-all`; without `--show-all`, completed pods are not listed.
To list all the pods that belong to a job in a machine-readable form, you can use a command like this:
```console
$ pods=$(kubectl get pods --selector=app=pi --output=jsonpath={.items..metadata.name})
echo $pods
pi-aiw0a
```
Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
that just gets the name from each pod in the returned list.
View the standard output of one of the pods:
```console
$ kubectl logs pi-aiw0a
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
```
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](simple-yaml.md),
[here](configuring-containers.md), and [here](working-with-resources.md).
A Job also needs a [`.spec` section](../devel/api-conventions.md#spec-and-status).
### Pod Template
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](replication-controller.md#pod-template). It has exactly
the same schema as a [pod](pods.md), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to required fields for a Pod, a pod template in a job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` is allowed.
### Pod Selector
The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
a [ReplicationController](replication-controller.md).
If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
`.spec.template.metadata.labels`.
Also you should not normally create any pods whose labels match this selector, either directly,
via another Job, or via another controller such as ReplicationController. Otherwise, the Job will
think that those pods were created by it. Kubernetes will not stop you from doing this.
### Multiple Completions
By default, a Job is complete when one Pod runs to successful completion. You can also specify that
this needs to happen multiple times by specifying `.spec.completions` with a value greater than 1.
When multiple completions are requested, each Pod created by the Job controller has an identical
[`spec`](../devel/api-conventions.md#spec-and-status). In particular, all pods will have
the same command line and the same image, the same volumes, and mostly the same environment
variables. It is up to the user to arrange for the pods to do work on different things. For
example, the pods might all access a shared work queue service to acquire work units.
To create multiple pods which are similar, but have slightly different arguments, environment
variables or images, use multiple Jobs.
### Parallelism
You can suggest how many pods should run concurrently by setting `.spec.parallelism` to the number
of pods you would like to have running concurrently. This number is a suggestion. The number
running concurrently may be lower or higher for a variety of reasons. For example, it may be lower
if the number of remaining completions is less, or as the controller is ramping up, or if it is
throttling the job due to excessive failures. It may be higher, for example, if a pod is gracefully
shut down and the replacement starts early.
If you do not specify `.spec.parallelism`, then it defaults to `.spec.completions`.
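Putting the two fields together, here is a sketch of the pi example above asking for five successful
completions with roughly two pods running at a time (the numbers are illustrative):

```yaml
apiVersion: experimental/v1alpha1
kind: Job
metadata:
  name: pi
spec:
  completions: 5    # the Job is complete after 5 pods succeed
  parallelism: 2    # a suggestion; the controller may run fewer or more at once
  selector:
    app: pi
  template:
    metadata:
      name: pi
      labels:
        app: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```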
## Handling Pod and Container Failures
A Container in a Pod may fail for a number of reasons, such as because the process in it exited with
a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is
restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
See [pods-states](pod-states.md) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.
Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
`.spec.template.spec.restartPolicy = "Never"`, the same program may
sometimes be started twice.
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
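As a sketch, a Job whose program can safely be re-run in place might use `OnFailure` so that only the
container, not the whole pod, is restarted (the container name and image are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: worker            # illustrative; the program must tolerate local restarts
        image: example/worker   # illustrative image
      restartPolicy: OnFailure  # keep the pod on its node and re-run the failed container
```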
## Alternatives to Job
### Bare Pods
When the node that a pod is running on reboots or fails, the pod is terminated
and will not be restarted. However, a Job will create new pods to replace terminated ones.
For this reason, we recommend that you use a job rather than a bare pod, even if your application
requires only a single pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](replication-controller.md).
A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job
manages pods that are expected to terminate (e.g. batch jobs).
As discussed in [life of a pod](pod-states.md), `Job` is *only* appropriate for pods with
`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
value is `Always`.)
## Caveats
Job is part of the experimental API group, so it is not subject to the same compatibility
guarantees as objects in the main API. It may not be enabled. Enable by setting
`--runtime-config=experimental/v1alpha1` on the apiserver.
## Future work
Support for creating Jobs at specified times/dates (i.e. cron) is expected in the next minor
release.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/jobs.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->


@ -80,13 +80,27 @@ More detailed information about the current (and previous) container statuses ca
The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the Kubelet on the same node. Failed containers that are restarted by the Kubelet are restarted with an exponential back-off delay (in multiples of sync-frequency: 0, 1x, 2x, 4x, 8x ...), capped at 5 minutes and reset after 10 minutes of successful execution. As discussed in the [pods document](pods.md#durability-of-pods-or-lack-thereof), once bound to a node, a pod will never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
The only controller we have today is [`ReplicationController`](replication-controller.md). `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`. `ReplicationController` should refuse to instantiate any pod that has a different restart policy.
Three types of controllers are currently available:
There is a legitimate need for a controller which keeps pods with other policies alive. Pods having any of the other policies (`OnFailure` or `Never`) eventually terminate, at which point the controller should stop recreating them. Because of this fundamental distinction, let's hypothesize a new controller, called [`JobController`](http://issue.k8s.io/1624) for the sake of this document, which can implement this policy.
- Use a [`Job`](jobs.md) for pods which are expected to terminate (e.g. batch computations).
- Use a [`ReplicationController`](replication-controller.md) for pods which are not expected to
terminate (e.g. web servers).
- Use a [`DaemonSet`](../admin/daemons.md) for pods which need to run one per machine, because they provide a
machine-specific system service.
If you are unsure whether to use a ReplicationController or a DaemonSet, see the discussion of
[Replication Controller](../admin/daemons.md#replication-controller) in the Daemon Sets document.
`ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`.
`Job` is *only* appropriate for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
All 3 types of controllers contain a PodTemplate, which has all the same fields as a Pod.
It is recommended to create the appropriate controller and let it create pods, rather than to
directly create pods yourself. That is because pods alone are not resilient to machine failures,
but Controllers are.
## Pod lifetime
In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.
In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`, or another controller. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.
If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.


@ -106,6 +106,16 @@ func validateObject(obj runtime.Object) (errors []error) {
t.Namespace = api.NamespaceDefault
}
errors = expValidation.ValidateDeployment(t)
case *experimental.Job:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
errors = expValidation.ValidateJob(t)
case *experimental.DaemonSet:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
errors = expValidation.ValidateDaemonSet(t)
default:
return []error{fmt.Errorf("no validation defined for %#v", obj)}
}
@ -211,6 +221,10 @@ func TestExampleObjectSchemas(t *testing.T) {
"multi-pod": nil,
"pod": &api.Pod{},
"replication": &api.ReplicationController{},
"job": &experimental.Job{},
},
"../docs/admin": {
"daemon": &experimental.DaemonSet{},
},
"../examples": {
"scheduler-policy-config": &schedulerapi.Policy{},


@ -314,14 +314,17 @@ type DaemonSetSpec struct {
type DaemonSetStatus struct {
// CurrentNumberScheduled is the number of nodes that are running exactly 1
// daemon pod and are supposed to run the daemon pod.
// More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md
CurrentNumberScheduled int `json:"currentNumberScheduled"`
// NumberMisscheduled is the number of nodes that are running the daemon pod, but are
// not supposed to run the daemon pod.
// More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md
NumberMisscheduled int `json:"numberMisscheduled"`
// DesiredNumberScheduled is the total number of nodes that should be running the daemon
// pod (including nodes correctly running the daemon pod).
// More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md
DesiredNumberScheduled int `json:"desiredNumberScheduled"`
}
@ -400,17 +403,21 @@ type JobSpec struct {
// run at any given time. The actual number of pods running in steady state will
// be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism),
// i.e. when the work left to do is less than max parallelism.
// More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md
Parallelism *int `json:"parallelism,omitempty"`
// Completions specifies the desired number of successfully finished pods the
// job should be run with. Defaults to 1.
// More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md
Completions *int `json:"completions,omitempty"`
// Selector is a label query over pods that should match the pod count.
// More info: http://releases.k8s.io/HEAD/docs/user-guide/labels.md#label-selectors
Selector map[string]string `json:"selector,omitempty"`
// Template is the object that describes the pod that will be created when
// executing a job.
// More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md
Template *v1.PodTemplateSpec `json:"template"`
}
@ -418,6 +425,7 @@ type JobSpec struct {
type JobStatus struct {
// Conditions represent the latest available observations of an object's current state.
// More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md
Conditions []JobCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
// StartTime represents time when the job was acknowledged by the Job Manager.


@ -70,9 +70,9 @@ func (DaemonSetSpec) SwaggerDoc() map[string]string {
var map_DaemonSetStatus = map[string]string{
"": "DaemonSetStatus represents the current status of a daemon set.",
"currentNumberScheduled": "CurrentNumberScheduled is the number of nodes that are running exactly 1 daemon pod and are supposed to run the daemon pod.",
"numberMisscheduled": "NumberMisscheduled is the number of nodes that are running the daemon pod, but are not supposed to run the daemon pod.",
"desiredNumberScheduled": "DesiredNumberScheduled is the total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod).",
"currentNumberScheduled": "CurrentNumberScheduled is the number of nodes that are running exactly 1 daemon pod and are supposed to run the daemon pod. More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md",
"numberMisscheduled": "NumberMisscheduled is the number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md",
"desiredNumberScheduled": "DesiredNumberScheduled is the total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod). More info: http://releases.k8s.io/HEAD/docs/admin/daemon.md",
}
func (DaemonSetStatus) SwaggerDoc() map[string]string {
@ -285,10 +285,10 @@ func (JobList) SwaggerDoc() map[string]string {
var map_JobSpec = map[string]string{
"": "JobSpec describes how the job execution will look like.",
"parallelism": "Parallelism specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism.",
"completions": "Completions specifies the desired number of successfully finished pods the job should be run with. Defaults to 1.",
"selector": "Selector is a label query over pods that should match the pod count.",
"template": "Template is the object that describes the pod that will be created when executing a job.",
"parallelism": "Parallelism specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md",
"completions": "Completions specifies the desired number of successfully finished pods the job should be run with. Defaults to 1. More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md",
"selector": "Selector is a label query over pods that should match the pod count. More info: http://releases.k8s.io/HEAD/docs/user-guide/labels.md#label-selectors",
"template": "Template is the object that describes the pod that will be created when executing a job. More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md",
}
func (JobSpec) SwaggerDoc() map[string]string {
@ -297,7 +297,7 @@ func (JobSpec) SwaggerDoc() map[string]string {
var map_JobStatus = map[string]string{
"": "JobStatus represents the current state of a Job.",
"conditions": "Conditions represent the latest available observations of an object's current state.",
"conditions": "Conditions represent the latest available observations of an object's current state. More info: http://releases.k8s.io/HEAD/docs/user-guide/jobs.md",
"startTime": "StartTime represents time when the job was acknowledged by the Job Manager. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC.",
"completionTime": "CompletionTime represents time when the job was completed. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC.",
"active": "Active is the number of actively running pods.",