Documented manualSelector field.
Documented that you do not need to provide a selector
or unique labels with batch/v1 Job.
Updated all Job examples to apiVersion: batch/v1
Updated all Job examples to use generated selectors.
Added selector generation to Job's
strategy.Validate, right before validation.
This can't be done in defaulting, since the UID is not known at that point.
Added validation to Job to ensure that the generated
labels and selector are correct when generation was requested.
This check runs right after generation, but validation is the better
place to return an error.
Adds "manualSelector" field to batch/v1 Job to control selector generation.
Adds same field to extensions/__internal. Conversion between those two
is automatic.
Adds "autoSelector" field to extensions/v1beta1 Job. Used for storing batch/v1 Jobs
- Default for v1 is to do generation.
- Default for v1beta1 is to not do it.
- In both cases, unset == false == do the default thing.
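A minimal, self-contained sketch of how those defaults interact. The field semantics come from the notes above; the helper names and everything else are illustrative stand-ins, not the real defaulting code:
```go
package main

import "fmt"

// generateSelectorV1 reports whether selector generation happens for a
// batch/v1 Job: generation is the default, so only manualSelector=true
// turns it off (nil/unset and false both mean "do the default").
func generateSelectorV1(manualSelector *bool) bool {
	return manualSelector == nil || !*manualSelector
}

// generateSelectorV1beta1 reports whether generation happens for an
// extensions/v1beta1 Job: no generation is the default, so only
// autoSelector=true turns it on.
func generateSelectorV1beta1(autoSelector *bool) bool {
	return autoSelector != nil && *autoSelector
}

func main() {
	t, f := true, false
	fmt.Println(generateSelectorV1(nil), generateSelectorV1(&t), generateSelectorV1(&f))                // true false true
	fmt.Println(generateSelectorV1beta1(nil), generateSelectorV1beta1(&t), generateSelectorV1beta1(&f)) // false true false
}
```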
Release notes:
Added batch/v1 group, which contains just Job, and which is the next
version of extensions/v1beta1 Job.
The changes from the previous version are:
- Users no longer need to ensure labels on their pod template are unique to the enclosing
job (but may add labels as needed for categorization).
- In v1beta1, job.spec.selector was defaulted from pod labels, with the user responsible for uniqueness.
In v1, a unique label is generated and added to the pod template, and used as the selector (other
labels added by user stay on pod template, but need not be used by selector).
- A new field called "manualSelector" controls whether the new behavior is used,
versus the more error-prone but more flexible "manual" (not generated) selector. Most users
will not need to use this field and should leave it unset.
Users who are creating extensions.Job Go objects and then posting them using the Go client
will see a change in the default behavior. They need to either stop providing a selector (relying on
selector generation) or else specify "spec.manualSelector" until they are ready to do the former.
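Roughly, the two options look like this for Go-client users. The struct definitions below are stand-ins for the real API types, shown only to illustrate the shape of the change:
```go
package main

import "fmt"

// Stand-in types; the real Job types live in the Kubernetes API packages.
type LabelSelector struct {
	MatchLabels map[string]string
}

type JobSpec struct {
	ManualSelector *bool
	Selector       *LabelSelector
}

func main() {
	// Option 1: rely on selector generation - simply stop setting a selector.
	generated := JobSpec{}

	// Option 2: keep providing your own selector, but say so explicitly
	// via spec.manualSelector.
	manual := true
	legacy := JobSpec{
		ManualSelector: &manual,
		Selector:       &LabelSelector{MatchLabels: map[string]string{"app": "my-job"}},
	}

	fmt.Printf("generated: %+v\nlegacy: %+v\n", generated, legacy)
}
```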
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
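For illustration, the shape of that selector change, using stand-in types rather than the real API definitions:
```go
package main

import "fmt"

// Stand-in types illustrating the selector change; the real definitions
// live in the Kubernetes API packages.
type LabelSelectorRequirement struct {
	Key      string
	Operator string // e.g. "In", "NotIn", "Exists"
	Values   []string
}

type LabelSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []LabelSelectorRequirement
}

// Before: a Deployment selected pods with a plain label map.
type oldDeploymentSpec struct {
	Selector map[string]string
}

// After: a Deployment uses a full LabelSelector and manages ReplicaSets.
type newDeploymentSpec struct {
	Selector *LabelSelector
}

func main() {
	spec := newDeploymentSpec{
		Selector: &LabelSelector{MatchLabels: map[string]string{"app": "nginx"}},
	}
	fmt.Printf("%+v\n", *spec.Selector)
}
```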
Adds a document on pod templates that can be shared
between various controller docs.
Moves more philosophical content to later in the doc.
Adds more task-oriented material earlier.
Puts example config in the document, early on, so users have something concrete to relate the discussion of fields to.
Links to the Job and DaemonSet docs.
Makes the format more like that of the Job and DaemonSet docs.
Uses jsonpath in examples, which is available in v1.1.
Adds example files.
When job.spec.completions is nil, only
one task needs to succeed for the job to succeed,
and parallelism can be scaled freely during runtime.
Added tests.
Release Note:
This causes two minor changes to the API.
First, unset parallelism was previously defaulted to be
equal to completions; now it always defaults to 1 if unset.
Second, parallelism=N with completions unset was previously
defaulted to 1 completion and N parallelism
(not something we expect people to do, though).
Now no defaulting occurs in that case, and the job's
behavior is different (any completion causes success).
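A self-contained sketch of the new semantics, using stand-in types rather than the real API defaulting functions:
```go
package main

import "fmt"

// Stand-in spec; the real defaulting lives in the API defaulting functions.
type JobSpec struct {
	Parallelism *int32
	Completions *int32
}

// applyDefaults mirrors the new behavior described above: parallelism
// always defaults to 1 when unset, and completions is left nil.
func applyDefaults(s *JobSpec) {
	if s.Parallelism == nil {
		one := int32(1)
		s.Parallelism = &one
	}
}

// succeeded shows the nil-completions semantics: any single success
// completes the job.
func succeeded(s JobSpec, successes int32) bool {
	if s.Completions == nil {
		return successes >= 1
	}
	return successes >= *s.Completions
}

func main() {
	n := int32(5)
	spec := JobSpec{Parallelism: &n} // parallelism=5, completions unset
	applyDefaults(&spec)
	fmt.Println(*spec.Parallelism, spec.Completions == nil, succeeded(spec, 1)) // 5 true true
}
```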
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
change wording
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
change name of volume to be consistent
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
update node flag without =
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
make things a bit clearer, separate More Info
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
refactor so we include -n example
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
keep uuids consistent in examples
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
detail example about how to set env vars
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
move demo video to more info
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
add references for how to create volume using docker cli
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
italics
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
fix italics
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
fix extra paren
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
run hack/update-generated-docs.sh
We must handle null addresses in the Cassandra seed provider. This
can occur when there are 'notReadyAddresses' but no 'addresses'.
While we're at it, update the makefile to build the jar.
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and
to support decoupled operation (e.g. bare metal), we tag the volume with
our placement labels. This is done automatically on AWS, by means of an
admission controller, when a PersistentVolume backed by an EBS volume is
created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
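Conceptually, the scheduler-side check looks something like this. The label key and helper below are illustrative stand-ins, not the actual predicate code or the real zone label:
```go
package main

import "fmt"

// Placeholder label key; the real zone label applied by the admission
// controller may differ.
const zoneLabel = "example.io/zone"

// podFitsZone is a conceptual version of the scheduler check: a pod that
// uses a zone-labeled volume may only land on a node in the same zone.
func podFitsZone(volumeLabels, nodeLabels map[string]string) bool {
	want, ok := volumeLabels[zoneLabel]
	if !ok {
		return true // untagged volume: no zone constraint
	}
	return nodeLabels[zoneLabel] == want
}

func main() {
	vol := map[string]string{zoneLabel: "us-east-1a"}
	fmt.Println(podFitsZone(vol, map[string]string{zoneLabel: "us-east-1a"})) // true
	fmt.Println(podFitsZone(vol, map[string]string{zoneLabel: "us-east-1b"})) // false
}
```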
The pending codec -> conversion split changes the signatures of
Encode and Decode to be more complicated. Create a stub helper
with the exact semantics of today and do the simple mechanical
refactor here to reduce the cost of that change.
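The idea of the stub helper, in a hypothetical, self-contained form (none of these names are the real runtime package's):
```go
package main

import "fmt"

type Object interface{}

// encodeToVersion stands in for the new, more complicated signature that
// the codec -> conversion split introduces.
func encodeToVersion(obj Object, groupVersion string) ([]byte, error) {
	return []byte(fmt.Sprintf("%v@%s", obj, groupVersion)), nil
}

// Encode is the stub helper: it preserves today's simple call signature
// so existing call sites only need a mechanical rename.
func Encode(obj Object) ([]byte, error) {
	return encodeToVersion(obj, "legacy-default")
}

func main() {
	out, _ := Encode("pod")
	fmt.Println(string(out))
}
```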
This enables the use of software or hardware transports, viz. be2iscsi,
bnx2i, cxgb3i, cxgb4i, qla4xx, iser and ocs. The default transport
(tcp) happens to be called "default".
Use of non-default transports changes the disk path to the following format:
/dev/disk/by-path/pci-<pci_id>-ip-<portal>-iscsi-<iqn>-lun-<lun_id>
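As a rough illustration of the path difference, here is a sketch rather than the plugin's actual code; the example portal, IQN and PCI ID are made up:
```go
package main

import "fmt"

// iscsiDiskPath sketches how the device path depends on the transport;
// the real plugin's construction may differ in detail.
func iscsiDiskPath(transport, pciID, portal, iqn string, lun int) string {
	if transport == "default" { // plain tcp keeps the classic ip-... path
		return fmt.Sprintf("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d", portal, iqn, lun)
	}
	return fmt.Sprintf("/dev/disk/by-path/pci-%s-ip-%s-iscsi-%s-lun-%d", pciID, portal, iqn, lun)
}

func main() {
	fmt.Println(iscsiDiskPath("default", "", "10.0.0.2:3260", "iqn.2016-01.com.example:storage", 0))
	fmt.Println(iscsiDiskPath("bnx2i", "0000:04:00.1", "10.0.0.2:3260", "iqn.2016-01.com.example:storage", 0))
}
```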
Before this change we had a mish-mash of ways to pass field names around for
error generation: sometimes string field names, sometimes .Prefix(), sometimes
neither, and names were often wrong or not indexed when they should have been.
Instead of that mess, this is part one of a couple of commits that will make
field-name handling more strongly typed and hopefully encourage correct behavior.
At least you will have to think about field names, which is better than nothing.
It turned out to be really hard to do this incrementally.
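To make the intent concrete, here is a toy version of a strongly typed field path; it is a stand-in for illustration, not the real validation package:
```go
package main

import "fmt"

// Path is a toy stand-in for a strongly typed field path.
type Path struct {
	name   string
	parent *Path
}

func NewPath(name string) *Path         { return &Path{name: name} }
func (p *Path) Child(name string) *Path { return &Path{name: name, parent: p} }
func (p *Path) Index(i int) *Path       { return &Path{name: fmt.Sprintf("[%d]", i), parent: p} }

func (p *Path) String() string {
	if p.parent == nil {
		return p.name
	}
	if len(p.name) > 0 && p.name[0] == '[' {
		return p.parent.String() + p.name // indexes attach without a dot
	}
	return p.parent.String() + "." + p.name
}

func main() {
	// Errors can now point at spec.template.spec.containers[0].name
	// without any string concatenation at the call site.
	fmt.Println(NewPath("spec").Child("template").Child("spec").Child("containers").Index(0).Child("name"))
}
```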
Remove the id field to fix this error:
```
$ kubectl create -f redis-slave-controller.json
error validating "redis-slave-controller.json": error validating data: found invalid field id for v1.ReplicationController; if you choose to ignore these errors, turn validation off with --validate=false
```
Fixes #17846
Hypothesis: the old userspace proxier would internally retry connections; the
new one does not. When this test comes up, the firewall might not yet be open, or
something is causing a long delay and a timeout. I can't repro this failure
locally, so I am shooting in the dark, but it's sort of plausible.
Evidence: I can SSH into the Jenkins master that is hung and I can see the hung
curl. I can run that curl by hand and it works. I can see that my shell is in
the same netns as that hung curl.
Introduce examples explaining how to use DaemonSets to optimally
distribute Cassandra nodes onto each Kubernetes node in the network.
Signed-off-by: Christian Stewart <christian@paral.in>
This adds a very basic Zeppelin image that works with the existing
Spark example. As can be seen from the documentation, it has a couple
of warts:
* It requires kubectl port-forward (which is unstable across long
periods of time, at least for me, on this app, bug incoming). See
* I needed to roll my own container (none of the existing containers
exactly matched our needs, or even built anymore against modern Zeppelin
master, and the rest of the example is Spark 1.5).
The image itself is *huge*. One of the further refinements we need to
look at is how to possibly strip the Maven build for this container
down to just the interpreters we care about, because the deps here
are frankly ridiculous.
This might be a case where, if possible, we might want to open an
upstream request to build things dynamically, then use something along
those lines to cut the image down considerably. (This might already be
possible; need to poke at whether you can late-bind interpreters
later.)
Adds an example using DaemonSets to distribute the NewRelic worker onto all nodes in a k8s cluster.
Signed-off-by: Christian Stewart <christian@paral.in>
Since this is a container service port anyway, "insecure" is a bit of
a red herring. There's no real security relevance to the incoming port
numbers for the NFS server pod.
This lets us get rid of the examples/nfs/exporter Docker build
(@jsafrane's personal image).
This ensures nfs-common is installed on GCE, and provides a more
functional explanation/example. I launched two replication controllers
so that there were busybox pods to poke around at the NFS volume, and
so that the later wget actually works (the original example would have
to be run on the node, or need some other access to the container
network). After switching to two controllers, it actually makes more
sense to use PV claims, and the PV/PVC indirection is probably a more
sensible configuration for NFS anyway.
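A rough sketch of that indirection, using stand-in Go types: a PersistentVolume points at the NFS server, and the busybox pods only reference a claim. The real objects are the example's YAML manifests, and the server address below is made up:
```go
package main

import "fmt"

// Stand-in types; the real objects are defined in the example's YAML.
type NFSVolumeSource struct{ Server, Path string }

type PersistentVolume struct {
	Name string
	NFS  NFSVolumeSource
}

type PersistentVolumeClaim struct{ Name string }

type PodVolume struct {
	ClaimName string // pods bind to the claim, not to the NFS server directly
}

func main() {
	pv := PersistentVolume{Name: "nfs", NFS: NFSVolumeSource{Server: "nfs-server.default.svc.cluster.local", Path: "/"}}
	pvc := PersistentVolumeClaim{Name: "nfs"}
	pod := PodVolume{ClaimName: pvc.Name}
	fmt.Printf("%+v\n%+v\n%+v\n", pv, pvc, pod)
}
```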