When job.spec.completions is nil, only
one task needs to succeed for the job to succeed,
and parallelism can be scaled freely during runtime.
Added tests.
Release Note:
This causes two minor changes to the API.
First, unset parallelism previously defaulted to the value of
completions; it now always defaults to 1 when unset.
Second, parallelism=N with completions unset previously
defaulted to 1 completion and N parallelism
(not something we expect people to do, though).
Now, no defaulting occurs in that case, and the job's
behavior is different: any single completion causes success.
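For illustration, a minimal Go sketch of the new defaulting rule,
using simplified stand-in types rather than the real API structs:
```
package main

import "fmt"

// JobSpec stands in for the relevant Job API fields; pointers model
// "unset" (nil) values. Simplified for illustration only.
type JobSpec struct {
	Parallelism *int32
	Completions *int32
}

// applyDefaults sketches the new rule: Parallelism always defaults
// to 1 when unset, and Completions is left nil, which now means
// "any single success completes the job".
func applyDefaults(spec *JobSpec) {
	if spec.Parallelism == nil {
		one := int32(1)
		spec.Parallelism = &one
	}
	// Completions stays nil if unset: no defaulting to match
	// Parallelism, and vice versa.
}

func main() {
	n := int32(5)
	spec := &JobSpec{Parallelism: &n} // completions left unset
	applyDefaults(spec)
	fmt.Println(*spec.Parallelism, spec.Completions) // 5 <nil>
}
```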
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
change wording
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
change name of volume to be consistent
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
update node flag without =
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
make things a bit clearer, separate More Info
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
refactor so we include -n example
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
keep uuids consistent in examples
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
detail example about how to set env vars
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
move demo video to more info
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
add references for how to create volume using docker cli
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
italics
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
fix italics
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
fix extra paren
Signed-off-by: Ryan Wallner <ryan.wallner@clusterhq.com>
run hack/update-generated-docs.sh
We must handle null addresses in the Cassandra seed provider. This
can occur when there are 'notReadyAddresses' but no 'addresses'.
While we're at it, update the Makefile to build the jar.
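The guard itself is small; a minimal Go sketch of the idea follows
(the real seed provider is Java), with simplified stand-in types for
the endpoints payload:
```
package main

import "fmt"

// Address and Subset mirror just enough of the endpoints API shape;
// in the real payload "addresses" may be absent entirely when only
// notReadyAddresses exist.
type Address struct {
	IP string `json:"ip"`
}

type Subset struct {
	Addresses []Address `json:"addresses"`
}

// seedIPs collects ready IPs, explicitly tolerating subsets whose
// addresses field was null/absent (the case that crashed before).
func seedIPs(subsets []Subset) []string {
	var ips []string
	for _, s := range subsets {
		if s.Addresses == nil {
			continue // no ready addresses in this subset
		}
		for _, a := range s.Addresses {
			ips = append(ips, a.IP)
		}
	}
	return ips
}

func main() {
	fmt.Println(seedIPs([]Subset{
		{}, // notReadyAddresses only
		{Addresses: []Address{{IP: "10.0.0.5"}}},
	}))
}
```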
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time,
and to support decoupled operation (e.g. bare metal), we tag the
volume with
our placement labels. This is done automatically by means of an
admission controller on AWS when a PersistentVolume is created backed by
an EBS volume.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
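As a rough illustration, a minimal Go sketch of the zone check,
assuming a hypothetical zone placement label and plain label maps:
```
package main

import "fmt"

// zoneLabel is an assumed name for the placement label; the real
// label key may differ.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeFitsNode sketches the scheduler-side check: a pod using a
// zone-tagged volume may only land on a node in the same zone.
// Volumes with no zone tag impose no constraint.
func volumeFitsNode(volumeLabels, nodeLabels map[string]string) bool {
	vz, ok := volumeLabels[zoneLabel]
	if !ok {
		return true // untagged volume: schedule anywhere
	}
	return nodeLabels[zoneLabel] == vz
}

func main() {
	vol := map[string]string{zoneLabel: "us-east-1a"}
	node := map[string]string{zoneLabel: "us-east-1b"}
	fmt.Println(volumeFitsNode(vol, node)) // false: different AZ
}
```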
The pending codec -> conversion split changes the signature of
Encode and Decode to be more complicated. Create a stub helper
with the exact semantics of today and do the simple mechanical
refactor here to reduce the cost of that change.
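A minimal sketch of what such a stub helper can look like; the
Encoder/Decoder interfaces here are hypothetical stand-ins for the
richer post-split signatures:
```
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical richer interfaces after the codec -> conversion
// split; the helpers below pin down today's simple call shape so
// call sites only need a mechanical rewrite.
type Encoder interface{ Encode(obj interface{}) ([]byte, error) }
type Decoder interface{ Decode(data []byte) (interface{}, error) }

// Encode/Decode keep the exact semantics callers rely on today.
func Encode(e Encoder, obj interface{}) ([]byte, error)  { return e.Encode(obj) }
func Decode(d Decoder, data []byte) (interface{}, error) { return d.Decode(data) }

// jsonCodec is a stand-in implementation for the demo.
type jsonCodec struct{}

func (jsonCodec) Encode(obj interface{}) ([]byte, error) { return json.Marshal(obj) }
func (jsonCodec) Decode(data []byte) (interface{}, error) {
	var out map[string]interface{}
	err := json.Unmarshal(data, &out)
	return out, err
}

func main() {
	c := jsonCodec{}
	b, _ := Encode(c, map[string]string{"kind": "Pod"})
	obj, _ := Decode(c, b)
	fmt.Println(string(b), obj)
}
```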
This enables the use of software or hardware transports, viz.
be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xx, iser and ocs. The default
transport (tcp) happens to be called "default".
Use of non-default transports changes the disk path to the following format:
/dev/disk/by-path/pci-<pci_id>-ip-<portal>-iscsi-<iqn>-lun-<lun_id>
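A minimal Go sketch of assembling that path; the function and the
short form used for the default transport are assumptions for
illustration:
```
package main

import "fmt"

// diskPath builds the udev by-path device name. For the default
// transport ("tcp") we assume the familiar short form without the
// pci-<pci_id> prefix; other transports get the full format above.
func diskPath(transport, pciID, portal, iqn string, lun int) string {
	if transport == "tcp" || transport == "default" {
		return fmt.Sprintf("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d",
			portal, iqn, lun)
	}
	return fmt.Sprintf("/dev/disk/by-path/pci-%s-ip-%s-iscsi-%s-lun-%d",
		pciID, portal, iqn, lun)
}

func main() {
	fmt.Println(diskPath("bnx2i", "0000:04:00.2", "10.0.0.1:3260",
		"iqn.2001-04.com.example:storage", 0))
}
```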
Before this change we had a mish-mash of ways to pass field names
around for error generation: sometimes string field names, sometimes
.Prefix(), sometimes neither, and the names were often wrong or not
indexed when they should have been.
Instead of that mess, this is part one of a couple of commits that
will make field paths more strongly typed and hopefully encourage
correct behavior. At least you will have to think about field names,
which is better than nothing.
It turned out to be really hard to do this incrementally.
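A minimal sketch of the strongly typed field-path idea; the names
here are hypothetical, not the exact API this series introduces:
```
package main

import "fmt"

// Path is a strongly typed field path: children and indices are
// built explicitly, so error messages always carry a correct,
// fully qualified field name.
type Path struct {
	parent *Path
	name   string
}

func NewPath(name string) *Path { return &Path{name: name} }

// Child appends a named sub-field.
func (p *Path) Child(name string) *Path { return &Path{parent: p, name: name} }

// Index appends a list index.
func (p *Path) Index(i int) *Path {
	return &Path{parent: p, name: fmt.Sprintf("[%d]", i)}
}

func (p *Path) String() string {
	if p.parent == nil {
		return p.name
	}
	s := p.parent.String()
	if len(p.name) > 0 && p.name[0] == '[' {
		return s + p.name // indices attach without a dot
	}
	return s + "." + p.name
}

func main() {
	// Prints "spec.containers[2].name".
	fmt.Println(NewPath("spec").Child("containers").Index(2).Child("name"))
}
```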
Remove the id field to fix this error:
```
$ kubectl create -f redis-slave-controller.json
error validating "redis-slave-controller.json": error validating data: found invalid field id for v1.ReplicationController; if you choose to ignore these errors, turn validation off with --validate=false
```
Fixes #17846