Combine the fields used for content transformation (content type,
codec, and group version) into a single struct in the client, and pass
that struct into the REST client and request. Set the Content-Type
header when sending requests to the server, and list that content type
as the primary accepted type for responses.
This will form the foundation for content negotiation via the client.
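As a rough sketch of what that combined struct could look like (the type
and field names here are illustrative assumptions, not the actual type
introduced by this change):

```go
// Illustrative sketch only: one struct carries the settings that drive
// content transformation, passed from the client into the REST client
// and each request.
package client

// Codec stands in for whatever encoder/decoder the client uses.
type Codec interface {
	Encode(obj interface{}) ([]byte, error)
	Decode(data []byte, into interface{}) error
}

// ContentConfig bundles the fields used for content transformation.
type ContentConfig struct {
	// ContentType is set as the Content-Type header on outgoing requests
	// and listed as the primary type when accepting responses.
	ContentType string
	// GroupVersion is the API group/version objects are encoded to, e.g. "v1".
	GroupVersion string
	// Codec performs the serialization for that group/version.
	Codec Codec
}
```

Carrying the three settings together keeps the REST client and each
request from guessing at serialization details independently.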
When job.spec.completions is nil, only
one task needs to succeed for the job to succeed,
and parallelism can be scaled freely during runtime.
Added tests.
Release Note:
This causes two minor changes to the API.
First, an unset parallelism was previously defaulted to equal
completions; now it always defaults to 1 if unset.
Second, setting parallelism=N while leaving completions unset was
previously defaulted to 1 completion and N parallelism (not something
we expect people to do). Now no defaulting occurs in that case, and the
job's behavior is different: any completion causes success.
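To make the new semantics concrete, here is a hedged sketch using the
present-day k8s.io/api/batch/v1 types (which postdate this change); the
names and image are placeholders:

```go
// Sketch: a "work queue" style Job whose Completions is left nil, so the
// success of any single pod completes the Job, and Parallelism may be
// scaled while the Job runs.
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func workQueueJob() *batchv1.Job {
	parallelism := int32(5)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "work-queue"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: nil, // unset: any completion causes success
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{
						{Name: "worker", Image: "example.com/worker:latest"},
					},
				},
			},
		},
	}
}
```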
Most of the logic related to type and kind retrieval belongs in the
codec, not in the various classes. Make it explicit that the codec
should handle these details.
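Roughly, the intent is that callers ask the codec for an object's group,
version, and kind rather than computing them themselves; the interface
below is only an illustrative sketch, not the actual Kubernetes
interface:

```go
// Rough illustration: type and kind retrieval lives behind the codec,
// not scattered across callers. Names are assumptions for this sketch.
package example

// GroupVersionKind identifies an API object's type.
type GroupVersionKind struct {
	Group, Version, Kind string
}

// Codec owns serialization as well as type and kind retrieval, so
// callers never inspect type metadata themselves.
type Codec interface {
	Encode(obj interface{}) ([]byte, error)
	Decode(data []byte) (interface{}, error)
	ObjectKind(obj interface{}) (GroupVersionKind, error)
}
```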
Factory now returns a universal Decoder and a JSONEncoder to assist code
in kubectl that needs to specifically deal with JSON serialization
(apply, merge, patch, edit, jsonpath). Add comments to indicate the
serialization is explicit in those places. These methods decode to the
internal version and encode to the preferred API version, as before,
although they may change in the future.
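A hedged sketch of how such a command might use the pair, with assumed
method names and types (not the real Factory signatures): decode
arbitrary input to the internal form, then explicitly re-encode it as
JSON before patching:

```go
// Sketch with assumed types: a command that must speak JSON (apply,
// patch, edit) uses a universal decoder plus an explicit JSON encoder.
package example

import "fmt"

// Object stands in for a decoded API object.
type Object interface{}

// Decoder turns bytes in any supported API version into an object
// (here, the internal version).
type Decoder interface {
	Decode(data []byte) (Object, error)
}

// Encoder writes an object out, always as JSON in the preferred version.
type Encoder interface {
	Encode(obj Object) ([]byte, error)
}

// Factory is a stand-in for kubectl's command factory.
type Factory interface {
	Decoder(toInternal bool) Decoder
	JSONEncoder() Encoder
}

// roundTripForPatch makes the JSON serialization explicit: decode
// whatever the user supplied to the internal form, then re-encode it as
// JSON so the patch machinery operates on a known representation.
func roundTripForPatch(f Factory, raw []byte) ([]byte, error) {
	obj, err := f.Decoder(true).Decode(raw)
	if err != nil {
		return nil, fmt.Errorf("decode: %v", err)
	}
	return f.JSONEncoder().Encode(obj)
}
```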
React to removing Codec from version interfaces and RESTMapping by
passing it explicitly to the places that need it.
In general, everything in kubectl/* needs to be ignorant of api/*
unless it deals with a concrete type; this change forces
resource_printer to accept interface abstractions (which are already
part of kubectl).
Support a desired replica count of 0 for the new RC. Users sometimes
want to roll out a new "inactive" template with the intent of scaling
it up manually later.
It cordons (marks unschedulable) the given node, and then deletes every
pod on it, optionally using a grace period. It will not delete pods
that are managed by neither a ReplicationController nor a DaemonSet
unless --force is used.
Also add cordon/uncordon, which just toggle node schedulability.
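For illustration, cordon and uncordon amount to toggling
node.Spec.Unschedulable; the sketch below uses present-day client-go
(which postdates this change) and a hypothetical node name:

```go
// Sketch of the cordon/uncordon toggle using present-day client-go,
// shown only to illustrate the semantics of marking a node
// (un)schedulable; drain would follow cordoning with pod deletion.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setUnschedulable cordons (true) or uncordons (false) the named node.
func setUnschedulable(cs kubernetes.Interface, nodeName string, unschedulable bool) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Unschedulable = unschedulable
	_, err = cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "node-1" is a hypothetical node name.
	if err := setUnschedulable(cs, "node-1", true); err != nil {
		fmt.Println("cordon failed:", err)
	}
}
```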