Automatic merge from submit-queue
Add unit test for bad ReclaimPolicy and valid ReclaimPolicy in /pkg/api/validation
Unit tests for validation.go covering PersistentVolumeReclaimPolicy (one bad value and one good value)
see PR: #30304
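For reference, a minimal self-contained sketch of the kind of table-driven test being added. `validateReclaimPolicy` is a hypothetical stand-in for the real check in validation.go; Retain, Recycle, and Delete are the supported reclaim policy values.
```go
package validation

import "testing"

// validateReclaimPolicy is a hypothetical stand-in for the check in
// validation.go against pv.Spec.PersistentVolumeReclaimPolicy.
func validateReclaimPolicy(policy string) bool {
	switch policy {
	case "Retain", "Recycle", "Delete":
		return true
	}
	return false
}

// One bad value and one good value, in the spirit of the tests this PR adds.
func TestReclaimPolicy(t *testing.T) {
	cases := map[string]struct {
		policy string
		valid  bool
	}{
		"bad value":  {policy: "fakeReclaimPolicy", valid: false},
		"good value": {policy: "Retain", valid: true},
	}
	for name, tc := range cases {
		if got := validateReclaimPolicy(tc.policy); got != tc.valid {
			t.Errorf("%s: expected valid=%v, got %v", name, tc.valid, got)
		}
	}
}
```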
Automatic merge from submit-queue
Add PVC storage to LimitRange
This PR adds the ability to add a LimitRange to a namespace that enforces min/max on `pvc.Spec.Resources.Requests["storage"]`.
@derekwaynecarr @abhgupta @kubernetes/sig-storage
Examples forthcoming.
```release-note
pvc.Spec.Resources.Requests min and max can be enforced with a LimitRange of type "PersistentVolumeClaim" in the namespace
```
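To illustrate, here is a sketch of such a LimitRange built with the in-tree API types; the import paths assume the current in-tree layout, and the object name and the 1Gi/10Gi bounds are purely illustrative.
```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/resource"
)

func main() {
	// A LimitRange of type "PersistentVolumeClaim" bounding the storage
	// request of every PVC created in its namespace.
	lr := api.LimitRange{
		ObjectMeta: api.ObjectMeta{Name: "pvc-limits", Namespace: "demo"},
		Spec: api.LimitRangeSpec{
			Limits: []api.LimitRangeItem{{
				Type: api.LimitTypePersistentVolumeClaim,
				Min:  api.ResourceList{api.ResourceStorage: resource.MustParse("1Gi")},
				Max:  api.ResourceList{api.ResourceStorage: resource.MustParse("10Gi")},
			}},
		},
	}
	fmt.Printf("%+v\n", lr.Spec.Limits[0])
}
```
With a LimitRange like this in place, a PVC requesting less than 1Gi or more than 10Gi of storage in that namespace would be rejected by the LimitRanger admission controller.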
Automatic merge from submit-queue
Match GroupVersionKind against specific version
Currently, when multiple GVKs match a specific kind in `KindForGroupVersionKinds`, only the first is matched, which is not necessarily the correct one. I'm proposing to extend this to pick the best match instead.
Here's my problematic use case; of course it involves ScheduledJobs 😉:
I have a `GroupVersions` with `batch/v1` and `batch/v2alpha1`, in that order. I'm calling `KindForGroupVersionKinds` with kind `batch/v2alpha1 ScheduledJob`, and that currently matches the first `GroupVersion` instead of picking the more concrete one. There's a [clear description](ee77d4e6ca/pkg/api/unversioned/group_version.go (L183)) of why it works this way on a single `GroupVersion`, but `GroupVersions` should pick more carefully.
@deads2k this is your baby, wdyt?
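To make the proposal concrete, a self-contained sketch of the intended behaviour, using simplified stand-ins for the types in pkg/api/unversioned: an exact GroupVersion match wins before falling back to the old group-level match.
```go
package main

import "fmt"

// Simplified stand-ins for the unversioned types.
type GroupVersion struct{ Group, Version string }
type GroupVersionKind struct{ Group, Version, Kind string }
type GroupVersions []GroupVersion

func (gvs GroupVersions) KindForGroupVersionKinds(kinds []GroupVersionKind) (GroupVersionKind, bool) {
	// First pass: an exact GroupVersion match wins.
	for _, gv := range gvs {
		for _, gvk := range kinds {
			if gvk.Group == gv.Group && gvk.Version == gv.Version {
				return gvk, true
			}
		}
	}
	// Second pass: fall back to a group-level match, as today.
	for _, gv := range gvs {
		for _, gvk := range kinds {
			if gvk.Group == gv.Group {
				return gvk, true
			}
		}
	}
	return GroupVersionKind{}, false
}

func main() {
	gvs := GroupVersions{{"batch", "v1"}, {"batch", "v2alpha1"}}
	gvk, ok := gvs.KindForGroupVersionKinds([]GroupVersionKind{{"batch", "v2alpha1", "ScheduledJob"}})
	fmt.Println(gvk, ok) // {batch v2alpha1 ScheduledJob} true, not a batch/v1 match
}
```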
Continuation of #1111
I tried to keep this PR down to a simple search-and-replace to keep
things simple. I may have gone too far in some spots, but it's easy to
roll those back if needed.
I avoided renaming `contrib/mesos/pkg/minion` because there's already
a `contrib/mesos/pkg/node` dir and fixing that will require a bit of work
due to a circular import chain that pops up. So I'm saving that for a
follow-on PR.
I rolled back some of this from a previous commit because it just got
too big/messy. Will follow up with additional PRs.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Automatic merge from submit-queue
Dynamic provisioning for flocker volume plugin
Refactor flocker volume plugin
* [x] Support provisioning beta (#29006)
* [x] Support deletion
* [x] Use bind mounts instead of /flocker in containers
* [x] Support ownership management and SELinux relabeling
* [x] Add volume specification via datasetUUID (this is guaranteed to be unique); see the sketch below
I based my refactoring on the GCE-PD plugin, replicating its behaviour fairly closely.
**Related issues**: #29006, #26908
@jsafrane @mattbates @wallrj @wallnerryan
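For illustration, a sketch of a pod volume referencing a Flocker dataset by datasetUUID rather than by the non-unique dataset name; the import path assumes the in-tree api package, and the UUID value is made up.
```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
)

func main() {
	// Reference the Flocker dataset by its UUID, which is guaranteed
	// unique, instead of by datasetName. The UUID below is illustrative.
	vol := api.Volume{
		Name: "flocker-data",
		VolumeSource: api.VolumeSource{
			Flocker: &api.FlockerVolumeSource{
				DatasetUUID: "0dcbcf26-94b1-4a0e-9c0d-0c2121a0c9e3",
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```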
Automatic merge from submit-queue
Revert "Work around the etcd watch issue"
Reverts kubernetes/kubernetes#33101
Since #33393 has been merged, the bug should be fixed.
We had another bug where we confused the hostname with the NodeName.
To avoid this happening again, and to make the code more
self-documenting, we use types.NodeName (a typedef alias for string)
whenever we are referring to the Node.Name.
A tedious but mechanical commit, therefore, changing all uses of the
node name to types.NodeName.
Also clean up some of the (many) places where the NodeName is referred
to as a hostname (not true on AWS), or an instanceID (not true on GCE),
etc.
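A minimal sketch of the idea; the real typedef lives in pkg/types, and getInstanceID is a hypothetical signature showing the intended style.
```go
package main

import "fmt"

// NodeName mirrors types.NodeName as described above: a typedef alias for
// string, so the compiler flags accidental mixing of node names with plain
// hostname or instance-ID strings.
type NodeName string

// getInstanceID is hypothetical; the parameter type documents that the
// argument is a Node.Name, not a hostname or an instance ID.
func getInstanceID(nodeName NodeName) string {
	return "i-" + string(nodeName) // conversion back to string is explicit
}

func main() {
	fmt.Println(getInstanceID(NodeName("ip-10-0-0-1.ec2.internal")))
}
```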
* Flocker datasets should be attached using a unique identifier; this
is not the case for the name metadata used by datasetName
* Allow only one of datasetUUID / datasetName to be specified
Automatic merge from submit-queue
Allow garbage collection to work against different API prefixes
The GC needs to build clients based only on Resource or Kind. Hoist the
restmapper out of the controller and the clientpool, support a new
ClientForGroupVersionKind and ClientForGroupVersionResource, and use the
appropriate one in both places.
Allows OpenShift to use the GC
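Roughly what the split looks like, as a self-contained sketch with simplified stand-ins for the unversioned types and the dynamic client pool; the point is that neither entry point needs a RESTMapper, only a group/version.
```go
package main

import "fmt"

// Simplified stand-ins for the unversioned types and a dynamic client.
type GroupVersionKind struct{ Group, Version, Kind string }
type GroupVersionResource struct{ Group, Version, Resource string }
type dynamicClient struct{ groupVersion string }

type ClientPool struct{ clients map[string]*dynamicClient }

// clientFor builds (or reuses) a client for a group/version.
func (p *ClientPool) clientFor(group, version string) *dynamicClient {
	key := group + "/" + version
	if c, ok := p.clients[key]; ok {
		return c
	}
	c := &dynamicClient{groupVersion: key}
	p.clients[key] = c
	return c
}

// ClientForGroupVersionKind resolves a client from a Kind's group/version.
func (p *ClientPool) ClientForGroupVersionKind(gvk GroupVersionKind) *dynamicClient {
	return p.clientFor(gvk.Group, gvk.Version)
}

// ClientForGroupVersionResource does the same for a Resource.
func (p *ClientPool) ClientForGroupVersionResource(gvr GroupVersionResource) *dynamicClient {
	return p.clientFor(gvr.Group, gvr.Version)
}

func main() {
	pool := &ClientPool{clients: map[string]*dynamicClient{}}
	c := pool.ClientForGroupVersionKind(GroupVersionKind{"batch", "v1", "Job"})
	fmt.Println(c.groupVersion) // batch/v1
}
```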
Automatic merge from submit-queue
unify available api group versions in our scripts
There are currently many parallel lists of available group versions with slightly different syntaxes in each one. This collapses them into a single list for us to maintain.
This also caught spots where the lists didn't match before.
@sttts @ncdc
Automatic merge from submit-queue
Fix backward compatibility issue caused by promoting initcontainers f…
#31026 moved the init-container feature from alpha to beta, but only took care of backward compatibility for the pod specification, not for the status. For the status, it simply renamed `pods.alpha.kubernetes.io/init-container-statuses` to
`pods.beta.kubernetes.io/init-container-statuses` instead of introducing the beta annotation alongside the alpha one. This breaks when the cluster is running 1.4 but the user is still running kubectl 1.3.x.
Fixes #32711
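A sketch of the compatible approach, assuming the fix publishes the statuses under both the alpha and beta keys so an older kubectl still finds them; the keys are taken from the discussion above, and the JSON blob is illustrative.
```go
package main

import "fmt"

// Annotation keys as they appear above, treated as plain constants here.
const (
	alphaInitContainerStatuses = "pods.alpha.kubernetes.io/init-container-statuses"
	betaInitContainerStatuses  = "pods.beta.kubernetes.io/init-container-statuses"
)

func main() {
	// Publish the status under both keys: a 1.3 kubectl reads only the
	// alpha key, while a 1.4 client reads the beta one.
	statuses := `[{"name":"init","ready":true}]` // illustrative blob
	annotations := map[string]string{
		alphaInitContainerStatuses: statuses,
		betaInitContainerStatuses:  statuses,
	}
	fmt.Println(annotations)
}
```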
Automatic merge from submit-queue
Specific error message on failed rolling update issued by older kubectl against 1.4 master
Fixes #32706
`kubernetes-e2e-gke-1.4-1.3-kubectl-skew` (1.3 kubectl and 1.4 master) test suite failed with:
```
[k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
...
Error from server: object is being deleted: replicationcontrollers "e2e-test-nginx-rc" already exists error: exit status 1 not to have occurred
```
It's because the old RC had an orphanFinalizer, so it was not deleted from the key-value store immediately; in turn, creating a new RC with the same name failed.
In this failure the RC and pods are in fact updated; it's just that the RC ends up with a different name, i.e., the original name plus a hash generated from the podTemplate. The error is confusing to the user, but not that bad, so this PR just prints a warning message instructing users how to work around it.
1.4 kubectl rolling-update uses different logic, so it works.
@lavalamp @gmarek @janetkuo @pwittrock
cc @liggitt for the ctx changes.
Automatic merge from submit-queue
Centralize install code
Trying to figure out a way to do this that makes the changes as painless to roll out as possible. This is going to be a multi-step process...