Automatic merge from submit-queue
Fixed misleading error message when a resource with no selector or name is provided to kubectl delete or label
Commit:
- Fixed the misleading error message shown when a resource with no selector or name is provided to the kubectl delete or label commands
This commit fixes #25541
Automatic merge from submit-queue
kubectl attach: error out for non-existing containers
Currently, kubectl attach falls back to the first container, which is pretty confusing.
Based on https://github.com/kubernetes/kubernetes/pull/27541.
Automatic merge from submit-queue
Use strategic patch to replace changeCause in patch command
This is a partial rework of 11da9a7638.
StrategicPatch will be used to update changeCause, but a failure won't affect the command result.
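A minimal sketch of the idea, assuming the modern `k8s.io/apimachinery` strategic-patch utilities (the actual kubectl wiring differs):
```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Original object and a copy whose only change is the change-cause annotation.
	orig := v1.Pod{}
	mod := *orig.DeepCopy()
	mod.Annotations = map[string]string{"kubernetes.io/change-cause": "kubectl patch ..."}

	origJSON, _ := json.Marshal(orig)
	modJSON, _ := json.Marshal(mod)

	// The strategic patch carries only the annotation delta.
	patch, err := strategicpatch.CreateTwoWayMergePatch(origJSON, modJSON, v1.Pod{})
	if err != nil {
		// Failing to record the change cause is reported but never fails the command.
		fmt.Println("could not build change-cause patch:", err)
		return
	}
	fmt.Printf("PATCH body: %s\n", patch)
}
```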
fixes: #24858
Automatic merge from submit-queue
Enable HTTP2 by default
Update to enable HTTP2 by default, with the option to disable it.
This is a continuation of #25280 for the 1.4 release. This should provide ample time for vetting.
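A rough sketch of the opt-out shape, assuming `golang.org/x/net/http2` (the real change goes through the client transport utilities; `disableHTTP2` is an illustrative knob, not the actual flag):
```go
package transport

import (
	"crypto/tls"
	"net/http"

	"golang.org/x/net/http2"
)

// newTransport enables HTTP/2 on a TLS transport unless explicitly disabled.
func newTransport(disableHTTP2 bool) (*http.Transport, error) {
	t := &http.Transport{TLSClientConfig: &tls.Config{}}
	if !disableHTTP2 {
		// ConfigureTransport advertises "h2" via NextProtos and wires the
		// HTTP/2 upgrade path onto the existing *http.Transport.
		if err := http2.ConfigureTransport(t); err != nil {
			return nil, err
		}
	}
	return t, nil
}
```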
/cc @krousey
Automatic merge from submit-queue
Track object modifications in fake clientset
Fake clientset is used extensively by unit tests, but it has some
shortcomings:
- no filtering on namespace and name: tests that want to test objects in
multiple namespaces end up getting all objects from this clientset,
as it doesn't perform any filtering based on name and namespace;
- updates and deletes don't modify the clientset state, so some tests
can get unexpected results if they modify/delete objects using the
clientset;
- it's possible to insert multiple objects with the same
kind/name/namespace; this leads to confusing behavior, as retrieval is
based on the insertion order but anchors on the last added object as
long as no more objects are added.
This change updates the core.ObjectRetriever implementation to track
object adds, updates, and deletes.
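For example, a minimal sketch against today's `k8s.io/client-go/kubernetes/fake` package (paths and signatures are from current client-go, not the tree this merged into):
```go
package fakedemo

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func demo() error {
	cs := fake.NewSimpleClientset(
		&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "a", Namespace: "ns1"}},
		&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "b", Namespace: "ns2"}},
	)

	// Listing is filtered by namespace: only pod "a" comes back.
	if _, err := cs.CoreV1().Pods("ns1").List(context.TODO(), metav1.ListOptions{}); err != nil {
		return err
	}

	// Deletes mutate the tracked state, so a later Get returns NotFound.
	return cs.CoreV1().Pods("ns1").Delete(context.TODO(), "a", metav1.DeleteOptions{})
}
```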
Some unit tests were depending on the previous (and somewhat incorrect)
behavior. These are fixed in the following few commits.
Automatic merge from submit-queue
[Refactor] Make QoS naming consistent across the codebase
@derekwaynecarr @vishh PTAL. Can one of you please attach a LGTM?
Automatic merge from submit-queue
Volume manager must verify containers terminated before deleting for ungracefully terminated pods
A pod is removed from the volume manager (triggering unmount) when it is deleted from the kubelet pod manager. The kubelet deletes the pod from the pod manager as soon as it receives a delete pod request. As long as the graceful termination period is non-zero, this happens after the kubelet has terminated all containers for the pod. However, when the graceful termination period for a pod is set to zero, the pod is deleted from the pod manager *before* its containers are terminated.
This can result in volumes getting unmounted from a pod before all containers have exited when graceful termination is set to zero.
This PR prevents that from happening by removing a pod from the volume manager only once it has been deleted from the pod manager AND the kubelet containerRuntime status indicates that all containers for the pod have exited. Because we do not want to call containerRuntime too frequently, we introduce a delay in the `findAndRemoveDeletedPods()` method to prevent it from executing more often than once every two seconds.
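A minimal sketch of the throttling idea (illustrative names, not the actual kubelet symbols):
```go
package populator

import "time"

// The container runtime should not be queried more often than once
// every two seconds.
const deletedPodCheckInterval = 2 * time.Second

type desiredStateOfWorldPopulator struct {
	timeOfLastCheck time.Time
}

func (p *desiredStateOfWorldPopulator) findAndRemoveDeletedPods() {
	if time.Since(p.timeOfLastCheck) < deletedPodCheckInterval {
		// Ran too recently; skip this pass and let the next loop retry.
		return
	}
	p.timeOfLastCheck = time.Now()

	// ... for each pod deleted from the pod manager, ask the container
	// runtime whether all of its containers have exited, and only then
	// remove the pod's volumes from the volume manager ...
}
```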
Fixes https://github.com/kubernetes/kubernetes/issues/27691
Running the test in a tight loop to verify the fix.
Automatic merge from submit-queue
Remove comment about empty selectors in the service spec
As discussed, removing the comment about empty selectors in Service specs.
Automatic merge from submit-queue
federation: Upgrading the groupversion to v1beta1
This PR contains 2 commits:
* Removing fields from the Cluster API object that we are not using. This includes Capacity, Allocatable, and ClusterMeta.
* Moving code and renaming the groupversion `federation/v1alpha1` to `federation/v1beta1`.
cc @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Kubelet should mark VolumeInUse before checking if it is Attached
Kubelet should mark VolumeInUse before checking if it is Attached.
Controller should fetch fresh copy of node object before detach instead of relying on node informer cache.
Fixes #27836
Ensures that the kubelet marks VolumeInUse before checking if it is
Attached. Also ensures that the attach/detach controller always fetches
a fresh copy of the node object before detach (instead of relying on
the node informer cache).
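A minimal sketch of the controller-side change, assuming a client-go clientset (the real code lives in the attach/detach controller):
```go
package detach

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// freshNode reads the node straight from the API server, bypassing the
// informer cache, so VolumesInUse reflects the kubelet's latest report
// before a detach decision is made.
func freshNode(cs kubernetes.Interface, name string) (*v1.Node, error) {
	return cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
}
```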
Automatic merge from submit-queue
TLS bootstrap API group (alpha)
This PR only covers the new types and related client/storage code; the vast majority of the line count is codegen. The implementation differs slightly from the current proposal document, based on discussions in the design thread (#20439). The controller logic and kubelet support mentioned in the proposal are forthcoming in separate requests.
I submit that #18762 ("Creating a new API group is really hard") is, if anything, understating it. I've tried to structure the commits to illustrate the process.
@mikedanese @erictune @smarterclayton @deads2k
```release-note-experimental
An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md.
```
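As a rough illustration of the kind of object such a bootstrap group revolves around (every field name below is an assumption for illustration, not the merged schema):
```go
package certificates

// CertificateSigningRequest sketches the bootstrap flow: a node submits
// a CSR, and an approver fills in the certificate.
type CertificateSigningRequest struct {
	// Request holds the PEM-encoded certificate signing request.
	Request []byte
	// Username and Groups record the identity that submitted the request.
	Username string
	Groups   []string
	// Approved and Certificate are set once the request is acted upon.
	Approved    bool
	Certificate []byte
}
```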
Automatic merge from submit-queue
Dedent
Adding the dedent package and then applying it to the kubectl help commands. Also updating the documentation to reflect the use of dedent.
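For example, a minimal sketch assuming the `github.com/renstrom/dedent` package:
```go
package cmd

import "github.com/renstrom/dedent"

// The long help text can be indented to match the surrounding code;
// Dedent strips the common leading whitespace before it is displayed.
var deleteLong = dedent.Dedent(`
	Delete resources by filenames, stdin, resources and names, or by
	resources and label selector.`)
```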
Using the new fake clientset registry exposes the actual flow of pending
namespace finalization: get the namespace, create the finalizer, list
pods, and delete the namespace if there are no pods.
Since fake clientset now correctly tracks objects created by deployment
controller, it triggers different controller behavior: the controller
creates the replica set only once and updates the deployment once.
Fake clientset no longer needs to be prepopulated with records: keeping
them in leads to name conflicts on creates. Also, since the fake
clientset now respects namespaces, we need to populate them correctly.
Automatic merge from submit-queue
vSphere provider - Adding config for working dir
This allows the user to set "working-dir" in their vsphere.cfg file.
The value should be a path in the vSphere datastore in which the
provider will look for VMs. This should help compartmentalize
workloads in vSphere.
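A sketch of how the option might surface through gcfg (struct layout assumed; the real definition lives in the vSphere cloud provider):
```go
package vsphere

// VSphereConfig mirrors vsphere.cfg. With gcfg, a file containing
//
//	[Global]
//	working-dir = "kubernetes-cluster-1/vms"
//
// would populate Global.WorkingDir below.
type VSphereConfig struct {
	Global struct {
		// WorkingDir is a datastore folder path under which the provider
		// looks for this cluster's VMs.
		WorkingDir string `gcfg:"working-dir"`
	}
}
```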
Automatic merge from submit-queue
Add EndpointReconcilerConfig to master Config
Add EndpointReconcilerConfig to master Config to allow downstream integrators to customize the reconciler and the reconciliation interval when starting a customized master.
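A sketch of the shape such a knob could take (names inferred from the description, not copied from the merged code):
```go
package master

import "time"

// EndpointReconciler stands in for whatever interface the master uses to
// reconcile the "kubernetes" service endpoints; left abstract here.
type EndpointReconciler interface{}

// EndpointReconcilerConfig lets an integrator swap in a custom reconciler
// and tune how often reconciliation runs.
type EndpointReconcilerConfig struct {
	Reconciler EndpointReconciler
	Interval   time.Duration
}
```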
@kubernetes/sig-api-machinery @deads2k @smarterclayton @liggitt @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Convert service account token controller to use a work queue
Converts the service account token controller to use a work queue. This allows parallelization of token generation (useful when several namespaces or service accounts are being created simultaneously). It also lets us requeue failures to be retried sooner than the next sync period (which can be very long).
Fixes an issue seen when a namespace is created with a quota on secrets and the token controller tries to create a token secret before the quota status has been initialized. In that case, the secret is rejected at admission, and the token controller wouldn't retry until the resync period.
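A minimal sketch of the work-queue pattern, assuming `k8s.io/client-go/util/workqueue` (`syncServiceAccount` is a hypothetical sync function):
```go
package tokens

import "k8s.io/client-go/util/workqueue"

// syncServiceAccount would generate the token secret for one queue key.
func syncServiceAccount(key string) error { return nil }

// runWorker can be started on several goroutines to parallelize token
// generation; failures are requeued with backoff instead of waiting for
// the next full resync.
func runWorker(queue workqueue.RateLimitingInterface) {
	for {
		key, quit := queue.Get()
		if quit {
			return
		}
		if err := syncServiceAccount(key.(string)); err != nil {
			queue.AddRateLimited(key) // retry with backoff
		} else {
			queue.Forget(key)
		}
		queue.Done(key)
	}
}
```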