Automatic merge from submit-queue
Make EnableCRI default to true
This change makes the kubelet use the CRI implementation by default,
unless users opt out explicitly by passing --enable-cri=false.
For the rkt integration, the --enable-cri flag will have no effect
since rktnetes does not use CRI.
Also, mark the original --experimental-cri flag as hidden and deprecated,
so that we can remove it in the next release. If both flags are specified,
the --enable-cri flag overrides the --experimental-cri flag.
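As a minimal sketch of how this kind of flag precedence can be wired with `spf13/pflag` (the flag names match the description above; the surrounding option handling is hypothetical, not the actual kubelet code):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet", pflag.ExitOnError)

	// New flag: CRI is on by default; users opt out with --enable-cri=false.
	enableCRI := fs.Bool("enable-cri", true, "Enable the Container Runtime Interface integration.")
	// Old flag: kept for one release, but hidden and marked deprecated.
	experimentalCRI := fs.Bool("experimental-cri", false, "[DEPRECATED] use --enable-cri instead.")
	fs.MarkDeprecated("experimental-cri", "use --enable-cri instead; will be removed in the next release")
	fs.MarkHidden("experimental-cri")

	fs.Parse([]string{"--experimental-cri=true"})

	// If both flags are given, --enable-cri wins; otherwise fall back to the old flag.
	useCRI := *enableCRI
	if !fs.Changed("enable-cri") && fs.Changed("experimental-cri") {
		useCRI = *experimentalCRI
	}
	fmt.Println("use CRI:", useCRI)
}
```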
Automatic merge from submit-queue
Make kubectl edit work with unstructured objects
Fixes https://github.com/kubernetes/kubernetes/issues/35993
1. First (before any other changes), added several test cases for complex edit scenarios:
- [x] ensure the edit loop bails out if given the same result that already caused errors
- [x] ensure an edited file with a syntax error is reopened preserving the input
- [x] ensure objects with existing "caused-by" annotations get updated with the current command
2. Refactored the edit code to prep for switching to unstructured:
- [x] made editFn operate on a slice of resource.Info objects passed as an arg, regardless of edit mode
- [x] simplified short-circuiting logic when re-editing a file containing an error
- [x] refactored how we build the various visitors (namespace enforcement, annotation application, patching, creating) so we could easily switch to just using a single visitor over a set of resource infos read from the updated input for all of them
3. Switched to using a resource builder to parse the stream of the user's edited output
- [x] improve the error message you get on syntax errors
- [x] preserve the user's input more faithfully (see how the captured testcase requests to the server changed to reflect exactly what the user edited)
- [x] stopped doing client-side conversion (means deprecating `--output-version`)
4. Switched edit to work with generic objects
- [x] use unstructured objects
- [x] fall back to generic json merge patch for unrecognized group/version/kinds
5. Added new test cases
- [x] schemaless objects fall back to generic json merge (covers TPR scenario)
- [x] edit unknown version of known kind (version "v0" of storageclass) falls back to generic json merge
```release-note
`kubectl edit` now edits objects exactly as they were retrieved from the API. This allows using `kubectl edit` with third-party resources and extension API servers. Because client-side conversion is no longer done, the `--output-version` option is deprecated for `kubectl edit`. To edit using a particular API version, fully-qualify the resource, version, and group used to fetch the object (for example, `job.v1.batch/myjob`)
```
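For unrecognized group/version/kinds, a generic JSON merge patch can be computed directly from the user's edited bytes. A minimal sketch using the `evanphx/json-patch` library (illustrative only; not necessarily the helper kubectl itself uses, and the resource below is hypothetical):

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Object as originally retrieved from the API (serialized JSON).
	original := []byte(`{"apiVersion":"example.com/v1","kind":"Widget","metadata":{"name":"w1"},"spec":{"size":1}}`)
	// Object after the user edited it.
	edited := []byte(`{"apiVersion":"example.com/v1","kind":"Widget","metadata":{"name":"w1"},"spec":{"size":3}}`)

	// A JSON merge patch (RFC 7386) needs no registered Go struct for the type,
	// which is what makes it work for third-party resources.
	patch, err := jsonpatch.CreateMergePatch(original, edited)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch)) // {"spec":{"size":3}}
}
```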
Automatic merge from submit-queue
fix comment
**What this PR does / why we need it**:
fix comment
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 41382, 41407, 41409, 41296, 39636)
Update to use proxy subresource consistently
Proxy subresources have been in place since 1.2.0 and improve the ability to put policy in place around proxy access.
This PR updates the last few clients to use proxy subresources rather than the root proxy
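The difference is purely in the request path; a small sketch of the two URL shapes (the namespace and pod name are placeholders):

```go
package main

import "fmt"

func main() {
	ns, pod := "default", "mypod"

	// Legacy root proxy: the verb comes before the resource, which is hard to authorize per-object.
	rootProxy := fmt.Sprintf("/api/v1/proxy/namespaces/%s/pods/%s/", ns, pod)

	// Proxy subresource: the resource is named first, so policy can target "pods/proxy".
	subresource := fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/proxy/", ns, pod)

	fmt.Println(rootProxy)
	fmt.Println(subresource)
}
```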
mark --output-version as deprecated, add example for fully-qualifying version to edit
Add 'kubectl edit' testcase for editing schemaed and schemaless data together
Add 'kubectl edit' testcase for editing unknown version of known group/kind
edit: make editFn operate on arguments regardless of mode
edit: simplify short-circuiting logic when re-editing a file containing an error
edit: factor out visitor building
edit: use resource builder to get results from edited file
Add 'kubectl edit' testcase for saving a repeated error
Add 'kubectl edit' testcase for preserving an edited file with a syntax error
Add 'kubectl edit' testcase for recording command on list of objects
Automatic merge from submit-queue (batch tested with PRs 41299, 41325, 41386, 41329, 41418)
move metav1 conversions to metav1
Conversions for `metav1` types belong in metav1 and should be registered when you register the types.
@mikedanese @luxas I think this is what you just hit in your fresh scheme.
@smarterclayton @lavalamp double check the sanity, but I think this does what people expect.
Automatic merge from submit-queue (batch tested with PRs 41299, 41325, 41386, 41329, 41418)
stop senseless negotiation
Most client commands don't respect a negotiated version at all. If you request a particular version, then of course it should be respected, but if you have none to request, the current negotiation step doesn't return anything useful, so we may as well return nothing and at least be able to detect that situation.
@jwforres @kubernetes/sig-cli-pr-reviews
Added a TODO to make the negotiate function useful. I think I'm inclined to remove it entirely unless someone can come up with a useful reason to have it.
Automatic merge from submit-queue (batch tested with PRs 41337, 41375, 41363, 41034, 41350)
use instance's Name to attach gce disk
**What this PR does / why we need it**:
fix #40427
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40427
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41357, 41178, 41280, 41184, 41278)
Switch RBAC subject apiVersion to apiGroup in v1beta1
Referencing a subject from an RBAC role binding, the API group and kind of the subject is needed to fully-qualify the reference.
The version is not, and adds complexity around re-writing the reference when returning the binding from different versions of the API, and when reconciling subjects.
This PR:
* v1beta1: change the subject `apiVersion` field to `apiGroup` (to match roleRef)
* v1alpha1: convert apiVersion to apiGroup for backwards compatibility
* all versions: add defaulting for the three allowed subject kinds
* all versions: add validation to the field so we can count on the data in etcd being good until we decide to relax the apiGroup restriction
```release-note
RBAC `v1beta1` RoleBinding/ClusterRoleBinding subjects changed `apiVersion` to `apiGroup` to fully-qualify a subject. ServiceAccount subjects default to an apiGroup of `""`, User and Group subjects default to an apiGroup of `"rbac.authorization.k8s.io"`.
```
@deads2k @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-pr-reviews
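As a sketch of the resulting shape (using the current `k8s.io/api` import path, which postdates this PR), a v1beta1 RoleBinding subject now carries `apiGroup` rather than `apiVersion`:

```go
package main

import (
	"fmt"

	rbacv1beta1 "k8s.io/api/rbac/v1beta1"
)

func main() {
	binding := rbacv1beta1.RoleBinding{
		Subjects: []rbacv1beta1.Subject{
			// ServiceAccounts default to the core ("") API group.
			{Kind: "ServiceAccount", APIGroup: "", Name: "builder", Namespace: "dev"},
			// Users and Groups default to the RBAC API group.
			{Kind: "User", APIGroup: "rbac.authorization.k8s.io", Name: "alice"},
		},
		RoleRef: rbacv1beta1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "edit",
		},
	}
	fmt.Printf("%+v\n", binding.Subjects)
}
```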
Automatic merge from submit-queue (batch tested with PRs 41115, 41212, 41346, 41340, 41172)
Enable PodTolerateNodeTaints predicate in DaemonSet controller
Ref #28687, this enables the PodTolerateNodeTaints predicate in the DaemonSet controller
cc @Random-Liu @dchen1107 @davidopp @mikedanese @kubernetes/sig-apps-pr-reviews @kubernetes/sig-node-pr-reviews @kargakis @lukaszo
```release-note
Make DaemonSet controller respect node taints and pod tolerations.
```
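In terms of the API types involved, the predicate compares node taints against pod tolerations; a minimal illustration with the current `k8s.io/api/core/v1` types (the import path postdates this PR, and the taint key/value are placeholders):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	taint := v1.Taint{
		Key:    "dedicated",
		Value:  "logging",
		Effect: v1.TaintEffectNoSchedule,
	}

	toleration := v1.Toleration{
		Key:      "dedicated",
		Operator: v1.TolerationOpEqual,
		Value:    "logging",
		Effect:   v1.TaintEffectNoSchedule,
	}

	// A DaemonSet pod carrying this toleration is allowed on a node with the taint above.
	fmt.Println("tolerated:", toleration.ToleratesTaint(&taint))
}
```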
Automatic merge from submit-queue
fix service spec for kube api server
For the auto-generated kube-apiserver service, the service spec reuses the service port itself as the target port. The endpoint is created correctly using the public port. Fix the service spec as well, because some plugin controllers react to the service spec itself.
Before fix:
```
sh-4.2# kubectl get endpoints
NAME         ENDPOINTS                                          AGE
kubernetes   172.17.0.2:8443,172.17.0.2:8053,172.17.0.2:8053    20h
sh-4.2# kubectl get services kubernetes -o json
...
...
"spec": {
    "clusterIP": "172.30.0.1",
    "ports": [
        {
            "name": "https",
            "port": 443,
            "protocol": "TCP",
            "targetPort": 443   ## <--- same as port, even if the endpoint really means 8443
        },
        {
            "name": "dns",
            "port": 53,
            "protocol": "UDP",
            "targetPort": 8053
        },
        {
            "name": "dns-tcp",
...
```
After fix:
```
"spec": {
    "clusterIP": "172.30.0.1",
    "ports": [
        {
            "name": "https",
            "port": 443,
            "protocol": "TCP",
            "targetPort": 8443  # <-- fixed, now matches the endpoint object
        },
        {
            "name": "dns",
            "port": 53,
            "protocol": "UDP",
            "targetPort": 8053
        },
        {
            "name": "dns-tcp",
...
```
Automatic merge from submit-queue
Added kubectl create role command
Added `kubectl create role` command.
Fixed part of #39596
**Release note**:
```
Added a new command `kubectl create role` to help users create a single role from the command line.
```
Automatic merge from submit-queue (batch tested with PRs 41312, 41289)
resolve udevadm from PATH in cinder_util.go
**What this PR does / why we need it**:
When a cinder volume gets attached to a node, the cinder volume plugin calls `udevadm` with an absolute path `/usr/bin/udevadm`. This path is incorrect for recent versions of debian, ubuntu or the hyperkube image on gcr.io where `udevadm` is located at `/bin/udevadm` or `/sbin/udevadm`. A variant of the hyperkube image is used on CoreOS to run kubelet with rkt fly stage 1.
As a result of the failed `udevadm` exec, the `AttachDisk` function in `cinder_util.go` returns an error.
This PR removes the absolute path from the `udevadm` exec. As a result, `udevadm` is resolved by looking it up in `PATH`.
This is consistent with the gce volume plugin, which executes `udevadm` the same way.
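The change amounts to letting `os/exec` resolve the binary; a small sketch of the before/after behavior (a generic `udevadm trigger` invocation, not the actual plugin code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Before: a hard-coded absolute path, which breaks on distros where
	// udevadm lives in /bin or /sbin instead of /usr/bin.
	_ = exec.Command("/usr/bin/udevadm", "trigger")

	// After: the bare name is resolved via PATH, matching the GCE volume plugin.
	path, err := exec.LookPath("udevadm")
	if err != nil {
		fmt.Println("udevadm not found in PATH:", err)
		return
	}
	fmt.Println("resolved udevadm to:", path)

	out, err := exec.Command("udevadm", "trigger").CombinedOutput()
	fmt.Println(string(out), err)
}
```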
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #29832
**Special notes for your reviewer**:
**Release note**:
```release-note
```
To safely mark a volume detached when the volume controller manager is used.
An example of one such problem:
1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler detects volume is no longer in desired state of world,
removes it from volumes in use
7. MountVolume tries to mark volume in use, throws an error because
volume is no longer in actual state of world list.
8. controller-manager tries to detach the volume, this fails because it
is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
* Switches glog.Errorf to utilruntime.HandleError in DS and RC controllers
* Drops a couple of unused variables in the DS, SS, and Deployment controllers
* Updates some comments
Automatic merge from submit-queue (batch tested with PRs 41137, 41268)
Allow the CertificateController to use any Signer implementation.
**What this PR does / why we need it**:
This will allow developers to create `CertificateController`s with arbitrary `Signer`s, instead of forcing the use of `CFSSLSigner`. It matches the behavior of allowing an arbitrary `AutoApprover` to be passed in the constructor.
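A rough sketch of the kind of pluggable interface this enables (the exact interface in the certificates controller may differ; the types below are illustrative and use a current import path):

```go
package main

import (
	certificates "k8s.io/api/certificates/v1beta1"
)

// Signer is the abstraction the controller depends on; CFSSLSigner is just one implementation.
type Signer interface {
	// Sign issues a certificate for the request and returns the updated object
	// with Status.Certificate populated.
	Sign(csr *certificates.CertificateSigningRequest) (*certificates.CertificateSigningRequest, error)
}

// noopSigner shows that any implementation satisfying the interface can be
// passed to the controller's constructor, mirroring how AutoApprover works.
type noopSigner struct{}

func (noopSigner) Sign(csr *certificates.CertificateSigningRequest) (*certificates.CertificateSigningRequest, error) {
	return csr, nil
}

func main() {
	var _ Signer = noopSigner{}
}
```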
**Release note**:
```release-note
NONE
```
CC @mikedanese
Automatic merge from submit-queue (batch tested with PRs 38252, 41122, 36101, 41017, 41264)
BootstrapSigner and TokenCleaner controllers
This is part of https://github.com/kubernetes/features/issues/130 and is an implementation of https://github.com/kubernetes/community/pull/189.
Work that needs to be done yet in this PR:
* [ ] ~~e2e tests~~ Will come in new PR.
* [x] flag to disable this by default
```release-note
Native support for token based bootstrap flow. This includes signing a well known ConfigMap in the `kube-public` namespace and cleaning out expired tokens.
```
@kubernetes/sig-cluster-lifecycle @dgoodwin @roberthbailey @mikedanese
Automatic merge from submit-queue (batch tested with PRs 41223, 40892, 41220, 41207, 41242)
Fixes #40819 and fixes #33114
**What this PR does / why we need it**:
Start looking up the virtual machine by its UUID in vSphere again. Looking up by IP address is problematic and can either not return a VM at all, or could return the wrong VM.
Retrieves the VM's UUID in one of two methods - either by a `vm-uuid` entry in the cloud config file on the VM, or via sysfs. The sysfs route requires root access, but restores the previous functionality.
Multiple VMs in a vCenter cluster can share an IP address - for example, if you have multiple VM networks, but they're all isolated and use the same address range. Additionally, flannel network address ranges can overlap.
vSphere seems to have a limitation of reporting no more than 16 interfaces from a virtual machine, so it's possible that the IP address list on a VM is completely untrustworthy anyhow - it can either be empty (because the 16 interfaces it found were veth interfaces with no IP address), or it can report the flannel IP.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #40819, fixes #33114
**Special notes for your reviewer**:
**Release note**:
```release-note
Reverts to looking up the current VM in vSphere using the machine's UUID, either obtained via sysfs or via the `vm-uuid` parameter in the cloud configuration file.
```
Automatic merge from submit-queue (batch tested with PRs 41223, 40892, 41220, 41207, 41242)
skip iptables sync if no endpoint changes
Alternative to https://github.com/kubernetes/kubernetes/pull/41173. Fixes: #26637
No need to checksum. Just compare endpoint maps.
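A minimal sketch of the idea: instead of hashing, keep the last seen endpoints map and skip the sync when the new map is deeply equal (types simplified; not the actual kube-proxy code):

```go
package main

import (
	"fmt"
	"reflect"
)

// endpointsMap maps a service port name to its ready endpoint addresses.
type endpointsMap map[string][]string

func syncIfChanged(prev, curr endpointsMap, syncProxyRules func()) endpointsMap {
	// No checksum needed: a structural comparison of the two maps is enough.
	if reflect.DeepEqual(prev, curr) {
		fmt.Println("no endpoint changes, skipping iptables sync")
		return prev
	}
	syncProxyRules()
	return curr
}

func main() {
	prev := endpointsMap{"default/kubernetes:https": {"172.17.0.2:8443"}}
	curr := endpointsMap{"default/kubernetes:https": {"172.17.0.2:8443"}}
	prev = syncIfChanged(prev, curr, func() { fmt.Println("rewriting iptables rules") })
	_ = prev
}
```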
Automatic merge from submit-queue (batch tested with PRs 41248, 41214)
Switch hpa controller to shared informer
**What this PR does / why we need it**: switch the hpa controller to use a shared informer
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: Only the last commit is relevant. The others are from #40759, #41114, #41148
**Release note**:
```release-note
```
cc @smarterclayton @deads2k @sttts @liggitt @DirectXMan12 @timothysc @kubernetes/sig-scalability-pr-reviews @jszczepkowski @mwielgus @piosz
Automatic merge from submit-queue (batch tested with PRs 41246, 39998)
Cinder volume attacher: use instanceID instead of NodeID when verifying attachment
**What this PR does / why we need it**: The Cinder volume attacher incorrectly uses the NodeID instead of the OpenStack instance ID, so reconciliation fails.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39978
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 39418, 41175, 40355, 41114, 32325)
TaintController
```release-note
This PR adds a manager to NodeController that is responsible for removing Pods from Nodes tainted with NoExecute Taints. This feature is beta (as is the rest of taints) and enabled by default. It is gated by the controller-manager's enable-taint-manager flag.
```
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)
Promote TokenReview to v1
Peer to https://github.com/kubernetes/kubernetes/pull/40709
We have multiple features that depend on this API:
- [webhook authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication)
- [kubelet delegated authentication](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication)
- add-on API server delegated authentication
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating the token
This PR promotes the existing v1beta1 API to v1 with no changes
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authentication.k8s.io API group was promoted to v1
```
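Since the API is query/response only, a client simply POSTs a TokenReview and reads the status back. A sketch with a recent client-go (the Create signature shown here takes a context, which newer client-go versions require; the token value is a placeholder):

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	review := &authenticationv1.TokenReview{
		Spec: authenticationv1.TokenReviewSpec{Token: "<bearer token to check>"},
	}
	// The API does not persist anything; the response carries the verdict.
	result, err := client.AuthenticationV1().TokenReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("authenticated:", result.Status.Authenticated, "user:", result.Status.User.Username)
}
```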
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)
make round trip testing generic
RoundTrip testing is something associated with a scheme and everyone who writes an API will want to do it. In the end, we should wire each API group separately in a test scheme and have them all call this general function. Once `kubeadm` is out of the main scheme, we'll be able to remove the one really ugly hack.
@luxas @sttts @kubernetes/sig-apimachinery-pr-reviews @smarterclayton
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
StatefulSet hardening
**What this PR does / why we need it**:
This PR contains the following changes to StatefulSet. Only one change affects the semantics of how the controller operates (this is described in #38418), and this change only brings the controller into conformance with its documented behavior.
1. pcb and pcb controller are removed and their functionality is encapsulated in StatefulPodControlInterface. This type models the design of controller.PodControlInterface and provides an abstraction over clientset.Interface which is useful for testing purposes.
2. IdentityMappers has been removed to clarify what properties of a Pod are mutated by the controller. All mutations are performed in the UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes sorted Pods by CreationTimestamp. This is brittle and not resilient to clock skew. The current control loop, which implements the same logic, is in stateful_set_control.go. The Pods are now sorted and considered by their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a StatefulSet's Selector also match the Name of the StatefulSet. This will make the controller resilient to overlapping, and will be enhanced by the addition of ControllerRefs.
5. The total lines of production code have been reduced, and the total number of unit tests has been increased. All new code has 100% unit coverage giving the module 83% coverage. Tests for StatefulSetController have been added, but it is not practical to achieve greater coverage in unit testing for this code (the e2e tests for StatefulSet cover these areas).
6. Issue #38418 is fixed in that StatefulSet will ensure that all Pods that are predecessors of another Pod are Running and Ready prior to launching a new Pod. This removes the potential for deadlock when a Pod needs to be rescheduled while its predecessor is hung in Pending or Initializing.
7. All reference to pet have been removed from the code and comments.
**Which issue this PR fixes**
fixes #38418, #36859
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes issue #38418 which, under certain circumstances, could cause StatefulSet to deadlock.
Mediates issue #36859. StatefulSet only acts on Pods whose identity matches the StatefulSet, providing a partial mediation for overlapping controllers.
```
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
HPA v2 (API Changes)
**Release note**:
```release-note
Introduces a new alpha version of the Horizontal Pod Autoscaler including expanded support for specifying metrics.
```
Implements the API changes for kubernetes/features#117.
This implements #34754, which is the new design for the Horizontal Pod Autoscaler. It includes improved support for custom metrics (and/or arbitrary metrics) as well as expanded support for resource metrics. The new HPA object is introduced in the API group "autoscaling/v1alpha1".
Note that the improved custom metric support currently is limited to per pod metrics from Heapster -- attempting to use the new "object metrics" will simply result in an error. This will change once #34586 is merged and implemented.
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
Implement TTL controller and use the ttl annotation attached to node in secret manager
For every secret attached to a pod as a volume, Kubelet tries to refresh it every sync period. Currently Kubelet has a ttl-cache of the secrets of its pods and the ttl is set to 1 minute. That means that in the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), this results in ~30 GETs per node to refresh all secrets every minute, which gives ~2500 QPS for GET secrets to the apiserver.
The apiserver cannot keep up with that very easily.
The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not yet possible to watch only the secrets attached to pods on a given node.
So as a temporary solution, we are introducing an annotation that serves as a suggestion to the kubelet for the TTL of secrets in its cache, and a very simple controller that sets this annotation based on the cluster size (the larger the cluster is, the bigger the ttl is).
This workaround means that only very local changes are needed in the Kubelet, we are creating a well-separated, very simple controller, and once watching "my secrets" becomes possible it will be easy to remove it and switch to that. And it will allow us to reach our scalability goals.
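On the kubelet side the suggestion boils down to reading an integer annotation off its own Node object and using it as the cache TTL. A rough sketch (the annotation key below is an assumption for illustration, not necessarily the one the controller writes, and this is not the actual kubelet code):

```go
package main

import (
	"fmt"
	"strconv"
	"time"

	v1 "k8s.io/api/core/v1"
)

// Assumed annotation key for illustration; the real controller may use a different one.
const nodeTTLAnnotation = "node.alpha.kubernetes.io/ttl"

const defaultSecretTTL = 1 * time.Minute

// secretCacheTTL returns the TTL the kubelet should use for its secret cache,
// falling back to the default when the annotation is missing or malformed.
func secretCacheTTL(node *v1.Node) time.Duration {
	if v, ok := node.Annotations[nodeTTLAnnotation]; ok {
		if seconds, err := strconv.Atoi(v); err == nil && seconds > 0 {
			return time.Duration(seconds) * time.Second
		}
	}
	return defaultSecretTTL
}

func main() {
	node := &v1.Node{}
	node.Annotations = map[string]string{nodeTTLAnnotation: "30"}
	fmt.Println(secretCacheTTL(node)) // 30s
}
```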
@dchen1107 @thockin @liggitt
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)
Set all node conditions to Unknown when node is unreachable
**What this PR does / why we need it**:
Sets all node conditions to Unknown when node does not report status/unreachable
**Which issue this PR fixes**
fixes https://github.com/kubernetes/kubernetes/issues/36273
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Add debug logging to eviction manager
**What this PR does / why we need it**:
This PR adds debug logging to eviction manager.
We need it to help users understand when and why the eviction manager is or is not making decisions, and to support information gathering during support cases.
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)
Switch CSR controller to use shared informer
Switch the CSR controller to use a shared informer. Originally part of #40097 but I'm splitting that up into multiple PRs.
I have added a test to try to ensure we don't mutate the cache. It could use some fleshing out for additional coverage but it gets the initial job done, I think.
cc @mikedanese @deads2k @liggitt @sttts @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41037, 40118, 40959, 41084, 41092)
Fix for detach volume when node is not present/ powered off
Fixes #33061
When a vm is reported as no longer present in cloud provider and is deleted by node controller, there are no attempts to detach respective volumes. For example, if a VM is powered off or paused, and pods are migrated to other nodes. In the case of vSphere, the VM cannot be started again because the VM still holds mount points to volumes that are now mounted to other VMs.
In order to re-join this node again, you will have to manually detach these volumes from the powered off vm before starting it.
The current fix will make sure the mount points are deleted when the VM is powered off. Since all the mount points are deleted, the VM can be powered on again.
This is a workaround proposal only. I still don't see Kubernetes issuing a detach request to the vSphere cloud provider, which should be the case. (Details in original issue #33061.)
@luomiao @kerneltime @pdhamdhere @jingxu97 @saad-ali
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
add k8s.io/sample-apiserver to demonstrate how to build an aggregated API server
builds on https://github.com/kubernetes/kubernetes/pull/41093
This creates a sample API server in a separate staging repo to guarantee no cheating with `k8s.io/kubernetes` dependencies. The sample is run during integration tests (simple tests on it so far) to ensure that it continues to run.
@sttts @kubernetes/sig-api-machinery-misc ptal
@pwittrock @pmorie @kris-nova an aggregated API server example that will stay up to date.
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)
Add `kubectl attach` support for multiple types
To address this issue: https://github.com/kubernetes/kubernetes/issues/24857
the new `kubectl attach` supports three scenarios depending on the args:
1. `kubectl attach POD` : if only one argument is provided, we assume it's a pod name
2. `kubectl attach TYPE NAME` : if two arguments are provided, we assume the first one is a resource type we [support](4770162fd3/pkg/kubectl/cmd/util/factory_object_mapping.go (L285)) and the second is the resource's name.
3. `kubectl attach TYPE/NAME` : one argument is provided and arg[0] must contain `/`, ditto
Are there any other scenarios I haven't considered?
For now the first scenario is compatible with the previous behavior, and `make test` passes ✅
I will write some unit tests for the second and third scenarios if you think this is the right approach.
@pwittrock @kargakis @fabianofranz @ymqytw @AdoHe
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)
Remove useless param from kubectl create rolebinding
The `force` param is not used in the
`kubectl create rolebinding` and `kubectl create clusterrolebinding`
commands, so it has been removed.
Automatic merge from submit-queue
Add OWNERS file for GCE cloud provider
GCE cloud provider does not have OWNERS file and all PRs need to be approved by owner of pkg/cloudprovider, which is currently only @mikedanese. Adding more options would be helpful to speed up reviews.
Feel free to add/remove some names, this first version is just my qualified guess. It's hard to distinguish generic Kubernetes refactoring from real cloud provider work in git log.
```release-note
NONE
```
1. pcb and pcb controller are removed and their functionality is
encapsulated in StatefulPodControlInterface.
2. IdentityMappers has been removed to clarify what properties of a Pod are
mutated by the controller. All mutations are performed in the
UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes
sorted Pods by CreationTimestamp. This is brittle and not resilient to
clock skew. The current control loop, which implements the same logic,
is in stateful_set_control.go. The Pods are now sorted and considered by
their ordinal indices, as is outlined in the documentation.
4. StatefulSetController now checks to see if the Pods matching a
StatefulSet's Selector also match the Name of the StatefulSet. This will
make the controller resilient to overlapping, and will be enhanced by
the addition of ControllerRefs.
Automatic merge from submit-queue (batch tested with PRs 40873, 40948, 39580, 41065, 40815)
[CRI] Enable Hostport Feature for Dockershim
Commits:
1. Refactor common hostport util logic and add more tests
2. Add HostportManager which can ADD/DEL hostports instead of doing a complete sync.
3. Add Interface for retrieving portMappings information of a pod in Network Host interface.
Implement GetPodPortMappings interface in dockerService.
4. Teach kubenet to use HostportManager
Automatic merge from submit-queue
[Kubelet] Delay deletion of pod from the API server until volumes are deleted
Previous PR that was reverted: #40239.
To summarize the conclusion of the previous PR after reverting:
- The status manager has the most up-to-date status, but the volume manager uses the status from the pod manager, which is only as up-to-date as the API server.
- Because of this, the previous change required an additional round trip between the kubelet and API server.
- When few pods are being added or deleted, this is only a minor issue. However, when under heavy load, the QPS limit to the API server causes this round trip to take ~60 seconds, which is an unacceptable increase in latency. Take a look at the graphs in #40239 to see the effect of QPS changes on timing.
- To remedy this, the volume manager looks at the status from the status manager, which eliminates the round trip.
cc: @vishh @derekwaynecarr @sjenning @jingxu97 @kubernetes/sig-storage-misc
Automatic merge from submit-queue
add deads2k to approvers for controllers
I've done significant maintenance on these for a while and introduced new patterns like shared informers and rate limited work queues.
Automatic merge from submit-queue
avoid repeated length calculation and some other code improvements
**What this PR does / why we need it**:
1. In the `ParsePairs` function, calculating `invalidBuf`'s length over and over again incurs a performance penalty; an `invalidBufNonEmpty` bool can fix this.
2. `pairArg` is not a format string and there are no other arguments for `fmt.Sprintf`, so I removed the `fmt.Sprintf` call.
3. In `DumpReaderToFile`, we must check for a non-nil error before the defer statement, otherwise `f.Close()` may be deferred on a nil file (see the sketch below).
4. Add nil checks to the `GetWideFlag` function.
5. Some other minor code improvements for better readability.
Signed-off-by: bruceauyeung <ouyang.qinhua@zte.com.cn>
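A minimal illustration of the defer ordering in item 3 (a hypothetical file-dump helper, not the actual kubectl code):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// dumpReaderToFile shows the safe pattern: check the error from os.Create
// before deferring Close, so we never defer Close on a nil *os.File.
func dumpReaderToFile(r io.Reader, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err // bail out before any defer touches f
	}
	defer f.Close()

	_, err = io.Copy(f, r)
	return err
}

func main() {
	if err := dumpReaderToFile(strings.NewReader("hello\n"), "/tmp/dump.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```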
Automatic merge from submit-queue
Removed a space in portforward.go.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
Let ReadLogs return when there is a read error.
Fixes a bug in kuberuntime log.
Today, @yujuhong found that once we cancel `kubectl logs -f` with `Ctrl+C`, kuberuntime will keep complaining:
```
27939 kuberuntime_logs.go:192] Failed with err write tcp 10.240.0.4:10250->10.240.0.2:53913: write: broken pipe when writing log for log file "/var/log/pods/5bb76510-ed71-11e6-ad02-42010af00002/busybox_0.log": &{timestamp:{sec:63622095387 nsec:625309193 loc:0x484c440} stream:stdout log:[84 117 101 32 70 101 98 32 32 55 32 50 48 58 49 54 58 50 55 32 85 84 67 32 50 48 49 55 10]}
```
This is because kuberuntime keeps writing to the connection even though it is already closed. Actually, kuberuntime should return and report an error whenever there is a write error.
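The fix pattern is simply to propagate write errors instead of logging and continuing; a simplified sketch of a log-following loop (not the actual kuberuntime code):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// streamLogs copies log lines to w and stops at the first write error,
// e.g. when the client behind `kubectl logs -f` hangs up (broken pipe).
func streamLogs(r io.Reader, w io.Writer) error {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		if _, err := w.Write(append(scanner.Bytes(), '\n')); err != nil {
			// Returning here prevents the endless "broken pipe" complaints.
			return fmt.Errorf("failed to write log line: %v", err)
		}
	}
	return scanner.Err()
}

func main() {
	logs := strings.NewReader("line 1\nline 2\n")
	if err := streamLogs(logs, os.Stdout); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```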
Ref the [docker code](3a4ae1f661/pkg/stdcopy/stdcopy.go (L159-L167))
I'm still creating the cluster and verifying this fix. Will post the result here after that.
/cc @yujuhong @kubernetes/sig-node-bugs
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
e2e tests for vSphere cloud provider
**What this PR does / why we need it**:
This PR contains changes to existing e2e volume provisioning test cases so they run on the vSphere cloud provider.
**Following is the summary of changes made in existing e2e test cases**
**Added test/e2e/persistent_volumes-vsphere.go**
- This test verifies that deleting a PVC before the pod does not cause pod deletion to fail on PD detach, and that deleting the PV before the pod likewise does not cause pod deletion to fail on PD detach.
**test/e2e/volume_provisioning.go**
- This test creates a StorageClass and claim with dynamic provisioning and alpha dynamic provisioning annotations and verifies that the required volumes are created. The test also verifies that the created volume is readable and retains data.
- Added vsphere as supported cloud provider. Also set pluginName to "kubernetes.io/vsphere-volume" for vsphere cloud provider.
**test/e2e/volumes.go**
- Added test spec for vsphere
- This test creates the requested volume, mounts it on the pod, writes some random content at /opt/0/index.html, and verifies the file contents are correct to make sure we don't see content from previous test runs.
- This test also passes "1234" as fsGroup to mount volume and verifies fsGroup is set correctly.
**added test/e2e/vsphere_utils.go**
- Added function verifyVSphereDiskAttached - Verify the persistent disk is attached to the node.
- Added function waitForVSphereDiskToDetach - Wait until the vSphere vmdk is detached from the given node or time out after 5 minutes
- Added getVSpherePersistentVolumeSpec - create vsphere volume spec with given VMDK volume path, Reclaim Policy and labels
- Added getVSpherePersistentVolumeClaimSpec - get vsphere persistent volume spec with given selector labels
- createVSphereVolume - function to create vmdk volume
**Following is the summary of new e2e tests added with this PR**
**test/e2e/vsphere_volume_placement.go**
- contains volume placement tests using node label selector
- Test Back-to-back pod creation/deletion with the same volume source on the same worker node
- Test Back-to-back pod creation/deletion with the same volume source attach/detach to different worker nodes
**test/e2e/pv_reclaimpolicy.go**
- contains tests for PV/PVC - Reclaiming Policy
- Test verifies persistent volume should be deleted when reclaimPolicy on the PV is set to delete and associated claim is deleted
- Test also verified that persistent volume should be retained when reclaimPolicy on the PV is set to retain and associated claim is deleted
**test/e2e/pvc_label_selector.go**
- This is function test for Selector-Label Volume Binding Feature.
- Verify the volume with the matching label is bound to the PVC.
Other changes
Updated pkg/cloudprovider/providers/vsphere/BUILD and test/e2e/BUILD
**Which issue this PR fixes** *
fixes #41087
**Special notes for your reviewer**:
Updated tests were executed on kubernetes v1.4.8 release on vsphere.
Test steps are provided in comments
@kerneltime @BaluDontu
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
Add unit tests for interactive edit command
Before updating edit to use unstructured objects and use generic JSON patching, we need better test coverage of the existing paths. This adds unit tests for the interactive edit scenarios.
This PR adds:
* Simple framework for recording tests for interactive edit:
* record.go is a tiny test server that records editor and API inputs as test expectations, and editor and API outputs as playback stubs
* record_editor.sh is a shell script that sends the before/after of an interactive `vi` edit to the test server
* record_testcase.sh (see README) starts up the test server, sets up a kubeconfig to proxy to the test server, sets EDITOR to invoke record_editor.sh, then opens a shell that lets you use `kubectl edit` normally
* Adds test cases for the following scenarios:
- [x] no-op edit (open and close without making changes)
- [x] try to edit a missing object
- [x] edit single item successfully
- [x] edit list of items successfully
- [x] edit a single item, submit with an error, re-edit, submit fixed successfully
- [x] edit list of items, submit some with errors and some good, re-edit errors, submit fixed
- [x] edit trying to change immutable things like name/version/kind, ensure preconditions prevent submission
- [x] edit in "create mode" successfully (`kubectl create -f ... --edit`)
- [x] edit in "create mode" introducing errors (`kubectl create -f ... --edit`)
* Fixes a bug with edit printing errors to stdout (caught when testing stdout/stderr against expected output)
Follow-ups:
- [ ] clean up edit code path
- [ ] switch edit to use unstructured objects
- [ ] make edit fall back to jsonmerge for objects without registered go structs (TPR, unknown versions of pods, etc)
- [ ] add tests:
- [ ] edit TPR
- [ ] edit mix of TPR and known objects
- [ ] edit known object with extra field from server
- [ ] edit known object with new version from server
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
kubelet/network-cni-plugin: modify the log's info
**What this PR does / why we need it**:
Checking the startup logs of kubelet, I always find an error like this:
"E1215 10:19:24.891724 2752 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d"
It always appears, whether I use a CNI network plugin or not.
After analyzing the code, I think it should be a warning log, because it does not trigger any action such as exit or abort, and is simply ignored when no valid plugins exist.
thank you!
hot fix
add unit test and statefulSet
update example
remove package
change to ResourceNames
remove some code
remove strings
add fake testing func for AttachablePodForObject
minor change
add test.obj nil check
update testfile
gofmt
update
add fallthough
revert back
Automatic merge from submit-queue (batch tested with PRs 41061, 40888, 40664, 41020, 41085)
move --runtime-config to kubeapiserver
`--runtime-config` is only useful if you have a lot of API groups in one server. If you have a single API group in your server (the vast majority of aggregated API servers), then the flag is unneeded and relatively complex. This moves it to closer to point of use.
@sttts
Automatic merge from submit-queue (batch tested with PRs 41103, 41042, 41097, 40946, 40770)
Use Clientset interface in KubeletDeps
**What this PR does / why we need it**:
This replaces the Clientset struct with the equivalent interface for the KubeClient injected via KubeletDeps. This is useful for testing and for accessing the Node and Pod status event stream without an API server.
**Special notes for your reviewer**:
Follow up to #4907
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 41103, 41042, 41097, 40946, 40770)
dockershim: set security option separators based on the docker version
Also add a version cache to avoid hitting the docker daemon frequently.
This is part of #38164
Automatic merge from submit-queue
Add gnufied as reviewer for aws and gce volumes
Adding myself as reviewer for aws and gce volume plugins. I understand the code well enough and have helped with review in those areas already.
cc @childsb @justinsb @saad-ali
Automatic merge from submit-queue
Add OWNERS to the dockertools package
We are in the middle of switching to the CRI implementation. It's critical to minimize
the development of dockertools to avoid any more divergence. We should freeze any
non-essential changes to dockertools once CRI becomes the default. This change
adds an OWNERS file with a small group of people to ensure no unintentional changes
go through unnoticed.
Automatic merge from submit-queue
Update owners file for job and cronjob controller
I've just noticed we have outdated OWNERS files for job and cronjob controllers.
@erictune ptal
@kubernetes/sig-contributor-experience-pr-reviews fyi
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
refactor approver and signer interfaces to be consistent w.r.t. apiserver interaction
This makes it so that only the controller loop talks to the
API server directly. The signatures for Sign and Approve also
become more consistent, while allowing the Signer to report
conditions (which it wasn't able to do before).
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
remove the create-external-load-balancer flag in cmd/expose.go
**What this PR does / why we need it**:
In cmd/expose.go there is a todo "remove create-external-load-balancer in code on or after Aug 25, 2016.", and that date is now long past. So I removed this flag and modified the test cases.
Please check for this, thanks!
**Release note**:
```
remove the deprecated flag "create-external-load-balancer" and use --type="LoadBalancer" instead.
```
Automatic merge from submit-queue
Extract a number of short description strings for translation
@fabianofranz
@kubernetes/sig-cli-pr-reviews
Follow on for https://github.com/kubernetes/kubernetes/pull/39223
addressed review comments
Addressed review comment for pv_reclaimpolicy.go to verify content of the volume
addressed 2nd round of review comments
addressed 3rd round of review comments from jeffvance
Automatic merge from submit-queue (batch tested with PRs 41023, 41031, 40947)
apiserver command line options lead to config
Logically, command line options lead to config, not the other way around. We're clean enough now that we can actually do the inversion.
WIP because I have some test cycles to fix, but this is all the meat.
@kubernetes/sig-api-machinery-misc
Automatic merge from submit-queue (batch tested with PRs 40980, 40985)
added short names for resources which are exposed during discovery
**What this PR does / why we need it**:
The changes add short names for resources. The short names will be delivered to kubectl during discovery.
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Validate unique against HostPort/Protocol/HostIP
**What this PR does / why we need it**:
We can bind to specific IPs; however, validation will fail for a different HostIP:HostPort combination. This is a small fix to check the combination of HostPort/Protocol/HostIP rather than just HostPort/Protocol.
Sample configuration:

    ...
    "ports": [
        {
            "protocol": "TCP",
            "containerPort": 53,
            "hostPort": 55,
            "hostIP": "127.0.0.1",
            "name": "dns-local-tcp"
        },
        {
            "protocol": "TCP",
            "containerPort": 53,
            "hostPort": 55,
            "hostIP": "127.0.0.2",
            "name": "dns-local-tcp2"
        }
    ]
Before:
* spec.template.spec.containers[1].ports[2].hostPort: Duplicate value: "55/TCP"
* spec.template.spec.containers[1].ports[3].hostPort: Duplicate value: "55/TCP"
After applying the patch:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:55 0.0.0.0:* LISTEN 3644/docker-proxy
tcp 0 0 127.0.0.2:55 0.0.0.0:* LISTEN 3629/docker-proxy
Thanks
Ashley
**Release note**:
```release-note
```
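The validation change boils down to widening the deduplication key; a sketch with simplified field names (not the actual validation code):

```go
package main

import "fmt"

type containerPort struct {
	HostIP   string
	HostPort int32
	Protocol string
}

// duplicateHostPorts returns the ports that collide when uniqueness is keyed
// on HostIP/HostPort/Protocol rather than just HostPort/Protocol.
func duplicateHostPorts(ports []containerPort) []string {
	seen := map[string]bool{}
	var dups []string
	for _, p := range ports {
		key := fmt.Sprintf("%s/%d/%s", p.HostIP, p.HostPort, p.Protocol)
		if seen[key] {
			dups = append(dups, key)
		}
		seen[key] = true
	}
	return dups
}

func main() {
	ports := []containerPort{
		{HostIP: "127.0.0.1", HostPort: 55, Protocol: "TCP"},
		{HostIP: "127.0.0.2", HostPort: 55, Protocol: "TCP"}, // different HostIP: not a duplicate
	}
	fmt.Println("duplicates:", duplicateHostPorts(ports))
}
```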
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Set docker opt separator correctly for SELinux options
This is based on @pmorie's commit from #40179
Automatic merge from submit-queue (batch tested with PRs 40971, 41027, 40709, 40903, 39369)
Promote SubjectAccessReview to v1
We have multiple features that depend on this API:
SubjectAccessReview
- [webhook authorization](https://kubernetes.io/docs/admin/authorization/#webhook-mode)
- [kubelet delegated authorization](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization)
- add-on API server delegated authorization
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating access
- A typo was discovered in the SubjectAccessReviewSpec Groups field name
This PR promotes the existing v1beta1 API to v1, with the only change being the typo fix to the groups field. (fixes https://github.com/kubernetes/kubernetes/issues/32709)
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authorization.k8s.io API group was promoted to v1
```
Automatic merge from submit-queue
federation: Refactoring namespaced resources deletion code from kube ns controller and sharing it with fed ns controller
Ref https://github.com/kubernetes/kubernetes/issues/33612
Refactoring code in kube namespace controller to delete all resources in a namespace when the namespace is deleted. Refactored this code into a separate NamespacedResourcesDeleter class and calling it from federation namespace controller.
This is required for enabling cascading deletion of namespaced resources in federation apiserver.
Before this PR, we were directly deleting the namespaced resources and assuming that they go away immediately. With cascading deletion, we will have to wait for the corresponding controllers to first delete the resources from underlying clusters and then delete the resource from federation control plane. NamespacedResourcesDeleter has this waiting logic.
cc @kubernetes/sig-federation-misc @caesarxuchao @derekwaynecarr @mwielgus
Automatic merge from submit-queue (batch tested with PRs 40385, 40786, 40999, 41026, 40996)
optimize duplicate openstack serverList judgement
If len(serverList) > 1, we already return an error in the pager.EachPage() function, so we do not need to check again here.
Automatic merge from submit-queue (batch tested with PRs 40385, 40786, 40999, 41026, 40996)
Fixed a tiny bug on using RoleBindingGenerator
Fixed a typo bug in the RoleBindingGenerator; this bug caused an error when binding a role to service accounts
through the "kubectl create rolebinding" command.
Automatic merge from submit-queue
Replace hand-written informers with generated ones
Replace existing uses of hand-written informers with generated ones.
Follow-up commits will switch the use of one-off informers to shared
informers.
This is a precursor to #40097. That PR will switch one-off informers to shared informers for the majority of the code base (but not quite all of it...).
NOTE: this does create a second set of shared informers in the kube-controller-manager. This will be resolved back down to a single factory once #40097 is reviewed and merged.
There are a couple of places where I expanded the # of caches we wait for in the calls to `WaitForCacheSync` - please pay attention to those. I also added in a commented-out wait in the attach/detach controller. If @kubernetes/sig-storage-pr-reviews is ok with enabling the waiting, I'll do it (I'll just need to tweak an integration test slightly).
@deads2k @sttts @smarterclayton @liggitt @soltysh @timothysc @lavalamp @wojtek-t @gmarek @sjenning @derekwaynecarr @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue
add deads2k to registry package owners
I established the package layout and wrote a lot of the non-boilerplate code in this package.
Automatic merge from submit-queue (batch tested with PRs 40930, 40951)
Fix CRI port forwarding
Websocket support was introduced in #33684, which broke the CRI
implementation. This change fixes it.
Automatic merge from submit-queue (batch tested with PRs 40943, 40967)
Switch kubectl version and api-versions to create a discovery client …
…directly.
The clientset will throw an error for aggregated apiservers because the
clientset looks for specific versions of apis that are compiled into
the client. These will be missing from aggregated apiservers.
The discoveryclient is fully dynamic and does not rely on compiled
in apiversions.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 40943, 40967)
switch admission serialized config to an apiserver type
Switches the kube admission config from componentconfig to the new `apiserver.k8s.io` group so that all API servers can use the shared configuration. This switches external serialization, but it does not move the code that reads the config. I'd like to do that as a follow-on.
@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @smarterclayton
@derekwaynecarr ptal
@sttts
1. Update the configuration file for Photon Controller cloud provider
2. Only master nodes can communicate with Photon Controller endpoint
3. Enable support for authentication-enabled Photon Controller endpoint
4. Update NodeAddresses function for query from local node
Automatic merge from submit-queue (batch tested with PRs 40289, 40877, 40879, 39972, 40942)
Extract util used by jsonmergepatch and SMPatch
followup https://github.com/kubernetes/kubernetes/pull/40666#discussion_r99198931
Extract some util out of `strategicMergePatch` so that `jsonMergePatch` doesn't depend on `strategicMergePatch`.
```release-note
None
```
cc: @liggitt
Automatic merge from submit-queue (batch tested with PRs 40289, 40877, 40879, 39972, 40942)
Rename experimental-cgroups-per-pod flag
**What this PR does / why we need it**:
1. Rename `experimental-cgroups-per-qos` to `cgroups-per-qos`
1. Update hack/local-up-cluster to match `CGROUP_DRIVER` with docker runtime if used.
**Special notes for your reviewer**:
We plan to roll this feature out in the upcoming release. Previous node e2e runs were running with this feature on by default. We will default this feature on for all e2es next week.
**Release note**:
```release-note
Rename --experimental-cgroups-per-qos to --cgroups-per-qos
```
Automatic merge from submit-queue (batch tested with PRs 40289, 40877, 40879, 39972, 40942)
Remove the temporary fix for pre-1.0 mirror pods
The fix was introduced to fix #15960 for pre-1.0 pods. It should be safe to remove
this fix now.
Automatic merge from submit-queue
CRI: Handle cri in-place upgrade
Fixes https://github.com/kubernetes/kubernetes/issues/40051.
## How does this PR restart/remove legacy containers/sandboxes?
With this PR, dockershim will convert and return legacy containers and infra containers as regular containers/sandboxes. Then we can rely on the SyncPod logic to stop the legacy containers/sandboxes, and the garbage collector to remove the legacy containers/sandboxes.
To forcibly trigger restart:
* For infra containers, we manually set `hostNetwork` to opposite value to trigger a restart (See [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L389))
* For application containers, they will be restarted with the infra container.
## How does this PR avoid extra overhead when there is no legacy container/sandbox?
Because some labels are missing, listing legacy containers needs an extra `docker ps`. We should not introduce a constant performance regression for legacy container cleanup, so we added the `legacyCleanupFlag`:
* In `ListContainers` and `ListPodSandbox`, only do extra `ListLegacyContainers` and `ListLegacyPodSandbox` when `legacyCleanupFlag` is `NotDone`.
* When dockershim starts, it will check whether there are legacy containers/sandboxes.
* If there are none, it will mark `legacyCleanupFlag` as `Done`.
* If there are any, it will leave `legacyCleanupFlag` as `NotDone`, and start a goroutine periodically check whether legacy cleanup is done.
This makes sure that there is overhead only when there are legacy containers/sandboxes not cleaned up yet.
## Caveats
* In-place upgrade will cause kubelet to restart all running containers.
* RestartNever container will not be restarted.
* The garbage collector sometimes keeps the legacy containers for a long time if there aren't too many containers on the node. In that case, dockershim will keep performing the extra `docker ps`, which introduces overhead.
* Manually removing all legacy containers will fix this.
* Should we garbage collect legacy containers/sandboxes in dockershim by ourselves? /cc @yujuhong
* Host port will not be reclaimed for the lack of checkpoint for legacy sandboxes. https://github.com/kubernetes/kubernetes/pull/39903 /cc @freehan
/cc @yujuhong @feiskyer @dchen1107 @kubernetes/sig-node-api-reviews
**Release note**:
```release-note
We should mention the caveats of in-place upgrade in release note.
```
Automatic merge from submit-queue
Plumb subresource through subjectaccessreview
plumb all fields for subjectaccessreview into the resulting `authorizer.AttributesRecord`
```release-note
The SubjectAccessReview API passes subresource and resource name information to the authorizer to answer authorization queries.
```
Automatic merge from submit-queue
Optionally avoid evicting critical pods in kubelet
For #40573
```release-note
When feature gate "ExperimentalCriticalPodAnnotation" is set, Kubelet will avoid evicting pods in "kube-system" namespace that contains a special annotation - `scheduler.alpha.kubernetes.io/critical-pod`
This feature should be used in conjunction with the rescheduler to guarantee availability for critical system pods - https://kubernetes.io/docs/admin/rescheduler/
```
Automatic merge from submit-queue (batch tested with PRs 40696, 39914, 40374)
Forgiveness library changes
**What this PR does / why we need it**:
Split from #34825, this contains library changes that are needed to implement forgiveness:
1. ~~make taints-tolerations matching respect timestamps, so that one toleration can just tolerate a taint for only a period of time.~~ As TaintManager is caching taints and observing taint changes, time-based checking is now outside the library (in TaintManager). see #40355.
2. make tolerations respect wildcard key.
3. add/refresh some related functions to wrap taints-tolerations operation.
**Which issue this PR fixes**:
Related issue: #1574
Related PR: #34825, #39469
~~Please note that the first 2 commits in this PR come from #39469 .~~
**Special notes for your reviewer**:
~~Since currently we have `pkg/api/helpers.go` and `pkg/api/v1/helpers.go`, there are some duplicated periods of code laying in these two files.~~
~~Ideally we should move taints-tolerations related functions into a separate package (pkg/util/taints), and make it a unified set of implementations. But I'd just suggest to do it in a follow-up PR after Forgiveness ones done, in case of feature Forgiveness getting blocked to long.~~
**Release note**:
```release-note
make tolerations respect wildcard key
```
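As a concrete illustration of the wildcard-key change in the release note above (using the current `k8s.io/api/core/v1` types, which postdate this PR):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	taint := v1.Taint{Key: "node.kubernetes.io/unreachable", Effect: v1.TaintEffectNoExecute}

	// An empty key with operator Exists is the wildcard: it tolerates every taint.
	wildcard := v1.Toleration{Operator: v1.TolerationOpExists}

	fmt.Println("wildcard tolerates taint:", wildcard.ToleratesTaint(&taint))
}
```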
Automatic merge from submit-queue (batch tested with PRs 40795, 40863)
Use caching secret manager in kubelet
I just found that this is in my local branch I'm using for testing, but not in master :)
Automatic merge from submit-queue (batch tested with PRs 40864, 40666, 38382, 40874)
Promote init containers to GA
This is proposed for 1.6
This PR moves the beta-proven concept of init containers to stable. Init containers can now be specified under the initContainers field in PodSpec/PodTemplateSpec. Specifying init containers in the annotation is still possible, but will be removed in a future version.
```release-note
Init containers have graduated to GA and now appear as a field. The beta annotation value will still be respected and overrides the field value.
```
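A minimal pod spec carrying the new field, expressed with the current `k8s.io/api/core/v1` types (names and images are placeholders):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-init"},
		Spec: v1.PodSpec{
			// Init containers run to completion, in order, before the app containers start.
			InitContainers: []v1.Container{
				{Name: "setup", Image: "busybox", Command: []string{"sh", "-c", "echo preparing"}},
			},
			Containers: []v1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}
	fmt.Printf("%d init container(s), %d container(s)\n", len(pod.Spec.InitContainers), len(pod.Spec.Containers))
}
```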
Automatic merge from submit-queue (batch tested with PRs 40864, 40666, 38382, 40874)
apply falls back to generic JSON patch computation if no go struct is registered for the target GVK
This PR is the master version of #40096, which targets the 1.4 branch.
This PR is based on #40260
- [x] ensure subkey deletion works in CreateThreeWayJSONMergePatch
- [x] ensure type stomping works in CreateThreeWayJSONMergePatch
- [x] lots of tests for generic json patch computation
- [x] apply falls back to generic 3-way JSON merge patch if no go struct is registered for the target GVK
- [x] prevent generic apply patch computation between different apiVersions and/or kinds
- [x] make pruner generic (apply --prune works with TPR)
```release-note
apply falls back to generic 3-way JSON merge patch if no go struct is registered for the target GVK
```