Automatic merge from submit-queue
Enable NFSv4 and GlusterFS tests on cluster e2e tests
Enable NFSv4 and GlusterFS tests on cluster e2e tests for GCI images
only.
Automatic merge from submit-queue
Node E2E: Avoid printing test result twice.
This has been a problem for a long time.
`RunSshCommand` includes the command output in the error. If the command running the test fails, the test output will also be included in the error. [The runner prints both the test output and the error](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/runner/remote/run_remote.go#L270), which causes the test result to be printed twice. (See the [test result](https://storage.googleapis.com/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10968/build-log.txt) on node tmp-node-e2e-af900a4d-e2e-node-ubuntu-trusty-docker9-v1-image.)
This PR changes `RunSshCommand` so that it no longer puts the command output into the error, leaving the caller to decide how to handle the command output when the command fails.
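A minimal Go sketch of the idea (the helper name and signature are illustrative stand-ins, not the real `RunSshCommand`): the output and the error are returned separately, and the caller decides what to print, so the result is not logged twice.
```go
package main

import (
	"fmt"
	"os/exec"
)

// runCommand is a hypothetical stand-in for RunSshCommand: it returns the
// command output and the error separately instead of folding the output into
// the error message.
func runCommand(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	// Return the output as-is; do NOT wrap it into err, so the caller decides
	// whether and how to print it.
	return string(out), err
}

func main() {
	output, err := runCommand("sh", "-c", "echo some test output; exit 1")
	if err != nil {
		// The caller prints the output once plus a short error, avoiding the
		// duplicated "output embedded in the error" problem described above.
		fmt.Printf("command failed: %v\n", err)
	}
	fmt.Print(output)
}
```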
Automatic merge from submit-queue
Add e2e test for CockroachDB statefulset
Refactor the code of statefulset e2e test for clustered applications, and add a test for CockroachDB.
The yaml file is copied from examples/cockroachdb/
cc @erictune @foxish @kow3ns @kubernetes/sig-apps
Automatic merge from submit-queue
e2e pod cleanup test: restrict pods to be assigned to nodes observed …
The test checks the individual kubelet /runningPods endpoint based on the
initial list of nodes it observes. It is important that all pods are
scheduled only onto those nodes. Apply node labels to ensure no stray pods on
other nodes.
This fixes #35197
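A rough sketch of the restriction mechanism, using current Kubernetes API types (the label key and image are placeholders, and the 1.4-era packages differ): the test labels the nodes it observed and gives its pods a matching `NodeSelector`, so no stray pods land on other nodes.
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label value and image are illustrative; the real test applies its own
	// label to the nodes it observed at startup.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-cleanup-test"},
		Spec: v1.PodSpec{
			// Only nodes carrying this label (i.e. the observed nodes) are
			// eligible to run the pod.
			NodeSelector: map[string]string{"e2e-pod-cleanup": "true"},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "gcr.io/google_containers/pause:3.0",
			}},
		},
	}
	fmt.Printf("pod %q restricted to nodes with labels %v\n", pod.Name, pod.Spec.NodeSelector)
}
```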
kubectl-in-pod.json has a hardcoded path for
kubectl in its hostPath ('/usr/bin/kubectl').
When the test actually runs on GKE, kubectl
is available at '/workspace/kubernetes/platforms/linux/amd64/kubectl'
and NOT at '/usr/bin/kubectl'. So we need to
fix the hostPath for the test to work properly.
Fixes #36586
Automatic merge from submit-queue
Garbage collection tests the MaxPerPodContainers and MaxContainers constraints
This is the first version of this test. It tests that containers are garbage collected according to the default configuration.
Automatic merge from submit-queue
Cleanup pod in MatchContainerOutput
MatchContainerOutput always creates a pod and does not clean it up. We need
to fix this to be better at retrying the scenarios.
When there is an error in, say, the first attempt of ExpectNoErrorWithRetries
(for example in the "Pods should contain environment variables for services" test),
the retry logic calls MatchContainerOutput another time and
podClient.create correctly fails, since the pod was not cleaned up the
first time MatchContainerOutput was called.
Fixes #35089
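A simplified sketch of the fix's shape (the function and its arguments are stand-ins, not the real framework API): the pod created inside the matcher is always cleaned up, even when the check fails, so a retry can recreate it under the same name.
```go
package main

import "fmt"

// matchContainerOutput is a hypothetical stand-in for the framework helper:
// it always cleans up the pod it created, even when the check fails, so a
// retry can safely create a pod with the same name again.
func matchContainerOutput(createPod, deletePod, checkOutput func() error) error {
	if err := createPod(); err != nil {
		return err
	}
	defer deletePod() // cleanup happens on every code path
	return checkOutput()
}

func main() {
	attempt := 0
	err := matchContainerOutput(
		func() error { attempt++; fmt.Println("create pod, attempt", attempt); return nil },
		func() error { fmt.Println("delete pod"); return nil },
		func() error { return fmt.Errorf("output mismatch") },
	)
	fmt.Println("result:", err)
}
```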
Automatic merge from submit-queue
Change dnsutils image to use alpine
This reduces the size of the dnsutils image and should reduce the number of failed e2e test runs due to image pull timeouts.
Automatic merge from submit-queue
[kubelet] rename --cgroups-per-qos to --experimental-cgroups-per-qos
This reflects the true nature of "cgroups per qos" feature.
```release-note
* Rename `--cgroups-per-qos` to `--experimental-cgroups-per-qos` in Kubelet
```
Automatic merge from submit-queue
tests: update memory resource limits
```release-note
NONE
```
On Ubuntu, the `RestartNever` test hits its cgroup memory limit and OOMs fairly
frequently.
This bumps the limit up to something suitably large, since the test
isn't testing anything memory-related anyway.
Fixes #36159
Fix based on https://github.com/kubernetes/kubernetes/issues/36159#issuecomment-258992255
cc @yujuhong @saad-ali
Automatic merge from submit-queue
Use generous limits in the resource usage tracking tests
These tests are mainly used to catch resource leaks in the soak cluster. Using
higher limits to reduce noise.
This should fix #32942 and #32214.
Automatic merge from submit-queue
Bump GCI version to gci-dev-56-8977-0-0
@vishh @saad
``` release-note
Updating GCI base image to gci-dev-56-8977-0-0. Changelog as follows:
* runc: Eliminate redundant parsing of mountinfo
* Updated kubernetes to v1.4.5
```
Automatic merge from submit-queue
Add back e2e tests for disruption budget
The tests were temporarily removed due to problems with API versions in client-go. The test is almost exactly the same as it used to be before the API version change.
cc: @caesarxuchao @davidopp
Automatic merge from submit-queue
Restore event messages for replica sets in the deployment controller
Needed to unblock release upgrade tests (see https://github.com/kubernetes/kubernetes/issues/36453)
@kubernetes/deployment ptal
Automatic merge from submit-queue
Node Conformance & E2E: Get node name from node object.
This PR changes the node e2e test framework to get node name from apiserver instead of test flags.
When a user tried out the node conformance test, they found that it does not work properly if the kubelet is started with `hostname-override`.
The reason is that the node conformance test uses [the default node name - `os.Hostname`](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/e2e_node_suite_test.go#L124), which may be different from the `hostname-override` value. This causes test pods to never be scheduled, and eventually the test times out.
We could expose a flag from the node conformance test and let users set the node name themselves if they are using `hostname-override` on the kubelet. However, having the framework automatically detect it from the apiserver is more user friendly.
/cc @kubernetes/sig-node
This PR 1) only changes the node e2e test framework; 2) fixes a problem in the node conformance test, which is a 1.5 feature. @saad-ali Can we have this in 1.5?
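A hedged sketch of the approach using current client-go (the 1.5-era framework uses different packages, and the kubeconfig path is an assumption): in a single-node node-e2e setup, ask the apiserver for the registered node and use its name instead of `os.Hostname()`, so `hostname-override` is respected automatically.
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil || len(nodes.Items) != 1 {
		panic(fmt.Sprintf("expected exactly one node, got %d (err=%v)", len(nodes.Items), err))
	}
	// The registered node name already reflects any hostname-override.
	fmt.Println("using node name:", nodes.Items[0].Name)
}
```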
Automatic merge from submit-queue
Add e2e node test for log path
fixes #34661
A node e2e test to check whether container log files are properly created with the right content.
Since the log files under `/var/log/containers` are actually symbolic links to the Docker container log files, we cannot mount them into a pod to do the check (symbolic links are not supported by Docker volumes).
cc @Random-Liu
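A small illustrative sketch of the kind of on-host check this implies (the log file name is made up, and this is not the actual test code): resolve the symlink under `/var/log/containers` and read the target directly instead of mounting it into a pod.
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative file name; real names encode pod, namespace, and container ID.
	logPath := "/var/log/containers/example_default_container-abcdef.log"

	// The entry is a symlink to the Docker container log file, so resolve it.
	target, err := os.Readlink(logPath)
	if err != nil {
		fmt.Println("not a symlink or missing:", err)
		return
	}
	if !filepath.IsAbs(target) {
		target = filepath.Join(filepath.Dir(logPath), target)
	}

	// Read the resolved target and check its content on the host itself.
	content, err := os.ReadFile(target)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("resolved %s -> %s (%d bytes)\n", logPath, target, len(content))
}
```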
Automatic merge from submit-queue
Migrates addons from RCs to Deployments
Fixes #33698.
Below addons are being migrated:
- kube-dns
- GLBC default backend
- Dashboard UI
- Kibana
For the new deployments, the version suffixes are removed from their names. Version-related labels are also removed because they are confusing and no longer needed, given how Deployments and the new Addon Manager work.
The `replica` field in `kube-dns` Deployment manifest is removed for the incoming DNS horizontal autoscaling feature #33239.
The `replica` field in `Dashboard` Deployment manifest is also removed because the rescheduler e2e test is manually scaling it.
Some resource limit related fields in `heapster-controller.yaml` are removed, as they will be set up by the `addon resizer` containers. Detailed reasons in #34513.
Three e2e tests are modified:
- `rescheduler.go`: Changed to resize Dashboard UI Deployment instead of ReplicationController.
- `addon_update.go`: Some namespace related changes in order to make it compatible with the new Addon Manager.
- `dns_autoscaling.go`: Changed to examine kube-dns Deployment instead of ReplicationController.
These tests passed on my own cluster. The upgrade process --- from old Addons with RCs to new Addons with Deployments --- was also tested and worked as expected.
The last commit upgrades Addon Manager to v6.0. It is still a work in progress and is currently waiting for #35220 to be finished. (The Addon Manager image currently in use comes from a non-official registry, but it mostly works except for some corner cases.)
@piosz @gmarek could you please review the heapster part and the rescheduler test?
@mikedanese @thockin
cc @kubernetes/sig-cluster-lifecycle
---
Notes:
- The kube-dns manifest still uses *-rc.yaml for the new Deployment. The stale file names are preserved here to allow a faster review. A follow-up PR may re-organize kube-dns's file names after this.
- The Heapster Deployment's name remains in the old fashion (with the `-v1.2.0` suffix) to avoid describing this upgrade transition explicitly. This way we don't need to attach fake apply labels to the old Deployments.
Automatic merge from submit-queue
Adding cascading deletion support to more federation controllers
Ref #33612
Adding cascading deletion support for federated daemonsets and ingress.
The code is the same as that for namespaces. Just ensuring that DeletionHelper functions are called at the right places in these controllers.
e2e tests coming up in another PR.
cc @kubernetes/sig-cluster-federation @caesarxuchao @madhusudancs @mwielgus
```release-note
federation: Adding support for DeleteOptions.OrphanDependents for federated daemonsets and ingresses. Setting it to false while deleting a federated daemonset or ingress also deletes the corresponding resource from all registered clusters.
```
Automatic merge from submit-queue
Enable NFS and GlusterFS tests in both node and cluster e2e tests
This PR enables NFS and GlusterFS tests in both node and cluster e2e tests
for the GCI and ContainerVM distros.
It also changes the code to use ExecCommandInPod instead of kubectl, since
the node does not have kubectl available.
Automatic merge from submit-queue
Add Windows support to kube-proxy
**What this PR does / why we need it**:
This is the first stab at supporting kube-proxy (userspace mode) on Windows
**Which issue this PR fixes** :
fixes #30278
**Special notes for your reviewer**:
The MVP uses `netsh portproxy` to redirect traffic from `ServiceIP:ServicePort` to a `LocalIP:LocalPort`.
For the next version we are expecting to have guidance from Microsoft Container Networking team.
**Limitations**:
Current implementation does not support DNS queries over UDP as `netsh portproxy` currently only supports TCP. We are working with Microsoft to remediate this.
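A hedged Go sketch of the `netsh portproxy` redirection described above (not the actual kube-proxy code; the helper and addresses are illustrative, and only TCP is handled, matching the limitation just noted):
```go
package main

import (
	"fmt"
	"os/exec"
)

// addPortProxy illustrates the MVP approach: use `netsh interface portproxy`
// to forward ServiceIP:ServicePort to a local proxy port (TCP only).
func addPortProxy(serviceIP string, servicePort int, localIP string, localPort int) error {
	args := []string{
		"interface", "portproxy", "add", "v4tov4",
		fmt.Sprintf("listenaddress=%s", serviceIP),
		fmt.Sprintf("listenport=%d", servicePort),
		fmt.Sprintf("connectaddress=%s", localIP),
		fmt.Sprintf("connectport=%d", localPort),
	}
	out, err := exec.Command("netsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("netsh failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative addresses and ports only.
	if err := addPortProxy("10.0.0.10", 80, "127.0.0.1", 10080); err != nil {
		fmt.Println(err)
	}
}
```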
cc: @brendandburns @dcbw
**Release note**:
```release-note
```
Automatic merge from submit-queue
Extend test timeout for LB creation in large clusters
This will most probably be necessary to test 3000-node clusters.
Automatic merge from submit-queue
Support persistent volume usage for kubernetes running on Photon Controller platform
**What this PR does / why we need it:**
Enable the persistent volume usage for kubernetes running on Photon platform.
Photon Controller: https://vmware.github.io/photon-controller/
_Only the first commit includes the real code change.
The following commits are for third-party vendor dependencies and auto-generated code/docs updates._
Two components are added:
pkg/cloudprovider/providers/photon: support Photon Controller as cloud provider
pkg/volume/photon_pd: support Photon persistent disk as volume source for persistent volume
Usage introduction:
a. Photon Controller is supported as a cloud provider.
When choosing Photon Controller as the cloud provider, "--cloud-provider=photon --cloud-config=[path_to_config_file]" is required for kubelet/kube-controller-manager/kube-apiserver. The Photon Controller config file should look like the following:
```
[Global]
target = http://[photon_controller_endpoint_IP]
ignoreCertificate = true
tenant = [tenant_name]
project = [project_name]
overrideIP = true
```
b. Photon persistent disk is supported as volume source/persistent volume source.
yaml usage:
```
volumes:
  - name: photon-storage-1
    photonPersistentDisk:
      pdID: "643ed4e2-3fcc-482b-96d0-12ff6cab2a69"
```
pdID is the persistent disk ID from Photon Controller.
c. Enable Photon Controller as volume provisioner.
yaml usage:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold_sc
provisioner: kubernetes.io/photon-pd
parameters:
  flavor: persistent-disk-gold
The flavor "persistent-disk-gold" needs to be created by the Photon platform admin beforehand.
Automatic merge from submit-queue
add e2e test for kubectl in a Pod
Add an e2e test to make sure kubectl can talk to the API server when it is mounted in a pod.
Fixes: #33138
Automatic merge from submit-queue
Make GCI nodes mount non tmpfs, ext* & bind mounts using an external mounter
This PR downloads the stage1 & gci-mounter ACIs as part of cluster bring up instead of downloading them dynamically from gcr.io, which was the cause for #36206.
I have also optimized the containerized mounter to pre-load the mounter image once to avoid fetch latency while using it.
Original PR which got reverted: https://github.com/kubernetes/kubernetes/pull/35821
```release-note
GCI nodes use an external mounter script to mount NFS & GlusterFS storage volumes
```
@mtaufen Node e2e is not re-enabled in this PR.
cc @jingxu97
Automatic merge from submit-queue
Node Conformance Test: Containerize the node e2e test
For #30122, #30174.
Based on #32427, #32454.
**Please only review the last 3 commits.**
This PR packages the node e2e test into a docker image:
- 1st commit: Add `NodeConformance` flag in the node e2e framework to avoid starting kubelet and collecting system logs. We do this because:
- There are all kinds of ways to manage the kubelet and system logs; for different situations we need to mount different things into the container and run different commands. It is hard and unnecessary to handle that complexity inside the test suite.
- 2nd commit: Remove all `sudo` in the test container. We do this because:
- In most containers, there is no `sudo` command, and there is no need to use `sudo` inside the container.
- It introduces some complexity to use `sudo` inside the test. (https://github.com/kubernetes/kubernetes/issues/29211, https://github.com/kubernetes/kubernetes/issues/26748) In fact we just need to run the test suite with `sudo`.
- 3rd commit: Package the test into a docker container with corresponding `Makefile` and `Dockerfile`. We also added a `run_test.sh` script to start kubelet and run the test container. The script is only for demonstration purposes, and we'll also use it in our node e2e framework. In the future, we should update the script to start kubelet in a production way (maybe with `systemd` or `supervisord`).
@dchen1107 @vishh
/cc @kubernetes/sig-node @kubernetes/sig-testing
**Release note**:
``` release-note
Release alpha version node test container gcr.io/google_containers/node-test-ARCH:0.1 for users to verify their node setup.
```
Automatic merge from submit-queue
Adding cascading deletion support for federated secrets
Ref https://github.com/kubernetes/kubernetes/issues/33612
Adding cascading deletion support for federated secrets.
The code is the same as that for namespaces. Just ensuring that DeletionHelper functions are called at the right places in secret_controller.
Also added e2e tests.
cc @kubernetes/sig-cluster-federation @caesarxuchao
```release-note
federation: Adding support for DeleteOptions.OrphanDependents for federated secrets. Setting it to false while deleting a federated secret also deletes the corresponding secrets from all registered clusters.
```
Automatic merge from submit-queue
Rename experimental-runtime-integration-type to experimental-cri
Also rename the field in the component config to `EnableCRI`
The e2e tests cover cases like the cluster size changing, parameters
changing, the ConfigMap being deleted, the autoscaler pod being deleted, etc.
They are separated into a fast part (which can be run in parallel) and
a slow part (put in [serial]). The fast part of the e2e tests takes
around 50 seconds to run.
Automatic merge from submit-queue
Rename ScheduledJobs to CronJobs
I went with @smarterclayton's idea of registering named types in the schema. This way we can support both the new (CronJobs) and old (ScheduledJobs) resource names. Fixes #32150.
fyi @erictune @caesarxuchao @janetkuo
Not ready yet, but getting close...
**Release note**:
```release-note
Rename ScheduledJobs to CronJobs.
```
This allows us to interrupt/kill the executed command if it exceeds the
timeout (not implemented by this commit).
Set timeout in Exec probes. HTTPGet and TCPSocket probes respect the
timeout, while Exec probes used to ignore it.
Add an e2e test for an exec probe with a timeout. However, the test is skipped
as long as the default exec handler doesn't support timeouts.
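A minimal Go sketch of the underlying mechanism (not the kubelet's actual wiring): run the probed command with a deadline so it can be killed once the timeout is exceeded.
```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Run the probe command with a deadline; the process is killed when the
	// deadline passes, instead of the timeout being silently ignored.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	out, err := exec.CommandContext(ctx, "sh", "-c", "sleep 10; echo done").CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("probe timed out and was killed")
		return
	}
	fmt.Printf("output=%q err=%v\n", out, err)
}
```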
Automatic merge from submit-queue
Adding more e2e tests for federated namespace cascading deletion and fixing bugs
Ref https://github.com/kubernetes/kubernetes/issues/33612
Adding more e2e tests for testing cascading deletion of federated namespace.
The new tests verify that cascading deletion happens when DeleteOptions.OrphanDependents=false and does not happen when DeleteOptions.OrphanDependents=true.
Also updated the deletion helper to always add the OrphanFinalizer. The generic registry will remove it if DeleteOptions.OrphanDependents=false. Also updated the namespace registry to do the same.
We need to add the orphan finalizer to keep the orphan-by-default behavior: we assume that the dependents are going to be orphaned and hence add that finalizer. If the user does not want the orphan behavior, they can say so using DeleteOptions, and the registry will then remove that finalizer.
cc @kubernetes/sig-cluster-federation @caesarxuchao @derekwaynecarr
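A hedged sketch of what the cascading-deletion request described above looks like from a client, using current client-go against a regular cluster (the federation client of that era differs, and the kubeconfig path is an assumption):
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Uses the default ~/.kube/config location; adjust for a real setup.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	orphan := false
	// OrphanDependents=false requests cascading deletion; true (or unset)
	// keeps the orphan-by-default behavior described above.
	err = client.CoreV1().Namespaces().Delete(context.TODO(), "test-ns", metav1.DeleteOptions{
		OrphanDependents: &orphan,
	})
	fmt.Println("delete err:", err)
}
```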
Automatic merge from submit-queue
Node Conformance Test: Add system verification
For #30122 and #29081.
This PR introduces a system verification test in the node e2e and conformance tests. It runs before the real test; if the system verification fails, the test fails immediately. The output of the system verification looks like this:
```
I0909 23:33:20.622122 2717 validators.go:45] Validating os...
OS: Linux
I0909 23:33:20.623274 2717 validators.go:45] Validating kernel...
I0909 23:33:20.624037 2717 kernel_validator.go:79] Validating kernel version
KERNEL_VERSION: 3.16.0-4-amd64
I0909 23:33:20.624146 2717 kernel_validator.go:93] Validating kernel config
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
I0909 23:33:20.679328 2717 validators.go:45] Validating cgroups...
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
I0909 23:33:20.679454 2717 validators.go:45] Validating docker...
DOCKER_GRAPH_DRIVER: aufs
```
It verifies the system following a predefined `SysSpec`:
``` go
// DefaultSysSpec is the default SysSpec.
var DefaultSysSpec = SysSpec{
    OS:            "Linux",
    KernelVersion: []string{`3\.[1-9][0-9].*`, `4\..*`}, // Requires 3.10+ or 4+
    // TODO(random-liu): Add more config
    KernelConfig: KernelConfig{
        Required: []string{
            "NAMESPACES", "NET_NS", "PID_NS", "IPC_NS", "UTS_NS",
            "CGROUPS", "CGROUP_CPUACCT", "CGROUP_DEVICE", "CGROUP_FREEZER",
            "CGROUP_SCHED", "CPUSETS", "MEMCG",
        },
        Forbidden: []string{},
    },
    Cgroups: []string{"cpu", "cpuacct", "cpuset", "devices", "freezer", "memory"},
    RuntimeSpec: RuntimeSpec{
        DockerSpec: &DockerSpec{
            Version:     []string{`1\.(9|\d{2,})\..*`}, // Requires 1.9+
            GraphDriver: []string{"aufs", "overlay", "devicemapper"},
        },
    },
}
```
Currently, it only supports:
- Kernel validation: version validation and kernel configuration validation
- Cgroup validation: validating whether required cgroups subsystems are enabled.
- Runtime Validation: currently, only validates docker graph driver.
The validation framework is ready; specific validation items can be added over time.
@dchen1107
/cc @kubernetes/sig-node
Automatic merge from submit-queue
lister-gen updates
- Remove "zz_generated." prefix from generated lister file names
- Add support for expansion interfaces
- Switch to new generated JobLister
@deads2k @liggitt @sttts @mikedanese @caesarxuchao for the lister-gen changes
@soltysh @deads2k for the informer / job controller changes
Automatic merge from submit-queue
Add cmd support to gcp auth provider plugin
**What this PR does / why we need it**:
Adds the ability for the gcp auth provider plugin to get an access token by shelling out to an external command. We need this because for GKE, kubectl should be using gcloud credentials. It currently uses Google application default credentials, which causes confusion if the user has configured both with different permissions (previously the two were almost always identical).
**Which issue this PR fixes**:
Addresses #35530 with gcp-only solution, as generic cmd plugin was deemed not useful for other providers.
**Special notes for your reviewer**:
The configuration options are there to support whatever future command gcloud provides for printing the access token of the active user. They also work with the existing command (`gcloud auth print-access-token`).
```release-note
```
Automatic merge from submit-queue
Remove GetRootContext method from VolumeHost interface
Remove the `GetRootContext` call from the `VolumeHost` interface, since Kubernetes no longer needs to know the SELinux context of the Kubelet directory.
Per #33951 and #35127.
Depends on #33663; only the last commit is relevant to this PR.
Automatic merge from submit-queue
Per Volume Inode Accounting
Collects volume inode stats using the same find command as cAdvisor. The command is "find _path_ -xdev -printf '.' | wc -c". The output is passed to the summary API and will be consumed by the eviction manager.
This cannot be merged yet, as it depends on changes adding the InodesUsed field to the summary API and on the eviction manager consuming it. Expect tests to fail until this happens.
DEPENDS ON #35137
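A small Go sketch of the same accounting approach (illustrative, not the actual collection code): run the quoted find command, which prints one '.' per inode while staying on one filesystem via `-xdev`, and count the dots.
```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// inodeUsage mirrors the cAdvisor-style command quoted above: find prints one
// '.' per entry under path, and the byte count equals the inode count.
func inodeUsage(path string) (int64, error) {
	out, err := exec.Command("find", path, "-xdev", "-printf", ".").Output()
	if err != nil {
		return 0, err
	}
	return int64(bytes.Count(out, []byte("."))), nil
}

func main() {
	// Illustrative path only; the real code walks each volume directory.
	n, err := inodeUsage("/var/lib/kubelet")
	fmt.Println("inodes:", n, "err:", err)
}
```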
Automatic merge from submit-queue
[AppArmor] Hold bad AppArmor pods in pending rather than rejecting
Fixes https://github.com/kubernetes/kubernetes/issues/32837
Overview of the fix:
If the Kubelet needs to reject a Pod for a reason that the control plane doesn't understand (e.g. which AppArmor profiles are installed on the node), then it might continuously try to run the pod on the same rejecting node. This change adds a concept of "soft rejection", in which the Pod is admitted but not allowed to run (and therefore held in a pending state). This prevents the pod from being retried on other nodes, but also prevents the high churn. This is consistent with how other missing local resources (e.g. volumes) are handled.
A side effect of the change is that Pods which are not initially runnable will be retried. This is desired behavior since it avoids a race condition when a new node is brought up but the AppArmor profiles have not yet been loaded on it.
``` release-note
Pods with invalid AppArmor configurations will be held in a Pending state, rather than rejected (failed). Check the pod status message to find out why it is not running.
```
@kubernetes/sig-node @timothysc @rrati @davidopp
Automatic merge from submit-queue
Update how we detect overlapping deployments
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #24152
**Special notes for your reviewer**: cc @kubernetes/deployment
**Release note**:
```release-note
NONE
```
When looking for overlapping deployments, we should also find other deployments that select the current deployment's pods,
not just the ones whose pods are selected by the current deployment.
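A simplified sketch of this symmetric check (an illustrative helper, not the deployment controller's actual code): two deployments overlap if either one's selector matches the other's pod labels.
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

// overlaps returns true if either deployment's selector matches the other
// deployment's pod labels, i.e. the check works in both directions.
func overlaps(selectorA, podLabelsA, selectorB, podLabelsB map[string]string) bool {
	selA := labels.SelectorFromSet(labels.Set(selectorA))
	selB := labels.SelectorFromSet(labels.Set(selectorB))
	return selA.Matches(labels.Set(podLabelsB)) || selB.Matches(labels.Set(podLabelsA))
}

func main() {
	// B's selector selects A's pods even though A's selector does not select
	// B's pods, so the two deployments still overlap.
	a := map[string]string{"app": "web", "tier": "frontend"}
	b := map[string]string{"app": "web"}
	fmt.Println(overlaps(a, a, b, b)) // true
}
```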
Automatic merge from submit-queue
New command: "kubeadm token generate"
As part of #33930, this PR adds a new top-level command to kubeadm to just generate a token for use with the init/join commands. Otherwise, users are left to either figure out how to generate a token on their own, or let `kubeadm init` generate a token, capture and parse the output, and then use that token for `kubeadm join`.
At this point, I was hoping for feedback on the CLI experience, and then I can add tests. I spoke with @mikedanese and he didn't like the original proposal of `kubeadm util generate-token`, so here are the runners-up:
```
$ kubeadm generate-token # <--- current implementation
$ kubeadm generate token # in case kubeadm might generate other things in the future?
$ kubeadm init --generate-token # possibly as a subcommand of an existing one
```
Currently, the output is simply the token on one line without any padding/formatting:
```
$ kubeadm generate-token
1087fd.722b60cdd39b1a5f
```
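For illustration, a hedged Go sketch that produces a token of the same shape shown above (`<6 hex chars>.<16 hex chars>`); the real kubeadm implementation may differ in details.
```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randHex returns n bytes of cryptographically random data, hex-encoded
// (so 2n characters).
func randHex(n int) string {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	// 3 bytes -> 6 hex chars, 8 bytes -> 16 hex chars, joined by a dot.
	token := fmt.Sprintf("%s.%s", randHex(3), randHex(8))
	fmt.Println(token)
}
```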
CC: @kubernetes/sig-cluster-lifecycle
**Release note**:
``` release-note
New kubeadm command: generate-token
```
Automatic merge from submit-queue
Add --force to kubectl delete and explain force deletion
--force is required for --grace-period=0. --now is equivalent to --grace-period=1.
Improve command help to explain what graceful deletion is and warn about
force deletion.
Part of #34160 & #29033
```release-note
In order to bypass graceful deletion of pods (to immediately remove the pod from the API) the user must now provide the `--force` flag in addition to `--grace-period=0`. This prevents users from accidentally force deleting pods without being aware of the consequences of force deletion. Force deleting pods for resources like StatefulSets can result in multiple pods with the same name having running processes in the cluster, which may lead to data corruption or data inconsistency when using shared storage or common API endpoints.
```
Automatic merge from submit-queue
NPD: Add e2e test for NPD v0.2.
Node problem detector has been updated after v0.1, including:
1. Add lookback support. It will look back for a configured time window to search for possible kernel panics before a node reboot.
2. Get the node name via the downward API.
This PR updates the test to test the new NPD behavior.
@dchen1107
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Made changes to DELETE API to let v1.DeleteOptions be passed in as a queryParameter
**Which issue this PR fixes**: fixes #34856
```release-note
DELETE requests can now pass in their DeleteOptions as a query parameter or a body parameter, rather than just as a body parameter.
```
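For illustration, a minimal Go sketch of such a request (the apiserver host and resource path are placeholders; parameter names follow the v1.DeleteOptions JSON field names):
```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// With this change, DeleteOptions fields can travel as query parameters
	// instead of (or in addition to) a request body.
	u := url.URL{
		Scheme: "https",
		Host:   "apiserver.example.com", // illustrative endpoint
		Path:   "/api/v1/namespaces/default/pods/mypod",
		RawQuery: url.Values{
			"gracePeriodSeconds": {"0"},
			"orphanDependents":   {"false"},
		}.Encode(),
	}
	req, err := http.NewRequest(http.MethodDelete, u.String(), nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// Actually sending the request would also need authentication and TLS setup.
}
```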