Automatic merge from submit-queue (batch tested with PRs 46091, 48280)
allow output patch string in edit command
**What this PR does / why we need it**:
Allow the user to get the patch from the edit command if they are not familiar with the patch format.
```
# ./cluster/kubectl.sh create role a --verb=get,list --resource=no
role "a" created
# ./cluster/kubectl.sh edit role a --output-patch=true
Patch: {"rules":[{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","delete"]}]}
role "a" edited
# ./cluster/kubectl.sh create role b --verb=get,list --resource=no
role "b" created
# ./cluster/kubectl.sh patch role b -p '{"rules":[{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","delete"]}]}'
role "b" patched
```
**Which issue this PR fixes**: fixes #47173
**Special notes for your reviewer**:
**Release note**:
```release-note
You can now get the patch from the `kubectl edit` command via the `--output-patch` flag.
```
Automatic merge from submit-queue (batch tested with PRs 48890, 46893, 48872, 48896)
Support customized system spec in the node conformance test and create the GKE system spec
ref: https://github.com/kubernetes/kubernetes/issues/46891
- System specs are located in `test/e2e_node/system/specs`. Created one for validating GKE images in `test/e2e_node/system/specs/gke.yaml`.
- `--image-spec-name` can be used to specify a system spec in node e2e and conformance tests. This option maps to `SYSTEM_SPEC_NAME` in a test properties file, which is the user-facing configuration, so users can specify `SYSTEM_SPEC_NAME=gke` to run the image validation against the GKE system spec.
- If `SYSTEM_SPEC_NAME` is unspecified, the default spec (`system.DefaultSysSpec`) will be used.
- We can also use `make test-e2e-node SYSTEM_SPEC_NAME=gke` to run tests against the GKE image spec (see the sketch below).
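For illustration, a minimal invocation sketch assuming only the `make` target and variable described above:
```
# Validate nodes against the GKE system spec
# (SYSTEM_SPEC_NAME maps to the --image-spec-name option internally).
make test-e2e-node SYSTEM_SPEC_NAME=gke

# With SYSTEM_SPEC_NAME unset, the default spec (system.DefaultSysSpec) is used.
make test-e2e-node
```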
**Release note**:
`None`
Automatic merge from submit-queue
Azure PD (Managed/Blob)
This is exactly the same code as this [PR](https://github.com/kubernetes/kubernetes/pull/41950). It has a clean set of generated items. We created a separate PR to accelerate acceptance and merging.
CC @colemickens
CC @brendandburns
**What this PR does / why we need it**:
1. Adds K8S support for Azure Managed Disks.
2. Adds support for dedicated blob disks (1:1 to storage account) in addition to shared blob disks (n:1 to storage account).
3. Automatically manages the underlying storage accounts. New storage accounts are created at 50% utilization. Max is 100 disks, 60 disks per storage account.
4. Addresses the current issues with blob disks:
   * Significantly faster attach process. Disks are now usually available for pods on nodes in under 30 seconds if formatted, and under a minute if not.
   * Adds support for moving disks between nodes.
   * Adds consistent attach/detach behavior: checks whether the disk is leased/attached on a different node before attempting to attach it to the target node.
   * Fixes a random hang behavior on Azure VMs during mount/format (for both blob and managed disks).
   * Fixes a potential conflict by avoiding the use of disk names for mount paths. The new plugin uses the hashed disk URI for the mount path.
The existing AzureDisk is used as-is. An additional `kind` property was added, allowing the user to decide whether the PD will be shared, dedicated, or managed (i.e., Azure Managed Disks are used).
Due to the change in mounting paths, existing PDs need to be recreated as PVs or PVCs on the new plugin.
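As a hedged sketch of how the new property might be consumed (the StorageClass parameter placement and the name `managed-disk` are illustrative assumptions; the `kind` values come from the description above):
```
# Hypothetical example: a StorageClass provisioning Azure Managed Disks
# via the new "kind" property (shared, dedicated, or managed).
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-disk
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed   # "Shared" or "Dedicated" select blob-backed disks
EOF
```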
Automatic merge from submit-queue (batch tested with PRs 48864, 48651, 47703)
Enable logexporter mechanism to dump logs from k8s nodes to GCS directly
Ref https://github.com/kubernetes/kubernetes/issues/48513
This adds support for logexporter on the k8s side. Next I'll send a PR adding support on the test-infra side.
/cc @kubernetes/sig-scalability-misc @kubernetes/test-infra-maintainers @fejta @wojtek-t @gmarek
Automatic merge from submit-queue
add dockershim checkpoint node e2e test
Add a bunch of disruptive cases to test the kubelet/dockershim checkpoint workflow.
Some steps are quite hacky. Not sure if there are better ways to do things.
Automatic merge from submit-queue (batch tested with PRs 47918, 47964, 48151, 47881, 48299)
Add ApiEndpoint support to GCE config.
**What this PR does / why we need it**:
Add the ability to change ApiEndpoint for GCE.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
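As a purely hypothetical sketch of what this enables (the config key name and endpoint URL below are assumptions for illustration, not taken from this PR):
```
# Hypothetical gce.conf snippet: point the GCE cloud provider at a
# non-default Compute API endpoint.
cat >> /etc/gce.conf <<EOF
[global]
api-endpoint = https://www.googleapis.com/compute/staging_v1/
EOF
```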
Automatic merge from submit-queue (batch tested with PRs 46972, 42829, 46799, 46802, 46844)
promote tls-bootstrap to beta
This is the last commit of this PR.
Towards https://github.com/kubernetes/kubernetes/issues/46999
```release-note
Promote kubelet TLS bootstrap to beta. Add a non-experimental flag to use it and deprecate the old flag.
```
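For reference, a hedged usage sketch (the non-experimental flag name `--bootstrap-kubeconfig` and the paths below are assumptions for illustration):
```
# Assumed flag name: the new non-experimental flag replacing the
# deprecated --experimental-bootstrap-kubeconfig.
kubelet --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap.kubeconfig \
        --kubeconfig=/var/lib/kubelet/kubeconfig
```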
Automatic merge from submit-queue
apiserver: add a webhook implementation of the audit backend
This builds off of #45315 and is intended to implement an interfaced defined in #45766.
TODO:
- [x] Rebase on top of API types PR.
- [x] Rebase on top of API types updates (#46065)
- [x] Rebase on top of feature flag (#46009)
- [x] Rebase on top of audit instrumentation.
- [x] Hook up API server flag or register plugin (depending on #45766)
Features issue https://github.com/kubernetes/features/issues/22
Design proposal https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auditing.md
```release-note
Webhook added to the API server which emits structured audit log events.
```
/cc @soltysh @timstclair @deads2k
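A hedged sketch of wiring this up (the flag names below are assumptions based on the feature, not taken from this PR's text; the webhook config file uses kubeconfig format):
```
# Assumed flags: point the apiserver at a webhook backend config and
# pick a delivery mode.
kube-apiserver \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig \
  --audit-webhook-mode=batch
```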
Automatic merge from submit-queue (batch tested with PRs 46686, 45049, 46323, 45708, 46487)
[Federation][kubefed]: Use StorageClassName for etcd pvc
This PR updates kubefed to use the StorageClassName field [added in 1.6](http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html) for etcd's PVC to allow the user to specify which storage class they want to use. If no value is provided to ``kubefed init``, the field will not be set, and initialization of the PVC may fail on a cluster without a default storage class configured.
The alpha annotation that was previously used (``volume.alpha.kubernetes.io/storage-class``) was deprecated as of 1.4 according to the following blog post:
http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html
**Release note**:
```release-note
'kubefed init' has been updated to support specification of the storage class (via --etcd-pv-storage-class) for the Persistent Volume Claim (PVC) used for etcd storage. If --etcd-pv-storage-class is not specified, the default storage class configured for the cluster will be used.
```
cc: @kubernetes/sig-federation-pr-reviews
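For illustration, a hedged invocation using the flag from the release note (the federation name, host cluster context, and class name are placeholders):
```
# Pin etcd's PVC to a specific storage class during federation init.
kubefed init myfed \
  --host-cluster-context=my-host-cluster \
  --etcd-pv-storage-class=fast
```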
Automatic merge from submit-queue
Support grabbing test suite metrics
**What this PR does / why we need it**:
Add support for grabbing metrics that cover the entire test suite's execution.
Update the "interesting" controller-manager metrics to match the
current names for the garbage collector, and add namespace controller
metrics to the list.
If you enable `--gather-suite-metrics-at-teardown`, the metrics are written to a file with a name such as `MetricsForE2ESuite_2017-05-25T20:25:57Z.json` in the `--report-dir`. If you don't specify `--report-dir`, the metrics are written to the test log output.
I'd like to enable this for some of the `pull-*` CI jobs, which will require a separate PR to test-infra.
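A hedged example of enabling this locally (the `hack/e2e.go` wrapper and `--test_args` plumbing are assumed; the two flags come from this PR):
```
# Gather suite-wide metrics at teardown and write them under /tmp/e2e-report.
go run hack/e2e.go -- --test \
  --test_args="--gather-suite-metrics-at-teardown=true --report-dir=/tmp/e2e-report"
```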
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@kubernetes/sig-testing-pr-reviews @smarterclayton @wojtek-t @gmarek @derekwaynecarr @timothysc
Automatic merge from submit-queue
no-snat test
This test checks that Pods can communicate with each other in the same cluster without SNAT.
I intend to create a job that runs this in small clusters (~3 nodes) at a low frequency (~once per day) so that we have a signal as we work on allowing multiple non-masquerade CIDRs to be configured (see [kubernetes-incubator/ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent), for example).
/cc @dnardo
Automatic merge from submit-queue (batch tested with PRs 46302, 44597, 44742, 46554)
Change to aggregator so it calls a user apiservice via its pod IP.
proxy_handler now does a sideways call to look up the pod IPs for a service.
It will then pick a random pod IP to forward the user's apiserver request to.
**What this PR does / why we need it**: It allows the aggregator to work without setting up the full network stack on the kube master (i.e. kube-dns or kube-proxy).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44619
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
support NonResourceURL for kubectl create clusterrole
Release note:
```release-note
Add --non-resource-url to kubectl create clusterrole
```
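A usage example with the new flag (the role name and URL are illustrative):
```
# Grant read-only access to the /healthz non-resource URL.
kubectl create clusterrole healthz-reader --verb=get --non-resource-url=/healthz
```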
proxy_handler now uses the endpoint router to map the cluster IP to the appropriate endpoint (Pod) IP for the given resource.
Added code to allow aggregator routing to be optional.
Updated bazel build.
Fixes to cover JLiggit comments.
Added util ResourceLocation method based on Listers.
Fixed issues from verification steps.
Updated to add an interface to obfuscate some of the routing logic.
Collapsed cluster IP resolution into the aggregator routing implementation.
Added 2 simple unit tests for ResolveEndpoint
Update the "interesting" controller-manager metrics to match the
current names for the garbage collector, and add namespace controller
metrics to the list.
Automatic merge from submit-queue
Allow the /logs handler on the apiserver to be toggled.
Adds a flag to kube-apiserver, and plumbs through an environment variable in configure-helper.sh.
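A hedged sketch (the flag name `--enable-logs-handler` is an assumption for illustration; this PR only says a toggle flag was added):
```
# Assumed flag name: disable serving logs at the /logs endpoint.
kube-apiserver --enable-logs-handler=false
```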
Automatic merge from submit-queue
kube-proxy: add --write-config-to flag
Add --write-config-to flag to kube-proxy to write the default configuration
values to the specified file location.
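For example (the output path is illustrative):
```
# Write kube-proxy's default configuration values to a file.
kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
```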
@deads2k suggested I create my own scheme for this, so I followed the example he shared with me. The only bit currently still referring to `api.Scheme` is where we create the event broadcaster recorder. In order to use the custom private scheme, I either have to pass it in to `NewProxyServer()`, or I have to make `NewProxyServer()` a member of the `Options` struct. If the former, then I probably need to export `Options.scheme`. Thoughts?
cc @mikedanese @sttts @liggitt @deads2k @smarterclayton @timothysc @kubernetes/sig-network-pr-reviews @kubernetes/sig-api-machinery-pr-reviews
```release-note
Add --write-config-to flag to kube-proxy to allow users to write the default configuration settings to a file.
```
Automatic merge from submit-queue (batch tested with PRs 41535, 45985, 45929, 45948, 46056)
remove useless flags from hack/verify-flags/known-flags.txt
Flags in known-flags.txt are used to check for misspellings of "-" as "_" in the workspace, so a flag without a "-" should not show up in this file.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Make real proxier in hollow-proxy optional (default=true)
Ref https://github.com/kubernetes/kubernetes/pull/45622
This allows using the real proxier for hollow-proxy, but we use the fake one by default.
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek