Automatic merge from submit-queue (batch tested with PRs 40885, 43623, 43735)
Use "hack/godep-restore.sh" instead of "godep restore"
We now get errors when running `godep restore`, so the help message needs to be updated to point at `hack/godep-restore.sh` instead.
@derekwaynecarr
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Extract GCEPD pv tests
**What this PR does / why we need it**:
This is strictly a refactor moving the GCEPD suite in the Persistent Volumes e2e tests to its own file. This will make future provider-specific additions to PV testing much more organized and readable. It will also enable a smoother transition as providers move out of tree, by consolidating related tests.
```release-note
NONE
```
Automatic merge from submit-queue
Volume Provisioning E2E: test PVC delete causes PV delete
**What this PR does / why we need it**:
Test for a regression addressed in #21268. There was a case where a PVC created and deleted quickly could leave behind a provisioned PV stuck as `Available`.
```release-note
NONE
```
cc @jeffvance
Automatic merge from submit-queue
Centos provider: generate SSL certificates for etcd cluster.
**What this PR does / why we need it**:
Support a secure etcd cluster for the centos provider by generating SSL certificates for etcd by default. Running it without SSL exposes cluster data to everyone and is not recommended. [#39462](https://github.com/kubernetes/kubernetes/pull/39462#issuecomment-271601547)
/cc @jszczepkowski @zmerlynn
**Release note**:
```release-note
Support secure etcd cluster for centos provider.
```
Removed wait for PVC phase Pending.
iterate test 100 times to increase the chance of hitting the regression
Moved claim obj assignment out of loop.
add wait loop check for PVs
loop until no PVs detected
refactor per review comments
replace api calls with framework wrappers
add default suffix
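A rough sketch of the loop the commits above describe, assuming 1.6-era client-go signatures; the helper name `checkNoOrphanedPVs`, the timeouts, and the iteration count are illustrative stand-ins for the actual test code, and the claim should use `GenerateName` so repeated creates don't collide:
```go
package e2e

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/pkg/api/v1"
)

// checkNoOrphanedPVs rapidly creates and deletes claims, then waits until
// no dynamically provisioned PV is left behind as Available.
func checkNoOrphanedPVs(c kubernetes.Interface, ns string, claim *v1.PersistentVolumeClaim) error {
	for i := 0; i < 100; i++ {
		created, err := c.Core().PersistentVolumeClaims(ns).Create(claim)
		if err != nil {
			return err
		}
		// Delete immediately so the delete races with provisioning/binding.
		if err := c.Core().PersistentVolumeClaims(ns).Delete(created.Name, nil); err != nil {
			return err
		}
	}
	// Loop until no PVs are detected.
	return wait.Poll(5*time.Second, 5*time.Minute, func() (bool, error) {
		pvs, err := c.Core().PersistentVolumes().List(metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		return len(pvs.Items) == 0, nil
	})
}
```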
Automatic merge from submit-queue (batch tested with PRs 41541, 43710)
Admission plugin initializer for the generic API server.
**What this PR does / why we need it**:
This PR implements a standard admission plugin initializer for the generic API server.
The initializer uses kubeconfig to populate external clients and informers. By default
in-cluster config is used.
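For context, a minimal sketch of the initializer pattern this describes; the interface and method names below are illustrative and may not match exactly what landed in `k8s.io/apiserver`:
```go
package initializer

import (
	"k8s.io/apiserver/pkg/admission"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// WantsExternalKubeClientSet is implemented by plugins that need a client.
type WantsExternalKubeClientSet interface {
	SetExternalKubeClientSet(kubernetes.Interface)
}

// WantsExternalKubeInformerFactory is implemented by plugins that need informers.
type WantsExternalKubeInformerFactory interface {
	SetExternalKubeInformerFactory(informers.SharedInformerFactory)
}

type pluginInitializer struct {
	client    kubernetes.Interface
	informers informers.SharedInformerFactory
}

// Initialize hands the shared client and informers (built from kubeconfig,
// or in-cluster config by default) to any plugin that asks for them.
func (i pluginInitializer) Initialize(plugin admission.Interface) {
	if wants, ok := plugin.(WantsExternalKubeClientSet); ok {
		wants.SetExternalKubeClientSet(i.client)
	}
	if wants, ok := plugin.(WantsExternalKubeInformerFactory); ok {
		wants.SetExternalKubeInformerFactory(i.informers)
	}
}
```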
**Special notes for your reviewer**:
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/apiserver-build-in-admission-plugins.md
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Update algorithm of equivalence class cache predicates
NOTE: These are the first two commits of #36238
**What's in this PR:**
1. Definition of equivalence class
2. An update to the `equivalence_cache.go` algorithms so the equivalence cache can be enabled/disabled per predicate
3. The equivalence class data structure added to `Generic Scheduler`, but not yet initialized; it shows how the equivalence class will be used during scheduling
**Why I did this:**
Although #36238 has been finished for some time, we found it was still very hard to review because it mixed together 1) the definition of equivalence class, 2) how to use the equivalence cache, 3) how to keep this cache up to date, and 4) e2e tests verifying that 3) works.
Reviewers were easily distracted by different technical points like hash algorithms and how to properly use Informers, leaving the more important equivalence algorithms untouched.
So this PR includes only 1) and 2), and leaves updating the cache to #36238. That part is entirely independent of the rest, so we can review the equivalence strategies first.
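To make the idea concrete, here is an illustrative sketch of the data structure (names and shape are mine, not necessarily those in `equivalence_cache.go`): pods hashing to the same equivalence class reuse a cached predicate result per node, and caching can be enabled predicate by predicate.
```go
package scheduler

import "sync"

// HostPredicate is a cached fit decision for one predicate on one node.
type HostPredicate struct {
	Fit         bool
	FailReasons []string
}

// EquivalenceCache maps node name -> predicate name -> equivalence class
// hash -> cached result. Two pods with the same hash are in the same
// equivalence class for scheduling purposes.
type EquivalenceCache struct {
	mu    sync.RWMutex
	cache map[string]map[string]map[uint64]HostPredicate
}

// PredicateWithCache returns the cached result for (node, predicate, hash)
// if present; otherwise it runs the predicate and caches the outcome.
func (ec *EquivalenceCache) PredicateWithCache(node, predicate string, hash uint64, run func() HostPredicate) HostPredicate {
	ec.mu.RLock()
	if result, ok := ec.cache[node][predicate][hash]; ok {
		ec.mu.RUnlock()
		return result // cache hit: the whole predicate run is skipped
	}
	ec.mu.RUnlock()

	result := run()

	ec.mu.Lock()
	defer ec.mu.Unlock()
	if ec.cache == nil {
		ec.cache = map[string]map[string]map[uint64]HostPredicate{}
	}
	if ec.cache[node] == nil {
		ec.cache[node] = map[string]map[uint64]HostPredicate{}
	}
	if ec.cache[node][predicate] == nil {
		ec.cache[node][predicate] = map[uint64]HostPredicate{}
	}
	ec.cache[node][predicate][hash] = result
	return result
}
```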
cc @kubernetes/sig-scheduling-pr-reviews @davidopp @jayunit100 @wojtek-t
Automatic merge from submit-queue
Bump cluster autoscaler to 0.5.1
Fixes: #43709
**Release note**:
```release-note
With Cluster Autoscaler 0.5, the cluster will be autoscaled even if there are some unready or broken nodes. Moreover, the status of CA is exposed in the kube-system/cluster-autoscaler-status config map.
```
Automatic merge from submit-queue
Fix problems of not-starting image pullers
In e2e.go there are the following lines:
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/e2e.go#L150
```go
if err := framework.WaitForPodsSuccess(c, metav1.NamespaceSystem, framework.ImagePullerLabels, imagePrePullingTimeout); err != nil {
// There is no guarantee that the image pulling will succeed in 3 minutes
// and we don't even run the image puller on all platforms (including GKE).
// We wait for it so we get an indication of failures in the logs, and to
// maximize benefit of image pre-pulling.
framework.Logf("WARNING: Image pulling pods failed to enter success in %v: %v", imagePrePullingTimeout, err)
}
```
However, a few lines above:
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/e2e.go#L143
we were waiting for all image pullers to actually enter the Success state. It's pretty clear that this hard wait wasn't intended.
This PR is fixing this problem.
Ref #43728
@anhowe @davidopp
Automatic merge from submit-queue
Move cluster logging tests to a separate folder
Since there are several e2e tests for cluster logging and the infrastructure for them has grown complicated, it makes sense to move those tests to a separate folder.
Also, adding myself and Piotr to OWNERS of this directory as owners of the tests.
Automatic merge from submit-queue
Use ProviderID to address nodes in the cloudprovider
The cloudprovider is being refactored out of kubernetes core. This is being done by moving all the cloud-specific calls from kube-apiserver, kubelet and kube-controller-manager into a separately maintained binary (owned by vendors) called cloud-controller-manager. The kubelet relies on the cloudprovider to detect information about the node that it is running on. Some of the cloudproviders worked by querying local information to obtain it. In the new world, local information cannot be relied on, since the cloud-controller-manager does not run on every node; only one active instance of it runs in the cluster.
Today, all calls to the cloudprovider are based on the node name. Node names are unique within the kubernetes cluster, but generally not unique within the cloud. This model of addressing nodes by node name will not work in the future, because local services cannot be queried to uniquely identify a node in the cloud. Therefore, I propose that we perform some (to start off with) of the cloudprovider calls based on ProviderID. This ID uniquely identifies a node in an external database (such as the instance ID in the AWS cloud).
In the next PR, I'll add support for initializing nodes from the cloud-controller-manager instead of the kubelet using this API.
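As a sketch of the shape of such calls (the method names follow the pattern of the real cloudprovider interfaces, but treat the exact signatures as illustrative):
```go
package cloudprovider

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/pkg/api/v1"
)

// Instances shows node-name-keyed calls next to their ProviderID-keyed
// counterparts. A ProviderID such as "aws:///us-west-2a/i-0123456789ab"
// is unique in the cloud; a node name is only unique in the cluster.
type Instances interface {
	// Keyed by node name: only workable when local metadata on the node
	// can map the name to a cloud instance.
	NodeAddresses(name types.NodeName) ([]v1.NodeAddress, error)
	InstanceType(name types.NodeName) (string, error)

	// Keyed by ProviderID: callable from a single cloud-controller-manager
	// instance that does not run on the node in question.
	NodeAddressesByProviderID(providerID string) ([]v1.NodeAddress, error)
	InstanceTypeByProviderID(providerID string) (string, error)
}
```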
@thockin @keontang @joonas @luxas @justinsb
```release-note
NONE
```
Automatic merge from submit-queue
Move DNS configmap tests to slow, serial suites
These tests take a long time due to the ConfigMap update interval
and may briefly disrupt DNS resolution in the cluster.
Automatic merge from submit-queue
Use ping to IP instead of wget google.com in net connectivity check
This is a flaky test, and this commit reduces the number of dependent
systems involved in the flake.
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
VSAN policy support for storage volume provisioning inside kubernetes
vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. Storage requirements, such as performance and availability, can be expressed as storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
For example, a user creates a storage class with VSAN storage capabilities:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```
The vSphere cloud provider then provisions a virtual disk (VMDK) on VSAN with the policy configured for the disk.
When you know the storage requirements of the application being deployed in a container, you can specify those storage capabilities when you create a storage class inside Kubernetes.
@pdhamdhere @tthole @abrarshivani @divyenpatel
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
remove legacy insecure port options from genericapiserver
The insecure port has been a source of problems and it will prevent proper aggregation into a cluster, so the genericapiserver has no need for it. In addition, there's no reason for it to be in the main kube-apiserver flow either. This pull removes it from genericapiserver and removes it from the shared kube-apiserver code. It's still wired up in the command, but it's no longer possible for someone to mess up and start using it in mainline code.
@kubernetes/sig-api-machinery-misc @ncdc
Automatic merge from submit-queue (batch tested with PRs 42087, 43383, 43622)
move category expansion out of restmapper
RESTMapping isn't related to CategoryExpansion (the bit that expands "all" into items to be RESTMapped). This provides that separation and simplifies the RESTMapper interface.
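Roughly, the extracted piece is a small interface of this shape (a sketch; names may differ slightly from the final code):
```go
package categories

import "k8s.io/apimachinery/pkg/runtime/schema"

// CategoryExpander turns a category such as "all" into the resources it
// stands for; ok is false when the term is not a known category.
type CategoryExpander interface {
	Expand(category string) (resources []schema.GroupResource, ok bool)
}

// SimpleCategoryExpander backs categories with a static map, keeping the
// RESTMapper itself free of any notion of categories.
type SimpleCategoryExpander struct {
	Expansions map[string][]schema.GroupResource
}

func (e SimpleCategoryExpander) Expand(category string) ([]schema.GroupResource, bool) {
	resources, ok := e.Expansions[category]
	return resources, ok
}
```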
@kubernetes/sig-cli-pr-reviews
Automatic merge from submit-queue
proxy to IP instead of name, but still use host verification
I think I found a setting that lets us proxy to an IP and still do hostname verification on the certificate.
@liggitt @sttts Can you see if you agree that this knob does what I think it does? Last commit only, still needs tests.
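For reference, the underlying Go mechanism is presumably `crypto/tls`'s `ServerName`, which lets a client verify the certificate against a hostname while dialing an IP; a minimal sketch:
```go
package main

import (
	"crypto/tls"
	"net"
)

// dialByIP connects to ip:port but verifies the server certificate (and
// sends SNI) for hostname, not for the IP it dialed.
func dialByIP(ip, port, hostname string) (*tls.Conn, error) {
	return tls.Dial("tcp", net.JoinHostPort(ip, port), &tls.Config{
		ServerName: hostname,
	})
}
```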
Automatic merge from submit-queue
Add plugin/pkg/scheduler to linted packages
**What this PR does / why we need it**:
Adds plugin/pkg/scheduler to linted packages to improve style correctness.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #41868
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
add proxy client-certs to kube-apiserver to allow it to proxy aggregated api servers
The `kube-apiserver` contains the aggregator for combining API servers and `kubeadm` has the client certificates required for aggregated API servers to trust the authentication info. This wires those bits together.
@luxas
Automatic merge from submit-queue
local-up-cluster.sh should create a default storage class
To make dynamic provisioning work out of the box in a local cluster, a default
storage class needs to be instantiated.
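In client-go terms, what gets instantiated looks roughly like this (a sketch using 1.6-era import paths and the beta default-class annotation; the script itself presumably does the equivalent via kubectl):
```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	storagev1beta1 "k8s.io/client-go/pkg/apis/storage/v1beta1"
)

// createDefaultStorageClass registers a hostpath-backed class marked as
// the cluster default, so PVCs that name no class still get provisioned.
func createDefaultStorageClass(client kubernetes.Interface) error {
	sc := &storagev1beta1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "standard",
			Annotations: map[string]string{
				// The beta default-class marker used at the time.
				"storageclass.beta.kubernetes.io/is-default-class": "true",
			},
		},
		Provisioner: "kubernetes.io/host-path",
	}
	_, err := client.StorageV1beta1().StorageClasses().Create(sc)
	return err
}
```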
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)
Changes for removing dead code in taint_tolerations
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43007
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)
k8s.io/apiserver: make maxRetryWhenPatchConflicts public
This variable used to be public (before https://github.com/kubernetes/kubernetes/pull/37468). It is pretty useful for writing reliable integration tests that involve resource patching, and it is used in downstream projects for that purpose.
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)
Openstack cinder v1/v2/auto API support
**What this PR does / why we need it**:
It adds support for the v2 cinder API plus autodetection of the available cinder API level (as is done for LBs).
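A sketch of what "auto" detection can look like with gophercloud (illustrative, not the PR's exact code): prefer the v2 block-storage endpoint when the service catalog advertises one, otherwise fall back to v1.
```go
package cinder

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

// cinderClient picks the newest available block-storage API.
func cinderClient(provider *gophercloud.ProviderClient, region string) (*gophercloud.ServiceClient, error) {
	eo := gophercloud.EndpointOpts{Region: region}
	if client, err := openstack.NewBlockStorageV2(provider, eo); err == nil {
		return client, nil // v2 endpoint found in the catalog
	}
	return openstack.NewBlockStorageV1(provider, eo)
}
```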
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39572
**Special notes for your reviewer**:
Based on work by @anguslees. The first two commits are just rebased from https://github.com/kubernetes/kubernetes/pull/36344 which already had a lgtm by @jbeda
**Release note**:
```release-note
Add support for the v2 cinder API in the openstack cloud provider. By default, it autodetects the available version.
```
Automatic merge from submit-queue
added prompt warning if etcd3 media type isn't set during upgrade
**What this PR does / why we need it**:
This adds a prompt confirming the upgrade when `STORAGE_MEDIA_TYPE` is not explicitly set. This is to prevent users from accidentally upgrading to protobuf.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Along with docs, addresses #43669
**Special notes for your reviewer**:
Should be cherry-picked onto `release-1.6`
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41728, 42231)
Fix docker volume selinux issue
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #42230
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41728, 42231)
Adding new tests to e2e/vsphere_volume_placement.go
**What this PR does / why we need it**:
Adding new tests to e2e/vsphere_volume_placement.go
Below are the test descriptions and test steps; a sketch of the pod-spec shape the tests rely on follows the steps.
**Test Back-to-back pod creation/deletion with different volume sources on the same worker node**
1. Create volumes - vmdk2; vmdk1 is created in the test setup.
2. Create pod spec pod-SpecA with the volume path of vmdk1 and NodeSelector set to the label assigned to node1.
3. Create pod spec pod-SpecB with the volume path of vmdk2 and NodeSelector set to the label assigned to node1.
4. Create pod-A using pod-SpecA and wait for the pod to become ready.
5. Create pod-B using pod-SpecB and wait for the pod to become ready.
6. Verify the volumes are attached to the node.
7. Create an empty file on each volume to make sure the volume is accessible (perform this step on pod-A and pod-B).
8. Verify the file created in step 7 is present on the volume (perform this step on pod-A and pod-B).
9. Delete pod-A and pod-B.
10. Repeat steps 4 to 9 five times, verifying each time that the associated volumes' contents match.
11. Wait for vmdk1 and vmdk2 to be detached from the node.
12. Delete vmdk1 and vmdk2.
**Test multiple volumes from different datastores within the same pod**
1. Create volumes - vmdk2 on a non-default shared datastore.
2. Create a pod spec with the volume paths of vmdk1 (created in the test setup on the default datastore) and vmdk2.
3. Create a pod using the spec created in step 2 and wait for the pod to become ready.
4. Verify both volumes are attached to the node on which the pod is created. Write some data to make sure the volumes are accessible.
5. Delete the pod.
6. Wait for vmdk1 and vmdk2 to be detached from the node.
7. Create a pod using the spec created in step 2 and wait for the pod to become ready.
8. Verify both volumes are attached to the node on which the pod is created. Verify the volume contents match the content written in step 4.
9. Delete the pod.
10. Wait for vmdk1 and vmdk2 to be detached from the node.
11. Delete vmdk1 and vmdk2.
**Test multiple volumes from the same datastore within the same pod**
1. Create volumes - vmdk2; vmdk1 is created in the test setup.
2. Create a pod spec with the volume paths of vmdk1 (created in the test setup) and vmdk2.
3. Create a pod using the spec created in step 2 and wait for the pod to become ready.
4. Verify both volumes are attached to the node on which the pod is created. Write some data to make sure the volumes are accessible.
5. Delete the pod.
6. Wait for vmdk1 and vmdk2 to be detached from the node.
7. Create a pod using the spec created in step 2 and wait for the pod to become ready.
8. Verify both volumes are attached to the node on which the pod is created. Verify the volume contents match the content written in step 4.
9. Delete the pod.
10. Wait for vmdk1 and vmdk2 to be detached from the node.
11. Delete vmdk1 and vmdk2.
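As mentioned above, a sketch of the pod-spec shape these steps rely on: two vSphere volumes in one pod, pinned to a labeled node. The volume paths, label, image, and 1.6-era import paths are illustrative, not the test's exact values.
```go
package e2e

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/pkg/api/v1"
)

// twoVolumePod builds a pod that mounts two vSphere VMDKs and is pinned
// to a node carrying the given label.
func twoVolumePod(vmdk1Path, vmdk2Path string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "vsphere-e2e-"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{"vsphere_e2e_label": "node1"},
			Containers: []v1.Container{{
				Name:    "vsphere-e2e-container",
				Image:   "gcr.io/google_containers/busybox:1.24",
				Command: []string{"/bin/sh", "-c", "while true; do sleep 2; done"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "vmdk1", MountPath: "/mnt/vmdk1"},
					{Name: "vmdk2", MountPath: "/mnt/vmdk2"},
				},
			}},
			Volumes: []v1.Volume{
				{Name: "vmdk1", VolumeSource: v1.VolumeSource{
					VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{VolumePath: vmdk1Path},
				}},
				{Name: "vmdk2", VolumeSource: v1.VolumeSource{
					VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{VolumePath: vmdk2Path},
				}},
			},
		},
	}
}
```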
**Which issue this PR fixes**
fixes #
**Special notes for your reviewer**:
Executed tests against K8S v1.5.3 release
**Release note**:
```release-note
NONE
```
cc: @kerneltime @abrarshivani @BaluDontu @tusharnt @pdhamdhere