Automatic merge from submit-queue (batch tested with PRs 41248, 41214)
Switch hpa controller to shared informer
**What this PR does / why we need it**: switches the HPA controller to use a shared informer.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: Only the last commit is relevant. The others are from #40759, #41114, #41148
**Release note**:
```release-note
```
cc @smarterclayton @deads2k @sttts @liggitt @DirectXMan12 @timothysc @kubernetes/sig-scalability-pr-reviews @jszczepkowski @mwielgus @piosz
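For context, here is a minimal sketch of the shared-informer pattern this switch adopts: one factory-owned watch cache is shared by every controller instead of each controller running its own reflector. The package paths follow today's client-go layout, which is an assumption; this PR predates some of them.
```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// One shared factory: every informer it hands out is backed by the same
	// cache, so N controllers cost one watch instead of N.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods()

	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* enqueue for reconciliation */ },
		DeleteFunc: func(obj interface{}) { /* enqueue for reconciliation */ },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Controllers must wait for the shared cache to sync before reading it.
	cache.WaitForCacheSync(stop, podInformer.Informer().HasSynced)
	select {}
}
```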
Automatic merge from submit-queue (batch tested with PRs 39418, 41175, 40355, 41114, 32325)
add e2e tests for replicasets with weight, min and max replicas
e2e test with weight, min and max replicas set
#31904#32014
@quinton-hoole @nikhiljindal @deepak-vij @kshafiee @mwielgus
Automatic merge from submit-queue (batch tested with PRs 39418, 41175, 40355, 41114, 32325)
TaintController
```release-note
This PR adds a manager to NodeController that is responsible for removing Pods from Nodes tainted with NoExecute Taints. This feature is beta (as is the rest of taints) and enabled by default. It is gated by the controller-manager's enable-taint-manager flag.
```
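For readers new to taints, a hedged sketch of the two API objects involved, using the core v1 types (the taint key below is hypothetical): the taint manager evicts pods from a tainted node unless a pod tolerates the taint, optionally only for TolerationSeconds.
```go
package example

import v1 "k8s.io/api/core/v1"

// A NoExecute taint: pods that do not tolerate it are evicted from the node.
var taint = v1.Taint{
	Key:    "example.com/unreachable", // hypothetical key
	Effect: v1.TaintEffectNoExecute,
}

var gracePeriod int64 = 300

// A matching toleration: the pod may stay on the tainted node, but only for
// TolerationSeconds (5 minutes here) before the taint manager evicts it.
var toleration = v1.Toleration{
	Key:               "example.com/unreachable",
	Operator:          v1.TolerationOpExists,
	Effect:            v1.TaintEffectNoExecute,
	TolerationSeconds: &gracePeriod,
}
```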
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)
[Federation][e2e] Fix few flakes in federation e2e tests
**What this PR does / why we need it**:
Fixes a few flakes in #37105
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # (partly fixes a few test cases in the above-mentioned issue).
**Special notes for your reviewer**:
While cleaning up in the AfterEach block, some objects are returned by the List call, but by the time the Delete is issued the object has already disappeared, occasionally resulting in this flake.
To fix this, we check whether the error on Delete is NotFound; if so, the object is already gone and the test need not fail (see the sketch below).
**Release note**: `NONE`
```release-note
```
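A minimal sketch of that NotFound check, assuming the client-go API of the era (the helper name and the choice of Services as the resource are illustrative):
```go
package e2e

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupService deletes an object found by a previous List call; if the
// object disappeared in the meantime, NotFound is treated as success rather
// than a test failure.
func cleanupService(client kubernetes.Interface, namespace, name string) error {
	err := client.CoreV1().Services(namespace).Delete(name, &metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		return nil // already gone; exactly what cleanup wanted
	}
	return err
}
```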
Automatic merge from submit-queue (batch tested with PRs 41112, 41201, 41058, 40650, 40926)
Promote TokenReview to v1
Peer to https://github.com/kubernetes/kubernetes/pull/40709
We have multiple features that depend on this API:
- [webhook authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication)
- [kubelet delegated authentication](https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication)
- add-on API server delegated authentication
The API has been in use since 1.3 in beta status (v1beta1) with negligible changes:
- Added a status field for reporting errors evaluating the token
This PR promotes the existing v1beta1 API to v1 with no changes
Because the API does not persist data (it is a query/response-style API), there are no data migration concerns.
This positions us to promote the features that depend on this API to stable in 1.7
cc @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-misc
```release-note
The authentication.k8s.io API group was promoted to v1
```
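As a sketch of how a consumer drives this API through client-go (the helper is hypothetical; the types come from the authentication/v1 group being promoted here, and the Create signature is the era's context-free one):
```go
package example

import (
	"fmt"

	authv1 "k8s.io/api/authentication/v1"
	"k8s.io/client-go/kubernetes"
)

// whoIs submits a token for review and returns the authenticated username.
// TokenReview is query/response style: nothing is persisted server-side.
func whoIs(client kubernetes.Interface, token string) (string, error) {
	review, err := client.AuthenticationV1().TokenReviews().Create(&authv1.TokenReview{
		Spec: authv1.TokenReviewSpec{Token: token},
	})
	if err != nil {
		return "", err
	}
	if !review.Status.Authenticated {
		// Status.Error is the field added during beta for evaluation errors.
		return "", fmt.Errorf("token not authenticated: %s", review.Status.Error)
	}
	return review.Status.User.Username, nil
}
```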
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
StatefulSet hardening
**What this PR does / why we need it**:
This PR contains the following changes to StatefulSet. Only one change affects the semantics of how the controller operates (described in #38418), and that change only brings the controller into conformance with its documented behavior.
1. pcb and pcb controller are removed and their functionality is encapsulated in StatefulPodControlInterface. This interface models the design of controller.PodControlInterface and provides an abstraction over clientset.Interface, which is useful for testing purposes.
2. IdentityMappers has been removed to clarify what properties of a Pod are mutated by the controller. All mutations are performed in the UpdateStatefulPod method of the StatefulPodControlInterface.
3. The statefulSetIterator and petQueue classes are removed. These classes sorted Pods by CreationTimestamp. This is brittle and not resilient to clock skew. The current control loop, which implements the same logic, is in stateful_set_control.go. The Pods are now sorted and considered by their ordinal indices, as is outlined in the documentation (see the sketch after this list).
4. StatefulSetController now checks to see if the Pods matching a StatefulSet's Selector also match the Name of the StatefulSet. This makes the controller resilient to overlapping controllers, and will be enhanced by the addition of ControllerRefs.
5. The total lines of production code have been reduced, and the total number of unit tests has been increased. All new code has 100% unit coverage giving the module 83% coverage. Tests for StatefulSetController have been added, but it is not practical to achieve greater coverage in unit testing for this code (the e2e tests for StatefulSet cover these areas).
6. Issue #38418 is fixed in that StatefulSet will ensure that all Pods that are predecessors of another Pod are Running and Ready prior to launching a new Pod. This removes the potential for deadlock when a Pod needs to be rescheduled while its predecessor is hung in Pending or Initializing.
7. All references to pet have been removed from the code and comments.
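A minimal sketch of the ordinal-based ordering from change 3, under the assumption that the ordinal is the integer suffix of the Pod name ("web-2" -> 2); the helper names are illustrative, not the controller's actual identifiers:
```go
package statefulset

import (
	"regexp"
	"sort"
	"strconv"

	v1 "k8s.io/api/core/v1"
)

var ordinalSuffix = regexp.MustCompile(`-(\d+)$`)

// getOrdinal extracts the ordinal index from a Pod name like "web-2",
// returning -1 when no ordinal suffix is present.
func getOrdinal(pod *v1.Pod) int {
	m := ordinalSuffix.FindStringSubmatch(pod.Name)
	if m == nil {
		return -1
	}
	ord, _ := strconv.Atoi(m[1])
	return ord
}

// sortByOrdinal orders Pods by ordinal index instead of CreationTimestamp,
// which makes the ordering immune to clock skew.
func sortByOrdinal(pods []*v1.Pod) {
	sort.Slice(pods, func(i, j int) bool {
		return getOrdinal(pods[i]) < getOrdinal(pods[j])
	})
}
```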
**Which issue this PR fixes**
fixes #38418, fixes #36859
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes issue #38418 which, under certain circumstances, could cause StatefulSet to deadlock.
Mitigates issue #36859. StatefulSet only acts on Pods whose identity matches the StatefulSet, providing partial mitigation for overlapping controllers.
```
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)
Implement TTL controller and use the ttl annotation attached to node in secret manager
For every secret attached to a pod as a volume, the Kubelet tries to refresh it every sync period. Currently the Kubelet has a TTL cache of its pods' secrets, with the TTL set to 1 minute. In the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), that results in ~30 GETs per node to refresh all secrets every minute, which adds up to ~2500 QPS for GET secrets against the apiserver.
Apiserver cannot keep up with it very easily.
The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not yet possible to watch only the secrets attached to pods on a given node.
So as a temporary solution, we are introducing an annotation that serves as a suggestion to the Kubelet for the TTL of secrets in its cache, plus a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
This workaround means that only very local changes are needed in the Kubelet; we get a well-separated, very simple controller; and once watching "my secrets" becomes possible, it will be easy to remove the controller and switch over. It also allows us to reach our scalability goals.
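A hedged sketch of that idea: derive a suggested TTL from the node count and publish it as a node annotation for the Kubelet to honor. The size boundaries are illustrative, not the controller's actual values, and the annotation key is an assumption based on the description above:
```go
package main

import "fmt"

// ttlForClusterSize returns a suggested secret-cache TTL in seconds.
// Thresholds are illustrative only.
func ttlForClusterSize(nodes int) int {
	switch {
	case nodes > 2000:
		return 300 // the larger the cluster, the bigger the TTL
	case nodes > 500:
		return 60
	default:
		return 0 // small clusters can refresh every sync period
	}
}

func main() {
	// The Kubelet would read this annotation as a suggestion for how long to
	// cache secrets before re-fetching them. Key name is an assumption.
	annotations := map[string]string{
		"node.alpha.kubernetes.io/ttl": fmt.Sprintf("%d", ttlForClusterSize(5000)),
	}
	fmt.Println(annotations)
}
```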
@dchen1107 @thockin @liggitt
Automatic merge from submit-queue (batch tested with PRs 40917, 41181, 41123, 36592, 41183)
fix scheduler performance test script
**What this PR does / why we need it**:
`test-performance.sh` lives in the dir `kubernetes/test/integration/scheduler_perf`; the dir `kubernetes/test/component/scheduler/perf` referenced by the script does not exist.
Thanks.
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Fix some funky funcs.
This is code cleanup: fixes function declarations and removes a stale comment.
Automatic merge from submit-queue (batch tested with PRs 41074, 41147, 40854, 41167, 40045)
Upgrade test for deployments
Upgrade test for Deployments. Should prevent issues like https://github.com/kubernetes/kubernetes/issues/40415 in the future.
@krousey @janetkuo @soltysh
Haven't managed to run it locally...
```
$ go run hack/e2e.go --up --test --test_args="--ginkgo.focus=\[Feature:MasterUpgrade\] --upgrade-target=ci/latest --upgrade-image=gci"
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-down.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-down.sh' finished in 7.278236ms
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-up.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-up.sh' finished in 5.286328ms
2017/02/02 11:43:22 e2e.go:946: Running: ./cluster/kubectl.sh version --match-server-version=false
2017/02/02 11:43:22 e2e.go:948: Step './cluster/kubectl.sh version --match-server-version=false' finished in 213.847259ms
2017/02/02 11:43:22 e2e.go:946: Running: ./hack/e2e-internal/e2e-status.sh
2017/02/02 11:43:22 e2e.go:948: Step './hack/e2e-internal/e2e-status.sh' finished in 103.253183ms
2017/02/02 11:43:22 e2e.go:230: Something went wrong: encountered 2 errors: [exit status 1 exit status 1]
exit status 1
```
@krousey any eta for when the upgrade framework will be integrated in the pr builder?
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
add k8s.io/sample-apiserver to demonstrate how to build an aggregated API server
builds on https://github.com/kubernetes/kubernetes/pull/41093
This creates a sample API server in a separate staging repo to guarantee no cheating with `k8s.io/kubernetes` dependencies. The sample is run during integration tests (simple tests on it so far) to ensure that it continues to run.
@sttts @kubernetes/sig-api-machinery-misc ptal
@pwittrock @pmorie @kris-nova an aggregated API server example that will stay up to date.
Automatic merge from submit-queue (batch tested with PRs 41121, 40048, 40502, 41136, 40759)
Remove deprecated kubelet flags that look safe to remove
Removes:
```
--config
--auth-path
--resource-container
--system-container
```
which have all been marked deprecated since at least 1.4 and look safe to remove.
```release-note
The deprecated flags --config, --auth-path, --resource-container, and --system-container were removed.
```
Automatic merge from submit-queue (batch tested with PRs 41145, 38771, 41003, 41089, 40365)
Use privileged containers for statefulset e2e tests
Test containers need to run as spc_t in order to interact with the host
filesystem under /tmp, as the StatefulSet tests do. Docker
will transition the container into this domain when running the container
as privileged.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
**Release note**:
```release-note
NONE
```
/cc @ncdc @soltysh @pmorie
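A minimal sketch of the change in terms of the core v1 API (the helper name is illustrative): marking the container privileged is what lets Docker transition it into spc_t.
```go
package e2e

import v1 "k8s.io/api/core/v1"

// privilegedContainer returns a test container that runs privileged, so the
// runtime transitions it into the spc_t domain and it can write under /tmp
// on the host.
func privilegedContainer(name, image string) v1.Container {
	privileged := true
	return v1.Container{
		Name:  name,
		Image: image,
		SecurityContext: &v1.SecurityContext{
			Privileged: &privileged,
		},
	}
}
```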
Automatic merge from submit-queue (batch tested with PRs 40175, 41107, 41111, 40893, 40919)
[Federation][e2e] Move Cluster Registration to federation-up.sh
**What this PR does / why we need it**:
Remove cluster register/unregister calls from test case BeforeEach/AfterEach blocks.
Register clusters once in federation-up.sh
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40768
**Special notes for your reviewer**:
**Release note**: `NONE`
cc: @madhusudancs @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 38796, 40823, 40756, 41083, 41105)
e2e tests for vSphere cloud provider
**What this PR does / why we need it**:
This PR contains changes to existing e2e volume provisioning test cases so that they run on the vSphere cloud provider.
**Following is the summary of changes made in existing e2e test cases**
**Added test/e2e/persistent_volumes-vsphere.go**
- This test verifies that deleting a PVC, or the PV, before the pod does not cause pod deletion to fail on PD detach.
**test/e2e/volume_provisioning.go**
- This test creates a StorageClass and a claim with dynamic provisioning and alpha dynamic provisioning annotations, and verifies that the required volumes are created. The test also verifies that the created volume is readable and retains data.
- Added vSphere as a supported cloud provider. Also set pluginName to "kubernetes.io/vsphere-volume" for the vSphere cloud provider.
**test/e2e/volumes.go**
- Added test spec for vsphere
- This test creates the requested volume, mounts it on the pod, writes some random content to /opt/0/index.html, and verifies the file contents are intact to make sure we don't see content from previous test runs.
- This test also passes "1234" as the fsGroup when mounting the volume and verifies the fsGroup is set correctly.
**Added test/e2e/vsphere_utils.go**
- Added function verifyVSphereDiskAttached - verifies the persistent disk is attached to the node.
- Added function waitForVSphereDiskToDetach - waits until the vSphere VMDK is detached from the given node, or times out after 5 minutes.
- Added getVSpherePersistentVolumeSpec - creates a vSphere volume spec with the given VMDK volume path, reclaim policy, and labels (see the sketch after this list).
- Added getVSpherePersistentVolumeClaimSpec - creates a vSphere persistent volume claim spec with the given selector labels.
- Added createVSphereVolume - function to create a VMDK volume.
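A hedged sketch of what a helper like getVSpherePersistentVolumeSpec builds, using the core v1 types; the capacity, volume path, and fsType are placeholders, not values from the PR:
```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// vspherePV builds a PersistentVolume backed by a vSphere VMDK with the given
// volume path, reclaim policy, and labels.
func vspherePV(volumePath string, policy v1.PersistentVolumeReclaimPolicy, labels map[string]string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "vspherepv-", Labels: labels},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: policy,
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				VsphereVolume: &v1.VsphereVirtualDiskVolumeSource{
					VolumePath: volumePath, // e.g. "[datastore1] volumes/e2e-test.vmdk"
					FSType:     "ext4",
				},
			},
		},
	}
}
```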
**Following is the summary of new e2e tests added with this PR**
**test/e2e/vsphere_volume_placement.go**
- contains volume placement tests using node label selector
- Test Back-to-back pod creation/deletion with the same volume source on the same worker node
- Test back-to-back pod creation/deletion with the same volume source attached to/detached from different worker nodes
**test/e2e/pv_reclaimpolicy.go**
- contains tests for PV/PVC - Reclaiming Policy
- Test verifies the persistent volume is deleted when the reclaimPolicy on the PV is set to Delete and the associated claim is deleted.
- Test also verifies the persistent volume is retained when the reclaimPolicy on the PV is set to Retain and the associated claim is deleted.
**test/e2e/pvc_label_selector.go**
- This is a functional test for the Selector-Label Volume Binding feature.
- Verifies the volume with the matching label is bound to the PVC.
Other changes
Updated pkg/cloudprovider/providers/vsphere/BUILD and test/e2e/BUILD
**Which issue this PR fixes**:
fixes #41087
**Special notes for your reviewer**:
Updated tests were executed against the Kubernetes v1.4.8 release on vSphere.
Test steps are provided in the comments.
@kerneltime @BaluDontu
Automatic merge from submit-queue (batch tested with PRs 40345, 38183, 40236, 40861, 40900)
Remove checks for pods responding in deployment e2e tests
Fixes #39879
Removed these checks because they sometimes caused deployment e2e tests to time out waiting for pods to respond; pods responding isn't related to the deployment controller and isn't a prerequisite for deployment e2e tests.
@kargakis
Automatic merge from submit-queue (batch tested with PRs 41023, 41031, 40947)
scrub aggregator names to eliminate discovery
Cleans up old uses of `discovery` and removes the legacy functionality.
@kubernetes/sig-api-machinery-misc @sttts