Automatic merge from submit-queue
controller: decouple cleanup policy from deployment strategies
Deployments get cleaned up only when they are paused, when they are scaled
up or down, or when the strategy that drives rollouts completes. This means
that stuck deployments falling into none of those categories never get
cleaned up. Since cleanup is already safe by itself (we only delete old
replica sets that are synced by the replica set controller and have no
replicas), we can execute it for every deployment whenever there is no
intention to roll back.
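A minimal sketch of the resulting decision, with illustrative names (not the controller's exact code):
```go
// cleanupCandidates sketches the new gating: old replica sets that are
// synced and fully scaled down may be pruned on every sync, as long as no
// rollback is requested.
type replicaSet struct {
	Replicas int32
	Synced   bool // observed by the replica set controller
}

// oldRSs is assumed sorted oldest-first; keep mirrors revisionHistoryLimit.
func cleanupCandidates(oldRSs []replicaSet, rollbackRequested bool, keep int) []replicaSet {
	if rollbackRequested {
		return nil // a rollback may still need one of the old replica sets
	}
	var candidates []replicaSet
	for _, rs := range oldRSs {
		if rs.Synced && rs.Replicas == 0 {
			candidates = append(candidates, rs)
		}
	}
	if len(candidates) <= keep {
		return nil
	}
	return candidates[:len(candidates)-keep]
}
```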
Fixes https://github.com/kubernetes/kubernetes/issues/40068
Automatic merge from submit-queue
Fix resttest Update action when AllowUnconditionalUpdate is false
Currently our storage Update test assumes that AllowUnconditionalUpdate returns true, and testUpdateRejectsMismatchedNamespace updates the same object it passes to Create. This results in errors when trying to update that object, because resourceVersion is not set to a proper value. This patch modifies the test so that the update is executed on a stored object, which has the correct values set.
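A minimal sketch of the corrected pattern; the types and helper names below are illustrative stand-ins, not the actual resttest API:
```go
package resttestsketch

import (
	"context"
	"testing"
)

// object and storage are simplified stand-ins for the registry under test.
type object struct {
	Name, Namespace, ResourceVersion string
}

type storage interface {
	Create(ctx context.Context, obj *object) error
	Get(ctx context.Context, name string) (*object, error)
	Update(ctx context.Context, obj *object) error // rejects stale ResourceVersion
}

// The fix: mutate the *stored* object, whose ResourceVersion the storage
// layer has populated, instead of reusing the object passed to Create.
func testUpdateRejectsMismatchedNamespace(ctx context.Context, t *testing.T, s storage, obj *object) {
	if err := s.Create(ctx, obj); err != nil {
		t.Fatal(err)
	}
	stored, err := s.Get(ctx, obj.Name)
	if err != nil {
		t.Fatal(err)
	}
	stored.Namespace = "other-namespace" // deliberately mismatched
	// With a valid ResourceVersion, the only expected failure is the
	// namespace mismatch, not a spurious resourceVersion conflict.
	if err := s.Update(ctx, stored); err == nil {
		t.Error("expected a namespace-mismatch error")
	}
}
```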
@deads2k ptal
@kubernetes/sig-api-machinery-misc fyi
Automatic merge from submit-queue
Fixed merging of host's and DNS search lines
Fixed forming of the pod's search line in resolv.conf:
- exclude duplicates when merging the host's and the DNS search lines into the pod's one (see the sketch after this list)
- truncate the pod's search line if it exceeds resolver limits: longer than 255 characters or more than 6 search paths
- monitor the resolv.conf file used by the kubelet (set through --resolv-conf=""), logging and emitting an event if its search line consists of more than 3 entries (or 6 if a cluster domain is set) or is longer than 255 characters
- log and emit an event when a pod's search line is longer than 255 characters or contains more than 6 search paths at forming time
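A minimal sketch of the merge-and-truncate logic under the glibc limits cited above (function and constant names are illustrative, not the kubelet's actual code):
```go
const (
	maxSearchPaths   = 6   // glibc limit on the number of search entries
	maxSearchLineLen = 255 // glibc limit on the search line length
)

// mergeSearchPaths combines DNS-provided and host search paths, dropping
// duplicates and truncating to the resolver limits above.
func mergeSearchPaths(dnsSearch, hostSearch []string) []string {
	merged := append(append([]string{}, dnsSearch...), hostSearch...)
	seen := map[string]bool{}
	var out []string
	lineLen := 0
	for _, s := range merged {
		if seen[s] {
			continue // exclude duplicates while merging
		}
		seen[s] = true
		if len(out) == maxSearchPaths || lineLen+len(s)+1 > maxSearchLineLen {
			break // over the limits; the real code also logs and emits an event
		}
		out = append(out, s)
		lineLen += len(s) + 1 // +1 for the separating space
	}
	return out
}
```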
Fixes #29270
**Release note**:
```release-note
Fixed forming resolver search line for pods: exclude duplicates, obey libc limitations, logging and eventing appropriately.
```
Automatic merge from submit-queue
Add authorization mode to kubeadm
This PR adds an option in `kubeadm` to allow a user to specify an [authorization plugin](https://kubernetes.io/docs/admin/authorization/). It defaults to RBAC.
Automatic merge from submit-queue
Move Federation e2e test code to independent package
**What this PR does / why we need it**: Moves federation e2e test code to an independent package called e2e_federation. This will help in multiple ways:
- makes it easier to move the federation-related code out of the core repo.
- gets us one step closer to registering/unregistering clusters with the federation only once per e2e run instead of once per test case; a singleton that registers clusters during framework creation will be introduced in a subsequent PR.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Optimizes the federation e2e suite, which currently takes a long time to execute.
**Special notes for your reviewer**: I have tried to split the work into multiple commits; please review commit by commit. Mostly the change just moves functions to a new location/package.
**Release note**:
```release-note
```
@madhusudancs @nikhiljindal @colhom
Automatic merge from submit-queue
turn on dynamic config for flaky tests
Dynamic config was added to the inode eviction node e2e tests in #39546, but was not enabled for flaky tests. This PR enables the feature for the flaky test suite.
Automatic merge from submit-queue
Curating Owners: pkg/cloudprovider
cc @runseb @justinsb @kerneltime @mikedanese @svanharmelen @anguslees @brendandburns @abrarshivani @imkin @luomiao @colemickens @ngtuna @dagnello @abithap
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request has been made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Adding cos as an alias for gci.
**What this PR does / why we need it**: Adding COS as an alias for GCI.
cc: @adityakali @wonderfly
Automatic merge from submit-queue
Curating Owners: cmd/kubelet
cc @yujuhong @dchen1107 @vishh
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request has been made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the future in
the **reviewers** section. You probably do NOT need to modify the **approvers**
section. Names are sorted by relevance, using some secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue (batch tested with PRs 40105, 40095)
Update dependencies: aws-sdk-go to 1.6.10; also cadvisor
Updating cadvisor mainly to include [this bugfix #1558](https://github.com/google/cadvisor/pull/1558).
Because cadvisor vendors a newer version of aws-sdk-go than Kubernetes, the AWS dependency needed to be updated as well.
cc: @justinsb @zmerlynn @timstclair
Automatic merge from submit-queue (batch tested with PRs 40105, 40095)
[OpenStack-Heat] Fix regex used to get object-store URL
**Release note**:
```release-note
Fixes a bug in the OpenStack-Heat kubernetes provider, in the handling of differences between the Identity v2 and Identity v3 APIs
```
Automatic merge from submit-queue (batch tested with PRs 39898, 39904)
[scheduler] interface for config
**What this PR fixes**
This PR converts the scheduler configuration factory into an interface, so that
- the scheduler_perf and scheduler integration tests don't rely on the struct for their implementation,
- the exported functionality of the factory (i.e. what it needs to provide to create a scheduler configuration) is completely explicit, rather than coupled to a struct, and
- some parts of the factory become immutable, again to minimize possible coupling.
This makes it easier to make a custom factory in instances where we might specifically want to import scheduler logic without actually reusing the entire scheduler codebase.
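A rough sketch of the shape this enables; the method set is illustrative, not the exact exported interface:
```go
// Config is a stand-in for the scheduler's configuration type.
type Config struct{}

// Configurator is what tests and custom schedulers program against instead
// of the concrete factory struct.
type Configurator interface {
	// Create builds a Config from the factory's default predicate and
	// priority keys.
	Create() (*Config, error)
	// CreateFromKeys makes every construction input explicit, which is what
	// scheduler_perf and the integration tests need.
	CreateFromKeys(predicateKeys, priorityKeys []string) (*Config, error)
}
```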
Automatic merge from submit-queue
Build release tars using bazel
**What this PR does / why we need it**: builds equivalents of the various kubernetes release tarballs, solely using bazel.
For example, you can now do
```console
$ make bazel-release
$ hack/e2e.go -v -up -test -down
```
**Special notes for your reviewer**: this is currently dependent on 3b29803eb5, which I have yet to turn into a pull request, since I'm still trying to figure out if this is the best approach.
Basically, the issue comes up with the way we generate the various server docker image tarfiles and load them on nodes:
* we `md5sum` the binary being encapsulated (e.g. kube-proxy) and save that to `$binary.docker_tag` in the server tarball
* we then build the docker image and tag using that md5sum (e.g. `gcr.io/google_containers/kube-proxy:$MD5SUM`)
* we `docker save` this image, which embeds the full tag in the `$binary.tar` file.
* on cluster startup, we `docker load` these tarballs, which are loaded with the tag that we'd created at build time. The nodes then use the `$binary.docker_tag` file to find the right image (the tag computation is sketched below).
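For reference, the tagging scheme amounts to something like this (the binary path and registry name are illustrative):
```go
package main

import (
	"crypto/md5"
	"fmt"
	"os"
)

// dockerTagForBinary reproduces the build-time scheme: the image tag is the
// md5 of the wrapped binary, and the same value is written to
// $binary.docker_tag so nodes can find the image after `docker load`.
func dockerTagForBinary(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", md5.Sum(b)), nil
}

func main() {
	tag, err := dockerTagForBinary("_output/kube-proxy")
	if err != nil {
		panic(err)
	}
	fmt.Printf("gcr.io/google_containers/kube-proxy:%s\n", tag)
}
```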
With the current bazel `docker_build` rule, the tag isn't saved in the docker image tar, so the node is unable to find the image after `docker load`ing it.
My changes to the rule save the tag in the docker image tar, though I don't know if there are subtle issues with it. (Maybe we want to only tag when `--stamp` is given?)
Also, the docker images produced by bazel have the timestamp set to the unix epoch, which is not great for debugging. Might be another thing to change with a `--stamp`.
Long story short, we probably need to follow up with bazel folks on the best way to solve this problem.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 36467, 36528, 39568, 40094, 39042)
do not filter kubectl get pods if -o json or yaml
Fixes: https://github.com/kubernetes/kubernetes/issues/38327
This patch sets the value of --show-all to true if the output format
specified is 'json' or 'yaml'.
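The gating amounts to something like this (the function name is illustrative):
```go
// shouldShowAll forces --show-all for structured output formats, since
// tools consuming JSON or YAML should not have pods silently filtered out.
func shouldShowAll(requested bool, outputFormat string) bool {
	if outputFormat == "json" || outputFormat == "yaml" {
		return true
	}
	return requested
}
```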
**Release note**:
```release-note
release-note-none
```
@smarterclayton
Automatic merge from submit-queue (batch tested with PRs 36467, 36528, 39568, 40094, 39042)
Bump GCE to container-vm-v20170117
Base image update only, no kubelet or Docker updates.
```release-note
Update GCE ContainerVM deployment to container-vm-v20170117 to pick up CVE fixes in base image.
```
Automatic merge from submit-queue (batch tested with PRs 36467, 36528, 39568, 40094, 39042)
Improve code coverage for algorithm/predicates.
Improve code coverage for algorithm/predicates for #39559.
Improved coverage from 71.3% to 81.9%.
Coverage report: [combined-coverage.html.gz](https://github.com/kubernetes/kubernetes/files/691518/combined-coverage.html.gz)
Automatic merge from submit-queue
Curating Owners: pkg/kubelet
cc @euank @vishh @dchen1107 @feiskyer @yujuhong @yifan-gu @derekwaynecarr @saad-ali
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request has been made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Curating Owners: test/e2e_node
cc @random-liu @timstclair @vishh
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request has been made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Use versioned Taint/Toleration/AvoidPods objects when marshalling
fixes #39847
`kubectl taint`, the kubelet's `--register-with-taints` option, and several Taint/Toleration/AvoidPods annotation helpers were marshaling/unmarshaling using internal structs
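A minimal sketch of the versioned pattern for the taints case (the helper name is illustrative, and the import path assumes that era's tree layout):
```go
package taints

import (
	"encoding/json"

	"k8s.io/kubernetes/pkg/api/v1"
)

// setTaintsAnnotation marshals the *versioned* v1.Taint type, so the wire
// format stored in the annotation follows the public API rather than the
// internal struct layout.
func setTaintsAnnotation(node *v1.Node, taints []v1.Taint) error {
	b, err := json.Marshal(taints)
	if err != nil {
		return err
	}
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	node.Annotations["scheduler.alpha.kubernetes.io/taints"] = string(b)
	return nil
}
```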
Automatic merge from submit-queue (batch tested with PRs 39625, 39842)
AWS: Remove duplicate calls to DescribeInstance during volume operations
This change removes all duplicate calls to describeInstance
from the AWS volume code path.
**What this PR does / why we need it**:
This PR removes the duplicate calls present in the disk-check code paths in AWS. I can confirm that `getAWSInstance` already returns all instance information, so there is no need to make a separate `describeInstance` call.
Related to - https://github.com/kubernetes/kubernetes/issues/39526
cc @justinsb @jsafrane
Automatic merge from submit-queue (batch tested with PRs 39625, 39842)
Add RBAC v1beta1
Add `rbac.authorization.k8s.io/v1beta1`. This scrubs `v1alpha1` to remove cruft, then adds `v1beta1`. We'll update other bits of infrastructure to code against `v1beta1` as a separate step.
```release-note
The `attributeRestrictions` field has been removed from the PolicyRule type in the rbac.authorization.k8s.io/v1alpha1 API. The field was not used by the RBAC authorizer.
```
@kubernetes/sig-auth-misc @liggitt @erictune
Automatic merge from submit-queue
Enable lazy initialization of ext3/ext4 filesystems
**What this PR does / why we need it**: It enables lazy inode table and journal initialization in ext3 and ext4.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #30752, fixes #30240
**Release note**:
```release-note
Enable lazy inode table and journal initialization for ext3 and ext4
```
**Special notes for your reviewer**:
This PR removes the extended options to mkfs.ext3/mkfs.ext4, so that the defaults (enabled) for lazy initialization are used.
These extended options come from a script that was historically located at */usr/share/google/safe_format_and_mount* and was later ported to Go so that the dependency on the script could be removed. After some searching, I found the original script here: https://github.com/GoogleCloudPlatform/compute-image-packages/blob/legacy/google-startup-scripts/usr/share/google/safe_format_and_mount
Checking the history of this script, I found the commit [Disable lazy init of inode table and journal.](4d7346f7f5), which introduced the extended flags with this description:
```
Now that discard with guaranteed zeroing is supported by PD,
initializing them is really fast and prevents perf from being affected
when the filesystem is first mounted.
```
The problem is that this is not true for all cloud providers and all disk types, e.g. Azure and AWS. I only tested with magnetic disks on Azure and AWS, so maybe it's different for SSDs on these cloud providers. The result is that this performance optimization dramatically increases the time needed to format a disk in such cases.
When mkfs.ext4 is told not to lazily initialize the inode tables and the check for guaranteed zeroing on discard fails, it falls back to a very naive implementation that simply loops and writes zeroed buffers to the disk. Performance here depends heavily on free memory, and the zeroing consumes all of that free memory for write caching, reducing the performance of everything else in the system.
As of https://github.com/kubernetes/kubernetes/issues/30752, there is also something inside the kubelet that further degrades performance here. It is not known exactly what, but I'd assume it has something to do with cgroups throttling IO or memory.
I checked the kernel code for lazy inode table initialization. The nice thing is that the kernel also performs the guaranteed-zeroing-on-discard check. If zeroing is guaranteed, the kernel uses discard for the lazy initialization, which should finish in just a few seconds. If it is not guaranteed, the kernel falls back to using *bio*s, which does not require the write cache. The result is that free memory is neither required nor touched, so performance is maxed out and the system does not suffer.
As the original reason for disabling lazy init was a performance optimization and the kernel already performs this optimization by default (and in a much better way), I'd suggest completely removing these flags and relying on the kernel to do it in the best way.
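Concretely, the change boils down to dropping the extended options from the mkfs invocation, presumably `-E lazy_itable_init=0,lazy_journal_init=0`. A sketch (the real helper lives in Kubernetes' mount utilities; the function name here is illustrative):
```go
package mount

import "os/exec"

// formatExt4 formats devicePath relying on mkfs defaults, which leave lazy
// inode table and journal initialization enabled.
func formatExt4(devicePath string) ([]byte, error) {
	// previously: args := []string{"-F", "-E", "lazy_itable_init=0,lazy_journal_init=0", devicePath}
	args := []string{"-F", devicePath}
	return exec.Command("mkfs.ext4", args...).CombinedOutput()
}
```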