Automatic merge from submit-queue
Fix runnningContainerStatues spelling mistake
runtime.go: in the function GetRunningContainerStatuses, the variable runnningContainerStatues was misspelled; it has been renamed to runningContainerStatus.
Automatic merge from submit-queue
Add `clusterid`, an optional parameter to storageclass.
At present, the admin doesn't have the privilege to choose the
trusted storage pool from which the persistent gluster volume
has to be provided.
This patch introduces a new storage class parameter which allows
the admin to specify the storage pool/cluster if required.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
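For illustration, a minimal sketch of how a provisioner could consume the optional parameter; the helper name and error text are hypothetical, not the actual glusterfs plugin code:
```go
package example

import "fmt"

// parseClusterID returns the trusted storage pool chosen by the admin via the
// StorageClass parameters map, or "" when the admin did not set one.
func parseClusterID(scParams map[string]string) (string, error) {
	clusterID, ok := scParams["clusterid"]
	if !ok {
		// Optional parameter: fall back to the provisioner's default pool.
		return "", nil
	}
	if clusterID == "" {
		return "", fmt.Errorf("storage class parameter %q must not be empty when set", "clusterid")
	}
	return clusterID, nil
}
```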
Automatic merge from submit-queue
clear impersonation headers
If you cloned a request that came in with impersonation, you also cloned the impersonation headers that came with it. These seem roughly analogous to the `Authorization` header, so this clears them.
@kubernetes/sig-auth
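A rough sketch of the clearing step, assuming the header names used by the impersonation feature (`Impersonate-User`, `Impersonate-Group`, `Impersonate-Extra-*`):
```go
package example

import (
	"net/http"
	"strings"
)

// clearImpersonationHeaders strips impersonation headers from a cloned request
// so they are not forwarded, analogous to dropping the Authorization header.
func clearImpersonationHeaders(req *http.Request) {
	req.Header.Del("Impersonate-User")
	req.Header.Del("Impersonate-Group")
	// Impersonate-Extra-* headers carry arbitrary suffixes, so scan for the prefix.
	for key := range req.Header {
		if strings.HasPrefix(key, "Impersonate-Extra-") {
			req.Header.Del(key)
		}
	}
}
```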
Automatic merge from submit-queue
Add namespace index for limit ranger
Without this PR I'm seeing a huge number of lines like this:
```
Index with name namespace does not exist
```
Those are coming from the LimitRanger admission controller; this PR fixes them.
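A minimal sketch of registering a namespace index so lookups like the one LimitRanger performs can succeed; the import path follows the current client-go layout and may differ from the tree at the time of this PR:
```go
package example

import (
	"k8s.io/client-go/tools/cache"
)

// newNamespaceIndexedStore builds an indexer that supports querying objects
// by namespace, which is the index the admission controller expects to find.
func newNamespaceIndexedStore() cache.Indexer {
	return cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		// MetaNamespaceIndexFunc indexes every object by its namespace.
		"namespace": cache.MetaNamespaceIndexFunc,
	})
}
```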
Automatic merge from submit-queue
fix forbid clusterrole with namespace
Running `kubectl get clusterroles --all-namespaces` with the old version
returns this error output:
```
NAMESPACE NAME AGE
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
```
```release-note
Add error message when trying to use clusterrole with namespace in kubectl
```
Automatic merge from submit-queue
kubernetes attempts to unmount the wrong vSphere volume and stops making any progress after that
This refers to bug #37332, which was accidentally closed, so this new PR was created.
The code was already reviewed as part of PR #37332.
Fixes issue #37022
@saad-ali @jingxu97 @abrarshivani @kerneltime
Automatic merge from submit-queue
kubelet: don't reject pods without adding them to the pod manager
kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.
This should fix #37658
Automatic merge from submit-queue
remove checking mount point in cleanupOrphanedPodDirs
To avoid the NFS hang problem, remove the mount-point checking code from
cleanupOrphanedPodDirs(). This removal is still safe because the function checks whether there are still directories under the pod's volume directory and, if so, does not delete the pod directory (sketched below).
Note: After removing the mount-point check from cleanupOrphanedPodDirs(), the directories might not be cleaned up in the following situation.
1. A pod is deleted, and the kubelet reconciler successfully unmounts the volume directory.
2. Before the reconciler deletes the volume directory, the kubelet is restarted.
3. Since volume directories still exist under the pod directory (though no longer mounted), cleanupOrphanedPodDirs() will not clean them up.
A follow-up PR will address the issue above.
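A simplified sketch of the safety check described above, with illustrative paths rather than the actual kubelet code: the pod directory is only removed once nothing remains under its volumes directory.
```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a pod directory only when its volumes
// directory is empty; otherwise it leaves everything in place.
func cleanupOrphanedPodDir(podDir string) error {
	volumesDir := filepath.Join(podDir, "volumes")
	entries, err := ioutil.ReadDir(volumesDir)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	if len(entries) > 0 {
		// Volume directories still exist (possibly unmounted but not yet
		// removed by the reconciler); leave the pod directory alone.
		return nil
	}
	return os.RemoveAll(podDir)
}
```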
Automatic merge from submit-queue
Fix photon controller plugin to construct with correct PdID
**What this PR does / why we need it**:
This PR fixes a mismatch of the unmount path in the Photon volume plugin, which resulted from assigning the volume spec name to the persistent disk ID. Without this patch, the unmount process stalls in the reconciler when a pod is deleted, and restarting the same pod fails to mount because the previous unmount is still in progress.
The input variable of the function ConstructVolumeSpec is the volume spec name, not the persistent disk ID. Previously the function constructed a new volume spec by assigning the volume spec name to the persistent disk ID, which resulted in a mismatched mount path. The fix finds the pdID from the mount path and constructs the volume spec with the correct pdID (see the sketch below).
I have tested the patch with back-to-back pod creation/deletion, and mounting/unmounting of the Photon persistent disk volume source now behaves normally.
This needs to be cherry-picked to the 1.5 release branch.
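A rough sketch of the corrected behavior, with a hypothetical `getPdIDFromMountPath` standing in for the plugin's mount-reference lookup:
```go
package example

import "fmt"

type photonVolumeSpec struct {
	VolName string
	PdID    string
}

// constructVolumeSpec resolves the pdID from the mount path instead of reusing
// the volume spec name, so the reconstructed spec matches the real mount.
func constructVolumeSpec(volumeSpecName, mountPath string) (*photonVolumeSpec, error) {
	pdID, err := getPdIDFromMountPath(mountPath)
	if err != nil {
		return nil, fmt.Errorf("cannot resolve pdID for %s: %v", mountPath, err)
	}
	// Before the fix, PdID was set to volumeSpecName here, which broke the
	// unmount path lookup during reconciliation.
	return &photonVolumeSpec{VolName: volumeSpecName, PdID: pdID}, nil
}

// getPdIDFromMountPath is a stand-in for the plugin's mount-reference lookup.
func getPdIDFromMountPath(mountPath string) (string, error) {
	return "", fmt.Errorf("not implemented in this sketch")
}
```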
Automatic merge from submit-queue
move parts of the mega generic run struct out
This splits the main `ServerRunOptions` into composable pieces that are bindable separately and adds easy paths for composing servers to run delegating authentication and authorization.
@sttts @ncdc alright, I think this is as far as I need to go to make the composing servers reasonable to write. I'll try leaving it here
Automatic merge from submit-queue
controller manager refactors
The controller manager needs some significant cleanup. This starts us down that path by respecting parameters like `stopCh`, simplifying discovery checks, removing unnecessary parameters, preventing unnecessary fatals, and using our client builder.
@sttts @ncdc
kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.
Also check whether the to-be-admitted pod has already terminated; if it has,
skip the admission process completely.
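A simplified sketch of the resulting ordering in the kubelet's pod-addition path; the types and names are illustrative, not the exact kubelet internals:
```go
package example

type pod struct {
	Name       string
	Terminated bool
}

type podManager interface{ AddPod(*pod) }

type admitter interface {
	Admit(*pod) (ok bool, reason, message string)
}

// handlePodAddition always records the pod in the pod manager first, so later
// status updates are not dropped, then skips admission for terminated pods.
func handlePodAddition(pm podManager, adm admitter, p *pod, reject func(p *pod, reason, message string)) {
	pm.AddPod(p)
	if p.Terminated {
		// Nothing to admit; the pod has already finished.
		return
	}
	if ok, reason, msg := adm.Admit(p); !ok {
		reject(p, reason, msg)
	}
}
```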
Automatic merge from submit-queue
Fix package aliases to follow golang convention
Some package aliases do not align with the Go naming convention (https://blog.golang.org/package-names). This PR fixes them, and also adds a verify script and presubmit checks.
Fixes #35070.
cc/ @timstclair @Random-Liu
Automatic merge from submit-queue
When --grace-period=0 is provided, wait for deletion
The grace-period is automatically set to 1 unless --force is provided, and the client waits until the object is deleted.
This preserves backwards compatibility with 1.4 and earlier. It does not handle scenarios where the object is deleted and a new object is created with the same name because we don't have the initial object loaded (and that's a larger change for 1.5).
Fixes #37117 by relaxing the guarantees provided.
```release-note
When deleting an object with `--grace-period=0`, the client will begin a graceful deletion and wait until the resource is fully deleted. To force deletion, use the `--force` flag.
```
Automatic merge from submit-queue
fix typo in `kubectl proxy` command line help
**What this PR does / why we need it**: improves docs
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none
**Special notes for your reviewer**: doc only
(docs only) Fixed the port from 8011 to 8001 (the default) because that particular line does not specify a port, so the default is used.
Automatic merge from submit-queue
hold namespaces briefly before processing deletion
possible fix for #36891
In HA scenarios (either HA apiserver or HA etcd), it is possible for the deletion of resources during namespace cleanup to race with the creation of objects in the terminating namespace.
HA master timeline:
1. "delete namespace n" API call goes to apiserver 1, deletion timestamp is set in etcd
2. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
3. "create deployment d" API call goes to apiserver 2, gets persisted to etcd
4. apiserver 2 observes namespace deletion, stops allowing new objects to be created
5. namespace controller finishes deleting listed deployments, deletes namespace
HA etcd timeline:
1. "create deployment d" API call goes to apiserver, gets persisted to etcd
2. "delete namespace n" API call goes to apiserver, deletion timestamp is set in etcd
3. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
4. list call goes to non-leader etcd member that hasn't observed the new deployment or the deleted namespace yet
5. namespace controller finishes deleting the listed deployments, deletes namespace
In both cases, simply waiting before cleaning up the namespace (either for etcd members to observe objects created at the last second in the namespace, or for other apiservers to observe the namespace move to the terminating phase and disallow additional creations) resolves the issue; a minimal sketch follows the list below.
Possible other fixes:
* do a second sweep of objects before deleting the namespace
* have the namespace controller check for and clean up objects in namespaces that no longer exist
* ...?
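A minimal sketch of the "hold briefly" idea, assuming the controller can requeue work items; the minimum age is an illustrative value, not taken from the PR:
```go
package example

import "time"

const minTerminatingAge = 5 * time.Second // illustrative value, not from the PR

// shouldDelay reports whether namespace cleanup should be requeued instead of
// started now, given when the namespace's deletionTimestamp was set.
func shouldDelay(deletionTimestamp, now time.Time) (bool, time.Duration) {
	age := now.Sub(deletionTimestamp)
	if age < minTerminatingAge {
		return true, minTerminatingAge - age
	}
	return false, 0
}
```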
Automatic merge from submit-queue
make drain retry forever and use a new grace period
Implemented the 1st approach according to https://github.com/kubernetes/kubernetes/issues/37460#issuecomment-263437516
1) Make drain retry forever if the error is always Too Many Requests (429) generated by a Pod Disruption Budget (see the sketch below).
2) Use a new grace period per #37460.
3) Update the message printed when a pod is successfully deleted or evicted.
fixes #37460
cc: @davidopp @erictune
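A rough sketch of the retry behavior in (1), with the eviction call abstracted away; `IsTooManyRequests` is the apimachinery helper for detecting 429 responses:
```go
package example

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// evictWithRetry keeps retrying an eviction for as long as the API server
// answers 429 Too Many Requests (the signal that a PodDisruptionBudget
// currently forbids the eviction).
func evictWithRetry(evict func() error, interval time.Duration) error {
	for {
		err := evict()
		if err == nil {
			return nil
		}
		if !apierrors.IsTooManyRequests(err) {
			return err // any other error is fatal for this pod
		}
		// The PDB does not allow the eviction right now; wait and retry forever.
		time.Sleep(interval)
	}
}
```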
Automatic merge from submit-queue
Better waiting for watch event delivery in cacher
@lavalamp - I think we should do something simple for now (and merge for 1.5), and do something a bit more sophisticated right after 1.5, WDYT?
Automatic merge from submit-queue
Fix concurrent read/write to map error in kubelet
Fixes #37560.
The concurrent read/write is on the pod annotations: the call in apiserver.go reads the annotations, and config.go writes them. I moved the reads into config.go to avoid the race.
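An illustrative sketch of the race class being avoided (not the actual kubelet code): when one goroutine may mutate a pod's annotations while another reads them, either confine access to a single goroutine, as this PR does, or hand out a copy:
```go
package example

// copyAnnotations returns a private copy of an annotations map, so a reader
// in another goroutine never observes a map that is being written to.
func copyAnnotations(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}
```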
Automatic merge from submit-queue
Improved validation error message when env.valueFrom contains no (or …
**What this PR does / why we need it**:
A misleading error message is shown if the user mistypes (or forgets to specify) a field under env.valueFrom. This is the error message: "may not have more than one field specified at a time". But there is only one (misspelled) field specified.
**Release note**:
<!-- Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access)
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`.
-->
```release-note
Improved error message for missing/misspelled field under env.valueFrom
```
Automatic merge from submit-queue
Mention overflow when mistakenly calling function FromInt
**What this PR does / why we need it**:
Mistakenly calling this method with a value that overflows int32 causes strange behavior in some environments (probably on amd64 systems; I'm not sure, but my test shows that).
For example, calling FromInt(93333333333) results in -1155947179 with no mention of the overflow.
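A small example of the overflow, using the current apimachinery import path (assumed) and a caller-side guard:
```go
package main

import (
	"fmt"
	"math"

	// import path assumed from the current apimachinery layout
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	v := 93333333333 // does not fit in int32 (only compiles where int is 64-bit)

	x := intstr.FromInt(v) // silently truncates to int32
	fmt.Println(x.IntVal)  // prints -1155947179

	// A caller-side guard for code that cannot tolerate the truncation:
	if v > math.MaxInt32 || v < math.MinInt32 {
		fmt.Println("value overflows int32; refusing to convert")
	}
}
```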
Automatic merge from submit-queue
Removes shorthand flag -w from kubectl apply
Fixes #37342.
A shorthand flag `-w` was introduced for the `--prune-whitelist` flag of kubectl apply two weeks ago. It turned out not to be what we should do, so this removes the shorthand flag before the 1.5 release to prevent further issues.
@ymqytw @pwittrock
Automatic merge from submit-queue
Update kubectl drain help message
Update `kubectl drain` help messages according to kubernetes/kubernetes.github.io#1768
cc: @erictune @pwittrock
Automatic merge from submit-queue
Curating Owners: pkg/apis
cc @lavalamp @smarterclayton @erictune @thockin @bgrant0607
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove the names of people who shouldn't be reviewing code in the
future from the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Curating Owners: pkg/api
cc @lavalamp @smarterclayton @erictune @thockin @bgrant0607
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove the names of people who shouldn't be reviewing code in the
future from the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Curating Owners: pkg/proxy
cc @thockin
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove the names of people who shouldn't be reviewing code in the
future from the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Reduce verbosity of volume reconciler
**What this PR does / why we need it**:
It reduces the log verbosity for attaching volumes.
**Release note**:
```release-note
Reduce verbosity of volume reconciler when attaching volumes
```
Raise the logging level for information about attaching volumes from 1 to 4.
Otherwise the log is spammed with one line every 100ms while attaching is
in progress, and afterwards for as long as the volume remains attached.
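A minimal sketch of verbosity-gated logging with glog, the logging library Kubernetes used at the time; the message text is illustrative:
```go
package example

import "github.com/golang/glog"

// logAttachProgress only emits its line when the component runs with --v=4 or
// higher, so routine reconciler progress no longer floods the default logs.
func logAttachProgress(volumeName, nodeName string) {
	glog.V(4).Infof("Attaching volume %q to node %q", volumeName, nodeName)
}
```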
When CNI is set to kubenet, the kubelet should hold the host port socket
so that other applications on the node can no longer listen on/bind to that
port. However, the sockets were closed accidentally because the kubelet
forgot to reconcile the protocol format before comparing.
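A minimal sketch of the kind of normalization that was missing, using a hypothetical helper rather than the actual kubenet code:
```go
package example

import "strings"

// samePort compares two host-port entries while normalizing the protocol
// string (e.g. "tcp" vs "TCP") so equivalent entries are recognized as equal.
func samePort(protoA, protoB string, portA, portB int32) bool {
	return strings.EqualFold(protoA, protoB) && portA == portB
}
```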
Automatic merge from submit-queue
kubelet: eviction: add memcg threshold notifier to improve eviction responsiveness
This PR adds the ability for the eviction code to get an immediate notification from the kernel when the available memory in the root cgroup falls below a user-defined threshold, controlled by setting the `memory.available` signal with the `--eviction-hard` flag (e.g. `--eviction-hard=memory.available<100Mi`).
This PR by itself doesn't change anything, as the frequency at which new stats can be obtained is currently controlled by the cAdvisor housekeeping interval. That being the case, the call to `synchronize()` by the notification loop will very likely get stale stats and not act any more quickly than it does now.
However, once cAdvisor gains on-demand stat gathering, this will improve eviction responsiveness by getting async notification of the root cgroup memory state rather than relying on polling cAdvisor.
@vishh @derekwaynecarr @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Change stickyMaxAge from seconds to minutes, fixes issue #35677
**What this PR does / why we need it**: Increases the service sessionAffinity timeout from 180 seconds to 180 minutes for proxy mode iptables, fixing a bug introduced in a refactor.
**Which issue this PR fixes**: fixes #35677
**Release note**:
```release-note
Fixed the wrong service sessionAffinity stickiness time from 180 seconds to 180 minutes in proxy mode iptables.
```
Since there is no test for the sessionAffinity feature at the moment, I wanted to create one, but I don't know how.
Automatic merge from submit-queue
Add more logging around Pod deletion
After this PR we'll have at least V(2)-level logging around all Pod deletions.
@saad-ali - this is required by GKE to help with diagnosing possible problems.
cc @dchen1107 @wojtek-t
Automatic merge from submit-queue
Read all resources for finalization and gc, not just preferred
Fixes #31481.
Currently, when starting the namespace controller or the garbage collector, we only gather preferred-version resources, which isn't sufficient when multiple versions exist (you guessed it, `batch/v2alpha1.CronJobs` again). This PR adds an additional method that retrieves all resources from all versions and works on them.
@kubernetes/sig-api-machinery ptal
Automatic merge from submit-queue
Revert "Use Gid when provisioning Gluster Volumes."
On further inspection the design in #35460 was not secure enough. This PR reverts the change.
This reverts commit 7a0d219d12.