Automatic merge from submit-queue
create service: add ExternalName service implementation
@kubernetes/kubectl `create service` adds ExternalName support; refer to #34731 for more detail.
```release-note
kubectl create service externalname
```
Automatic merge from submit-queue
Fix logic error in graceful deletion
If a resource meets the following criteria:
1. the deletion timestamp is not nil
2. the deletion grace period seconds as persisted in storage is 0
then the resource could never be deleted, because we always returned pending graceful.
Automatic merge from submit-queue
SetNodeUpdateStatusNeeded whenever nodeAdd event is received
**What this PR does / why we need it**:
Bug fix: call SetNodeStatusUpdateNeeded for a node whenever its API object is added. This is to ensure that we don't lose the attached list of volumes in the node when its API object is deleted and recreated.
fixes https://github.com/kubernetes/kubernetes/issues/37586 and https://github.com/kubernetes/kubernetes/issues/37585
**Special notes for your reviewer**:
Automatic merge from submit-queue
add retry logic for eviction
Make eviction retry when it fails to update the PDB due to conflict (409) error.
fixes #37605
cc: @davidopp
Automatic merge from submit-queue
Fix comment typo
In the comment describing `return kl.pleg.Healthy()`, "healty" should read "healthy".
Previously, we had no way to stop the resync goroutine when ListAndWatch
returned, so a goroutine leaked every time ListAndWatch returned, for
example with an error. This commit adds another channel to signal that
the resync goroutine should exit when ListAndWatch returns.
Automatic merge from submit-queue
Bug fix. Incoming UDP packets do not reach newly deployed services
**What this PR does / why we need it**:
Incoming UDP packets do not reach newly deployed services when the old connection's state in conntrack is not cleared. When a packet arrives, it does not go through the NAT table again because it is not "the first" packet of the connection. This PR fixes the issue.
**Which issue this PR fixes**
Fixes #31983
xref https://github.com/docker/docker/issues/8795
Automatic merge from submit-queue
rescale immediately if the basic constraints are not satisfied
refactor reconcileAutoscaler.
If the basic constraints are not satisfied, we should rescale the target ref immediately.
Automatic merge from submit-queue
Keep host port socket open for kubenet
fixes #37087
**Release note**:
```release-note
NONE
```
When cni is set to kubenet, kubelet should hold the host port socket,
so that other applications on this node can no longer listen on or bind to this port.
However, the sockets were closed accidentally, because
kubelet forgot to reconcile the protocol format before comparing.
@kubernetes/sig-network
OpenShift runs Heapster on HTTPS, which means `top node` and `top pod`
are broken because they hardcode 'http' as the scheme. Provide an
options struct allowing users to specify `--heapster-namespace`,
`--heapster-service`, `--heapster-scheme`, and `--heapster-port` to the
commands (leveraging the existing defaults).
Automatic merge from submit-queue
fix rbac informer: its listers are all internal
Fixes https://github.com/kubernetes/kubernetes/issues/37615
The rbac informer still uses internal types in its listers, which means it must use internal clients for evaluation. Since it's running inside the API server, this seems ok for now and we can/should fix it when generated informers come along. This just patches us to keep RBAC working.
@kubernetes/sig-auth @sttts @liggitt this is broken in master, let's get it sorted quickly.
Automatic merge from submit-queue
add kubectl get rolebindings/clusterrolebindings -o wide
Use "-o wide" to get more information of roleRef/subjects
`kubectl get rolebindings -o wide`
|NAME | AGE | ROLE | USERS | GROUPS | SERVICEACCOUNTS|
|:-------|:-------|:-------|:-------|:-------|:-------|
|admin-resource-binding |1s | Role/admin-resource-role | test | | |
`kubectl get clusterrolebindings -o wide`
|NAME|AGE|ROLE|USERS|GROUPS|SERVICEACCOUNTS|
|:-------|:-------|:-------|:-------|:-------|:-------|
|cluster-admin|27s|cluster-admin| |system:masters| |
|system:basic-user|27s|system:basic-user| |system:authenticated, system:unauthenticated | |
|system:controller:replication-controller|27s|system:controller:replication-controller | | |kube-system/replication-controller|
|system:discovery |27s|system:discovery| |system:authenticated, system:unauthenticated| |
Automatic merge from submit-queue
RunnningContainerStatues spelling mistake
runtime.go: in the function GetRunningContainerStatuses, the misspelled `runnningContainerStatues` is renamed to `runningContainerStatus`.
Automatic merge from submit-queue
Add `clusterid`, an optional parameter to storageclass.
At present, the admin doesn't have the privilege to choose the
trusted storage pool from which the persistent gluster volume
has to be provided.
This patch introduces a new storage class parameter which allows
the admin to specify the storage pool/cluster if required.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Automatic merge from submit-queue
clear impersonation headers
If you clone a request that came in after impersonation, you were also cloning the impersonation headers that came with it. These seem roughly analogous to the `Authorization` header, so this clears them.
@kubernetes/sig-auth
Automatic merge from submit-queue
Add namespace index for limit ranger
Without this PR I'm seeing a huge number of lines like this:
```
Index with name namespace does not exist
```
Those are coming from LimitRanger admission controller - this PR fixes those.
Automatic merge from submit-queue
fix forbid clusterrole with namespace
Running `kubectl get clusterroles --all-namespaces` in the old version
returns this error output:
```
NAMESPACE NAME AGE
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
clusterRole is not namespaced
```
```release-note
Add error message when trying to use clusterrole with namespace in kubectl
```
Automatic merge from submit-queue
kubernetes attempts to unmount a wrong vSphere volume and stops making any progress after that
This is in reference to bug #37332, which was accidentally closed, so this new PR was created.
The code was already reviewed as part of PR #37332.
Fixes issue #37022
@saad-ali @jingxu97 @abrarshivani @kerneltime
Automatic merge from submit-queue
kubelet: don't reject pods without adding them to the pod manager
kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.
This should fix #37658
Automatic merge from submit-queue
remove checking mount point in cleanupOrphanedPodDirs
To avoid the NFS hang problem, remove the mountpoint checking code in
cleanupOrphanedPodDirs(). This removal should still be safe because the function checks whether there are still directories under the pod's volume directory and, if so, does not delete the pod directory.
Note: after removing the mountpoint check code in cleanupOrphanedPodDirs(), the directories might not be cleaned up in the following situation:
1. the pod is deleted, and the kubelet reconciler unmounts the volume directory successfully
2. before the reconciler deletes the volume directory, kubelet gets restarted
3. since volume directories still exist under the pod directory (but are not mounted), cleanupOrphanedPodDirs() will not clean them up
Will work on a follow-up PR to solve the above issue.
Automatic merge from submit-queue
Fix photon controller plugin to construct with correct PdID
**What this PR does / why we need it**:
This PR fixes a mismatch of the unmount path in the photon volume plugin, which results from assigning the volume spec name to the persistent disk ID. Without this patch, the unmounting process stalls in the reconciler when a pod is deleted. Restarting the same pod then sees a mount failure because the previous unmount is still in progress.
The input variable of the function ConstructVolumeSpec is the volume spec name instead of the persistent disk ID. Previously the function directly constructed a new volume spec by assigning the volume spec name to the persistent disk ID, which results in a mismatched mount path. The fix finds the pdID according to the mount path and constructs the volume spec with the correct pdID.
I have tested the patch with back-to-back pod creation/deletion, and mounting/unmounting of the photon persistent disk volume source now performs normally.
This needs to be cherry-picked to the 1.5 release branch.
Automatic merge from submit-queue
move parts of the mega generic run struct out
This splits the main `ServerRunOptions` into composable pieces that are bindable separately and adds easy paths for composing servers to run delegating authentication and authorization.
@sttts @ncdc alright, I think this is as far as I need to go to make the composing servers reasonable to write. I'll try leaving it here
Automatic merge from submit-queue
controller manager refactors
The controller manager needs some significant cleanup. This starts us down the path by respecting parameters like `stopCh`, simplifying discovery checks, removing unnecessary parameters, preventing unnecessary fatals, and using our client builder.
@sttts @ncdc
kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.
Also check whether the pod to be admitted has terminated or not. In the case where
it has terminated, skip the admission process completely.
Automatic merge from submit-queue
Fix package aliases to follow golang convention
Some package aliases do not align with the golang convention https://blog.golang.org/package-names. This PR fixes them, and also adds a verify script and presubmit checks.
Fixes #35070.
cc/ @timstclair @Random-Liu
Update EnsureLoadBalancer/UpdateLoadBalancer API to use node objects.
In particular, this allows us to take the node address directly from the
node.Status.Addresses and avoids a name -> instance lookup.
Many providers need to do some sort of node name -> IP or instanceID
lookup before they can use the list of hostnames passed to
EnsureLoadBalancer/UpdateLoadBalancer.
This change just passes the full Node object instead of simply the node
name, allowing providers to use the node's provider ID and cached
addresses without additional lookups. Using `node.Name` reproduces the
old behaviour.
Automatic merge from submit-queue
When --grace-period=0 is provided, wait for deletion
The grace-period is automatically set to 1 unless --force is provided, and the client waits until the object is deleted.
This preserves backwards compatibility with 1.4 and earlier. It does not handle scenarios where the object is deleted and a new object is created with the same name because we don't have the initial object loaded (and that's a larger change for 1.5).
Fixes #37117 by relaxing the guarantees provided.
```release-note
When deleting an object with `--grace-period=0`, the client will begin a graceful deletion and wait until the resource is fully deleted. To force deletion, use the `--force` flag.
```
Automatic merge from submit-queue
fix typo in `kubectl proxy` command line help
**What this PR does / why we need it**: improves docs
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none
**Special notes for your reviewer**: doc only
**Release note**:
```release-note
```
(docs only) fixed the port from 8011 to 8001 (the default) because in that particular line no specific port is specified, so the default is going to be used.
Automatic merge from submit-queue
hold namespaces briefly before processing deletion
possible fix for #36891
in HA scenarios (either HA apiserver or HA etcd), it is possible for deletion of resources from namespace cleanup to race with creation of objects in the terminating namespace
HA master timeline:
1. "delete namespace n" API call goes to apiserver 1, deletion timestamp is set in etcd
2. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
3. "create deployment d" API call goes to apiserver 2, gets persisted to etcd
4. apiserver 2 observes namespace deletion, stops allowing new objects to be created
5. namespace controller finishes deleting listed deployments, deletes namespace
HA etcd timeline:
1. "create deployment d" API call goes to apiserver, gets persisted to etcd
2. "delete namespace n" API call goes to apiserver, deletion timestamp is set in etcd
3. namespace controller observes namespace deletion, starts cleaning up resources, lists deployments
4. list call goes to non-leader etcd member that hasn't observed the new deployment or the deleted namespace yet
5. namespace controller finishes deleting the listed deployments, deletes namespace
In both cases, simply waiting to clean up the namespace (either for etcd members to observe objects created at the last second in the namespace, or for other apiservers to observe the namespace move to terminating phase and disallow additional creations) resolves the issue
Possible other fixes:
* do a second sweep of objects before deleting the namespace
* have the namespace controller check for and clean up objects in namespaces that no longer exist
* ...?
Automatic merge from submit-queue
make drain retry forever and use a new graceful period
Implemented the 1st approach according to https://github.com/kubernetes/kubernetes/issues/37460#issuecomment-263437516
1) Make drain retry forever if the error is always Too Many Requests (429), generated by the Pod Disruption Budget.
2) Use a new grace period per #37460.
3) Update the message printed when a pod is successfully deleted or evicted.
fixes #37460
cc: @davidopp @erictune
Automatic merge from submit-queue
Better waiting for watch event delivery in cacher
@lavalamp - I think we should do something simple for now (and merge for 1.5), and do something a bit more sophisticated right after 1.5, WDYT?
Automatic merge from submit-queue
Fix concurrent read/write to map error in kubelet
Fix #37560.
The concurrent read/write is to the pod annotations. The call in apiserver.go reads the annotations, and the config.go writes the annotations. I moved the reads to config.go to avoid the race.
Automatic merge from submit-queue
Improved validation error message when env.valueFrom contains no (or …
**What this PR does / why we need it**:
A misleading error message is shown if the user mistypes (or forgets to specify) a field under env.valueFrom. The error message is: "may not have more than one field specified at a time", but there is only one (misspelled) field specified.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Improved error message for missing/misspelled field under env.valueFrom
```
Automatic merge from submit-queue
Mention overflow when function FromInt is mistakenly called with a too-large value
**What this PR does / why we need it**:
Mistakenly calling this method with a value that overflows int32 causes strange behavior in some environments (maybe on amd64 systems; I'm not sure, but my test shows that).
For example, calling FromInt(93333333333) results in -1155947179 and does not mention the overflow.
Automatic merge from submit-queue
Removes shorthand flag -w from kubectl apply
Fixes#37342.
A shorthand flag `-w` was introduced for the `--prune-whitelist` flag of kubectl apply two weeks ago. It turned out not to be what we should do. Removing this shorthand flag before the 1.5 release to prevent further issues.
@ymqytw @pwittrock
Automatic merge from submit-queue
Update kubectl drain help message
Update `kubectl drain` help messages according to kubernetes/kubernetes.github.io#1768
cc: @erictune @pwittrock
Automatic merge from submit-queue
Curating Owners: pkg/apis
cc @lavalamp @smarterclayton @erictune @thockin @bgrant0607
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people that have made mechanical code changes (e.g. change the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable, please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Curating Owners: pkg/api
cc @lavalamp @smarterclayton @erictune @thockin @bgrant0607
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people that have made mechanical code changes (e.g. change the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable, please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Curating Owners: pkg/proxy
cc @thockin
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people that have made mechanical code changes (e.g. change the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable, please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Reduce verbosity of volume reconciler
**What this PR does / why we need it**:
It reduces the log verbosity for attaching of volumes
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Reduce verbosity of volume reconciler when attaching volumes
```
Set the logging level for information about attaching of volumes from 1 to 4.
Otherwise the log is spammed with one line per 100ms while attaching is
in progress, and afterwards for as long as the volume is attached.
Automatic merge from submit-queue
kubelet: eviction: add memcg threshold notifier to improve eviction responsiveness
This PR adds the ability for the eviction code to get immediate notification from the kernel when the available memory in the root cgroup falls below a user-defined threshold, controlled by setting the `memory.available` signal with the `--eviction-hard` flag.
This PR by itself doesn't change anything, as the frequency at which new stats can be obtained is currently controlled by the cadvisor housekeeping interval. That being the case, the call to `synchronize()` by the notification loop will very likely get stale stats and not act any more quickly than it does now.
However, whenever cadvisor does get on-demand stat gathering ability, this will improve eviction responsiveness by getting async notification of the root cgroup memory state rather than relying on polling cadvisor.
@vishh @derekwaynecarr @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Change stickyMaxAge from seconds to minutes, fixes issue #35677
**What this PR does / why we need it**: Increases the service sessionAffinity time from 180 seconds to 180 minutes for proxy mode iptables, which was a bug introduced in a refactor.
**Which issue this PR fixes**: fixes #35677
**Special notes for your reviewer**:
**Release note**:
``` release-note
Fixed wrong service sessionAffinity stickiness time from 180 sec to 180 minutes in proxy mode iptables.
```
Since there is no test for the sessionAffinity feature at the moment, I wanted to create one, but I don't know how.
Automatic merge from submit-queue
Add more logging around Pod deletion
After this PR we'll have at least V(2)-level logs near all Pod deletions.
@saad-ali - this is required by GKE to help with diagnosing possible problems.
cc @dchen1107 @wojtek-t
Automatic merge from submit-queue
Read all resources for finalization and gc, not just preferred
Fixes #31481.
Currently when starting the namespace controller or garbage collector we only gather preferred-version resources, which in the case of multiple versions (you guessed it, `batch/v2alpha1.CronJobs` again) isn't sufficient. This PR adds an additional method which retrieves all resources from all versions and works on them.
@kubernetes/sig-api-machinery ptal
Automatic merge from submit-queue
Revert "Use Gid when provisioning Gluster Volumes."
On further inspection the design in #35460 was not secure enough. This PR reverts the change.
This reverts commit 7a0d219d12.
Automatic merge from submit-queue
Fields with omitempty tag should still be considered as optional
We've added an "+optional" tag while ago for optional fields. Before that OpenAPI spec generated assumed all fields with "omitempty" in their json tags are optional. This should be still the case (as well as +optional tag) until these two things happen:
- We update all documentation asking developers to use +optional (My bad, I should have added this after the +optional PR)
- We fix swagger 1.2 spec generator to use +optional instead of omitempty.
Fixes#37149
Automatic merge from submit-queue
make groupVersionResource listing dynamic for namespace controller
@derekwaynecarr @kubernetes/sig-api-machinery
```release-note
Third party resources are now deleted when a namespace is deleted.
```
Fixes https://github.com/kubernetes/kubernetes/issues/32306
Automatic merge from submit-queue
Add logs near force deletions of Pods
We should always log something when control plane force deletes the Pod.
@davidopp I think that logging force deletions is enough, or do you think we should log soft deletions as well?
cc @deads2k
Automatic merge from submit-queue
Fix kubectl Strategic Merge Patch compatibility
As @smarterclayton pointed out in [comment1](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290820) and [comment2](https://github.com/kubernetes/kubernetes/pull/35647#pullrequestreview-8290847) in PR #35647,
we cannot assume the API servers publish their version, or that they share the same version.
This PR removes all the calls to GetServerSupportedSMPatchVersion().
It changes the behavior of `apply` and `edit` to retry with the old patch version if the new version fails,
and defaults other usages of SMPatch to the new version, since they don't update lists of primitives.
fixes #36916
cc: @pwittrock @smarterclayton
Automatic merge from submit-queue
OpenAPI Bugfix: []byte should be treated as integer array
The data field of v1.Secret is a map of string to byte array. The generated spec should produce a map of string to (type="string", format="byte"), i.e. a map of base64 strings; however, the current code converts it to an array of integers, which is wrong.
fixes #37126
Automatic merge from submit-queue
Add limited config-map support to kube-dns
This is an integration bugfix for https://github.com/kubernetes/kubernetes/issues/36194
```release-note
kube-dns
Added --config-map and --config-map-namespace command line options.
If --config-map is set, kube-dns will load dynamic configuration from the config map
referenced by --config-map-namespace, --config-map. The config-map supports
the following properties: "federations".
--federations flag is now deprecated. Prefer to set federations via the config-map.
Federations can be configured by setting the "federations" field to the value currently
set in the command line.
Example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-dns
  namespace: kube-system
data:
  federations: abc=def
```
Automatic merge from submit-queue
azure: support nics with multiple ipconfigs
**What this PR does / why we need it**:
When I initially wrote the cloudprovider, the ipconfig primary field either wasn't present or wasn't populated. Now it is, and someone trying to use kubelet on a node whose NIC has multiple ipconfigs ran into this.
**Which issue this PR fixes**: n/a no issue filed.
**Special notes for your reviewer**:
**Release note**:
```release-note
azure: support multiple ipconfigs on a NIC
```
If we can get this backported to 1.4.x, that would be great.
- Adds command line flags --config-map, --config-map-ns.
- Fixes 36194 (https://github.com/kubernetes/kubernetes/issues/36194)
- Update kube-dns yamls
- Update bazel (hack/update-bazel.sh)
- Update known command line flags
- Temporarily reference new kube-dns image (this will be fixed with
a separate commit when the DNS image is created)
Automatic merge from submit-queue
fix issue in converting aws volume id from mount paths
This PR is to fix the issue in converting aws volume id from mount
paths. Currently there are three aws volume id formats supported. The
following lists example of those three formats and their corresponding
global mount paths:
1. aws:///vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/vol-123456)
2. aws://us-east-1/vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
3. vol-123456
(/var/lib/kubelet/plugins/kubernetes.io/mounts/aws/us-east-1/vol-123456)
For the first two cases, we need to check the mount path and convert
them back to the original format.
This PR fixes #36269
Automatic merge from submit-queue
Bug fix: Allows user to change service type when sourceRanges is declared.
Fixes #36784.
Adds logic in validation to make changing service type possible when sourceRanges is declared.
@bowei @bprashanth
Automatic merge from submit-queue
Add fast-path for Listing with ResourceVersion=0
We slightly change the behavior, but we keep the current contract, so release note is not needed.
cc @saad-ali
Automatic merge from submit-queue
Fix recycler pod deletion race.
We should use a clone of the recycler pod template instead of reusing the same
one for two or more recyclers running in parallel.
Also add some logs in relevant places to spot the error easily next time.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1392338
Automatic merge from submit-queue
Append newline to the "deleted context ... " and "deleted cluster" message
**What this PR does / why we need it**: Append newline to the "deleted context ... " message.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #35966
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
When finishing a rolling update, a message is shown:
'Renaming oldRc to newRc'
In my case that would be:
'Renaming nginx to nginx-df8d9fafc5007d83968cd37c8353d52e'
That naming order is incorrect, as the new name is temporary and the final name should be the old one.
Automatic merge from submit-queue
remove TPR registration, ease validation requirements
Fixes https://github.com/kubernetes/kubernetes/issues/36007 .
This removes the special casing for TPRs inside of the `UnstructuredObject`, which should allow CRUD against skewed kube api server levels.
@kubernetes/kubectl @kubernetes/sig-cli
@janetkuo
Automatic merge from submit-queue
fix leaking memory backed volumes of terminated pods
Currently, we allow volumes to remain mounted on the node, even though the pod is terminated. This creates a vector for a malicious user to exhaust memory on the node by creating memory backed volumes containing large files.
This PR removes memory backed volumes (emptyDir w/ medium Memory, secrets, configmaps) of terminated pods from the node.
@saad-ali @derekwaynecarr
Automatic merge from submit-queue
Adding statefulset to the list of things kubectl says it knows about
**What this PR does / why we need it**: Adding statefulset to the list of things kubectl says it knows about.
**Special notes for your reviewer**:
**Release note**:
<!-- Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access)
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`.
-->
```release-note
NONE
```
cc @kubernetes/sig-apps @erictune
Automatic merge from submit-queue
Fix hostname truncate.
Fixes https://github.com/kubernetes/kubernetes/issues/36951.
This PR will keep truncating the hostname until the ending character is valid.
/cc @kubernetes/sig-node
Mark v1.5 because this is a bug fix.
/cc @saad-ali
Automatic merge from submit-queue
make kubectl create --edit iterate
`kubectl create --edit` is broken after #36148 merged:
`kubectl create --edit` fails when a manifest contains multiple resources.
I guess the root cause is that the dynamic typer doesn't currently support a list of resources.
This PR makes `kubectl create --edit` iterate over resources again, as `kubectl create` does.
only extract ReadSpecificDockerConfigJsonFile from function ReadDockerConfigJSONFile
put error checking and logging in the loop above
godoc, gofmt, and return dockercfg directly
Automatic merge from submit-queue
Handle Empty clusterCIDR
**What this PR does / why we need it**:
Handles empty clusterCIDR by skipping the corresponding rule.
**Which issue this PR fixes**
fixes #36652
**Special notes for your reviewer**:
1. Added test to check for presence/absence of XLB to SVC rule
2. Changed an error statement to log rules along with the error string in case of a failure; This ensures that full debug info is available in case of iptables-restore errors.
An empty clusterCIDR causes generation of invalid rules.
Fixes issue #36652
We have observed that, after failing to create a container due to "device or
resource busy", docker may end up having inconsistent internal state. One
symptom is that docker will not report the existence of the "failed to create"
container, but if kubelet tries to create a new container with the same name,
docker will error out with a naming conflict message.
To work around this, this commit parses the creation error message and, if there
is a naming conflict, attempts to remove the existing container.