(*) Fix cleanup of NodePort resources. (*) Fix the logic to select existing policies
Fix review comment
Fix Bazel
Update GoDep License
Fix NodePort forwarding to target port
Fix Darwin Build break. +1
Implement IsCompatible to validate kernel support for kernel mode
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
api: update progressdeadlineseconds comment for deployments
@kubernetes/sig-apps-api-reviews we may never end up doing autorollback - this drops the comment from the pds field for now
Automatic merge from submit-queue (batch tested with PRs 52176, 43152). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Eliminate hangs/throttling of node heartbeat
Fixes https://github.com/kubernetes/kubernetes/issues/48638. Fixes #50304.
Stops the kubelet from wedging when updating node status if it is unable to establish a TCP connection.
Note that this only affects the node status loop. The pod sync loop would still hang until the dead TCP connections timed out, so more work is needed to keep the sync loop responsive in the face of network issues, but this change lets existing pods coast without the node controller trying to evict them.
```release-note
kubelet to master communication when doing node status updates now has a timeout to prevent indefinite hangs
```
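A minimal sketch of the idea, assuming client-go's `rest.CopyConfig` and `rest.Config.Timeout` (the package and function names below are illustrative; the kubelet's actual wiring differs): give the client used for node status updates its own request timeout so a dead TCP connection cannot stall the heartbeat loop.
```go
package nodestatus

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newHeartbeatClient builds a client dedicated to node status updates with a
// bounded per-request timeout, so a wedged TCP connection cannot hang the loop.
func newHeartbeatClient(base *rest.Config, nodeStatusUpdateFrequency time.Duration) (kubernetes.Interface, error) {
	cfg := rest.CopyConfig(base)            // leave the client used by the pod sync loop untouched
	cfg.Timeout = nodeStatusUpdateFrequency // each status update request must finish within one period
	return kubernetes.NewForConfig(cfg)
}
```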
Automatic merge from submit-queue (batch tested with PRs 52486, 52588, 52524). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix nil pointer dereference in cri stats provider when there are no image file systems
**What this PR does / why we need it**:
This PR fixes a nil pointer dereference in CRI stats provider when there are no image filesystems. See https://github.com/kubernetes/kubernetes/pull/51152 for discussion.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
/cc yujuhong yguo0905
The following changes are proposed for the iptables proxier:
* There are three places where a string specifying IP:port is parsed
using something like this:
if index := strings.Index(e.endpoint, ":"); index != -1 {
This will fail for IPv6, since IPv6 addresses contain colons. Also, an
IPv6 address is expected to be surrounded by square brackets
(i.e. []:). Fix this by replacing the call to Index() with a
call to LastIndex() and stripping out the square brackets (see the sketch after this list).
* The String() method for the localPort struct should put square brackets
around IPv6 addresses.
* The logging in the merge() method for proxyServiceMap should put brackets
around IPv6 addresses.
* There are several places where the filterRules destination is hardcoded to
/32. This should be /128 in the IPv6 case.
* Add IPv6 unit test cases
fixes #48550
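A minimal sketch of the parsing approach from the first bullet above, assuming plain `ip:port` / `[ipv6]:port` strings (illustrative, not the proxier's exact code):
```go
package main

import (
	"fmt"
	"strings"
)

// splitEndpoint splits an "ip:port" string. Using LastIndex (instead of Index)
// keeps IPv6 colons from being mistaken for the host/port separator, and the
// square brackets around an IPv6 address are stripped from the result.
func splitEndpoint(endpoint string) (ip, port string, ok bool) {
	index := strings.LastIndex(endpoint, ":")
	if index == -1 {
		return "", "", false
	}
	ip = strings.Trim(endpoint[:index], "[]")
	port = endpoint[index+1:]
	return ip, port, true
}

func main() {
	fmt.Println(splitEndpoint("10.0.0.1:8080"))  // 10.0.0.1 8080 true
	fmt.Println(splitEndpoint("[fd00::1]:8080")) // fd00::1 8080 true
}
```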
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Changes the node cloud controller to use its name for events
**What this PR does / why we need it**:
Updates the event recorder component to be `cloud-node-controller` instead of `cloudcontrollermanager`, which aligns with how other controllers are set up: for example, the daemonset controller uses `daemonset-controller`, and the CCM uses `cloud-controller-manager`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Use `cloud-node-controller` for cloud node controller events
```
/cc @luxas @wlan0 @jhorwit2
If terminationMessagePath is set to a file that does not exist, we
should not log an error message and instead try falling back to logs
(based on the user's request).
Automatic merge from submit-queue (batch tested with PRs 51796, 52223)
Fix pod and node names switched around in error message.
**What this PR does / why we need it**: This PR fixes a pod name and a node name switched around in an error message from the daemon controller.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: No issue that I know of
**Special notes for your reviewer**: -
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
Fixes device plugin re-registration handling logic to make sure:
- If a device plugin exits, its exported resource will be removed.
- No capacity change if a new device plugin instance comes up to replace the old instance.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/52510
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
fix azure disk mounter issue
**What this PR does / why we need it**:
fix azure disk mounter issue, it's a P1 bug, it exists in 1.7, 1.8 release, should cherry pick to 1.7, 1.8
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #52261
Consider the following issue:
1) A pod mounting an azure disk in a k8s agent
2) The kubelet is restarted in that k8s agent
3) The pod cannot start up; it always reports an error like the following:
4d 1m 3065 kubelet, 14777acs9000 Warning FailedMount MountVolume.SetUp failed for volume "pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c" : azureDisk - Not a mounting point for disk andykubewin175-dynamic-pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c on \var\lib\kubelet\pods\d146c023-92c7-11e7-b86b-000d3a36d70c\volumes\kubernetes.io~azure-disk\pvc-7a0cdeb9-92c7-11e7-b86b-000d3a36d70c
4d 1m 3157 kubelet, 14777acs9000 Warning FailedMount Error syncing pod
**Special notes for your reviewer**:
If you take a look at the corresponding implementations for vSphere or GCE, they return nil instead of an error:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/vsphere_volume/vsphere_volume.go#L217-L220
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/gce_pd/gce_pd.go#L273-L275
The return value is parsed by the logic here, so it is wrong to return an error:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/operationexecutor/operation_generator.go#L469-L475
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 52452, 52115, 52260, 52290)
Fix support for updating quota on update
This PR implements support for properly handling quota when resources are updated. We never take negative values and add them up.
Fixes https://github.com/kubernetes/kubernetes/issues/51736
cc @derekwaynecarr
/sig storage
```release-note
Make sure that resources being updated are handled correctly by Quota system
```
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)
fix issue (#47976): Invalid value error when creating service from exported config
**What this PR does / why we need it**:
close issue #47976
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)
Plumbing the proxy dialer to the webhook admission plugin
* Fixing https://github.com/kubernetes/kubernetes/issues/49987. Plumb the `Dial` function to the `transport.Config`
* Fixing https://github.com/kubernetes/kubernetes/issues/52366. Let the webhook admission plugin set the `TLSConfig.ServerName`.
I tested it in my gke setup. I don't have time to implement an e2e test before 1.8 release. I think it's ok to add the test later, because *i)* the change only affects the alpha webhook admission feature, and *ii)* the webhook feature is unusable without the fix. That said, it's up to my reviewer to decide.
Filed https://github.com/kubernetes/kubernetes/issues/52368 for the missing e2e test.
( The second commit is https://github.com/kubernetes/kubernetes/pull/52372, which is just a cleanup of client configuration in e2e tests. It removed a function that marshalled the client config to json and then unmarshalled it. It is a prerequisite of this PR, because this PR added the `Dial` function to the config which is not json marshallable.)
```release-note
Fixed the webhook admission plugin so that it works even if the apiserver and the nodes are in two networks (e.g., in GKE).
Fixed the webhook admission plugin so that webhook author could use the DNS name of the service as the CommonName when generating the server cert for the webhook.
Action required:
Anyone who generated a server cert for admission webhooks needs to regenerate the cert. Previously, when generating the server cert for an admission webhook, the CN value did not matter. Now you must set it to the DNS name of the webhook service, i.e., `<service.Name>.<service.Namespace>.svc`.
```
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Make CPU manager release CPUs when Pod enters completed phase.
**What this PR does / why we need it**: When CPU manager is enabled, this PR releases allocated CPUs when container is not running and is non-restartable.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52351
**Special notes for your reviewer**:
This bug is only reproduced for pods with `restartPolicy` = `Never` or `OnFailure`. The following output is from a 4 CPU node. This bug can be reproduced as long as >= half the cores are requested.
pod1.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod1
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
pod2.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod2
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
Run a local Kubernetes cluster with CPU manager enabled.
```sh
KUBELET_FLAGS='--feature-gates=CPUManager=true --cpu-manager-policy=static --cpu-manager-reconcile-period=1s --kube-reserved=cpu=500m' ./hack/local-up-cluster.sh
```
_Before:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete and wait another 90 seconds (give enough time for GC to kick-in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME READY STATUS RESTARTS AGE
test-pod1 0/1 Completed 0 1m
test-pod2 0/1 not enough cpus available to satisfy request 0 9s
```
_After:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete and wait another 90 seconds (give enough time for GC to kick-in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME READY STATUS RESTARTS AGE
test-pod1 0/1 Completed 0 1m
test-pod2 0/1 Completed 0 9s
```
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Ignore pods for quota marked for deletion whose node is unreachable
**What this PR does / why we need it**:
Traditionally, we charge to quota all pods that are in a non-terminal phase. We have a user report that noted the behavior change in kube 1.5 for the node controller to no longer force delete pods whose nodes have been lost. Instead, the pod is marked for deletion, and the reason is updated to state that the node is unreachable. The user expected the quota to be released. If the user was at their quota limit, their application may not be able to create a new replica given the current behavior. As a result, this PR ignores pods marked for deletion that have exceeded their grace period.
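A minimal sketch of the check described above, using the core v1 pod fields (the helper name and package are illustrative, not the actual quota evaluator code):
```go
package quota

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// podPastDeletionGracePeriod reports whether a pod has been marked for deletion
// and its grace period has elapsed; such pods are no longer charged to quota.
func podPastDeletionGracePeriod(pod *v1.Pod, now time.Time) bool {
	if pod.DeletionTimestamp == nil || pod.DeletionGracePeriodSeconds == nil {
		return false
	}
	deadline := pod.DeletionTimestamp.Add(time.Duration(*pod.DeletionGracePeriodSeconds) * time.Second)
	return now.After(deadline)
}
```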
**Which issue this PR fixes**
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455743
fixes https://github.com/kubernetes/kubernetes/issues/52436
**Release note**:
```release-note
Ignore pods marked for deletion that exceed their grace period in ResourceQuota
```
The Windows kernel now exposes "Internal Load Balancing"
using VFP (Virtual Filtering Platform), part of the Virtual Switch. A built-in
Windows service, HNS (Host Networking Service), acts as the interface to program
the VFP. VFP is comparable to iptables in functionality. HNS takes JSON-based
data as input.
With the help of the interface available in github.com/Microsoft/hcsshim,
these APIs are exposed so that HNS can be programmed and the feature used.
*** More info about the changes in this PR ***
(1) For every endpoint available in the system, an HNS Endpoint is added.
(1.a) For local endpoints, a local HNS Endpoint already exists, created as part of
container creation.
(1.b) For all remote endpoints, a remote HNS Endpoint is created via HNS.
(2) For every Service, an HNS ILB LoadBalancer is added referring to the endpoints
created in (1).
Sample Input to HNS:
{
  "Policies": [
    {
      "ExternalPort": 80,
      "InternalPort": 80,
      "Protocol": 6,
      "Type": "ELB",
      "VIPs": [
        "11.0.98.129"
      ]
    }
  ],
  "References": [
    "/endpoints/ca8b877b-ab90-499a-bc0e-7d736c425632",
    "/endpoints/ee0ef08b-8434-4f8b-b748-393884e77465"
  ]
}
(2.a) This is done for Cluster IP, LoadBalancer Ingress IP, NodePort, and External IP.
Following the regular service and endpoint updates,
the HNS is notified of the updates and the system is kept in sync.
Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)
Workaround go-junit-report bug for TestApps
**What this PR does / why we need it**: Fix output from pkg/kubectl/apps/TestApps unit test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51253
**Special notes for your reviewer**: Literally copy-pasta of the approach taken in #45320. Maybe a sign that this should be extracted into something shared. I'm just trying to see if we can make https://k8s-testgrid.appspot.com/kubernetes-presubmits and https://k8s-testgrid.appspot.com/release-master-blocking a little more green for now.
```release-note
NONE
```
- If a device plugin exits, its exported resource will be removed.
- No capacity change if a new device plugin instance comes up to replace the old instance.
Move negative check for testing "not patched" output to test-cmd-util.sh
as exiting with code 1 was causing patch_test.go to fail when the error
was expected as part of the test.
This implements stats for Windows nodes in a new package, winstats.
WinStats exports methods to get cAdvisor-like data structures, but
with Windows-specific metrics. WinStats only gets node-level metrics and
information; container stats will go via the CRI. This enables the
use of the summary API to get metrics for Windows nodes.
Automatic merge from submit-queue
Fix swallowed errors in various volume packages
**What this PR does / why we need it**: Fixes swallowed errors in various volume packages.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Minor fixes to validation test
Some test cases confuse the new object with the old object. This PR fixes that. Also added a test to verify that deletionTimestamp cannot be added (via the REST endpoints).
Fixes #49256
When `ScalingLimited = true` for the `hpa`, there is an accompanying
status message, describing why scaling is limited. Previously if the desired
replica count was 0, and spec.minReplicas > 0, the status message
indicated "the desired replica count was less than the min replica
count". This was particularly confusing when `spec.MinReplicas = 1`. If
there was no `spec.minReplicas`, then the status message indicated "the
desired replica count was zero" which is more informative.
Update the calculation of the status message so that if the desired replica
count is 0, we always display the clearer "the desired replica count was
zero" status message, even if spec.minReplicas > 0.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
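An illustrative sketch of the message selection described above (the real HPA controller structures this differently, and the function and message strings here are assumptions):
```go
package podautoscaler

// scalingLimitedMessage picks the status message for a scaling-limited HPA.
// The "zero" message always wins when the desired count is zero, even if
// spec.minReplicas > 0, since that is the clearer explanation.
func scalingLimitedMessage(desiredReplicas, minReplicas int32) string {
	switch {
	case desiredReplicas == 0:
		return "the desired replica count is zero"
	case desiredReplicas < minReplicas:
		return "the desired replica count is less than the minimum replica count"
	default:
		return "the desired replica count is within the acceptable range"
	}
}
```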
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
'*' is valid for allowed seccomp profiles
**What this PR does / why we need it**:
This should be valid on a PodSecurityPolicy, but is currently rejected:
```
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
```
**Which issue this PR fixes**: fixes #52300
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
dockershim: check if f.Sync() returns an error and surface it
```release-note
dockershim: check the error when syncing the checkpoint.
```
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
Prevent enabling alpha APIs by default
related to #47691
This is a follow up to #51839 to add a check that we do not enable alpha APIs by default
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
Log at higher verbosity levels some common SyncPod errors
This log message was 90% of all glog.Errorf level statements reported on a production cluster, hiding other more impactful errors. We already log it in start container, but for extra caution we continue to log it at v(3) here (the downside of not logging a start container error is worse than some log spam at higher levels).
HandleError() is intended only for unknown and unexpected errors.
```release-note
NONE
```
@derekwaynecarr @sjenning
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
[BugFix] Soft Eviction timer works correctly
fixes #51516
thresholdsMet should not exclude previously met thresholds when we do not have new stats for a threshold.
/assign @vishh @derekwaynecarr
cc @kubernetes/sig-node-bugs
Automatic merge from submit-queue
fix kubectl set env --list description
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue
Azuredisk mount on windows node
**What this PR does / why we need it**:
This PR enables azure disk on Windows nodes; customers can create a pod mounted with an azure disk on a Windows node.
There are a few pending items still left:
1) The current fstype is forced to NTFS; this can change if such a requirement comes up.
2) The GetDeviceNameFromMount function is not implemented (empty) because on Linux we can use "cat /proc/mounts" to read all mount points easily, but on Windows there is no such place and I am still figuring it out. The empty function causes a few warning logs, but it does not affect the main logic for now.
**Special notes for your reviewer**:
1. This PR depends on https://github.com/kubernetes/kubernetes/pull/51240, which allows Windows mount paths in config validation.
2. There is a bug in docker on Windows (https://github.com/moby/moby/issues/34729): the ContainerPath can currently only be a drive letter (e.g. D:); a directory path would fail in the end.
An example pod with a mount path looks like this:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-uses-shared-hdd-5g
  labels:
    name: storage
spec:
  containers:
  - image: microsoft/iis
    name: az-c-01
    volumeMounts:
    - name: blobdisk01
      mountPath: 'F:'
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: blobdisk01
    persistentVolumeClaim:
      claimName: pv-dd-shared-hdd-5
```
**Release note**:
```release-note
```
Automatic merge from submit-queue
Update set image description to remove job from resources that can update container image
**What this PR does / why we need it**:
This addresses the comment raised in https://github.com/kubernetes/kubernetes/issues/48388#issuecomment-322500960 by @harrissAvalon
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Fix pagesize mount option name
**What this PR does / why we need it**:
Fixes #52337.
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Glusterfs expands in units of GB not GiB
When expanding glusterfs volumes, we should use GB units not GiB. More information - https://github.com/heketi/heketi/wiki/API
Fixes https://github.com/kubernetes/kubernetes/issues/52298
```release-note
Fixes Glusterfs storage allocation units
```
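A small sketch of the unit handling described above, assuming a round-up helper (illustrative names, not the plugin's exact code):
```go
package glusterfs

// bytesToGB rounds a requested size in bytes up to whole GB (10^9), since the
// heketi API expects sizes in GB; rounding up guarantees the created volume is
// never smaller than what the user asked for.
func bytesToGB(sizeBytes int64) int64 {
	const gb = int64(1000 * 1000 * 1000)
	return (sizeBytes + gb - 1) / gb
}
```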
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Use cAdvisor constant for crio imagefs
**What this PR does / why we need it**:
code hygiene to use a constant from cAdvisor
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52007, 52196, 52169, 52263, 52291)
Remove links to GCE/AWS cloud providers from PersistentVolumeController
**What this PR does / why we need it**:
We should be able to build a cloud-controller-manager without having to
pull in code specific to the GCE and AWS clouds. Note that this is a tactical
fix for now; we should allow a PVLabeler to be passed into the
PersistentVolumeController, and maybe come up with better interfaces, etc. Since
it is too late to do all that for 1.8, we just move cloud-specific code
to where it belongs, and we check for the PVLabeler method and use it where
needed.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #51629
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
fsync config checkpoint files after writing
@yujuhong brought up that it's possible for a hard reboot to result in empty checkpoint files, if they haven't been synced to disk yet. This PR ensures that Kubelet configuration checkpoints are synced after writing to avoid this issue.
fixes #52222
**Release note**:
```release-note
NONE
```
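A minimal sketch of the write-then-fsync pattern described above (illustrative helper, not the kubelet's exact checkpoint code):
```go
package checkpoint

import "os"

// writeCheckpoint writes the checkpoint bytes and fsyncs the file before
// returning, so a hard reboot cannot leave a truncated or empty checkpoint.
func writeCheckpoint(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return err
	}
	return f.Sync() // surface the sync error instead of ignoring it
}
```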
Automatic merge from submit-queue (batch tested with PRs 52264, 51870)
Use credentials from providers for docker sandbox image
**What this PR does / why we need it**:
Sandbox image lookup uses creds from docker config only; other credential providers are ignored. This is a regression introduced in dockershim.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51293
**Special notes for your reviewer**:
Should also cherry-pick this to release-1.6 and release-1.7.
**Release note**:
```release-note
Fix credentials providers for docker sandbox image.
```
add initial work for mount azure file on windows
fix review comments
full implementation for attach azure file on windows node
working azure file mount
remove useless functions
add a workable implementation about mounting azure file on windows node
fix review comments and make pod creation succeed even if the azure file mount failed
fix according to review comments
add mount_windows_test
add implementation for IsLikelyNotMountPoint func
remove mount_windows_test.go temporarily
add back unit test for mount_windows.go
add normalizeWindowsPath func
fix normalizeWindowsPath func issue
implement azure disk on windows
update bazel BUILD
revert validation.go change as it's another PR
fix merge issue and compiling issue
fix windows compiling issue
fix according to review comments
fix according to review comments
fix cross-build failure
fix according to review comments
fix test build failure temporarily
fix darwin build failure
fix azure windows test failure
add empty implementation of MakeRShared on windows
fix gofmt errors
As per the glusterfs documentation it can't create
volumes in GiB, and all sizes must be specified in GB.
This code was slightly buggy because we were creating
volumes of sizes smaller than the user asked for.
It happens when copying a directory to a pod and the directory
path ends with '/'.
For example:
kubectl cp /XX/XX/ XX-POD:/XX/XX
modified: pkg/kubectl/cmd/cp.go
modified: pkg/kubectl/cmd/cp_test.go
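A tiny sketch of the normalization described above (illustrative; the actual cp.go change may differ in structure):
```go
package main

import (
	"fmt"
	"strings"
)

// stripTrailingSlash removes a trailing '/' from a directory argument so the
// copied directory keeps its own name inside the pod instead of being flattened.
func stripTrailingSlash(p string) string {
	if len(p) > 1 {
		return strings.TrimSuffix(p, "/")
	}
	return p
}

func main() {
	fmt.Println(stripTrailingSlash("/XX/XX/")) // /XX/XX
}
```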
Currently some of the imports of `apimachinery` use
`k8s.io/kubernetes/staging/src/k8s.io/apimachinery...`. Replace
these with `k8s.io/apimachinery`, as is in use throughout the rest
of the code base.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
Automatic merge from submit-queue
newline to separate unimplemented TaintEffectNoScheduleNoAdmit
**What this PR does / why we need it**:
Unimplemented `TaintEffectNoScheduleNoAdmit` should not be treated as a comment on `TaintEffectNoExecute`
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
xref #49530
**Special notes for your reviewer**:
/assign @k82cn
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
Fix splitProviderID for Azure
**What this PR does / why we need it**:
#46940 added 'splitProviderID' for Azure to get the node name from the provider ID, but it captures the resource ID instead of the node name.
Functions such as NodeAddresses accept node names:
84d9778f22/pkg/cloudprovider/providers/azure/azure_instances.go (L32)
With the current implementation, it takes in a resource ID, which results in the following error:
```
E0830 04:15:09.877143 10427 azure_instances.go:63] error: az.NodeAddresses, az.getIPForMachine(/subscriptions/{id}/resourceGroups/{id}/providers/Microsoft.Compute/virtualMachines/k8s-master-0), err=instance not found
```
This fix makes it return node names instead.
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
`NONE`
@brendandburns @realfake @wlan0
Automatic merge from submit-queue (batch tested with PRs 52047, 52063, 51528)
implementation of GetZoneByProviderID and GetZoneByNodeName for azure
This is part of the #50926 effort
cc @luxas
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 52047, 52063, 51528)
Improve dynamic kubelet config e2e node test and fix bugs
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
I had been sitting on this until the cAdvisor fix merged in #51751, as these tests fail without that fix.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added large topology tests for static policy in CPU Manager.
**What this PR does / why we need it**: This PR adds a very large topology test case for the CPU Manager feature.
Related to #51180.
CC @ConnorDoyle
Automatic merge from submit-queue (batch tested with PRs 50949, 52155, 52175, 52112, 52188)
Allow watch cache to be disabled per type
Currently setting watch cache size for a given resource does not disable
the watch cache. This commit adds a new `default-watch-cache-size` flag
to map to the existing field, and refactors how watch cache sizes are
calculated to bring all of the code into one place. It also adds debug
logging to startup to allow us to verify watch cache enablement in
production.
Part of #51825
Will allow watch cache to be disabled selectively.
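A hedged sketch of the size selection this enables (assumed structure and names, not the actual apiserver code): a per-resource override wins over the default, and a size of zero disables the cache for that resource.
```go
package cachesize

// watchCacheSizeFor returns the watch cache size for a resource and whether the
// cache should be enabled at all: explicit per-resource overrides take priority
// over the default, and a size of 0 disables the watch cache for that resource.
func watchCacheSizeFor(resource string, overrides map[string]int, defaultSize int) (size int, enabled bool) {
	if s, ok := overrides[resource]; ok {
		return s, s > 0
	}
	return defaultSize, defaultSize > 0
}
```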
Automatic merge from submit-queue
Add German translation for kubectl
**What this PR does / why we need it**:
This PR provides a first attempt to translate kubectl in German (related to #40645, #45573, #45562, #40591, #46559, #50155).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
No issues
**Special notes for your reviewer**:
This PR requires German speakers to assist in the review. I'm a native German speaker with a BSc in Business Information Technology.
**Release note**:
```release-note
Adding German translation for kubectl
```
Automatic merge from submit-queue
ScaleIO - Specify SDC GUID value via node label
**What this PR does / why we need it**:
This is a ScaleIO plugin volume PR to do the following:
- Reads node label `scaleio.sdcGuid` value for the SDC GUID
- Uses value to look up the Scaleio SDC `instance ID`
- If label not found, falls back to current way of doing instance id look up now
This enhancement allows the ScaleIO plugin to work properly even if the drv_cfg binary is not installed on the kubelet node.
**Special Notes**
Associated issue: #51537. Closes #51537.
```release-note
The ScaleIO volume plugin can now read the SDC GUID value from the node label scaleio.sdcGuid; if the drv_cfg binary is not installed, the plugin will still work properly; if the node label is not found, it falls back to drv_cfg if installed.
```
We should be able to build a cloud-controller-manager without having to
pull in code specific to the GCE and AWS clouds. Note that this is a tactical
fix for now; we should allow a PVLabeler to be passed into the
PersistentVolumeController, and maybe come up with better interfaces, etc. Since
it is too late to do all that for 1.8, we just move cloud-specific code
to where it belongs, and we check for the PVLabeler method and use it where
needed.
Fixes #51629
Currently setting watch cache size for a given resource does not disable
the watch cache. This commit adds a new `default-watch-cache-size` flag
to map to the existing field, and refactors how watch cache sizes are
calculated to bring all of the code into one place. It also adds debug
logging to startup to allow us to verify watch cache enablement in
production.
Automatic merge from submit-queue
Fix deployment timeout reporting
If the previous condition has been a successful rollout then we
shouldn't try to estimate any progress. Scenario:
* progressDeadlineSeconds is smaller than the difference between
now and the time the last rollout finished in the past.
* the creation of a new ReplicaSet triggers a resync of the
Deployment prior to the cached copy of the Deployment getting
updated with the status.condition that indicates the creation
of the new ReplicaSet.
The Deployment will be resynced and eventually its Progressing
condition will catch up with the state of the world.
Fixes https://github.com/kubernetes/kubernetes/issues/49637
I will also cherry-pick this back to 1.7.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51900, 51782, 52030)
apiservers: stratify versioned informer construction
The versioned shared informer factory has been part of the GenericApiServer config,
but its construction depended on other fields of that config (e.g. the loopback
client config). Hence, the order of changes to the config mattered.
This PR stratifies this by moving the SharedInformerFactory from the generic Config
to the CompleteConfig struct. Hence, it is only filled during completion when it is
guaranteed that the loopback client config is set.
While doing this, the CompletedConfig construction is made more type-safe again,
i.e. the use of SkipCompletion() is considerably reduced. This is achieved by
splitting the derived apiserver Configs into the GenericConfig and the ExtraConfig
part. Then the completion is structural again because CompleteConfig is again
of the same structure: generic CompletedConfig and local completed ExtraConfig.
Fixes #50661.
If the previous condition has been a successful rollout then we
shouldn't try to estimate any progress. Scenario:
* progressDeadlineSeconds is smaller than the difference between
now and the time the last rollout finished in the past.
* the creation of a new ReplicaSet triggers a resync of the
Deployment prior to the cached copy of the Deployment getting
updated with the status.condition that indicates the creation
of the new ReplicaSet.
The Deployment will be resynced and eventually its Progressing
condition will catch up with the state of the world.
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
Automatic merge from submit-queue (batch tested with PRs 52091, 52071)
Bugfix: Improve how JobController use queue for backoff
**What this PR does / why we need it**:
In some cases, the backoff delay for a given Job is reset unnecessarily.
The PR improves how the JobController uses the queue for backoff (see the sketch below):
- Centralize the key "forget" and "re-queue" handling in only one method.
- Change the signature of the syncJob method so that it returns whether
it is necessary to forget the backoff delay for a given
key.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Links to #51153
**Special notes for your reviewer**:
**Release note**:
```release-note
```
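An illustrative sketch of the centralized forget/requeue pattern described in the list above (the queue comes from client-go's workqueue package; the function and variable names are assumptions, not the actual JobController code):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// processNextItem is the single place that decides whether to reset (Forget)
// or extend (AddRateLimited) the backoff for a key, based on what syncJob reports.
func processNextItem(queue workqueue.RateLimitingInterface, syncJob func(key string) (forget bool, err error)) bool {
	item, quit := queue.Get()
	if quit {
		return false
	}
	defer queue.Done(item)

	key := item.(string)
	forget, err := syncJob(key)
	switch {
	case err != nil:
		queue.AddRateLimited(key) // keep backing off this key
	case forget:
		queue.Forget(key) // reset the backoff delay for this key
	}
	return true
}

func main() {
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	q.Add("default/example-job")
	processNextItem(q, func(key string) (bool, error) {
		fmt.Println("synced", key)
		return true, nil
	})
}
```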
Automatic merge from submit-queue (batch tested with PRs 48552, 51876)
Disable default paging in list watches
For 1.8 this will be off by default. In 1.9 it will be on by default.
Add tests and rename some fields to use the `chunking` terminology.
Note that the pager may be used for other things besides chunking.
Follow on to #48921, we left the field on to get some exercise in the normal code paths, but needs to be disabled for 1.8.
@liggitt let's merge on wednesday.
Automatic merge from submit-queue
GCE: Bubble IP reservation error to the user when the address is specified.
This PR improves the debug-ability of internal load balancers when an IP fails to be reserved. I'm mostly worried about the case when the subnetwork URL is wrong or referencing a shared network from another project which isn't yet supported. As you can see from line 160, I had originally planned to surface the reservation error, but printed the wrong error.
**Special notes for your reviewer**:
/assign @yujuhong
Please apply 1.8 milestone.
**Release note**:
```release-note
NONE
```
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
Automatic merge from submit-queue (batch tested with PRs 52097, 52054)
Move paused deployment e2e tests to integration
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51239, 51644, 52076)
do not update init containers status if terminated
fixes #29972 #41580
This fixes an issue where, if a completed init container is removed while the pod or subsequent init containers are still running, the status for that init container will be reset to `Waiting` with `PodInitializing`.
This can manifest in a number of ways.
If the init container is removed while the main pod containers are running, the status is reset with no functional problem, but the status will be reported incorrectly in `kubectl get pod`, for example.
If the init container is removed while a subsequent init container is running, the init container will be **re-executed**, leading to all manner of badness.
@derekwaynecarr @bparees
Automatic merge from submit-queue (batch tested with PRs 51239, 51644, 52076)
Fix swallowed error in registrytest
**What this PR does / why we need it**: Fixes a swallowed error in the registrytest package.
```release-note
NONE
```
The commit allows the ScaleIO volume plugin to read the SDC GUID value from a node label.
If the drv_cfg binary is not installed, the plugin will still work properly.
If the node label is not found, it falls back to drv_cfg if installed.
Automatic merge from submit-queue
Fix proxied request-uri to be valid HTTP requests
Fixes #52022, introduced in 1.7. Stringifying/re-parsing the URL masked that the path was not constructed with a leading `/` in the first place.
This makes upgrade requests proxied to pods/services via the API server proxy subresources be valid HTTP requests
```release-note
Fixes an issue with upgrade requests made via pod/service/node proxy subresources sending a non-absolute HTTP request-uri to backends
```
Centralize the key "forget" and "requeue" handling in only one method.
Change the signature of the syncJob method so that it returns whether
it is necessary to forget the backoff delay for a given
key.
For 1.8 this will be off by default. In 1.9 it will be on by default.
Add tests and rename some fields to use the `chunking` terminology.
Note that the pager may be used for other things besides chunking.
Automatic merge from submit-queue (batch tested with PRs 51728, 49202)
Fix setNodeAddress when a node IP and a cloud provider are set
**What this PR does / why we need it**:
When a node IP is set and a cloud provider returns the same address with
several types, only the first address was accepted. With the changes made
in PR #45201, the vSphere cloud provider returned the ExternalIP first,
which led to a node without any InternalIP.
The behaviour is modified to return all the address types for the
specified node IP.
**Which issue this PR fixes**: fixes #48760
**Special notes for your reviewer**:
* I'm not a golang expert; is it possible to mock `kubelet.validateNodeIP()` to avoid the need for real host interface addresses in the test?
* It would be great to have it backported for a next 1.6.8 release.
**Release note**:
```release-note
NONE
```
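A minimal sketch of the modified behaviour described above, using the core v1 types (the package and helper names are illustrative, not the kubelet's exact code): keep every address type whose value matches the configured node IP.
```go
package kubelet

import v1 "k8s.io/api/core/v1"

// filterAddressesByNodeIP keeps all addresses that match the configured node IP,
// so both the InternalIP and ExternalIP entries for that IP are preserved
// instead of only the first matching address.
func filterAddressesByNodeIP(addrs []v1.NodeAddress, nodeIP string) []v1.NodeAddress {
	var out []v1.NodeAddress
	for _, a := range addrs {
		if a.Address == nodeIP {
			out = append(out, a)
		}
	}
	return out
}
```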
Automatic merge from submit-queue (batch tested with PRs 51728, 49202)
Enable CRI-O stats from cAdvisor
**What this PR does / why we need it**:
cAdvisor may support multiple container runtimes (docker, rkt, cri-o, systemd, etc.)
As long as the kubelet continues to run cAdvisor, runtimes with native cAdvisor support may not want to run multiple monitoring agents, to avoid a performance regression in production. Pending the kubelet running a more lightweight monitoring solution, this PR allows remote runtimes to have their stats pulled from cAdvisor when cAdvisor is the registered stats provider, determined by introspection of the runtime endpoint.
See issue https://github.com/kubernetes/kubernetes/issues/51798
**Special notes for your reviewer**:
cAdvisor will be bumped to pick up https://github.com/google/cadvisor/pull/1741
At that time, CRI-O will support fetching stats from cAdvisor.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51956, 50708)
Move autoscaling/v2 from alpha1 to beta1
This graduates autoscaling/v2alpha1 to autoscaling/v2beta1. The move is more-or-less just a straightforward rename.
Part of kubernetes/features#117
```release-note
v2 of the autoscaling API group, including improvements to the HorizontalPodAutoscaler, has moved from alpha1 to beta1.
```
Automatic merge from submit-queue (batch tested with PRs 51839, 51987)
Disable rbac/v1alpha1, settings/v1alpha1, and scheduling/v1alpha1 by default
**What this PR does / why we need it**: Disables alpha features which were previously enabled by default. Also changes tests which relied on these alpha features being enabled by default.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47691
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixed a bug where some alpha features were enabled by default.
```
Automatic merge from submit-queue (batch tested with PRs 49133, 51557, 51749, 50842, 52018)
Fix panic in expand controller when checking PVs
Unbound PVs have their Spec.ClaimRef = nil, so we should not dereference it blindly.
In addition, increase AddPVCUpdate test coverage to 100%
fixes #52012 #51995
**Release note**:
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
/assign @gnufied
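A minimal sketch of the guard described above, using the core v1 types (the package and helper names are illustrative, not the expand controller's exact code):
```go
package expand

import v1 "k8s.io/api/core/v1"

// pvBoundToClaim checks whether a PV is bound to the given PVC. Unbound PVs
// have a nil Spec.ClaimRef, so guard against that before dereferencing it.
func pvBoundToClaim(pv *v1.PersistentVolume, pvc *v1.PersistentVolumeClaim) bool {
	if pv.Spec.ClaimRef == nil {
		return false
	}
	return pv.Spec.ClaimRef.Namespace == pvc.Namespace && pv.Spec.ClaimRef.Name == pvc.Name
}
```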
Automatic merge from submit-queue (batch tested with PRs 49133, 51557, 51749, 50842, 52018)
Charges quota only for initialized objects
Partially fix https://github.com/kubernetes/kubernetes/issues/51842.
Based on https://github.com/kubernetes/kubernetes/pull/51733/files. Only the commit "Don't charge quota when creating/updating an uninitialized object" is new.
The old plan was to charge quota for each update of an uninitialized object. This PR makes the quota admission only charge the update that removes the last pending initializer. Because
* https://github.com/kubernetes/kubernetes/pull/51247, which lets sharedInformer see uninitialized objects, is not making the code freeze deadline. Hence, the quota replenishing controller won't capture deletion of uninitialized objects. We will leak quota if we charge quota for uninitialized objects.
* @lavalamp @erictune pointed out calculating/reserving quota is expensive, we should avoid doing it for every initializer update.
* My original argument was that quota admission should fail early so that user can easily figure out which initializer causes the quota outage. @lavalamp @erictune convinced me that user could easily figure the culprit if they watch the initialization process.
Charge the object count when the object is created, no matter whether the object is
initialized or not.
Charge the remaining quota when the object is initialized.
Also, check initializer.Pending and initializer.Result when
determining if an object is initialized. We didn't need to check them
before #51082, because having 0 pending initializers and a nil
initializers.Result was invalid.
Automatic merge from submit-queue
set AdvancedAuditing feature gate to true by default
All feature commits are merged. The types are already updated to beta. This only enables the feature gate by default.
**Release note**:
```
Promote the AdvancedAuditing feature to beta and enable the feature gate by default.
```
Automatic merge from submit-queue (batch tested with PRs 51603, 51653)
Graduate metrics/v1alpha1 to v1beta1
This introduces v1beta1 of the resource metrics API, previously in alpha.
The v1alpha1 version remains for compatibility with the Heapster legacy version
of the resource metrics API, which is compatible with the v1alpha1 version. It also
renames the v1beta1 version to `resource-metrics.metrics.k8s.io`.
The HPA controller's REST clients (but not the legacy client) have been migrated as well.
Part of kubernetes/features#118.
```release-note
Migrate the metrics/v1alpha1 API to metrics/v1beta1. The HorizontalPodAutoscaler
controller REST client now uses that version. For v1beta1, the API is now known as
resource-metrics.metrics.k8s.io.
```
Automatic merge from submit-queue (batch tested with PRs 51603, 51653)
fix taint controller panic
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51586
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51733, 51838)
Relax update validation of uninitialized pod
Split from https://github.com/kubernetes/kubernetes/pull/50344
Fix https://github.com/kubernetes/kubernetes/issues/47837
* Let the podStrategy only call `validation.ValidatePod()` if the old pod is not initialized, so fields are mutable.
* Let the podStatusStrategy refuse updates if the old pod is not initialized.
cc @smarterclayton
```release-note
Pod spec is mutable when the pod is uninitialized. The apiserver requires the pod spec to be valid even if it's uninitialized. Updating the status field of uninitialized pods is invalid.
```
Automatic merge from submit-queue (batch tested with PRs 51921, 51829, 51968, 51988, 51986)
Category expansion fully based on discovery
**What this PR does / why we need it**: Makes the expansion of resource names in `kubectl` (e.g. "all" in "kubectl get all") respect the "categories" field in the API, and fallback to the legacy expander.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/41353
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Improve APIService auto-registration for HA/upgrade scenarios
Fixes #51912
Required for 1.8 due to impact on HA upgrades.
/assign @deads2k
cc @kubernetes/sig-api-machinery-bugs
```release-note
Fixes an issue with APIService auto-registration affecting rolling HA apiserver restarts that add or remove API groups being served.
```
Automatic merge from submit-queue
Make *fakeMountInterface in container_manager_unsupported_test.go implement mount.Interface again.
This was broken in #45724
**Release note**:
```release-note
NONE
```
/sig storage
/sig node
/cc @jsafrane, @vishh
Automatic merge from submit-queue (batch tested with PRs 51984, 51351, 51873, 51795, 51634)
Bug Fix - Adding an allowed address pair wipes port security groups
**What this PR does / why we need it**:
Fix for: instances with cloud routes enabled have their security groups
removed when an allowed address pair is added to the instance's port.
Upstream bug report is in:
https://github.com/gophercloud/gophercloud/issues/509
Upstream bug fix is in:
https://github.com/gophercloud/gophercloud/pull/510
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #51755
**Special notes for your reviewer**:
Just a fix in vendored code; minimal changes needed in the OpenStack cloud provider.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51984, 51351, 51873, 51795, 51634)
Add EgressRule to NetworkPolicy
**What this PR does / why we need it**:
Add EgressRule to NetworkPolicy
**Which issue this PR fixes**: fixes #50453
**Special notes for your reviewer**:
- Please take a look at the comments for the various types. I tried to mimic some of the language used in the Ingress comments, but I may have mangled some sentences.
- Let me know if I should add some test cases for validation. I have 2-3, and did not think it was necessary to replicate each case already covered in ingress.
**Release note**:
```
Add egress policies to NetworkPolicy
```
Automatic merge from submit-queue (batch tested with PRs 51186, 50350, 51751, 51645, 51837)
Set up DNS server in containerized mounter path
During NFS/GlusterFS mounts, a DNS server is required to be able to
resolve service names. This PR gets the DNS server IP from the kubelet and
adds it to the containerized mounter path, so that if the containerized mounter is
used, service names can be resolved during mount.
**Release note**:
```release-note
Allow DNS resolution of service name for COS using containerized mounter. It fixed the issue with DNS resolution of NFS and Gluster services.
```
Automatic merge from submit-queue (batch tested with PRs 51186, 50350, 51751, 51645, 51837)
Update Cadvisor Dependency
Fixes: https://github.com/kubernetes/kubernetes/issues/51832
This is the worst dependency update ever...
The root of the problem is the [name change of Sirupsen -> sirupsen](https://github.com/sirupsen/logrus/issues/570#issuecomment-313933276). This means that in order to update cadvisor, which venders the lowercase, we need to update all dependencies to use the lower-cased version. With that being said, this PR updates the following packages:
`github.com/docker/docker`
- `github.com/docker/distribution`
- `github.com/opencontainers/go-digest`
- `github.com/opencontainers/image-spec`
- `github.com/opencontainers/runtime-spec`
- `github.com/opencontainers/selinux`
- `github.com/opencontainers/runc`
- `github.com/mrunalp/fileutils`
- `golang.org/x/crypto`
- `golang.org/x/sys`
- `github.com/docker/go-connections`
- `github.com/docker/go-units`
- `github.com/docker/libnetwork`
- `github.com/docker/libtrust`
- `github.com/sirupsen/logrus`
- `github.com/vishvananda/netlink`
`github.com/google/cadvisor`
- `github.com/euank/go-kmsg-parser`
`github.com/json-iterator/go`
Fixed https://github.com/kubernetes/kubernetes/issues/51832
```release-note
Fix journalctl leak on kubelet restart
Fix container memory rss
Add hugepages monitoring support
Fix incorrect CPU usage metrics with 4.7 kernel
Add tmpfs monitoring support
```
Automatic merge from submit-queue (batch tested with PRs 51186, 50350, 51751, 51645, 51837)
Wait for container cleanup before deletion
We should wait to delete pod API objects until the pod's containers have been cleaned up. See issue: #50268 for background.
This changes the kubelet container gc, which deletes containers belonging to pods considered "deleted".
It adds two conditions under which a pod is considered "deleted", allowing containers to be deleted:
- Pods where deletionTimestamp is set, and containers are not running
- Pods that are evicted
This PR also changes the function PodResourcesAreReclaimed by making it return false if containers still exist.
The eviction manager will wait for containers of previous evicted pod to be deleted before evicting another pod.
The status manager will wait for containers to be deleted before removing the pod API object.
/assign @vishh
Automatic merge from submit-queue (batch tested with PRs 51186, 50350, 51751, 51645, 51837)
fix bug on kubectl deleting uninitialized resources
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51185
**Special notes for your reviewer**:
/assign @caesarxuchao @ahmetb
**Release note**:
```release-note
fix bug on kubectl deleting uninitialized resources
```
Automatic merge from submit-queue (batch tested with PRs 50072, 51744)
Deviceplugin checkpoint
**What this PR does / why we need it**:
Extends on top of PR 51209 to checkpoint device-to-pod allocation information on the Kubelet, so it can recover from Kubelet restarts.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Hugetlbfs support based on empty dir volume plugin
**What this PR does / why we need it**: Support for huge pages in empty dir volume plugin. More information about hugepages can be found [here](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt)
Feature track issue: kubernetes/features#275
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support for Huge pages in empty_dir volume plugin
[Huge pages](https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt) can now be used with empty dir volume plugin.
```
During NFS/GlusterFS mounts, a DNS server is required to be able to
resolve service names. This PR gets the DNS server IP from the kubelet and
adds it to the containerized mounter path, so that if the containerized mounter is
used, service names can be resolved during mount.
Automatic merge from submit-queue
HugePages feature
**What this PR does / why we need it**:
Implements HugePages support per https://github.com/kubernetes/community/pull/837
Feature track issue: https://github.com/kubernetes/features/issues/275
**Special notes for your reviewer**:
A follow-on PR is opened to add the EmptyDir support.
**Release note**:
```release-note
Alpha support for pre-allocated hugepages
```