Automatic merge from submit-queue
add names for workqueues to gather controller latency/depth metrics
Adding names to the workqueues used by controllers allows the automatic collection of depth, rate, and latency metrics for those controllers. These are useful for diagnosing various "slow controller" cases.
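For illustration, a minimal sketch of the pattern (using the client-go `workqueue` package; the queue name and item key below are placeholders, not taken from this PR):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Passing a name ("deployment" here) is what lets the workqueue
	// package register depth, add-rate, and latency metrics for the
	// queue; an anonymous queue is not instrumented.
	queue := workqueue.NewNamedRateLimitingQueue(
		workqueue.DefaultControllerRateLimiter(), "deployment")
	defer queue.ShutDown()

	queue.Add("default/my-deployment")
	key, shutdown := queue.Get()
	if !shutdown {
		fmt.Println("processing", key)
		queue.Done(key)
	}
}
```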
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
AppArmor was flipped to beta, update feature gate
/cc @dchen1107
---
1.4 Justification:
- Risk: Low. Change is small & contained.
- Rollback: Nothing else should touch this code path or depend on its functionality.
- Cost of not including: AppArmor is beta, but the feature gate still treats it as alpha.
Automatic merge from submit-queue
Include security options in the container created event
New container creation events look like:
```
Created container with docker id /k8s_bar2.a4; Security:[seccomp=sub/subtest(md5:07c9bcb4db631f7ca191d6e0bca49f76)]
Created container with docker id /k8s_bar2.a4; Security:[seccomp=unconfined apparmor=foo-profile]
```
The goal is to provide enough information to confirm that the requested security constraints were honored.
For https://github.com/kubernetes/kubernetes/issues/31284
/cc @dchen1107 @thockin @jfrazelle @pweil- @pmorie
---
Justification for v1.4:
- Risk: low. This appends some additional information to a human-readable message; a bug here would probably not break any functionality.
- Roll-back: I don't anticipate any more changes to this area of the code. No functionality depends on this change.
- Cost of not including: Users don't get any (positive) confirmation that the AppArmor or Seccomp profiles they requested were actually enabled.
Automatic merge from submit-queue
Add log message in Kubelet when controller attach/detach is enabled
Adds a message to the Kubelet log indicating whether controller attach/detach is enabled for a node.
cc @kubernetes/sig-storage
Automatic merge from submit-queue
[AppArmor] Promote AppArmor annotations to beta
Justification for promoting AppArmor to beta:
1. We will provide an upgrade path to GA
2. We don't anticipate any major changes to the design, and will continue to invest in this feature
3. We will thoroughly test it. If any serious issues are uncovered we can reevaluate, and we're committed to fixing them.
4. We plan to provide beta-level support for the feature anyway (responding quickly to issues).
Note that this does not include the yet-to-be-merged status annotation (https://github.com/kubernetes/kubernetes/pull/31382). I'd like to propose keeping that one alpha for now because I'm not sure the PodStatus is the right long-term home for it (I think a separate monitoring channel, e.g. cAdvisor, would be a better solution).
/cc @thockin @matchstick @erictune
Automatic merge from submit-queue
Set imagefs rank and reclaim functions when nodefs+imagefs share comm…
Fixes #31192
I decided that the behavior should match the current output of the kubelet summary API. With no dedicated imagefs, the ranking and reclaim functions will match the nodefs ranking and reclaim functions.
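A hedged sketch of the idea (the function, type, and placeholder bodies below are assumptions, not the actual eviction code): when there is no dedicated imagefs, the imagefs signal simply reuses the nodefs functions.
```go
package eviction

// Signal identifies a resource the eviction manager monitors.
type Signal string

const (
	SignalNodeFsAvailable  Signal = "nodefs.available"
	SignalImageFsAvailable Signal = "imagefs.available"
)

// rankFunc stands in for the real pod-ranking signature.
type rankFunc func()

// buildRankFuncs wires up ranking per signal. With no dedicated imagefs,
// imagefs ranks (and, analogously, reclaims) exactly like nodefs, matching
// what the kubelet summary API reports for a shared filesystem.
func buildRankFuncs(hasDedicatedImageFs bool) map[Signal]rankFunc {
	rankNodeFs := func() { /* rank pods by disk usage on nodefs */ }
	funcs := map[Signal]rankFunc{SignalNodeFsAvailable: rankNodeFs}
	if hasDedicatedImageFs {
		funcs[SignalImageFsAvailable] = func() { /* rank by imagefs usage */ }
	} else {
		funcs[SignalImageFsAvailable] = rankNodeFs
	}
	return funcs
}
```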
/cc @ronnielai @vishh
Automatic merge from submit-queue
Add AppArmor feature gate
Add option to disable AppArmor via a feature gate. This PR treats AppArmor as Beta, and thus depends on https://github.com/kubernetes/kubernetes/pull/31471 (I will remove `do-not-merge` once that merges).
Note that disabling AppArmor means that pods with AppArmor annotations will be rejected in validation. It does not mean that the components act as though AppArmor was never implemented. This is by design, because we want to make it difficult to accidentally run a Pod with an AppArmor annotation without AppArmor protection.
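A minimal sketch of that rejection behavior (the helper, variable, and error message are assumptions for illustration; only the annotation prefix is the real one):
```go
package validation

import (
	"fmt"
	"strings"
)

// appArmorEnabled would be wired to the AppArmor feature gate.
var appArmorEnabled = true

const appArmorAnnotationPrefix = "container.apparmor.security.beta.kubernetes.io/"

// validateAppArmorAnnotations rejects pods that carry AppArmor annotations
// while the gate is off, rather than silently running them unconfined.
func validateAppArmorAnnotations(annotations map[string]string) error {
	for key := range annotations {
		if strings.HasPrefix(key, appArmorAnnotationPrefix) && !appArmorEnabled {
			return fmt.Errorf("AppArmor is disabled by feature gate; annotation %q is not allowed", key)
		}
	}
	return nil
}
```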
/cc @dchen1107
Automatic merge from submit-queue
Add validation preventing recycle of / in a hostPath PV
Adds a validation that prevents a user from recycling `/` when it is used in a hostPath PV
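Roughly (a sketch with assumed names, not the actual validation code):
```go
package validation

import (
	"errors"
	"path/filepath"
)

// validateHostPathRecycle sketches the new check: a hostPath PV whose path
// cleans to "/" may not use the Recycle reclaim policy, since recycling
// would scrub the node's root filesystem.
func validateHostPathRecycle(hostPath, reclaimPolicy string) error {
	if reclaimPolicy == "Recycle" && filepath.Clean(hostPath) == "/" {
		return errors.New("may not recycle a hostPath PV with path \"/\"")
	}
	return nil
}
```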
cc @kubernetes/sig-storage
Automatic merge from submit-queue
add validation for PV spec to ensure correct values are used for ReclaimPolicy on initial create
Kubernetes currently allows invalid values for ReclaimPolicy (e.g. "scotto"). The PV is created and can even be bound; however, when the PVC or pod is deleted and the recycler is triggered, an error is thrown:
```
Events:
FirstSeen   LastSeen   Count   From                             SubobjectPath   Type      Reason                       Message
---------   --------   -----   ----                             -------------   ----      ------                       -------
36s         36s        1       {persistentvolume-controller }                   Warning   VolumeUnknownReclaimPolicy   Volume has unrecognized PersistentVolumeReclaimPolicy
```
New behavior will not allow the user to create the PV:
```
[root@k8dev nfs]# kubectl create -f nfs-pv-bad.yaml
The PersistentVolume "pv-gce" is invalid: spec.persistentVolumeReclaimPolicy: Unsupported value: "scotto": supported values: Delete, Recycle, Retain
```
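The check itself is simple; a hedged sketch (function and variable names are assumptions):
```go
package validation

import "fmt"

// supportedReclaimPolicies matches the error message above: only these
// three values pass validation at create time.
var supportedReclaimPolicies = map[string]bool{
	"Delete":  true,
	"Recycle": true,
	"Retain":  true,
}

// validateReclaimPolicy rejects unsupported values (such as "scotto")
// before the PV can be created or bound.
func validateReclaimPolicy(policy string) error {
	if !supportedReclaimPolicies[policy] {
		return fmt.Errorf("unsupported value: %q: supported values: Delete, Recycle, Retain", policy)
	}
	return nil
}
```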
Automatic merge from submit-queue
Add get/delete cluster, delete context to kubectl config
Fixes #29794 by adding `get-clusters`, `delete-cluster` and `delete-context` actions to `kubectl config`.
Automatic merge from submit-queue
Use updated deployment after rollback
@kubernetes/deployment
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Remove annoying log
In large clusters, these messages accounted for ~85% of the controller manager's logs (2.3M lines) without providing any value.
Automatic merge from submit-queue
Fix hang/websocket timeout when streaming container log with no content
When streaming and following a container log, no response headers are sent from the kubelet `containerLogs` endpoint until the first byte of content is written to the log. This propagates back to the API server, which also will not send response headers until it gets response headers from the kubelet. That includes upgrade headers, which means a websocket connection upgrade is not performed and can time out.
To recreate, create a busybox pod that runs `/bin/sh -c 'sleep 30 && echo foo && sleep 10'`
As soon as the pod starts, query the kubelet API:
```
curl -N -k -v 'https://<node>:10250/containerLogs/<ns>/<pod>/<container>?follow=true&limitBytes=100'
```
or the master API:
```
curl -N -k -v 'http://<master>:8080/api/v1/namespaces/<ns>/pods/<pod>/log?follow=true&limitBytes=100'
```
In both cases, notice that the response headers are not sent until the first byte of log content is available.
This PR:
* does a 0-byte write prior to handing off to the container runtime stream copy. That commits the response header, even if the subsequent copy blocks waiting for the first byte of content from the log.
* fixes a bug with the "ping" frame sent to websocket streams, which was not respecting the requested protocol (it was sending a binary frame to a websocket that requested a base64 text protocol)
* fixes a bug in the limitwriter, which was not propagating 0-length writes, even before the writer's limit was reached
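A hedged sketch of the limit-writer part of the fix (the type and error names are assumptions): zero-length writes must reach the underlying writer, because the initial 0-byte write is what commits the response headers.
```go
package limitwriter

import (
	"errors"
	"io"
)

// ErrMaximumWrite is returned once the byte budget has been exceeded.
var ErrMaximumWrite = errors.New("maximum write reached")

// limitWriter passes at most limit bytes through to w.
type limitWriter struct {
	w     io.Writer
	limit int64
}

// Write forwards zero-length writes instead of short-circuiting on them:
// swallowing the initial 0-byte write would leave the HTTP response headers
// (and the websocket upgrade) uncommitted until log content arrives.
func (l *limitWriter) Write(p []byte) (int, error) {
	truncated := false
	if int64(len(p)) > l.limit {
		p, truncated = p[:l.limit], true
	}
	n, err := l.w.Write(p)
	l.limit -= int64(n)
	if err == nil && truncated {
		err = ErrMaximumWrite
	}
	return n, err
}
```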
Automatic merge from submit-queue
Fix getting pods from all namespaces
**What this PR does / why we need it**:
Use the Heapster handler for pods from all namespaces (added in the new Heapster version).
Depends on #30993
Automatic merge from submit-queue
Persist services that need to be retried in the service controller cache.
Fixes an issue reported by @anguslees; more detail in #25189.
Automatic merge from submit-queue
rkt: Force `rkt fetch` to fetch from remote to conform to the image pull policy.
Fix https://github.com/kubernetes/kubernetes/issues/27646
Use `--no-store` option for `rkt fetch` to force it to fetch from remote.
However, `--no-store` fetches the remote image regardless of whether the image content has changed, which degrades performance when the image tag is ':latest' and the image pull policy is 'Always'.
The issue is tracked in https://github.com/coreos/rkt/issues/2937.
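In outline (a sketch; the function name is an assumption), the change only adds the flag when the pull policy demands a remote fetch:
```go
package rkt

// fetchArgs builds the `rkt fetch` argument list. With pull policy
// "Always" the image must come from the remote, so --no-store bypasses
// the local store; note this re-downloads even when the image content is
// unchanged (the rkt issue above tracks that inefficiency).
func fetchArgs(image string, pullAlways bool) []string {
	args := []string{"fetch"}
	if pullAlways {
		args = append(args, "--no-store")
	}
	return append(args, image)
}
```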
Automatic merge from submit-queue
Add ReclaimPolicy to the resource printer for 'get pv'
Propose we add a RECLAIMPOLICY column (persistentVolumeReclaimPolicy) to resource_printer.go to show the policy when a user runs `kubectl get pv`:
```
[root@k8dev nfs]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   REASON   AGE
pv-nfs    1Gi        RWO           Retain          Available                    1m
pv-nfs2   1Gi        RWO           Delete          Available                    4s
```
Automatic merge from submit-queue
Allow services that use the same port but different protocols to use the same nodePort for both
Fixes #20092
@thockin @smarterclayton ptal.
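Conceptually (a sketch with assumed names, not the actual allocator code), the conflict check now keys on the (port, protocol) pair instead of the port alone:
```go
package portallocator

// protoPort keys nodePort ownership by port and protocol, so a TCP and a
// UDP service can share the same nodePort without colliding.
type protoPort struct {
	port     int
	protocol string // "TCP" or "UDP"
}

type allocator struct {
	used map[protoPort]bool
}

func newAllocator() *allocator {
	return &allocator{used: map[protoPort]bool{}}
}

// tryAllocate fails only if the exact (port, protocol) pair is taken;
// keying on port alone was what previously forced distinct nodePorts.
func (a *allocator) tryAllocate(port int, protocol string) bool {
	key := protoPort{port, protocol}
	if a.used[key] {
		return false
	}
	a.used[key] = true
	return true
}
```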
Automatic merge from submit-queue
Filter internal Kubernetes labels from Prometheus metrics
**What this PR does / why we need it**:
Kubernetes uses Docker labels as storage for some internal labels. The
majority of these labels are not meaningful metric labels and a few of
them are even harmful as they're not static and cause wrong aggregation
results.
This change provides a custom labels func to only attach meaningful
labels to cAdvisor exported metrics.
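A hedged sketch of such a labels func (the struct and field names stand in for cAdvisor's container metadata and are assumptions; only the io.kubernetes.* label keys are real):
```go
package metrics

// containerInfo stands in for cAdvisor's container metadata.
type containerInfo struct {
	Name   string            // cgroup path, e.g. /docker/<id>
	Image  string
	Labels map[string]string // docker labels, including io.kubernetes.* keys
}

// containerPrometheusLabels keeps only meaningful, static metric labels,
// dropping the raw container_label_* and container_env_* pass-through.
func containerPrometheusLabels(c containerInfo) map[string]string {
	labels := map[string]string{
		"id":    c.Name,
		"image": c.Image,
	}
	if v, ok := c.Labels["io.kubernetes.container.name"]; ok {
		labels["container_name"] = v
	}
	if v, ok := c.Labels["io.kubernetes.pod.name"]; ok {
		labels["pod_name"] = v
	}
	if v, ok := c.Labels["io.kubernetes.pod.namespace"]; ok {
		labels["namespace"] = v
	}
	return labels
}
```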
**Which issue this PR fixes**
google/cadvisor#1312
**Special notes for your reviewer**:
Depends on google/cadvisor#1429. Once that is merged, I'll update the vendor update commit.
**Release note**:
```release-note
Remove environment variables and internal Kubernetes Docker labels from cAdvisor Prometheus metric labels.
Old behavior:
- environment variables explicitly whitelisted via --docker-env-metadata-whitelist were exported as `container_env_*=*`. The whitelist is empty by default, so by default none were exported
- all docker labels were exported as `container_label_*=*`
New behavior:
- Only `container_name`, `pod_name`, `namespace`, `id`, `image`, and `name` labels are exposed
- no environment variables will ever be exposed via /metrics, even if whitelisted
```
---
Given that we have full control over the exported label set, I shortened the pod_name, pod_namespace, and container_name label names. Below is an example of the change (reformatted for readability).
```
# BEFORE
container_cpu_cfs_periods_total{
container_label_io_kubernetes_container_hash="5af8c3b4",
container_label_io_kubernetes_container_name="sync",
container_label_io_kubernetes_container_restartCount="1",
container_label_io_kubernetes_container_terminationMessagePath="/dev/termination-log",
container_label_io_kubernetes_pod_name="popularsearches-web-3165456836-2bfey",
container_label_io_kubernetes_pod_namespace="popularsearches",
container_label_io_kubernetes_pod_terminationGracePeriod="30",
container_label_io_kubernetes_pod_uid="6a291e48-47c4-11e6-84a4-c81f66bdf8bd",
id="/docker/68e1f15353921f4d6d4d998fa7293306c4ac828d04d1284e410ddaa75cf8cf25",
image="redacted.com/popularsearches:42-16-ba6bd88",
name="k8s_sync.5af8c3b4_popularsearches-web-3165456836-2bfey_popularsearches_6a291e48-47c4-11e6-84a4-c81f66bdf8bd_c02d3775"
} 72819
# AFTER
container_cpu_cfs_periods_total{
container_name="sync",
pod_name="popularsearches-web-3165456836-2bfey",
namespace="popularsearches",
id="/docker/68e1f15353921f4d6d4d998fa7293306c4ac828d04d1284e410ddaa75cf8cf25",
image="redacted.com/popularsearches:42-16-ba6bd88",
name="k8s_sync.5af8c3b4_popularsearches-web-3165456836-2bfey_popularsearches_6a291e48-47c4-11e6-84a4-c81f66bdf8bd_c02d3775"
} 72819
```
Feedback requested on:
* Label names. Other suggestions? Should we keep these very long ones?
* Do we need to export io.kubernetes.pod.uid? It makes working with the metrics a bit more complicated, and the pod name is already unique at any time (though not over time). The UID is also part of `name`.
As discussed with @timstclair, this should be added to v1.4 as the current labels are harmful.
PTAL @jimmidyson @fabxc @vishh
Automatic merge from submit-queue
Split the version metric out to its own package
This PR breaks a client dependency on prometheus. Combined with #30638, the client will no longer depend on these packages.