This PR exports config-related metrics from the Kubelet.
The Gauges for active, assigned, and last-known-good config can be used
to identify config versions and produce aggregate counts across several
nodes. The error-reporting Gauge can be used to determine whether a node
is experiencing a config-related error, and to produce an aggregate
count of nodes in an error state.
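A minimal sketch in Go of how such gauges can look with github.com/prometheus/client_golang; the metric and label names below are illustrative assumptions, not necessarily the names this PR registers:
```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// One always-1 series per node, labeled with the assigned config source,
	// so summing by label across nodes yields per-version counts.
	assignedConfig = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "kubelet_node_config_assigned",
			Help: "The config source assigned to the node.",
		},
		[]string{"node_config_name", "node_config_namespace", "node_config_uid"},
	)
	// 1 while the node is in a config-related error state, 0 otherwise, so a
	// sum across nodes counts the nodes currently in error.
	configError = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "kubelet_node_config_error",
		Help: "Whether the node is experiencing a config-related error.",
	})
)

func init() {
	prometheus.MustRegister(assignedConfig, configError)
}
```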
Updates dynamic Kubelet config to use a structured status, rather than a
node condition. This makes the status machine-readable, and thus more
useful for config orchestration.
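For example, an orchestrator can read the structured status straight off the Node object. The sketch below assumes the status is surfaced as Node.Status.Config in k8s.io/api/core/v1; the package and helper names are hypothetical:
```go
package orchestrate

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printConfigStatus reads the machine-readable config status from a Node.
func printConfigStatus(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cfg := node.Status.Config
	if cfg == nil {
		fmt.Println("node does not report a dynamic config status")
		return nil
	}
	if cfg.Error != "" {
		fmt.Printf("config error: %s\n", cfg.Error)
	}
	if cfg.Active != nil && cfg.Active.ConfigMap != nil {
		fmt.Printf("active config: %s/%s (uid %s)\n",
			cfg.Active.ConfigMap.Namespace, cfg.Active.ConfigMap.Name, cfg.Active.ConfigMap.UID)
	}
	return nil
}
```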
Fixes: #56896
explicit kubelet config key in Node.Spec.ConfigSource.ConfigMap
This makes the Kubelet config key in the ConfigMap an explicit part of
the API, so we can stop using magic key names.
As part of this change, we are retiring ConfigMapRef in favor of ConfigMap.
```release-note
You must now specify Node.Spec.ConfigSource.ConfigMap.KubeletConfigKey when using dynamic Kubelet config to tell the Kubelet which key of the ConfigMap identifies its config file.
```
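A minimal sketch of building a config source with the explicit key; the helper and the example key name are hypothetical:
```go
package dynamicconfig

import corev1 "k8s.io/api/core/v1"

// newConfigSource points a Node at a kubelet ConfigMap and names the key that
// holds the KubeletConfiguration explicitly, instead of relying on a magic
// key name. The caller assigns the result to node.Spec.ConfigSource.
func newConfigSource(namespace, name, kubeletConfigKey string) *corev1.NodeConfigSource {
	return &corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        namespace,
			Name:             name,
			KubeletConfigKey: kubeletConfigKey, // e.g. "kubelet", the ConfigMap key whose value is the config file
		},
	}
}
```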
This PR unpacks the downloaded ConfigMap to a set of files on the node.
This enables other config files to ride alongside the
KubeletConfiguration, and the KubeletConfiguration to refer to these
neighboring files with relative paths.
This PR also stops storing dynamic config metadata (e.g. current,
last-known-good config records) in the same directory as config
checkpoints. Instead, it splits the storage into `meta` and
`checkpoints` dirs.
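A minimal sketch of the unpack step (not the Kubelet's actual checkpointing code); the package and function names are assumptions:
```go
package checkpoint

import (
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
)

// unpack writes each key of a downloaded ConfigMap as a file under dir, so the
// KubeletConfiguration can reference sibling files by relative path.
func unpack(dir string, cm *corev1.ConfigMap) error {
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}
	for name, data := range cm.Data {
		if err := os.WriteFile(filepath.Join(dir, name), []byte(data), 0644); err != nil {
			return err
		}
	}
	return nil
}
```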
This PR cleans up the construction of the node condition and also fixes
a small bug where the last transition time could be updated incorrectly
when the sync failure overlay was present.
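For illustration, the usual pattern is to bump LastTransitionTime only when the condition's status actually changes; the helper below is a hypothetical sketch, not the PR's code:
```go
package status

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setCondition updates or appends a condition, preserving LastTransitionTime
// when the status has not changed.
func setCondition(node *corev1.Node, newCond corev1.NodeCondition) {
	now := metav1.Now()
	newCond.LastHeartbeatTime = now
	for i, cond := range node.Status.Conditions {
		if cond.Type != newCond.Type {
			continue
		}
		if cond.Status == newCond.Status {
			// status unchanged: keep the original transition time
			newCond.LastTransitionTime = cond.LastTransitionTime
		} else {
			newCond.LastTransitionTime = now
		}
		node.Status.Conditions[i] = newCond
		return
	}
	newCond.LastTransitionTime = now
	node.Status.Conditions = append(node.Status.Conditions, newCond)
}
```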
This is a more accurate name for the condition, as it describes the
status of the Kubelet's configuration.
Also cleans up capitalization of internal names.
This changes the Kubelet configuration flag precedence order so that
flags take precedence over config from files/ConfigMaps.
See #56171 for rationale.
Note: Feature gates accumulate with the following
precedence (greater number overrides lesser number):
1. file-based config
2. dynamic config
3. flag-based config
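A minimal sketch of that accumulation, with a hypothetical helper name: each later layer overrides earlier layers gate by gate.
```go
package config

// mergeFeatureGates layers gate maps so that later arguments take precedence
// over earlier ones on a per-gate basis.
func mergeFeatureGates(layers ...map[string]bool) map[string]bool {
	merged := map[string]bool{}
	for _, layer := range layers {
		for gate, enabled := range layer {
			merged[gate] = enabled
		}
	}
	return merged
}

// Mirroring the precedence above:
//   gates := mergeFeatureGates(fileGates, dynamicGates, flagGates)
```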
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
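As a rough illustration (hypothetical types, not the e2e suite's real helpers), each transition can be expressed as a table entry that pairs a config source with the expected outcome:
```go
package e2enode

import corev1 "k8s.io/api/core/v1"

// transitionCase describes one step of the state machine the test walks:
// apply a config source, then poll the Node until it reports the expected
// condition status and the Kubelet serves the expected config values.
type transitionCase struct {
	desc               string
	configSource       *corev1.NodeConfigSource // applied to Node.Spec.ConfigSource
	expectCondition    corev1.ConditionStatus   // expected status of the config condition
	expectConfigValues map[string]string        // selected fields probed from the live config
}
```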
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
The subfeature was a cool idea, but in the end it is very complex to
separate Kubelet restarts into crash-loops caused by config vs.
crash-loops caused by other phenomena, like admin-triggered node restarts,
kernel panics, and process babysitter behavior. Dynamic kubelet config
will be better off without the potential for false positives here.
Removing this subfeature also simplifies dynamic configuration by
reducing persistent state:
- we no longer need to track bad config in a file
- we no longer need to track kubelet startups in a file