kubeadm uses certificate rotation to replace the initial high-power
cert provided in --kubeconfig with a less powerful certificate on
the masters. This requires that we pass the client config's certData
and keyData contents down into the cert store to populate the
initial client.
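A rough sketch of that seeding step using client-go's file-backed cert store (the seedStoreFromKubeconfig helper name is illustrative, not the actual code):
```go
package bootstrap

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/certificate"
	"k8s.io/klog"
)

// seedStoreFromKubeconfig loads the client config named by --kubeconfig
// and, if it carries inline certData/keyData, writes that pair into the
// cert store so the cert manager's initial client is populated before
// the first rotation.
func seedStoreFromKubeconfig(kubeconfigPath, certDir string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	store, err := certificate.NewFileStore("kubelet-client", certDir, certDir, "", "")
	if err != nil {
		return err
	}
	tc := cfg.TLSClientConfig
	if len(tc.CertData) > 0 && len(tc.KeyData) > 0 {
		if _, err := store.Update(tc.CertData, tc.KeyData); err != nil {
			return err
		}
		klog.Info("seeded certificate store from kubeconfig client credentials")
	}
	return nil
}
```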
Add better comments to describe why the flow is required. Add a test
that verifies initial cert contents are written to disk. Change
the cert manager to not use MustRegister for Prometheus metrics so
that it can be tested.
Expose both a Stop() method (for cleanup) and a method to force
cert rotation, but only expose Stop() on the interface.
Verify that we choose the correct client.
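In sketch form (names illustrative, not the real manager's), the surface described above looks roughly like:
```go
package certmanager

// Manager only exposes lifecycle methods; forcing a rotation stays on
// the concrete type so tests and internal callers can reach it without
// widening the public interface.
type Manager interface {
	Start() // begin the background rotation loop
	Stop()  // stop the loop; exposed for cleanup
}

type manager struct {
	stopCh chan struct{}
}

func (m *manager) Start() { /* watch expiry and rotate (elided) */ }

func (m *manager) Stop() { close(m.stopCh) }

// RotateCerts forces an immediate rotation attempt; deliberately not
// part of the Manager interface.
func (m *manager) RotateCerts() (bool, error) {
	// request a CSR and wait for it to be issued (elided)
	return false, nil
}
```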
Ensure that bootstrap+clientcert-rotation in the Kubelet can:
1. happen in the background so that static pods aren't blocked by bootstrap
2. collapse down to a single call path for requesting a CSR
3. reorganize the code to allow future flexibility in retrieving bootstrap creds
Fetching the first certificate and later certificates when the kubelet
is using client rotation and bootstrapping should share the same code
path. We also want to start the Kubelet static pod loop before
bootstrapping completes. Finally, we want to take an incremental step
towards improving how the bootstrap credentials are loaded from disk
(potentially allowing for a CLI call to get credentials, or a remote
plugin that better integrates with cloud providers or KMSs).
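Point (1), bootstrapping in the background, could look roughly like the following sketch (function names are placeholders, not the kubelet's actual helpers):
```go
package kubeletboot

import (
	"time"

	"k8s.io/klog"
)

// runBootstrapInBackground kicks off TLS bootstrap / cert rotation in a
// goroutine and starts the static pod loop immediately, so a slow or
// unreachable API server does not block static pods (which may include
// the API server itself).
func runBootstrapInBackground(bootstrap func() error, startStaticPodLoop func()) {
	go func() {
		// Retry bootstrap until it succeeds; the API server may not be
		// reachable yet when the kubelet starts.
		for {
			if err := bootstrap(); err != nil {
				klog.Errorf("bootstrap failed, will retry: %v", err)
				time.Sleep(10 * time.Second)
				continue
			}
			return
		}
	}()

	// Static pods start regardless of whether bootstrap has completed.
	startStaticPodLoop()
}
```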
Reorganize how the kubelet client config is determined. If rotation is
off, simplify the code path. If rotation is on, load the config
from disk, and then pass that into the cert manager. The cert manager
creates a client each time it tries to request a new cert.
Preserve existing behavior where:
1. bootstrap kubeconfig is used if the current kubeconfig is invalid/expired
2. we create the kubeconfig file based on the bootstrap kubeconfig, pointing to
the location that new client certs will be placed
3. the newest client cert is used once it has been loaded
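A simplified sketch of that decision, with illustrative names rather than the kubelet's real helpers:
```go
package kubeletclient

import (
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClientConfig decides which client config the kubelet uses.
func buildClientConfig(kubeconfigPath string, rotate bool) (*restclient.Config, error) {
	// Load the kubeconfig from disk in both cases.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	if !rotate {
		// Rotation off: use the loaded config directly.
		return cfg, nil
	}
	// Rotation on: hand the loaded config to the cert manager; the manager
	// builds a fresh client from it each time it requests a new certificate,
	// so an expired in-memory cert never pins the transport.
	return cfg, startCertManagerWith(cfg)
}

// startCertManagerWith stands in for wiring the config into the kubelet's
// client certificate manager and starting it.
func startCertManagerWith(cfg *restclient.Config) error { return nil }
```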
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other vendored repositories that made a similar change
from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
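For reference, the registration that glog used to do implicitly in its own init now has to be done explicitly with klog:
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func init() {
	// klog does not register its flags on the global flag set
	// automatically the way glog did, so call InitFlags explicitly.
	klog.InitFlags(nil) // nil registers on flag.CommandLine
}

func main() {
	flag.Parse()
	klog.Info("logging via klog")
	klog.Flush()
}
```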
wait until apiserver connection before starting kubelet tls bootstrap
I wonder if this helps in cases where network programming is slow.
cc @mwielgus @awly
If we create a new key for each CSR attempt, then when a CSR fails the
next attempt creates a brand-new CSR instead of reusing the previous one.
If the approver/signer doesn't handle CSRs as quickly as new nodes come
up, CSRs pile up: the approver keeps handling old, abandoned CSRs while
nodes keep timing out on startup.
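One way to keep the key (and therefore the CSR) stable across attempts is to persist it on first use, roughly as in this sketch (loadOrGenerateKey is illustrative):
```go
package bootstrap

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"io/ioutil"
	"os"
)

// loadOrGenerateKey keeps the bootstrap private key on disk so that a
// retried bootstrap reuses the same key, and therefore the same CSR,
// instead of abandoning the previous one.
func loadOrGenerateKey(path string) ([]byte, error) {
	if data, err := ioutil.ReadFile(path); err == nil {
		return data, nil // reuse the existing key (and its pending CSR)
	} else if !os.IsNotExist(err) {
		return nil, err
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	der, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		return nil, err
	}
	data := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: der})
	if err := ioutil.WriteFile(path, data, 0600); err != nil {
		return nil, err
	}
	return data, nil
}
```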
track/close kubelet->API connections on heartbeat failure
xref #48638
xref https://github.com/kubernetes-incubator/kube-aws/issues/598
We already typically track kubelet -> API connections and have the ability to force-close them as part of client cert rotation. If we do that tracking unconditionally, we gain the ability to force-close connections on heartbeat failure as well. It's a big hammer (it means reestablishing pod watches, etc.), but so is having all your pods evicted because you didn't heartbeat.
This intentionally does minimal refactoring/extraction of the cert connection-tracking transport in case we want to backport it.
* The first commit unconditionally sets up the connection-tracking dialer and moves all the cert management logic inside an if-block that is skipped when no certificate manager is provided (view with whitespace ignored to see what actually changed).
* The second commit plumbs the connection-closing function into the heartbeat loop and calls it on repeated failures.
follow-ups:
* consider backporting this to 1.10, 1.9, 1.8
* refactor the connection managing dialer to not be so tightly bound to the client certificate management
/sig node
/sig api-machinery
```release-note
kubelet: fix hangs in updating Node status after network interruptions/changes between the kubelet and API server
```
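A rough sketch of the heartbeat-side plumbing from the second commit (names here are illustrative, not the kubelet's actual functions):
```go
package nodestatus

import (
	"time"

	"k8s.io/klog"
)

// syncNodeStatusWithConnReset shows the idea: the heartbeat loop is handed
// a closeAllConns func (from the connection-tracking dialer) and calls it
// after repeated failures so stuck TCP connections get torn down and
// re-dialed on the next attempt.
func syncNodeStatusWithConnReset(updateNodeStatus func() error, closeAllConns func(), retries int, backoff time.Duration) {
	failures := 0
	for {
		if err := updateNodeStatus(); err != nil {
			failures++
			klog.Errorf("error updating node status: %v", err)
			if failures >= retries {
				// Big hammer: drop every open connection to the API server,
				// forcing new dials (and new watches) afterwards.
				klog.Warning("repeated heartbeat failures, closing all API server connections")
				closeAllConns()
				failures = 0
			}
		} else {
			failures = 0
		}
		time.Sleep(backoff)
	}
}
```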
The kubelet uses two different locations to store certificates on
initial bootstrap and then on subsequent rotation:
* bootstrap: certDir/kubelet-client.(crt|key)
* rotation: certDir/kubelet-client-(DATE|current).pem
Bootstrap also creates an initial node.kubeconfig that points to the
certs. Unfortunately, with short rotation the node.kubeconfig then
becomes out of date because it points to the initial cert/key, not the
rotated cert/key.
Alter the bootstrap code to store client certs exactly as if they would
be rotated (using the same cert Store code), and reference the PEM file
containing cert/key from node.kubeconfig, which is supported by kubectl
and other Go tooling. This ensures that the node.kubeconfig continues to
be valid past the first expiration.
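A sketch of writing such a node.kubeconfig with client-go, where both the certificate and key entries point at the rotating PEM (the writeNodeKubeconfig helper and entry names are illustrative):
```go
package bootstrap

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// writeNodeKubeconfig points both the client-certificate and client-key
// entries at the rotating PEM (e.g. certDir/kubelet-client-current.pem,
// maintained by the cert store), so the kubeconfig stays valid across
// rotations.
func writeNodeKubeconfig(path, server, caFile, currentPEM string) error {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["default-cluster"] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: caFile,
	}
	cfg.AuthInfos["default-auth"] = &clientcmdapi.AuthInfo{
		ClientCertificate: currentPEM, // PEM contains both cert and key
		ClientKey:         currentPEM,
	}
	cfg.Contexts["default-context"] = &clientcmdapi.Context{
		Cluster:  "default-cluster",
		AuthInfo: "default-auth",
	}
	cfg.CurrentContext = "default-context"
	return clientcmd.WriteToFile(*cfg, path)
}
```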
Cap how long the kubelet waits when it has no client cert
If we go a certain amount of time without being able to create a client
cert and we have no current client cert from the store, exit. This
prevents a corrupted local copy of the cert from leaving the Kubelet in a
zombie state forever. Exiting allows a config loop outside the Kubelet
to clean up the file, or allows the bootstrap client cert to be used to
obtain another client cert.
Five minutes is a totally arbitrary timeout, judged to give enough time for really slow static pods to boot.
@mikedanese
```release-note
Set an upper bound (5 minutes) on how long the Kubelet will wait before exiting when the client cert from disk is missing or invalid. This prevents the Kubelet from waiting forever without attempting to bootstrap new client credentials.
```
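In sketch form, the bounded wait looks roughly like this (waitForClientCert and hasCurrentCert are illustrative stand-ins for the cert manager/store lookup):
```go
package kubeletcert

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForClientCert polls for a usable client cert but gives up after a
// bound (the PR uses 5 minutes) so a corrupted on-disk cert cannot wedge
// the kubelet forever.
func waitForClientCert(hasCurrentCert func() bool, timeout time.Duration) error {
	if err := wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		return hasCurrentCert(), nil
	}); err != nil {
		// Exiting on this error lets an external config loop clean up the
		// bad cert, or lets the bootstrap kubeconfig obtain a new one.
		return fmt.Errorf("no valid client certificate after %v: %v", timeout, err)
	}
	return nil
}
```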
Prevent a Kubelet from shutting down when the server isn't responding to
us and we cannot get a new certificate. This allows a cluster to coast
if the master is unresponsive or a node is partitioned while its client
cert expires.
The client cert manager uses the most recent cert to request new
certificates. If that certificate is expired, it will be unable to
complete new CSR requests. This commit alters the manager to force
process exit if no further client cert rotation is possible, which
is expected to trigger a restart of the kubelet and either a
re-bootstrap from the bootstrap kubeconfig or a re-read of the
current disk state (assuming that some other agent is managing the
bootstrap configuration).
This prevents the Kubelet from wedging in a state where it cannot make
API calls.
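The core check amounts to something like the following sketch (illustrative names, not the manager's actual fields):
```go
package certmanager

import (
	"crypto/x509"
	"os"
	"time"

	"k8s.io/klog"
)

// exitIfRotationImpossible: if the only cert we have is already expired,
// we cannot authenticate a CSR request with it, so exit and let the
// kubelet restart, re-bootstrap, or re-read the disk state.
func exitIfRotationImpossible(current *x509.Certificate) {
	if current == nil || time.Now().After(current.NotAfter) {
		klog.Error("current client certificate is expired and cannot be used to rotate; exiting")
		os.Exit(1)
	}
}
```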
Ensure that in a crash-loop state we can make forward progress by
generating a new key and hence a new CSR. If we do not delete the key,
an expired CSR may block startup.
Also delete a bad cert path more aggressively.
Before the bootstrap client is used, check a number of conditions that
ensure it can be safely loaded by the server. If any of those conditions
are invalid, re-bootstrap the node. This is primarily to force
bootstrapping without human intervention when a certificate is expired,
but it also handles partial file corruption.
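Those conditions amount roughly to the following sketch (kubeconfigStillUsable is an illustrative name; the real checks differ in detail):
```go
package bootstrap

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"time"

	"k8s.io/client-go/tools/clientcmd"
)

// kubeconfigStillUsable returns true only if the existing kubeconfig
// loads, carries a client cert/key pair, and that cert is within its
// validity window; anything else triggers a fresh bootstrap without
// human intervention.
func kubeconfigStillUsable(path string) bool {
	cfg, err := clientcmd.BuildConfigFromFlags("", path)
	if err != nil {
		return false // unreadable or partially corrupt file
	}
	tc := cfg.TLSClientConfig
	certData, keyData := tc.CertData, tc.KeyData
	if len(certData) == 0 && tc.CertFile != "" {
		certData, _ = ioutil.ReadFile(tc.CertFile)
	}
	if len(keyData) == 0 && tc.KeyFile != "" {
		keyData, _ = ioutil.ReadFile(tc.KeyFile)
	}
	if len(certData) == 0 || len(keyData) == 0 {
		return false // no client credentials to validate
	}
	pair, err := tls.X509KeyPair(certData, keyData)
	if err != nil {
		return false // cert and key do not match or are malformed
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		return false
	}
	now := time.Now()
	return now.After(leaf.NotBefore) && now.Before(leaf.NotAfter)
}
```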
delete ineffectual assignment
**What this PR does / why we need it**:
delete ineffectual assignment
certificate manager: close existing client conns once cert rotates
After the kubelet rotates its client cert, it will keep connections to the API server open indefinitely, causing it to use its old credentials instead of the new certs. Because the API server authenticates client certs at the time of the request, and not the handshake, this could cause the kubelet to start hitting auth failures even if it rotated its certificate to a new, valid one.
When the kubelet rotates its cert, close down existing connections to force a new TLS handshake.
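A sketch of that wiring using client-go's connrotation helper, which packages this connection-tracking pattern (the setOnRotate hook is a hypothetical stand-in for the cert manager callback):
```go
package clientconn

import (
	"net"
	"time"

	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/util/connrotation"
)

// wireCloseOnRotate wraps the client's dialer so every connection is
// tracked, and registers the dialer's CloseAll with the certificate
// manager so a rotation forces a fresh TLS handshake on the next request.
func wireCloseOnRotate(cfg *restclient.Config, setOnRotate func(func())) {
	base := (&net.Dialer{Timeout: 30 * time.Second, KeepAlive: 30 * time.Second}).DialContext
	dialer := connrotation.NewDialer(base)

	// All client connections now go through the tracking dialer.
	cfg.Dial = dialer.DialContext

	// After the cert manager stores a new client certificate, it closes
	// every existing connection; the next request re-dials with the new cert.
	setOnRotate(dialer.CloseAll)
}
```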
Ref https://github.com/kubernetes/features/issues/266
Updates https://github.com/kubernetes-incubator/bootkube/pull/663
```release-note
After a kubelet rotates its client cert, it now closes its connections to the API server to force a handshake using the new cert. Previously, the kubelet could keep its existing connection open, even if the cert used for that connection was expired and rejected by the API server.
```
/cc @kubernetes/sig-auth-bugs
/assign @jcbsmpsn @mikedanese