This explicitly registers Kubelet flags from libraries that were
registering flags globally, and stops parsing the global flag set.
In general, we should always be explicit about flags we register
and parse, so that we maintain control over our command-line API.
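As a rough illustration of the pattern (the flag names below are placeholders, not the Kubelet's actual flags): adopt the globally registered flags you want into a locally owned flag set, register your own flags there, and parse only that set.

```go
// Sketch only: a dependency registers a flag on the global flag set, and we
// explicitly adopt it into our own pflag.FlagSet instead of calling flag.Parse().
package main

import (
	goflag "flag"
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

// Pretend this came from a library that registers on the global flag set.
var verbosity = goflag.Int("v", 0, "log level verbosity")

func main() {
	fs := pflag.NewFlagSet("kubelet", pflag.ExitOnError)

	// Explicitly adopt the globally registered flag we care about.
	fs.AddGoFlag(goflag.CommandLine.Lookup("v"))

	// Register our own flags explicitly as well.
	kubeconfig := fs.String("kubeconfig", "", "path to a kubeconfig file")

	// Parse only our own flag set; the global set is never parsed.
	fs.Parse(os.Args[1:])
	fmt.Println("v =", *verbosity, "kubeconfig =", *kubeconfig)
}
```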
Automatic merge from submit-queue
Curating Owners: pkg/credentialprovider
cc @liggitt @erictune
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who shouldn't be
reviewing code in the future. You probably do NOT need to modify the
**approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use an ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull request
above as an example).
Automatic merge from submit-queue
Fix httpclient setup for gcp credential provider to have timeout
The default HTTP client has no timeout.
This could cause problems in environments outside GCP.
This PR changes the provider to use a 10s timeout and ensures the transport has our normal defaults applied.
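A minimal sketch of that kind of setup, assuming a plain net/http transport rather than the exact helper Kubernetes uses for its defaults:

```go
// Sketch: give the credential provider's HTTP client an explicit overall
// timeout and the usual transport settings, instead of http.DefaultClient,
// which has no timeout at all.
package example

import (
	"net"
	"net/http"
	"time"
)

func newMetadataClient() *http.Client {
	transport := &http.Transport{
		Proxy: http.ProxyFromEnvironment,
		DialContext: (&net.Dialer{
			Timeout:   30 * time.Second,
			KeepAlive: 30 * time.Second,
		}).DialContext,
		TLSHandshakeTimeout: 10 * time.Second,
	}
	return &http.Client{
		Transport: transport,
		Timeout:   10 * time.Second, // overall per-request timeout
	}
}
```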
/cc @ncdc @liggitt
Automatic merge from submit-queue
Do not query the metadata server to find out if running on GCE. Retry the metadata server query for GCR if running on GCE.
Retry the logic for determining whether GCR is enabled, to work around metadata unavailability.
Note: This patch does not retry fetching registry credentials.
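Roughly, the retried enablement check looks like the sketch below; the function names, URL, and retry counts are assumptions, not the actual Kubernetes code.

```go
// Sketch: keep re-querying the metadata server for a short while before
// concluding that GCR auth is unavailable, so a transient metadata hiccup at
// kubelet startup does not permanently disable the provider.
package example

import (
	"net/http"
	"time"
)

const serviceAccountsURL = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/"

func metadataReachable(client *http.Client, url string) bool {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return false
	}
	req.Header.Set("Metadata-Flavor", "Google") // required by the GCE metadata server
	resp, err := client.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// gcrEnabledWithRetry polls a few times with a fixed backoff before giving up.
func gcrEnabledWithRetry(client *http.Client) bool {
	for attempt := 0; attempt < 5; attempt++ {
		if metadataReachable(client, serviceAccountsURL) {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}
```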
This is step one for cross-region ECR support and has no visible effects yet.
I'm not crazy about the name LazyProvide. Perhaps the interface method could
remain like that and the package method of the same name could become
LateBind(). I still don't understand why the credential provider has a
DockerConfigEntry that has the same fields but is distinct from
docker.AuthConfiguration. I had to write a converter now that we do that in
more than one place.
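A converter of that shape is roughly the sketch below; the struct definitions are simplified stand-ins for DockerConfigEntry and docker.AuthConfiguration, not the real declarations.

```go
// Sketch: copy the shared fields from the credentialprovider entry type into
// the docker client's auth type, in one shared helper.
package example

// DockerConfigEntry approximates the credentialprovider type.
type DockerConfigEntry struct {
	Username string
	Password string
	Email    string
}

// AuthConfiguration approximates docker.AuthConfiguration.
type AuthConfiguration struct {
	Username      string
	Password      string
	Email         string
	ServerAddress string
}

// toAuthConfiguration is the kind of converter that now lives in one place.
func toAuthConfiguration(entry DockerConfigEntry, registry string) AuthConfiguration {
	return AuthConfiguration{
		Username:      entry.Username,
		Password:      entry.Password,
		Email:         entry.Email,
		ServerAddress: registry,
	}
}
```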
In step two, I'll add another intermediate, lazy provider for each AWS region,
whose empty LazyAuthConfiguration will have a refresh time of months or years.
Behind the scenes, it'll use an actual ecrProvider with the usual ~12 hour
credentials, that will get created (and later refreshed) only when kubelet is
attempting to pull an image. If we simply turned ecrProvider directly into a
lazy provider, we would bypass all the caching and get new credentials for
each image pulled.
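A rough sketch of that lazy, per-region wrapper, using simplified stand-in types rather than the real credentialprovider interfaces:

```go
// Sketch: the per-region provider is cheap to register at startup and only
// builds the real ECR-backed provider (with its ~12 hour credentials) the
// first time the kubelet actually asks for credentials.
package main

import (
	"fmt"
	"sync"
)

type AuthEntry struct {
	Username string
	Password string
}

type Provider interface {
	Enabled() bool
	Provide() map[string]AuthEntry
}

type lazyRegionProvider struct {
	region string

	mu     sync.Mutex
	actual Provider // built on first Provide call
}

func (p *lazyRegionProvider) Enabled() bool { return true }

func (p *lazyRegionProvider) Provide() map[string]AuthEntry {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.actual == nil {
		// Hypothetical constructor standing in for the real ecrProvider
		// (which would normally be wrapped in the usual caching provider).
		p.actual = newStubEcrProvider(p.region)
	}
	return p.actual.Provide()
}

type stubEcrProvider struct{ region string }

func newStubEcrProvider(region string) Provider { return &stubEcrProvider{region: region} }

func (s *stubEcrProvider) Enabled() bool { return true }

func (s *stubEcrProvider) Provide() map[string]AuthEntry {
	registry := fmt.Sprintf("*.dkr.ecr.%s.amazonaws.com", s.region)
	return map[string]AuthEntry{registry: {Username: "AWS", Password: "<token>"}}
}

func main() {
	p := &lazyRegionProvider{region: "us-west-2"}
	fmt.Println(p.Provide())
}
```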
With this change, you can add `--google_json_key=/path/to/key.json` to the `DAEMON_ARGS` of the kubelet, e.g.:
`nano /etc/default/kubelet`
`... # Add the flag`
`service kubelet restart`
With this setting, minions will be able to authenticate with gcr.io repositories nearly as smoothly as if K8s were running on GCE.
NOTE: This private key can be used to access most project resources; consider dropping the service account created through this flow to a project READER, or restricting its access to just the GCS bucket containing the container images.
In particular, a few of the utilities used within the credentialprovider had the pattern:
    glog.Errorf("while blah %s: %v", s, err)
    return nil, err
This change propagates those error messages and puts the burden of logging on the caller.
In particular, this allows us to squelch all output during kubelet startup when we are detecting whether certain credentialprovider plugins should even be enabled.
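As a hedged before/after sketch of that pattern (the function name and error message are made up, not actual credentialprovider code):

```go
// Old pattern (roughly): log at the failure site and return the bare error.
//
//	glog.Errorf("while reading docker config %s: %v", path, err)
//	return nil, err
//
// New pattern: wrap the context into the returned error and let the caller
// decide whether (and when) to log it.
package example

import (
	"fmt"
	"os"
)

func readDockerConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("while reading docker config %s: %v", path, err)
	}
	return data, nil
}
```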
Fixes: https://github.com/GoogleCloudPlatform/kubernetes/issues/2673
This change refactors the way Kubelet's DockerPuller handles the docker config credentials to utilize a new credentialprovider library.
The credentialprovider library is based on several of the files from the Kubelet's dockertools directory, but supports a new pluggable model for retrieving a .dockercfg-compatible JSON blob with credentials.
With this change, the Kubelet will lazily ask for the docker config from a set of DockerConfigProvider extensions each time it needs a credential.
This change provides common implementations of DockerConfigProvider for:
- "Default": load .dockercfg from disk
- "Caching": wraps another provider in a cache that expires after a pre-specified lifetime.
GCP-only:
- "google-dockercfg": reads a .dockercfg from a GCE instance's metadata
- "google-dockercfg-url": reads a .dockercfg from a URL specified in a GCE instance's metadata.
- "google-container-registry": reads an access token from GCE metadata into a password field.