Automatic merge from submit-queue
Use the first version as thirdparty resource preferredVersion
The first commit is a one-liner, which implements the server half of #23985.
The other two commits rearrange the test code and add back a previously commented-out test of thirdparty resources.
@lavalamp @nikhiljindal
Automatic merge from submit-queue
Bump up etcd dependency to fix data race
ref: https://github.com/kubernetes/kubernetes/pull/23694
What this PR does
- Bumps the etcd godep to fix a data race in the etcd watcher. Without this change, watcher PR builds will fail race detection.
- Small changes to fix builds after the upgrade
Automatic merge from submit-queue
Add memory available to summary stats provider
To support out-of-resource killing when low on memory, we want to let operators specify eviction thresholds based on available memory instead of memory usage, which is easier to reason about when working with heterogeneous nodes.
So for example, a valid eviction threshold would be the following:
* If node.memory.available < 200Mi for 30s, then evict pod(s)
For the node, `memory.availableBytes` is always known since `memory.limit_in_bytes` is always known for the root cgroup. For individual containers in pods, we only populate `availableBytes` if the container was launched with a memory limit specified. When no memory limit is specified, cgroupfs sets a value of 1 << 63 in `memory.limit_in_bytes`, so we check for a similarly large value to detect unbounded limits and skip setting `memory.availableBytes`.
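Roughly, the computation looks like the sketch below (a minimal illustration, not the actual summary API types; the unbounded-limit threshold chosen here is an assumption):

```go
package example

// cgroupfs reports a huge value (effectively 1 << 63, rounded down to a page
// boundary) in memory.limit_in_bytes when no limit is set; anything at or above
// this threshold is treated as "unbounded" in this sketch.
const unboundedLimitThreshold = uint64(1) << 62

// memoryAvailableBytes returns limit - usage, or nil when the limit is
// effectively unbounded and availableBytes should be left unset.
func memoryAvailableBytes(limitBytes, usageBytes uint64) *uint64 {
	if limitBytes >= unboundedLimitThreshold {
		return nil
	}
	if usageBytes > limitBytes {
		zero := uint64(0)
		return &zero
	}
	avail := limitBytes - usageBytes
	return &avail
}
```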
FYI @vishh @timstclair - as discussed on Slack.
/cc @kubernetes/sig-node @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Move /resetMetrics to DELETE /metrics
Reduces the surface area of the API server slightly and allows
downstream components to have deletable metrics. After this change
genericapiserver will *not* have metrics unless the caller defines them
(this allows different apiserver implementations to make that choice on
their own).
@wojtek-t
Automatic merge from submit-queue
Make etcd cache size configurable
Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.
I'm not sure what a sensible default is here; I'm still experimenting on my own clusters. A cache size of 50 gives me a 270MB max footprint, while 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.
There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- In 1.2, etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists in the node status.
This is provided as a fix for #23323
Automatic merge from submit-queue
Kubelet: Refactor container related functions in DockerInterface
For #23563.
Based on #23506, will rebase after #23506 is merged.
The last 4 commits of this PR are new.
This PR refactors all container lifecycle related functions in DockerInterface, including:
* ListContainers
* InspectContainer
* CreateContainer
* StartContainer
* StopContainer
* RemoveContainer
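For orientation, here is a rough sketch of the container-lifecycle subset of such an interface, written against the go-dockerclient types the kubelet used at the time; the exact post-refactor signatures in this PR may differ:

```go
package dockertools // illustrative; the real interface lives in the kubelet's dockertools package

import docker "github.com/fsouza/go-dockerclient"

// containerLifecycle is an illustrative subset of DockerInterface covering the
// functions refactored by this PR.
type containerLifecycle interface {
	ListContainers(options docker.ListContainersOptions) ([]docker.APIContainers, error)
	InspectContainer(id string) (*docker.Container, error)
	CreateContainer(opts docker.CreateContainerOptions) (*docker.Container, error)
	StartContainer(id string, hostConfig *docker.HostConfig) error
	StopContainer(id string, timeout uint) error
	RemoveContainer(opts docker.RemoveContainerOptions) error
}
```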
@kubernetes/sig-node
Automatic merge from submit-queue
Add watch.Until, a conditional watch mechanism
A more powerful tool than wait.Poll: it allows a watch interface to drive conditionals that react to changes on a resource or resources. Provides a set of standard conditions that are in common use in the code, and updates e2e to use a few of them. Also adds helpers for collecting the events that happen during a watch, and a helper that makes it easy to start a watch from any object with ObjectMeta.
Extracted from #23567
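A hedged usage sketch of the idea (the import path reflects where the watch package lived at the time, and the condition shown is illustrative rather than one of the standard conditions added by this PR):

```go
package example

import (
	"time"

	"k8s.io/kubernetes/pkg/watch"
)

// waitForDeletion blocks until the watch delivers a Deleted event, a condition
// returns an error, or the timeout expires. watch.Until applies incoming events
// to the supplied conditions until they are satisfied.
func waitForDeletion(w watch.Interface) error {
	_, err := watch.Until(2*time.Minute, w, func(e watch.Event) (bool, error) {
		return e.Type == watch.Deleted, nil
	})
	return err
}
```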
Automatic merge from submit-queue
The component status health check should check whether the scheme of the backend storage URL is https or not
Fixes https://github.com/kubernetes/kubernetes/issues/23897. When querying the component status of etcd (the backend storage), the URL scheme is not checked and `http` is always used; this commit fixes that.
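A minimal sketch of the kind of check involved (function and variable names are illustrative, not the actual componentstatus code):

```go
package example

import "net/url"

// etcdHealthURL builds the health-check endpoint from the configured backend
// storage URL, preserving its scheme instead of hard-coding http.
func etcdHealthURL(serverURL string) (string, error) {
	u, err := url.Parse(serverURL)
	if err != nil {
		return "", err
	}
	if u.Scheme == "" {
		u.Scheme = "http" // fall back only when no scheme was supplied
	}
	u.Path = "/health"
	return u.String(), nil
}
```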
Automatic merge from submit-queue
Flexvolume: Add support for multiple secrets
This PR adds support for passing multiple secrets to flexvolume plugins.
To allow multiple secrets, they are now passed as:
"kubernetes.io/secret/id-rsa":"value-2\r\n\r\n","kubernetes.io/secret/id-rsa.pub":"value-1\r\n"
Automatic merge from submit-queue
Fix expired event logic to use 404 instead of 500
It seems this logic was never updated once the apiserver started returning 404s for expired (missing) events.
This change corrects it to use a 404 so events will get resent correctly if they were expired in etcd.
Fixes #23637.
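A minimal sketch of the corrected check, assuming the API error helpers from pkg/api/errors (the surrounding recorder logic is omitted):

```go
package example

import apierrors "k8s.io/kubernetes/pkg/api/errors" // import path at the time

// isEventExpired reports whether the apiserver says the event no longer exists.
// Expired events now surface as 404 (NotFound) rather than 500, so the event
// recorder should create a fresh event instead of retrying the patch.
func isEventExpired(err error) bool {
	return apierrors.IsNotFound(err)
}
```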
Automatic merge from submit-queue
Make kubectl edit not convert GV on edits
Previously, kubectl edit was using a decoder to load in edits that
converted to the internal version. It would then re-encode this
decoded value to produce a patch. However, if you were editing
the object in a GroupVersion that was not the internal version,
this would cause the kubectl edit command to attempt to produce
a patch which changed the GroupVersion, which would fail.
Now, we use a plain deserializer instead, so no conversion or
defaulting occurs when loading in the edited file.
Ref #23378
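A rough sketch of the difference using the runtime codec machinery; the codec factory and import paths here are assumptions, not necessarily what kubectl edit uses:

```go
package example

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/runtime"
)

// decodeEdited decodes the edited bytes without converting them to the internal
// version, so re-encoding for the patch keeps the original GroupVersion.
func decodeEdited(data []byte) (runtime.Object, error) {
	// UniversalDeserializer decodes into the type as serialized; the previously
	// used decoder would also convert (and default) to the internal version.
	obj, _, err := api.Codecs.UniversalDeserializer().Decode(data, nil, nil)
	return obj, err
}
```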
Automatic merge from submit-queue
phase 2 of cassandra example overhaul
Here's the next iteration in overhauling this example, towards https://github.com/kubernetes/kubernetes/issues/20961. This removes the pod adoption part, but doesn't (yet) otherwise change any of the resources used.
It also includes some README cleanup, and removes some explicit specification of labels in the rc yaml.
This PR doesn't yet add any commentary on how we're using the seed provider (re: https://github.com/kubernetes/kubernetes/issues/20961#issuecomment-190405959 etc.). Maybe we should add that.
Also: LMK if this PR should include any changes to the links out to the docs.
cc @bgrant0607 @johndmulhausen
Automatic merge from submit-queue
rkt: Fix hostnetwork.
Mount the host's /etc/hosts and /etc/resolv.conf, and set the host's hostname
when running the pod in the host's network. Also, do not set the DNS flags
when running in the host's network.
Fix #24235
cc @kubernetes/sig-node
Automatic merge from submit-queue
rkt: Use rkt pod's uuid as the systemd service file's name.
Previously, the service file's name was 'k8s_${POD_UID}.service',
which meant we needed to `systemctl daemon-reload` if we replaced
the content of the service file (e.g. when the pod was restarted).
However, this disconnected the journal of the previous pod.
This PR solves the issue by using the unique rkt uuid as the service
file's name. After the change, the service file's name will be:
'k8s_${rkt_uuid}.service'.
Fix #23691
Automatic merge from submit-queue
rkt: Update the directory path for saving auth config.
Since #23308 is merged, we now have a more stable way to determine where to store the auth configs.
cc @yujuhong @sjpotter
Automatic merge from submit-queue
Allow lazy binding in credential providers; don't use it in AWS yet
This is step one for cross-region ECR support and has no visible effects yet.
I'm not crazy about the name LazyProvide. Perhaps the interface method could
remain like that and the package method of the same name could become
LateBind(). I still don't understand why the credential provider has a
DockerConfigEntry that has the same fields but is distinct from
docker.AuthConfiguration. I had to write a converter now that we do that in
more than one place.
In step two, I'll add another intermediate, lazy provider for each AWS region,
whose empty LazyAuthConfiguration will have a refresh time of months or years.
Behind the scenes, it'll use an actual ecrProvider with the usual ~12 hour
credentials, which will get created (and later refreshed) only when kubelet is
attempting to pull an image. If we simply turned ecrProvider directly into a
lazy provider, we would bypass all the caching and get new credentials for
each image pulled.
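A self-contained sketch of the lazy-binding idea described above; the interface and type names below are simplified stand-ins rather than the actual credentialprovider API:

```go
package example

import (
	"sync"
	"time"
)

// authConfig is a stand-in for the docker auth credentials a provider returns.
type authConfig struct {
	Username, Password string
}

// provider is a simplified stand-in for a credential provider.
type provider interface {
	Provide() authConfig
}

// lazyProvider defers constructing (and refreshing) the real provider until an
// image pull actually asks for credentials, so short-lived credentials are not
// fetched eagerly and cached stale.
type lazyProvider struct {
	mu      sync.Mutex
	newReal func() provider // e.g. builds an ECR provider for one region
	real    provider
	expiry  time.Time
	ttl     time.Duration // e.g. ~12 hours for ECR-style credentials
}

func (l *lazyProvider) Provide() authConfig {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.real == nil || time.Now().After(l.expiry) {
		l.real = l.newReal()
		l.expiry = time.Now().Add(l.ttl)
	}
	return l.real.Provide()
}
```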
Automatic merge from submit-queue
Kubelet: Better-defined Container Waiting state
For issue #20478 and #21125.
This PR corrects the logic and adds a unit test for `ShouldContainerBeRestarted()`, cleans up `Waiting`-state related code, and adds a unit test for `generateAPIPodStatus()`.
Fixes #20478. Fixes #17971.
@yujuhong
Automatic merge from submit-queue
Do not throw creation errors for containers that fail immediately after being started
Fixes (hopefully) #23607
cc @dchen1107