Automatic merge from submit-queue
Automatically create the kube-system namespace
At the same time as we ensure that the `default` namespace is present, we also create `kube-system` if it doesn't exist.
`kube-system` will now exist from the beginning and will be recreated every 10s if deleted, in the same manner as the `default` namespace.
This makes the UX much better: there's no need to `kubectl` a `kube-system.yaml` file anymore for a function that is essential to Kubernetes (addons). For instance, this makes dashboard deployment much easier, since there's no need to check for the `kube-system` namespace first.
A follow-up in the future may remove the places where logic to manually create the `kube-system` namespace is present.
Also fixed a small bug where `CreateNamespaceIfNeeded` ignored the `ns` parameter and was hardcoded to `api.NamespaceDefault`.
@davidopp @lavalamp @thockin @mikedanese @bryk @cheld @fgrzadkowski @smarterclayton @wojtek-t @dlorenc @vishh @dchen1107 @bgrant0607 @roberthbailey
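For illustration, a minimal sketch (with a made-up client interface, not the real one) of a `CreateNamespaceIfNeeded`-style helper that honors its `ns` argument, plus the kind of periodic repair loop that keeps both `default` and `kube-system` present:

```go
package systemns

import (
	"errors"
	"fmt"
	"time"
)

// Namespace and NamespaceClient are simplified stand-ins for the real API
// types and client interface; they exist only to keep the sketch self-contained.
type Namespace struct{ Name string }

var ErrAlreadyExists = errors.New("namespace already exists")

type NamespaceClient interface {
	Create(ns *Namespace) error
}

// createNamespaceIfNeeded creates the named namespace if it does not exist.
// The point of the bug fix described above: the ns argument is actually used
// instead of a hard-coded "default".
func createNamespaceIfNeeded(c NamespaceClient, ns string) error {
	err := c.Create(&Namespace{Name: ns})
	if errors.Is(err, ErrAlreadyExists) {
		return nil // already present, nothing to do
	}
	return err
}

// ensureSystemNamespaces mimics the periodic repair loop: both "default" and
// "kube-system" are recreated every interval if they have been deleted.
func ensureSystemNamespaces(c NamespaceClient, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		for _, ns := range []string{"default", "kube-system"} {
			if err := createNamespaceIfNeeded(c, ns); err != nil {
				fmt.Printf("unable to ensure namespace %q: %v\n", ns, err)
			}
		}
		select {
		case <-stop:
			return
		case <-ticker.C:
		}
	}
}
```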
Automatic merge from submit-queue
Remove myself from a bunch of OWNERS files
For the time being I am too overloaded to do non-scheduler/admission-related reviews that aren't explicitly assigned to me.
cc/ @brendandburns
The codec factory should support two distinct interfaces: negotiating
for a serializer with a client, versus reading or writing data to a storage
form (etcd, disk, etc.). Make the EncodeForVersion and DecodeToVersion
methods take only Encoder and Decoder, with slight refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
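As a rough sketch of the separation described above (not the actual runtime package API), the two concerns might be modeled as separate interfaces: one for negotiating a serializer with a client, and one for encoding and decoding against a storage form. The type and method names below are simplified stand-ins.

```go
package codecsketch

// Encoder and Decoder are minimal stand-ins for the real runtime interfaces.
type Encoder interface {
	Encode(obj interface{}) ([]byte, error)
}

type Decoder interface {
	Decode(data []byte) (interface{}, error)
}

// ClientNegotiator captures the client-facing concern: pick a serializer
// based on a negotiated content type (e.g. from an Accept header).
type ClientNegotiator interface {
	SerializerForMediaType(mediaType string) (Encoder, Decoder, bool)
}

// StorageSerializer captures the storage-facing concern: always write one
// format (e.g. protobuf) but decode anything previously stored (e.g. older
// JSON objects) via a universal deserializer.
type StorageSerializer interface {
	// EncodeForVersion wraps an Encoder so it writes the storage version.
	EncodeForVersion(e Encoder, version string) Encoder
	// DecodeToVersion wraps a Decoder so it converts into the given version.
	DecodeToVersion(d Decoder, version string) Decoder
	// UniversalDeserializer can read any supported wire format.
	UniversalDeserializer() Decoder
}
```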
Automatic merge from submit-queue
Provide flags to use etcd3 backed storage
ref: #24405
What's in this PR?
- Add a new flag `storage-backend` to choose `etcd2` or `etcd3`. By default (i.e. when empty), it's `etcd2`.
- Move the etcd config code into a standalone package and let it create an etcd2 or etcd3 storage backend based on user input.
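A hedged sketch of how such a flag might be wired up; only the flag name and its `etcd2`/`etcd3` values come from the PR description, the package layout and constructor below are illustrative assumptions:

```go
package main

import (
	"flag"
	"fmt"
)

// Backend is a minimal stand-in for the storage interface the real factory returns.
type Backend interface {
	Name() string
}

type etcd2Backend struct{}

func (etcd2Backend) Name() string { return "etcd2" }

type etcd3Backend struct{}

func (etcd3Backend) Name() string { return "etcd3" }

// newBackend picks the storage implementation based on the flag value.
// An empty value defaults to etcd2, matching the behavior described above.
func newBackend(storageBackend string) (Backend, error) {
	switch storageBackend {
	case "", "etcd2":
		return etcd2Backend{}, nil
	case "etcd3":
		return etcd3Backend{}, nil
	default:
		return nil, fmt.Errorf("unknown storage backend %q", storageBackend)
	}
}

func main() {
	storageBackend := flag.String("storage-backend", "",
		`The storage backend for persistence ("etcd2" or "etcd3"); empty defaults to etcd2`)
	flag.Parse()

	b, err := newBackend(*storageBackend)
	if err != nil {
		panic(err)
	}
	fmt.Println("using storage backend:", b.Name())
}
```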
Automatic merge from submit-queue
stop changing the root path of the root webservice
We shouldn't mutate the root path of the root webservice (see usage). Just write the path we want.
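For context, with go-restful (which the API server builds on) the idea is to register a webservice at the path we want from the start instead of mutating the root webservice; a minimal sketch, where the path and handler are placeholders:

```go
package main

import (
	"net/http"

	restful "github.com/emicklei/go-restful"
)

func hello(req *restful.Request, resp *restful.Response) {
	resp.Write([]byte("ok")) // placeholder handler
}

func main() {
	container := restful.NewContainer()

	// Instead of mutating the root webservice's path, register a webservice
	// that is written at the desired path from the start.
	ws := new(restful.WebService)
	ws.Path("/healthz")
	ws.Route(ws.GET("").To(hello))
	container.Add(ws)

	http.ListenAndServe(":8080", container)
}
```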
Add tests to watch behavior in both protocols (http and websocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
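A sketch, from a client's point of view, of what negotiating the streaming media type could look like; the request URL and server behavior here are assumptions, only the `application/vnd.kubernetes.protobuf;stream=watch` media type comes from the description above:

```go
package main

import (
	"fmt"
	"mime"
	"net/http"
)

const watchMediaType = "application/vnd.kubernetes.protobuf;stream=watch"

func main() {
	// Hypothetical watch request against an API server.
	req, err := http.NewRequest("GET", "http://localhost:8080/api/v1/pods?watch=true", nil)
	if err != nil {
		panic(err)
	}
	// Ask for protobuf; a watch response is expected to use the dedicated
	// stream=watch variant so the stream of events can be told apart from a
	// single Status object in plain protobuf.
	req.Header.Set("Accept", watchMediaType)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	mediaType, params, err := mime.ParseMediaType(resp.Header.Get("Content-Type"))
	if err != nil {
		panic(err)
	}
	if mediaType == "application/vnd.kubernetes.protobuf" && params["stream"] == "watch" {
		fmt.Println("server is streaming watch events in protobuf framing")
	} else {
		fmt.Println("got", mediaType, params)
	}
}
```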
Automatic merge from submit-queue
Use the first version as thirdparty resource preferredVersion
The first commit is a one-liner, which implements the server half of #23985.
The other two commits rearrange the test code and add back a commented-out test of thirdparty resources.
@lavalamp @nikhiljindal
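Conceptually, the server-side half amounts to something like the sketch below: when building the API group for a third-party resource, take the first declared version as the preferred version. The types here are simplified stand-ins, not the real discovery types:

```go
package thirdpartysketch

// GroupVersionForDiscovery is a simplified stand-in for the discovery type.
type GroupVersionForDiscovery struct {
	GroupVersion string
	Version      string
}

// APIGroup is a simplified stand-in for the discovery APIGroup type.
type APIGroup struct {
	Name             string
	Versions         []GroupVersionForDiscovery
	PreferredVersion GroupVersionForDiscovery
}

// newThirdPartyAPIGroup returns an APIGroup whose preferred version is the
// first version listed for the resource, per the behavior described above.
func newThirdPartyAPIGroup(name string, versions []GroupVersionForDiscovery) APIGroup {
	g := APIGroup{Name: name, Versions: versions}
	if len(versions) > 0 {
		g.PreferredVersion = versions[0]
	}
	return g
}
```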
Automatic merge from submit-queue
Move /resetMetrics to DELETE /metrics
Reduces the surface area of the API server slightly and allows
downstream components to have deletable metrics. After this change
genericapiserver will *not* have metrics unless the caller defines it
(allows different apiserver implementations to make that choice on their
own).
@wojtek-t
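For illustration, a minimal sketch of a handler that serves metrics on GET and resets them on DELETE of the same path, replacing a separate `/resetMetrics` endpoint; the registry and handler wiring are assumptions, not the actual genericapiserver code:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// requestCount is a stand-in for a real metrics registry.
var requestCount int64

func metricsHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodGet:
		fmt.Fprintf(w, "request_count %d\n", atomic.LoadInt64(&requestCount))
	case http.MethodDelete:
		// DELETE /metrics resets the metrics, replacing the old /resetMetrics path.
		atomic.StoreInt64(&requestCount, 0)
		w.WriteHeader(http.StatusNoContent)
	default:
		w.WriteHeader(http.StatusMethodNotAllowed)
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/metrics", metricsHandler)
	http.ListenAndServe(":8080", mux)
}
```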
Automatic merge from submit-queue
Make etcd cache size configurable
Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.
I'm not sure what a sensible default is here; I'm still experimenting on my own clusters. A size of 50 gives me a 270MB max footprint, while 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.
There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- In 1.2, etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to the image lists in the node status.
This is provided as a fix for #23323
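A rough sketch of what is being made configurable and of the non-LRU behavior called out in the bullets above; the flag name, types, and eviction policy here are illustrative assumptions, not the actual implementation:

```go
package main

import (
	"flag"
	"fmt"
	"sync"
)

// cacheSize replaces a hard-coded 50000-entry limit with a user-tunable flag.
// The flag name is an assumption for this sketch.
var cacheSize = flag.Int("deserialization-cache-size", 50000,
	"Number of deserialized etcd objects to cache in memory")

// indexCache caches decoded objects keyed by etcd modifiedIndex. As described
// above, an updated object is stored under a new index and the stale entry is
// not removed; entries only disappear when the whole map is reset after
// hitting the size bound (i.e. this is not an LRU).
type indexCache struct {
	mu    sync.Mutex
	limit int
	items map[uint64]interface{}
}

func newIndexCache(limit int) *indexCache {
	return &indexCache{limit: limit, items: make(map[uint64]interface{})}
}

func (c *indexCache) Add(modifiedIndex uint64, obj interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.items) >= c.limit {
		// Crude bound: drop everything rather than evicting least-recently-used.
		c.items = make(map[uint64]interface{})
	}
	c.items[modifiedIndex] = obj
}

func (c *indexCache) Get(modifiedIndex uint64) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	obj, ok := c.items[modifiedIndex]
	return obj, ok
}

func main() {
	flag.Parse()
	cache := newIndexCache(*cacheSize)
	cache.Add(42, "pod-spec")
	if obj, ok := cache.Get(42); ok {
		fmt.Println("cache hit:", obj)
	}
}
```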
Automatic merge from submit-queue
The component status health check should check whether the scheme of the backend storage URL is https
Fixes https://github.com/kubernetes/kubernetes/issues/23897. When querying the component status of etcd (the backend storage), the scheme of the URL is not checked and `http` is always used; this commit fixes that.
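A hedged sketch of the kind of check involved: derive the scheme from the configured storage server URL instead of always assuming `http`; the health endpoint and TLS setup below are simplified assumptions:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/url"
)

// etcdHealthCheckURL builds the health endpoint for a backend storage server,
// preserving the scheme of the configured server URL instead of always using http.
func etcdHealthCheckURL(serverURL string) (string, bool, error) {
	u, err := url.Parse(serverURL)
	if err != nil {
		return "", false, err
	}
	useTLS := u.Scheme == "https"
	u.Path = "/health" // etcd's health endpoint
	return u.String(), useTLS, nil
}

func main() {
	target, useTLS, err := etcdHealthCheckURL("https://127.0.0.1:2379")
	if err != nil {
		panic(err)
	}

	client := http.DefaultClient
	if useTLS {
		// In a real check, proper CA/client cert configuration would go here;
		// InsecureSkipVerify is only to keep the sketch self-contained.
		client = &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
	}

	resp, err := client.Get(target)
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("etcd health status:", resp.Status)
}
```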
Automatic merge from submit-queue
Implement a streaming serializer for watch
Change watch over to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
@wojtek-t @lavalamp
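As a sketch of what "simple framing" can mean, one common approach is newline-delimited frames for JSON and length-prefixed frames for protobuf; the exact frame format below is an assumption for illustration, not necessarily the one used by the serializer:

```go
package framing

import (
	"bufio"
	"encoding/binary"
	"io"
)

// WriteJSONFrame writes one JSON document per line, so a reader can split
// events on newlines.
func WriteJSONFrame(w io.Writer, doc []byte) error {
	if _, err := w.Write(doc); err != nil {
		return err
	}
	_, err := w.Write([]byte("\n"))
	return err
}

// WriteProtobufFrame prefixes each message with a 4-byte big-endian length,
// since protobuf messages have no self-delimiting boundary on the wire.
func WriteProtobufFrame(w io.Writer, msg []byte) error {
	var header [4]byte
	binary.BigEndian.PutUint32(header[:], uint32(len(msg)))
	if _, err := w.Write(header[:]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// ReadProtobufFrame reads one length-prefixed message back off the stream.
func ReadProtobufFrame(r *bufio.Reader) ([]byte, error) {
	var header [4]byte
	if _, err := io.ReadFull(r, header[:]); err != nil {
		return nil, err
	}
	msg := make([]byte, binary.BigEndian.Uint32(header[:]))
	if _, err := io.ReadFull(r, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```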
Automatic merge from submit-queue
Additional go vet fixes
Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
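For reference, minimal before-and-after examples of the first two categories; the types are made up for illustration. A `sync.Mutex` must not be copied, so methods that lock it should use a pointer receiver, and struct tag values need the canonical `key:"value"` syntax:

```go
package vetsketch

import "sync"

// Passing a lock by value copies the mutex, which go vet flags;
// use a pointer receiver (or pass a pointer) instead.
type counter struct {
	mu sync.Mutex
	n  int
}

// func (c counter) Inc() { ... }   // vet: Inc passes lock by value
func (c *counter) Inc() { // fixed: pointer receiver, no copy of c.mu
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// Struct tag values must use the `key:"value"` form, or vet reports
// "bad syntax for struct tag value".
type pod struct {
	// Name string `json:name`        // vet: bad syntax for struct tag value
	Name string `json:"name"` // fixed: value is quoted
}
```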