1. When overlapping deployments are discovered, annotate them
2. Expose those overlapping annotations as warnings in kubectl describe
3. Only respect the earliest updated one (skip syncing all other overlapping deployments; see the sketch below)
4. Use indexer instead of store for deployment lister
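A minimal, illustrative sketch of step 3, assuming "earliest updated" can be approximated by creation time; the type and helper below are stand-ins, not the controller's actual code:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// deploymentInfo stands in for the fields the controller inspects on
// extensions.Deployment objects.
type deploymentInfo struct {
	Name      string
	CreatedAt time.Time
}

// pickEarliest returns the one deployment that keeps being synced when
// selectors overlap; every other overlapping deployment is skipped.
func pickEarliest(overlapping []deploymentInfo) deploymentInfo {
	sort.Slice(overlapping, func(i, j int) bool {
		if overlapping[i].CreatedAt.Equal(overlapping[j].CreatedAt) {
			return overlapping[i].Name < overlapping[j].Name // deterministic tie-breaker
		}
		return overlapping[i].CreatedAt.Before(overlapping[j].CreatedAt)
	})
	return overlapping[0]
}

func main() {
	ds := []deploymentInfo{
		{Name: "web-v2", CreatedAt: time.Now()},
		{Name: "web-v1", CreatedAt: time.Now().Add(-time.Hour)},
	}
	fmt.Println(pickEarliest(ds).Name) // web-v1 keeps syncing; web-v2 is skipped
}
```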
Automatic merge from submit-queue
[GarbageCollector] Allow per-resource default garbage collection behavior
What's the bug:
When deleting an RC with `deleteOptions.OrphanDependents==nil`, the garbage collector is supposed to treat it as `deleteOptions.OrphanDependents==true` and orphan the pods created by it. But the apiserver is not doing that.
What's in the pr:
Allow each resource to specify the default garbage collection behavior in the registry. For example, RC registry's default GC behavior is Orphan, and Pod registry's default GC behavior is CascadingDeletion.
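A hedged sketch of how a registry could declare its default, loosely modeled on the garbage-collection strategy hook this PR introduces; the type, constant, and method names are illustrative rather than the exact API:

```go
// GarbageCollectionPolicy is what a registry reports as its default behavior
// when a delete request leaves OrphanDependents unset.
type GarbageCollectionPolicy string

const (
	OrphanDependents  GarbageCollectionPolicy = "Orphan"
	CascadingDeletion GarbageCollectionPolicy = "CascadingDeletion"
)

// rcStrategy is the ReplicationController registry strategy (fields elided).
type rcStrategy struct{}

// RCs orphan their pods by default, so a delete with OrphanDependents==nil
// behaves like OrphanDependents==true.
func (rcStrategy) DefaultGarbageCollectionPolicy() GarbageCollectionPolicy {
	return OrphanDependents
}

// podStrategy is the Pod registry strategy (fields elided).
type podStrategy struct{}

// Pods default to cascading deletion instead.
func (podStrategy) DefaultGarbageCollectionPolicy() GarbageCollectionPolicy {
	return CascadingDeletion
}
```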
Automatic merge from submit-queue
Enable v3 Client as the default on UTs
Updates the default initialization to use clientv3 interface to etcd3, and fixes the UTs.
This PR includes a cherry-pick of https://github.com/kubernetes/kubernetes/pull/30634 so we can validate the tests, so do not merge until that PR is complete.
Automatic merge from submit-queue
Make labels, fields expose selectable requirements
What?
This is to change the labels/fields Selector interface and make them expose selectable requirements. We reuse labels.Requirement struct for label selector and add fields.Requirement for field selector.
Why?
In order to index labels/fields, we need them to tell us three things: index key (a field or a label), operator (greater, less, or equal), and value (string, int, etc.). By getting selectable requirements, we are able to pass them down and use them for indexing in storage layer.
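A short sketch of consuming the exposed requirements from caller code, written against the labels package as it exists today in k8s.io/apimachinery (NewRequirement, Selector.Requirements); the storage-layer indexing itself is not shown:

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
)

func printRequirements() error {
	req, err := labels.NewRequirement("app", selection.Equals, []string{"nginx"})
	if err != nil {
		return err
	}
	selector := labels.NewSelector().Add(*req)

	// Requirements() is what this change exposes: each requirement reports its
	// key, operator and values, which the storage layer can use for indexing.
	reqs, selectable := selector.Requirements()
	if !selectable {
		return fmt.Errorf("selector has no selectable requirements")
	}
	for _, r := range reqs {
		fmt.Println(r.Key(), r.Operator(), r.Values().List())
	}
	return nil
}
```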
Automatic merge from submit-queue
Basic scaler/reaper for petset
Currently scaling or upgrading a petset is more complicated than it should be. It would be nice if this made code freeze on Friday. I'm planning a follow-up change with generation number and e2es post freeze.
Automatic merge from submit-queue
change all PredicateFunc to use SelectionPredicate
What?
- This PR changes all PredicateFunc in registry to return SelectionPredicate instead of Matcher interface.
Why?
- We want to pass SelectionPredicate to storage layer. Matcher interface did not expose enough information for indexing.
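A rough sketch of the shape a registry's predicate takes after this change; the struct below mirrors the idea of storage.SelectionPredicate but is written out locally for illustration:

```go
import (
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime"
)

// SelectionPredicate exposes the raw label/field selectors plus an attribute
// extractor, instead of an opaque Matcher, so the storage layer sees enough
// structure to build and use indexes.
type SelectionPredicate struct {
	Label    labels.Selector
	Field    fields.Selector
	GetAttrs func(obj runtime.Object) (labels.Set, fields.Set, error)
}

// MatchPod is what a registry PredicateFunc would now return.
func MatchPod(label labels.Selector, field fields.Selector) SelectionPredicate {
	return SelectionPredicate{
		Label: label,
		Field: field,
		GetAttrs: func(obj runtime.Object) (labels.Set, fields.Set, error) {
			// A real registry extracts the pod's labels and selectable fields here.
			return nil, nil, nil
		},
	}
}
```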
Convert single GV and lists of GVs into an interface that can handle
more complex scenarios (everything internal, nothing supported). Pass
the interface down into conversion.
Automatic merge from submit-queue
Change kubectl create to use dynamic client
https://github.com/kubernetes/kubernetes/issues/16764 and https://github.com/kubernetes/kubernetes/issues/3955
This is a series of changes to allow kubectl create to use discovery-based REST mapping and dynamic clients.
cc @kubernetes/sig-api-machinery
**Release note**:
```release-note
kubectl will no longer do client-side defaulting on create and replace.
```
Automatic merge from submit-queue
Move new etcd storage (low level storage) into cacher
As part of the effort for #29888, we are pushing this forward:
What?
- It changes creating an etcd storage.Interface impl into creating a config
- When creating cacher storage (StorageWithCacher), it passes in the config created above and creates the new etcd storage inside.
Why?
- We want to expose the information of (etcd) kv client to cacher. Cacher storage uses this information to talk to remote storage.
Automatic merge from submit-queue
Run goimport for the whole repo
While removing GOMAXPROCS and running goimports, I noticed quite a lot of other files also needed goimports formatting. Didn't commit `*.generated.go`, `*.deepcopy.go`, or files in `vendor`.
This is more for testing if it builds.
The only strange thing here is the gopkg.in/gcfg.v1 => github.com/scalingdata/gcfg replace.
cc @jfrazelle @thockin
Automatic merge from submit-queue
add subjectaccessreviews resource
Adds a subjectaccessreviews endpoint that uses the API server's authorizer to determine if a subject is allowed to perform an action.
Part of kubernetes/features#37
The swagger spec for pods/{name}/log does not include
"text/plain" as a possible content-type for the response.
So we implement ProducesMIMETypes to make sure "text/plain"
gets added to the default list of content-types.
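A sketch of the override described above; treating it as a StorageMetadata-style hook on the log storage is an assumption for illustration:

```go
// LogREST implements the pods/{name}/log subresource (fields elided).
type LogREST struct{}

// ProducesMIMETypes advertises "text/plain" so it shows up in the generated
// swagger spec alongside the default content types for this verb.
func (r *LogREST) ProducesMIMETypes(verb string) []string {
	return []string{"text/plain"}
}
```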
the v1.json was generated by running:
hack/update-generated-swagger-docs.sh;./hack/update-swagger-spec.sh;
Fixes #14071
Automatic merge from submit-queue
make resource prefix consistent with other registries
Storage class registry was added recently in #29694.
In other registries, the resource prefix was changed to a more generic mechanism using RESTOptions.ResourcePrefix.
This PR is an attempt to make both consistent.
Automatic merge from submit-queue
make the resource prefix in etcd configurable for cohabitation
This looks big, but it's not as bad as it seems.
When you have different resources cohabiting, the resource name used for the etcd directory needs to be configurable. HPA in two different groups worked fine before. Now we're looking at something like RC<->RS. They normally store into two different etcd directories. This code allows them to be configured to store into the same location.
To maintain consistency across all resources, I allowed the `StorageFactory` to indicate which `ResourcePrefix` should be used inside `RESTOptions` which already contains storage information.
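A hedged sketch of the idea: the storage factory decides the prefix and hands it down in RESTOptions, so cohabiting resources can share one etcd directory. The struct and the prefix choice below are simplified for illustration:

```go
// RESTOptions carries storage information down to each registry; this change
// adds the resource prefix to it (other fields elided here).
type RESTOptions struct {
	ResourcePrefix string
	// ... storage config, decorator, etc.
}

// optionsFor shows cohabitation: ReplicaSets are pointed at the same etcd
// directory as ReplicationControllers.
func optionsFor(resource string) RESTOptions {
	prefix := resource
	if resource == "replicasets" {
		prefix = "controllers" // assumed shared prefix, for illustration
	}
	return RESTOptions{ResourcePrefix: prefix}
}
```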
@lavalamp affects cohabitation.
@smarterclayton @mfojtik prereq for our rc<->rs and d<->dc story.
Automatic merge from submit-queue
cleanup wrong naming: limitrange -> hpa
The code is in `horizontalpodautoscaler/strategy.go`, but the parameter is "limitrange". This is a legacy copy-paste issue.
Automatic merge from submit-queue
include metadata in third party resource list serialization
Third party resource listing does not include important metadata such as resourceVersion and apiVersion. This commit includes the missing metadata and also replaces the string templating with an anonymous struct.
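A small illustration of the anonymous-struct approach mentioned above, carrying the list metadata that string templating previously dropped; field names and values are schematic:

```go
import "encoding/json"

// encodeTPRList marshals a throwaway struct instead of templating strings,
// so kind, apiVersion and resourceVersion survive serialization.
func encodeTPRList(items []json.RawMessage, resourceVersion string) ([]byte, error) {
	return json.Marshal(struct {
		Kind       string            `json:"kind"`
		APIVersion string            `json:"apiVersion"`
		Metadata   map[string]string `json:"metadata"`
		Items      []json.RawMessage `json:"items"`
	}{
		Kind:       "List",
		APIVersion: "v1",
		Metadata:   map[string]string{"resourceVersion": resourceVersion},
		Items:      items,
	})
}
```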
With the introduction of batch/v1 and batch/v2alpha1, any
MultiRESTMapper that has per groupversion mappers (as a naive discovery
client would create) would end up being unable to call RESTMapping() on
batch.Jobs. As we finish up discovery we will need to be able to choose
prioritized RESTMappings based on the service discovery doc.
This change implements RESTMappings(groupversion) which returns all
possible RESTMappings for that kind. That allows a higher level call to
prioritize the returned mappings by server or client preferred version.
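A sketch of how a caller might use the new method, assuming the meta.RESTMapper signature as it exists today, where RESTMappings takes a GroupKind (plus optional versions) and returns every known mapping:

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// preferredMapping asks for every mapping of batch Jobs and lets the caller
// pick one according to server- or client-preferred versions.
func preferredMapping(mapper meta.RESTMapper) (*meta.RESTMapping, error) {
	mappings, err := mapper.RESTMappings(schema.GroupKind{Group: "batch", Kind: "Job"})
	if err != nil {
		return nil, err
	}
	if len(mappings) == 0 {
		return nil, fmt.Errorf("no mappings for batch Jobs")
	}
	// Prioritization (by discovery doc or client preference) would happen here;
	// for the sketch, just take the first mapping.
	return mappings[0], nil
}
```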
Automatic merge from submit-queue
ListOptions: add test for ResourceVersion > 0 in List
ref: #28472
Done:
- Add a test for ResourceVersion > 0 in registry (cache) store List()
- Fix the docs.
Automatic merge from submit-queue
TLS bootstrap API group (alpha)
This PR only covers the new types and related client/storage code - the vast majority of the line count is codegen. The implementation differs slightly from the current proposal document based on discussions in the design thread (#20439). The controller logic and kubelet support mentioned in the proposal are forthcoming in separate requests.
I submit that #18762 ("Creating a new API group is really hard") is, if anything, understating it. I've tried to structure the commits to illustrate the process.
@mikedanese @erictune @smarterclayton @deads2k
```release-note-experimental
An alpha implementation of the TLS bootstrap API described in docs/proposals/kubelet-tls-bootstrap.md.
```
Automatic merge from submit-queue
cacher.go: remove NewCacher func
NewCacher is a wrapper of NewCacherFromConfig. NewCacher understands
how to create a key func from scopeStrategy. However, it is not the
responsibility of cacher. So we should remove this function, and
construct the config in its caller, which should understand scopeStrategy.
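A hedged sketch of the resulting pattern, with a deliberately simplified Config: the caller, which knows whether the resource is namespaced, builds the key func itself and passes it in, instead of the cacher deriving it from the scope strategy:

```go
// Config is a simplified stand-in for the cacher's config struct.
type Config struct {
	KeyFunc func(namespace, name string) string
	// ... underlying storage, versioner, cache capacity, etc.
}

// configFor is what a caller that understands its scope strategy would now
// construct before handing the config to NewCacherFromConfig.
func configFor(namespaced bool, prefix string) Config {
	if namespaced {
		return Config{KeyFunc: func(ns, name string) string { return prefix + "/" + ns + "/" + name }}
	}
	return Config{KeyFunc: func(_, name string) string { return prefix + "/" + name }}
}
```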
Automatic merge from submit-queue
Remove EncodeToStream(..., []unversioned.GroupVersion)
This was not being used. It is a signature change and is necessary for post-1.3 work on Templates and other objects that nest objects.
Extracted from #26044
Automatic merge from submit-queue
Expose GET and PATCH for status subresource
We can do this for other status subresources. I only updated node/status in this PR to unblock https://github.com/kubernetes/node-problem-detector/issues/9.
cc @Random-Liu @lavalamp
Automatic merge from submit-queue
validate third party resources
addresses validation portion of https://github.com/kubernetes/kubernetes/issues/22768
* ThirdPartyResource: validates name (3 segment DNS subdomain) and version names (single segment DNS label)
* ThirdPartyResourceData: validates objectmeta (name is validated as a DNS label)
* removes ability to use GenerateName with thirdpartyresources (kind and api group should not be randomized, in my opinion)
test improvements:
* updates resttest to clean up after create tests (so the same valid object can be used)
* updates resttest to take a name generator (in case "foo1" isn't a valid name for the object under test)
action required for alpha thirdpartyresource users:
* existing thirdpartyresource objects that do not match these validation rules will need to be removed/updated (after removing thirdpartyresourcedata objects stored under the disallowed versions, kind, or group names)
* existing thirdpartyresourcedata objects that do not match the name validation rule will not be able to be updated, but can be removed
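A rough sketch of the name rule for ThirdPartyResource (a kind plus a group, forming at least three dot-separated DNS labels); this is illustrative only, not the validation code from the PR:

```go
import (
	"fmt"
	"regexp"
	"strings"
)

var dnsLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateTPRName checks that a ThirdPartyResource name like "foo.example.com"
// has at least three segments and that every segment is a valid DNS label.
func validateTPRName(name string) error {
	segments := strings.Split(name, ".")
	if len(segments) < 3 {
		return fmt.Errorf("name %q must be <kind>.<group> with a multi-segment group", name)
	}
	for _, s := range segments {
		if !dnsLabel.MatchString(s) {
			return fmt.Errorf("segment %q is not a valid DNS label", s)
		}
	}
	return nil
}
```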
Automatic merge from submit-queue
Add pod condition PodScheduled to detect situation when scheduler tried to schedule a Pod, but failed
Set the `PodScheduled` condition to `ConditionFalse` in `scheduleOne()` if scheduling failed, and to `ConditionTrue` in the `/bind` subresource.
Ref #24404
@mml (as it seems to be related to "why pending" effort)
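A minimal sketch of the scheduler-side update, rendered with today's v1 condition types for readability; the actual status update call is left out:

```go
import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markUnschedulable records why scheduleOne() failed so that a Pending pod
// carries a PodScheduled=False condition instead of no explanation at all.
func markUnschedulable(pod *v1.Pod, reason string) {
	pod.Status.Conditions = append(pod.Status.Conditions, v1.PodCondition{
		Type:               v1.PodScheduled,
		Status:             v1.ConditionFalse,
		Reason:             "Unschedulable",
		Message:            reason,
		LastTransitionTime: metav1.Now(),
	})
}
```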
Automatic merge from submit-queue
Make ThirdPartyResource a root scoped object
ThirdPartyResource (the registration of a third party type) belongs at the cluster scope. It results in resource handlers installed in every namespace, and the same name in two namespaces collides (namespace is ignored when determining group/kind).
ThirdPartyResourceData (an actual instance of that type) is still namespace-scoped.
This PR moves ThirdPartyResource to be a root scope object. Someone previously using ThirdPartyResource definitions in alpha should be able to move them from namespace to root scope like this:
setup (run on 1.2):
```
kubectl create ns ns1
echo '{"kind":"ThirdPartyResource","apiVersion":"extensions/v1beta1","metadata":{"name":"foo.example.com"},"versions":[{"name":"v8"}]}' | kubectl create -f - --namespace=ns1
echo '{"kind":"Foo","apiVersion":"example.com/v8","metadata":{"name":"MyFoo"},"testkey":"testvalue"}' | kubectl create -f - --namespace=ns1
```
export:
```
kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
```
remove namespaced kind registrations (this shouldn't remove the data of that type, which is another possible issue):
```
kubectl delete -f tprs.yaml
```
... upgrade ...
re-register the custom types at the root scope:
```
kubectl create -f tprs.yaml
```
Additionally, pre-1.3 clients that expect to read/write ThirdPartyResource at a namespace scope will not be compatible with 1.3+ servers, and 1.3+ clients that expect to read/write ThirdPartyResource at a root scope will not be compatible with pre-1.3 servers.
The codec factory should support two distinct interfaces - negotiating
for a serializer with a client, vs reading or writing data to a storage
form (etcd, disk, etc). Make the EncodeForVersion and DecodeToVersion
methods only take Encoder and Decoder, and slight refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
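A hedged sketch of the resulting storage codec, rendered with today's apimachinery helpers: a protobuf encoder paired with the universal deserializer so that older JSON-encoded objects can still be read while new writes go out as protobuf. Signatures here follow current apimachinery, not the tree at the time of this change:

```go
import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
)

// storageCodec writes protobuf but can still read back JSON written by older
// servers, by combining a protobuf encoder with the universal deserializer.
func storageCodec(scheme *runtime.Scheme, storageVersion runtime.GroupVersioner) runtime.Codec {
	codecs := serializer.NewCodecFactory(scheme)
	encoder := codecs.EncoderForVersion(protobuf.NewSerializer(scheme, scheme), storageVersion)
	decoder := codecs.UniversalDeserializer()
	return runtime.NewCodec(encoder, decoder)
}
```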
Automatic merge from submit-queue
Petset controller
Took longer than I expected. Main parts of this pr are:
1. Identity generation based on petset spec (volumes are mapped per discussion in #18016)
2. Ensure that we create/delete pets in sequence
3. Ensuring that we create, wait for healthy, create; or delete, wait for terminationGrace, delete
4. Controller that watches apiserver and drives actual -> desired
PVCs are not deleted, yet.
Automatic merge from submit-queue
Move internal types of job from pkg/apis/extensions to pkg/apis/batch
This addresses the job part of #23216; this is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.
@lavalamp @erictune fyi
Add tests to watch behavior in both protocols (http and websocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
Automatic merge from submit-queue
Make etcd cache size configurable
Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.
I'm not sure what a sensible default is here. I'm still experimenting on my own clusters. 50 gives me a 270MB max footprint. 50K caused my apiserver to run out of memory as it exceeded >2GB. I believe that number is far too large for most people's use cases.
There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- 1.2 etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists on the node status.
This is provided as a fix for #23323
Automatic merge from submit-queue
Implement a streaming serializer for watch
Change watch over to streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
@wojtek-t @lavalamp
Automatic merge from submit-queue
Additional go vet fixes
Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
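For the first class of fix, a small before/after illustration of the "passes lock by value" warning; the type name is made up:

```go
import "sync"

type counter struct {
	mu sync.Mutex
	n  int
}

// Bad: copies counter, and with it the Mutex, so go vet reports
// "passes lock by value" and the copy's lock no longer guards the original.
func incByValue(c counter) { c.mu.Lock(); c.n++; c.mu.Unlock() }

// Good: pass a pointer so the original lock is used.
func incByPointer(c *counter) { c.mu.Lock(); c.n++; c.mu.Unlock() }
```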
Here is a list of changes along with an explanation of how they work:
1. Add a new string field called TargetSelector to the external version of
extensions Scale type (extensions/v1beta1.Scale). This is a serialized
version of either the map-based selector (in case of ReplicationControllers)
or the unversioned.LabelSelector struct (in case of Deployments and
ReplicaSets).
2. Change the selector field in the internal Scale type (extensions.Scale) to
unversioned.LabelSelector.
3. Add conversion functions to convert from two external selector fields to a
single internal selector field. The rules for conversion are as follows:
i. If the target resource that this scale targets supports LabelSelector
(Deployments and ReplicaSets), then serialize the LabelSelector and
store the string in the TargetSelector field in the external version
and leave the map-based Selector field as nil.
ii. If the target resource only supports a map-based selector
(ReplicationControllers), then still serialize that selector and
store the serialized string in the TargetSelector field. Also,
set the Selector map field in the external Scale type.
iii. When converting from external to internal version, parse the
TargetSelector string into LabelSelector struct if the string isn't
empty. If it is empty, then check if the Selector map is set and just
assign that map to the MatchLabels component of the LabelSelector.
iv. When converting from internal to external version, serialize the
LabelSelector and store it in the TargetSelector field. If only
the MatchLabel component is set, then also copy that value to
the Selector map field in the external version.
4. HPA now just converts the LabelSelector field to a Selector interface
type to list the pods.
5. Scale Get and Update etcd methods for Deployments and ReplicaSets now
return extensions.Scale instead of autoscaling.Scale.
6. Consequently, the SubresourceGroupVersion override and the "is autoscaling
enabled" check are now removed from pkg/master/master.go
7. Other small changes to labels package, fuzzer and LabelSelector
helpers to piece this all together.
8. Add unit tests to HPA targeting Deployments and ReplicaSets.
9. Add an e2e test to HPA targeting ReplicaSets.
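A hedged sketch of conversion rule (iii) from the list above, rendered with today's metav1 helpers (the original used the then-internal unversioned package); the function name is illustrative:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// toInternalSelector follows rule (iii): prefer the serialized TargetSelector
// string; if it is empty, fall back to the map-based Selector.
func toInternalSelector(selectorMap map[string]string, targetSelector string) (*metav1.LabelSelector, error) {
	if targetSelector != "" {
		return metav1.ParseToLabelSelector(targetSelector)
	}
	if len(selectorMap) > 0 {
		return &metav1.LabelSelector{MatchLabels: selectorMap}, nil
	}
	return nil, nil
}
```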
Also register replicationcontrollers/scale subresource. Along with
registering the resource, also specify the cross-group override for the
subresource since Scale belongs to autoscaling/v1 but
ReplicationController belongs to api/v1.
In podSecurityPolicy:
1. Rename .seLinuxContext to .seLinux
2. Rename .seLinux.type to .seLinux.rule
3. Rename .runAsUser.type to .runAsUser.rule
4. Rename .seLinux.SELinuxOptions
1,2,3 as suggested by thockin in #22159.
I added 3 for consistency with 2.
Added selector generation to Job's
strategy.Validate, right before validation.
Can't do in defaulting since UID is not known.
Added a validation to Job to ensure that the generated
labels and selector are correct when generation was requested.
This happens right after generation, but validation is in a better
place to return an error.
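A rough sketch of the generation step described here, producing a unique label from the job's UID and wiring it into both the selector and the pod template; the function and label names are illustrative, and today's batch/v1 types are used for readability:

```go
import (
	batch "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// generateSelector fills in a UID-based selector during strategy.Validate();
// it cannot run at defaulting time because the UID is not yet known there.
func generateSelector(job *batch.Job) {
	if job.Spec.Selector != nil {
		return // a selector was supplied manually
	}
	uidLabel := map[string]string{"controller-uid": string(job.UID)}
	job.Spec.Selector = &metav1.LabelSelector{MatchLabels: uidLabel}
	if job.Spec.Template.Labels == nil {
		job.Spec.Template.Labels = map[string]string{}
	}
	for k, v := range uidLabel {
		job.Spec.Template.Labels[k] = v
	}
}
```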
Adds "manualSelector" field to batch/v1 Job to control selector generation.
Adds same field to extensions/__internal. Conversion between those two
is automatic.
Adds "autoSelector" field to extensions/v1beta1 Job. Used for storing batch/v1 Jobs
- Default for v1 is to do generation.
- Default for v1beta1 is to not do it.
- In both cases, unset == false == do the default thing.
Release notes:
Added batch/v1 group, which contains just Job, and which is the next
version of extensions/v1beta1 Job.
The changes from the previous version are:
- Users no longer need to ensure labels on their pod template are unique to the enclosing
job (but may add labels as needed for categorization).
- In v1beta1, job.spec.selector was defaulted from pod labels, with the user responsible for uniqueness.
In v1, a unique label is generated and added to the pod template, and used as the selector (other
labels added by user stay on pod template, but need not be used by selector).
- a new field called "manualSelector" controls whether the new behavior is used,
versus the more error-prone but more flexible "manual" (not generated) selector. Most users
will not need to use this field and should leave it unset.
Users who are creating extensions.Job go objects and then posting them using the go client
will see a change in the default behavior. They need to either stop providing a selector (relying on
selector generation) or else specify "spec.manualSelector" until they are ready to do the former.
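A short example of a client that wants to keep providing its own selector after the default flips, per the note above; it simply sets manualSelector explicitly (helper and label values are illustrative):

```go
import (
	batch "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// manualSelectorJob opts out of selector generation: the caller stays
// responsible for keeping the pod-template labels unique, as in v1beta1.
func manualSelectorJob(name string, podLabels map[string]string) *batch.Job {
	job := &batch.Job{ObjectMeta: metav1.ObjectMeta{Name: name}}
	job.Spec.ManualSelector = boolPtr(true)
	job.Spec.Selector = &metav1.LabelSelector{MatchLabels: podLabels}
	job.Spec.Template.Labels = podLabels
	return job
}
```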
This should allow users to update DaemonSet pods by manually deleting
the corresponding running pods. Users can use this mechanism for
DaemonSet updates until we implement Deployment style rolling update
for DaemonSet.
Leaving the type fields as comments for reference and as a reminder, but
deleting the conversion, defaulting and validation code. They can
always be brought back from the previous PR once the types are
introduced. Because builds break without them anyway, that serves as a
reminder, so there is no need to leave them commented out.
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new api groups.
Also Move LabelSelectorAsSelector utility from pkg/apis/extensions/ to pkg/api/unversioned/
Also its test.
Also LabelSelectorOp* constants.
Also the pkg/apis/extensions/validation functions ValidateLabelSelectorRequirement and
ValidateLabelSelector move to pkg/api/unversioned
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
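For reference, a small usage sketch of the moved utility, converting the struct-based selector into a labels.Selector for listing; shown with today's metav1 location (the pkg/api/unversioned home mentioned above was its home at the time):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// podSelector converts a LabelSelector into a labels.Selector that listers
// can use; LabelSelectorAsSelector is the helper moved in this change.
func podSelector() (labels.Selector, error) {
	ls := &metav1.LabelSelector{MatchLabels: map[string]string{"job-name": "pi"}}
	return metav1.LabelSelectorAsSelector(ls)
}
```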
This commit adds support for paused deployments so a user can choose
when to run a deployment that exists in the system instead of having
the deployment controller automatically reconciling it after every
change or sync interval.
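A minimal sketch of how a controller sync loop could honor the new field, rendered with today's apps/v1 types; the surrounding reconciliation is only stubbed out:

```go
import (
	apps "k8s.io/api/apps/v1"
)

// syncDeployment is a schematic sync: a paused deployment is left alone
// instead of being reconciled on every change or sync interval.
func syncDeployment(d *apps.Deployment) error {
	if d.Spec.Paused {
		// Skip creating/rolling replica sets until the user resumes it.
		return nil
	}
	return rollout(d)
}

func rollout(d *apps.Deployment) error {
	// ... normal reconciliation would go here ...
	return nil
}
```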
Components can write services during startup, which results in the IP
allocator map being updated. Since core controllers *must* succeed for
the masters to start, we should retry a few times in order to pass.
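A hedged sketch of the retry described above, using wait.PollImmediate; the intervals and the injected service-creation call are placeholders:

```go
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// createCoreService retries briefly: another component may be mutating the
// service IP allocator map at the same moment during startup, and the master
// must not fail permanently on a transient conflict.
func createCoreService(create func() error) error {
	return wait.PollImmediate(500*time.Millisecond, 10*time.Second, func() (bool, error) {
		if err := create(); err != nil {
			// Treat the error as retryable; give the allocator time to settle.
			return false, nil
		}
		return true, nil
	})
}
```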
Most of the logic related to type and kind retrieval belongs in the
codec, not in the various classes. Make it explicit that the codec
should handle these details.
Factory now returns a universal Decoder and a JSONEncoder to assist code
in kubectl that needs to specifically deal with JSON serialization
(apply, merge, patch, edit, jsonpath). Add comments to indicate the
serialization is explicit in those places. These methods decode to
internal and encode to the preferred API version as previous, although
in the future they may be changed.
React to removing Codec from version interfaces and RESTMapping by
passing it in to all the places that it is needed.