Automatic merge from submit-queue
Add GUBERNATOR flag which produces g8r link for node e2e tests
Running `make tests-e2e-node REMOTE=true GUBERNATOR=true` outputs a URL for viewing the test results on Gubernator. ~~Should work after my PR for Gubernator is merged.~~
@timstclair
Automatic merge from submit-queue
Be more explicit about the NoDiskConflicts policy and the volume types it applies to
Partially clarifies #29670.
@kubernetes/sig-scheduling
Automatic merge from submit-queue
Incorrect branch name for git push command in development.md
The branch name is "my-feature":
### Create a branch and make changes
```sh
git checkout -b my-feature
```
Automatic merge from submit-queue
Quobyte Volume plugin
@quofelix and I developed a volume plugin for [Quobyte](http://www.quobyte.com), a software-defined storage solution. This PR allows Kubernetes users to mount a Quobyte volume inside their containers.
Here is some further information about [Quobyte and Storage for containers](http://www.quobyte.com/containers).
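To give an idea of the user-facing side, here is a minimal sketch of a pod mounting a Quobyte volume via the new `quobyte` volume source; the registry address and volume name are placeholders:
```
apiVersion: v1
kind: Pod
metadata:
  name: quobyte-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: quobyte-vol
      mountPath: /mnt/quobyte
  volumes:
  - name: quobyte-vol
    quobyte:
      # placeholder address (host:port) of the Quobyte registry
      registry: "registry.example.com:7861"
      # name of an existing Quobyte volume to mount
      volume: testVolume
      readOnly: false
```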
Automatic merge from submit-queue
Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass
Implemented according to PR #26908. There are several patches in this PR, including one huge code regen.
* Please review the API changes (the first patch) carefully; sometimes I don't know what the code is doing...
* `PV.Spec.Class` and `PVC.Spec.Class` are not implemented; use the annotation `volume.alpha.kubernetes.io/storage-class` instead (see the example claim after the parameter lists below).
* See the e2e and integration test changes - Kubernetes won't provision a thing without explicit configuration of at least one `StorageClass` instance!
* Multiple provisioning volume plugins can coexist, e.g. HostPath and AWS EBS. This is important for the Gluster and RBD provisioners in #25026
* Contradicting the proposal, `claim.Selector` and the `volume.alpha.kubernetes.io/storage-class` annotation are **not** mutually exclusive. They're both used for matching existing PVs. However, only `volume.alpha.kubernetes.io/storage-class` is used for provisioning; configuring provisioning with `Selector` is left for the (near) future.
* Documentation is missing. Can someone please write some while I am out?
For now, the AWS volume plugin accepts classes with these parameters:
```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/aws-ebs
provisionerParameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: 10
```
* Parameters are case-insensitive.
* `type`: `io1`, `gp2`, `sc1`, `st1`. See the AWS docs for details.
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the volume's IOPS and caps it at 20,000 IOPS (the maximum supported by AWS, see the AWS docs).
* Of course, the plugin will use defaults when a parameter is omitted in a `StorageClass` instance (`gp2` in the same zone as in 1.3).
GCE:
```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/gce-pd
provisionerParameters:
  type: pd-standard
  zone: us-central1-a
```
* `type`: `pd-standard` or `pd-ssd`
* `zone`: GCE zone
* Of course, the plugin will use defaults when a parameter is omitted in a `StorageClass` instance (SSD in the same zone as in 1.3?).
No OpenStack/Cinder yet
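For completeness, here is a sketch of how a user would request provisioning through the `slow` class above using the alpha annotation; the claim name and requested size are placeholders, and anything beyond the annotation itself may still change while this is beta:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    # ask for a dynamically provisioned volume matching the "slow" class
    volume.alpha.kubernetes.io/storage-class: slow
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```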
@kubernetes/sig-storage
Automatic merge from submit-queue
kubelet eviction on inode exhaustion
Add support for the kubelet to monitor for inode exhaustion of either the image filesystem or the root filesystem and, in response, attempt to reclaim node-level resources and/or evict pods.
Automatic merge from submit-queue
Fix deadlock possibility in federated informer
On cluster add, the subinformer takes its lock and tries to add the cluster to the federated informer. When someone checks whether everything is in sync, the federated informer is locked and then the subinformer is inspected, which apparently requires its lock. With really bad timing this can create a deadlock.
This PR ensures that at most one lock is ever held at a time in the federated informer.
cc: @quinton-hoole @kubernetes/sig-cluster-federation
Fixes: #30855
Automatic merge from submit-queue
Federated namespace controller - stop reconciliation if not in sync
cc: @quinton-hoole @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Adding e2e test for federation replicasets
It's a basic test which verifies that we can create and delete replicasets. Will enhance it when we write the replicaset controller.
cc @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Add annotations to the PodSecurityPolicy Provider interface
@pweil- is this what you were thinking in terms of API changes? I'd really like to avoid functions with more than two return values, but couldn't think of a cleaner approach in this case.
Automatic merge from submit-queue
Allow setting permission mode bits on secrets, configmaps and downwardAPI files
cc @thockin @pmorie
Here is the first round of implementing https://github.com/kubernetes/kubernetes/pull/28733.
I made two commits: one with the actual change and the other with the auto-generated code. I think it's easier to review this way, but let me know if you prefer it some other way.
I haven't written any tests yet; I wanted to get a first round of feedback and not write them until this (and the API) are closer to "LGTM" :)
There are some things:
* I'm not sure where to do the "AND 0777". I'll try to look more closely at the code base, but suggestions are always welcome :)
* The write permission for group and others is not shown when you do an `ls -l` in the running container. It does work with write permissions for the owner. Debugging suggests that something happens after the mode is correctly set on creation. Will look closer.
* The default permissions (when the new fields are not specified) are the same as on Kubernetes v1.3.
* I do realize there are conflicts with master, but I think this is good enough to have a look. The conflicts are in the auto-generated code, so the actual code is the same (and it takes ~30 minutes to regenerate it here).
* I didn't generate the docs (`generated-docs` and `generated-swagger-docs` from `hack/update-all.sh`) because my machine runs out of memory. That's why they aren't in this first PR; I'll try to investigate why that happens.
Other than that, this works fine here with some quick scripts I wrote to create a secret, a configmap, a downwardAPI volume and a pod, and then check the file permissions. I tested "defaultMode" and "mode" for all of them. But of course, I will write tests once this is looking fine :)
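For anyone who wants to try it, here is a rough sketch of how the new fields are intended to be used on a secret volume (the secret name, key and paths are placeholders):
```
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      # assumes a Secret named "mysecret" with a "password" key already exists
      secretName: mysecret
      # applied to every projected file unless overridden per item
      defaultMode: 0400
      items:
      - key: password
        path: password
        # per-file override of defaultMode
        mode: 0440
```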
Thanks a lot again!
Rodrigo
Automatic merge from submit-queue
Continue on #30774: Change podNamespacer API
Continues #30774; credit to @wojtek-t. Ref #30759.
I just fixed a test and converted IsActivePod to operate on *Pod.
Automatic merge from submit-queue
Add tag [benchmark] to node-e2e-test where performance limits are not verified
This PR adds a new tag "[benchmark]" to the density and resource-usage node e2e tests. The performance limits are not verified at the end of benchmark tests.
Automatic merge from submit-queue
Expose flags for new NodeEviction logic in NodeController
Fixes #28832
Last PR from the NodeController NodeEviction logic series.
cc @davidopp @lavalamp @mml
Automatic merge from submit-queue
Support for no object-native cluster name in federated libs
As the deadlines are near and #28921 is still in review (and may in the end not entirely suit the libs' needs, e.g. ClusterName being available only in the federated API server), it may be better not to depend on it in the libs. So this change:
- makes the federated informer return a pair of object + cluster name where relevant
- removes the preprocessing handler, which caused locking/race troubles anyway
The impact on controllers that are still in review is minimal (just a couple of lines of code).
cc: @quinton-hoole @wojtek-t @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Remove incorrect docs about unset fields in NetworkPolicyPeer
While hammering out the semantics of not-present vs present-but-empty, we appear to have added incorrect clarifications to NetworkPolicyPeer, where the semantics of PodSelector not being present are supposed to be "do what NamespaceSelector says", not "select no pods", and likewise for NamespaceSelector not being present.
I think it's clearest if we just don't say anything, since we already said "Exactly one of the following must be specified" above. Alternatively we could be redundant and say "(If not provided, then NamespaceSelector must be set.)" or something like that.
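For context, here is a sketch of a peer that sets only `namespaceSelector` (the labels are made up); the point is that the omitted `podSelector` carries no meaning beyond the "exactly one of the following" rule already stated:
```
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend-namespaces
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    # exactly one of podSelector or namespaceSelector is set per peer;
    # the other is simply omitted rather than given "empty" semantics
    - namespaceSelector:
        matchLabels:
          project: frontend
```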
@caseydavenport @thockin
Automatic merge from submit-queue
Implement federation API server authentication e2e tests.
This PR depends on #30397. Please review only the last commit here.
Fixes: Issue #28602.
cc @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Allow a flag that forces kubelet to have a valid kubeconfig
`--require-kubeconfig` forces the kubelet to use the kubeconfig for all
API server communication, and to exit (with an error) if it cannot be read. This allows cluster lifecycle tooling to loop while waiting for the config to become available.
Fixes #30515
A follow-up PR will handle the discovered issue where the DefaultCluster rules applied to kubeconfig allow a malicious party who can bind to localhost:8080 to take advantage of an admin misconfiguration.
@lukemarsden @mikedanese
```release-note
The Kubelet now supports the `--require-kubeconfig` option, which reads all client config from the provided `--kubeconfig` file and causes the Kubelet to exit with error code 1 on error. It also forces the Kubelet to use the server URL from the kubeconfig file rather than the `--api-servers` flag. Without this flag set, a failure to read the kubeconfig file only results in a warning message.
In a future release, the value of this flag will be defaulted to `true`.
```
Automatic merge from submit-queue
Node Conformance Test: Statically link etcd
For #30122, #30174.
This PR is part of our roadmap to package node conformance test.
It statically links etcd into the node e2e framework. In the future, all e2e services will be linked in and will print logs to the same log file, `services.log`.
@dchen1107 @vishh
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0
Older versions of the kubelet didn't know how to reconcile pod.Status, so the nodecontroller would mark pods NotReady on netsplit, and if the partition recovered in < 5m, the pods would never get marked Ready, resulting in NotReady endpoints indefinitely (until kubelet restart, pod recreation, etc.).