Automatic merge from submit-queue
Update run flags to point to generators docs
@janetkuo you requested this in https://github.com/kubernetes/kubernetes/pull/32484#issuecomment-246840562, so I'm opening this PR, but like you I already don't like the length of the descriptions. The other problem is that there are no clean docs for a user to figure out what the generators are. I've stumbled upon this several times and always found myself looking into the code :/ How about adding a new flag/subcommand that gives more information about generators? We could move all the `--restart` and `--generator` details into per-generator info and present only general information at the top level.
Automatic merge from submit-queue
Fix edge case in qos evaluation
If a pod has containers C1 and C2, where sum(C1.requests, C2.requests) equals C1.limits, the code was reporting that the pod had "Guaranteed" QoS when it should have been "Burstable".
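For reference, a minimal sketch of the intended classification rule (simplified to plain integer quantities and ignoring the defaulting of requests from limits, so not the real Kubernetes QoS code): a pod is Guaranteed only when every container's limits equal its requests for every resource; aggregate sums of requests play no role.

```go
package main

import "fmt"

// Resources maps a resource name (e.g. "cpu", "memory") to a simple amount.
type Resources map[string]int64

type Container struct {
	Requests Resources
	Limits   Resources
}

// qosClass sketches the intended rule: Guaranteed only if every container has
// limits equal to its requests for every resource; Burstable if any request
// is set; BestEffort otherwise.
func qosClass(containers []Container) string {
	guaranteed := true
	requestsSet := false
	for _, c := range containers {
		if len(c.Requests) > 0 {
			requestsSet = true
		}
		if len(c.Limits) == 0 || len(c.Limits) != len(c.Requests) {
			guaranteed = false
			continue
		}
		for name, req := range c.Requests {
			if lim, ok := c.Limits[name]; !ok || lim != req {
				guaranteed = false
			}
		}
	}
	switch {
	case guaranteed && requestsSet:
		return "Guaranteed"
	case requestsSet:
		return "Burstable"
	default:
		return "BestEffort"
	}
}

func main() {
	// C1 requests 100m CPU with a 200m limit; C2 requests 100m with no limit.
	// sum(requests) equals C1's limit, but the pod is still Burstable.
	c1 := Container{Requests: Resources{"cpu": 100}, Limits: Resources{"cpu": 200}}
	c2 := Container{Requests: Resources{"cpu": 100}}
	fmt.Println(qosClass([]Container{c1, c2})) // Burstable
}
```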
/cc @vishh @dchen1107
Automatic merge from submit-queue
Add unit test for bad ReclaimPolicy and valid ReclaimPolicy in /pkg/api/validation
unit tests for validation.go regarding PersistentVolumeReclaimPolicy (bad value and good value)
see PR: #30304
Automatic merge from submit-queue
kubeadm join: Added support for config file.
As more behavior (#34719, #34807, fix for #33641) is added to `kubeadm join`, this will eventually be very much needed. Makes sense to go in sooner rather than later.
Also references #34501 and #34884.
/cc @luxas @mikedanese
Automatic merge from submit-queue
Corrected title of example "Java Web Application with Tomcat and Side…
**What this PR does / why we need it**: Corrects spelling error in title of example "Java Web Application with Tomcat and Sidercar Container".
**Which issue this PR fixes**: NONE
**Special notes for your reviewer**: NONE
**Release note**: NONE
Automatic merge from submit-queue
Upgrade addon-manager with kubectl apply
The first step of #33698.
Use `kubectl apply` to replace addon-manager's previous logic.
The most important issue this PR is targeting is the upgrade from 1.4 to 1.5. Procedure as below:
1. Precondition: After the master is upgraded, new addon-manager starts and all the old resources on nodes are running normally.
2. Annotate the old ReplicationController resources with `kubectl.kubernetes.io/last-applied-configuration=""`.
3. Call `kubectl apply --prune=false` on the addons folder to create the new addons, including the new Deployments.
4. Wait one minute for the new addons to be spun up.
5. Enter the periodical loop of `kubectl apply --prune=true`. The old RCs will be pruned at the first call.
Procedure of a normal startup:
1. Addon-manager starts and no addon resources are running.
2. Annotate nothing.
3. Call `kubectl apply --prune=false` to create all new addons.
4. The remaining steps need no explanation.
Remaining issues:
- Need to add a `--type` flag to `kubectl apply --prune`, as mentioned [here](https://github.com/kubernetes/kubernetes/pull/33075#discussion_r80814070).
- This addon manager is not working properly with the current heapster Deployment, which runs [addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer) in the same pod and changes the resource limit configuration through the apiserver. `kubectl apply` fights with the addon-resizer. Maybe we should remove the initial resource limit field in the configuration file for this specific Deployment, as we removed the replica count.
@mikedanese @thockin @bprashanth
---
Below are some reasoning details that may need clarification; feel free to **OMIT** them, as they are verbose:
- For the upgrade, the old RCs will not fight with the new Deployments during the overlap period even if they use the same labels in their templates:
  - A Deployment will not recognize the old pods because it needs to match an additional "pod-template-hash" label.
  - A ReplicationController will not manage the new pods (created by the Deployment) because of the [`controllerRef`](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/controller-ref.md) feature.
- As we are moving all addons to Deployments, all old RCs will be removed. Attaching the empty annotation to RCs is only so that `kubectl apply --prune` recognizes them; the content does not matter.
- We might need to annotate other resource types as well if we plan to upgrade them in the 1.5 release:
  - They don't need this fake annotation if they keep the same name: `kubectl apply` can recognize them by name/type/namespace.
  - Otherwise, attaching empty annotations to them will still work. Since the plan is to use a label selector for annotating, some innocent old resources may also get empty annotations; they fall into the two cases below:
    - Resources that need to be bumped to a newer version (mainly due to some significant update, such as changing disallowed fields, that cannot be handled by `kubectl apply`'s update) are fine with this fake annotation, as the old resources will be deleted and new resources created. The content of the annotation does not matter.
    - Resources that need to stay under the management of `kubectl apply` are also fine, since `kubectl apply` will [generate a 3-way merge patch](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strategicpatch/patch.go#L1202-L1226) and the empty annotation is harmless (see the sketch below).
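To illustrate why an empty `last-applied-configuration` is harmless here, a conceptual sketch of the 3-way merge idea with plain string maps (not the real `strategicpatch` code): deletions are only computed for keys present in the last-applied config but absent from the new config, so an empty last-applied config never causes fields to be removed.

```go
package main

import "fmt"

// threeWayMerge is a conceptual sketch of what `kubectl apply` does with the
// last-applied annotation: fields present in lastApplied but missing from the
// new config are deleted from the live object; fields in the new config are
// set. Real apply uses a strategic merge patch; this is just flat maps.
func threeWayMerge(lastApplied, newConfig, live map[string]string) map[string]string {
	result := map[string]string{}
	for k, v := range live {
		result[k] = v
	}
	// Delete keys the user previously applied but has now removed.
	for k := range lastApplied {
		if _, stillWanted := newConfig[k]; !stillWanted {
			delete(result, k)
		}
	}
	// Set everything from the new config.
	for k, v := range newConfig {
		result[k] = v
	}
	return result
}

func main() {
	live := map[string]string{"replicas": "3", "image": "heapster:v1.1"}
	newConfig := map[string]string{"image": "heapster:v1.2"}

	// With an empty last-applied annotation, nothing is deleted from the
	// live object; only the fields in the new config are updated.
	fmt.Println(threeWayMerge(map[string]string{}, newConfig, live))
	// e.g. map[image:heapster:v1.2 replicas:3]
}
```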
Automatic merge from submit-queue
default serializer
Everyone uses the same serializer. Set it as the default, but still allow someone to take control if they want.
Found while trying to use genericapiserver for composition.
Automatic merge from submit-queue
Use same SSH tunnel as kubelet
Provides a secure workaround for #11816 by having kube-apiserver use the same SSH tunnel as the kubelet it is trying to connect to. Use in conjunction with iptables or kubelet `--address=127.0.0.1`. The latter will break heapster.
Will fall back to random behavior if the tunnel cannot be found.
Automatic merge from submit-queue
Fix typos and englishify plugin/pkg
**What this PR does / why we need it**: Just typos
**Which issue this PR fixes**: `None`
**Special notes for your reviewer**: Just typos
**Release note**: `NONE`
Automatic merge from submit-queue
Fixed downloading of flannel 0.6.x releases in ubuntu installer, 0.5.x works as well
**What this PR does / why we need it**:
This PR fixes compatibility of the Ubuntu installer with flannel releases 0.6.0 and 0.6.1, where the download URL changed.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Cleanup the commented code for overriding flags with viper. For now,…
Minor cleanup of the viper configuration logic; moves the commented code into a function of its own. We can decide whether or not to overwrite flag values at a later time...
Automatic merge from submit-queue
fix sed command run failed on mac os
The bash command `sed -i ...` fails on macOS; it should be `sed -i.back ..`
Automatic merge from submit-queue
Add global timeout flag
**Release note**:
```release-note
Add a new global option "--request-timeout" to the `kubectl` client
```
UPSTREAM: https://github.com/kubernetes/client-go/pull/10
This patch adds a global timeout flag (viewable with `kubectl -h`) with
a default value of `0s` (meaning no timeout).
The timeout value is added to the default http client, so that zero
values and default behavior are enforced by the client.
Adding a global timeout ensures that user-made scripts won't hang for an
indefinite amount of time while performing remote calls (right now, remote
calls are retried up to 10 times when an attempt fails, but there is no
option to set a timeout to prevent any of those 10 attempts from hanging
indefinitely).
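A minimal sketch of the underlying behavior with the standard library's `net/http` client (the wiring below is illustrative, not the actual client-go code): a zero `Timeout` means no timeout, so the flag's `0s` default preserves today's behavior, while any positive value bounds the whole request.

```go
package main

import (
	"flag"
	"fmt"
	"net/http"
)

func main() {
	// Illustrative flag mirroring the new kubectl --request-timeout option.
	timeout := flag.Duration("request-timeout", 0, "maximum time per request; 0 means no timeout")
	flag.Parse()

	// http.Client treats a zero Timeout as "no timeout", so the default
	// value of 0s leaves current behavior unchanged.
	client := &http.Client{Timeout: *timeout}

	resp, err := client.Get("https://example.com/healthz")
	if err != nil {
		// With a very small timeout (e.g. 1ms) this typically fails with
		// "Client.Timeout exceeded while awaiting headers".
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```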
**Example**
```
$ kubectl get pods   # no timeout flag set - defaults to 0s (which means no timeout)
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-h7etw   1/1       Running   1          2h
router-1-uv0f9            1/1       Running   1          2h
$ kubectl get pods --request-timeout=0   # zero means no timeout
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-h7etw   1/1       Running   1          2h
router-1-uv0f9            1/1       Running   1          2h
$ kubectl get pods --request-timeout=1ms
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
Automatic merge from submit-queue
Pass whole PVC to provisioner plugins
The Gluster provisioner is interested in the namespace of PVCs that are being provisioned, and I don't want to add it as a new field in `volume.VolumeOptions` - it would end up containing almost the whole PVC.
Let's rework `VolumeOptions` and pass a direct reference to the PVC there instead of a handful of "interesting" fields, and let the provisioner pick the information it is interested in.
There was a lot of refactoring in the volume plugins to apply this change (too many plugins), however the logic is simple and it's the same in all plugins.
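Roughly, the shape of the change (field names simplified here, not the exact `volume.VolumeOptions` definition): instead of copying individual PVC fields into the options struct, the provisioner gets a reference to the claim itself and reads whatever it needs, such as the namespace.

```go
// Simplified sketch of the idea; not the exact Kubernetes types.
package volume

// Before: selected PVC fields were copied into the options struct, and every
// new piece of information (like the namespace) meant another field.
type volumeOptionsBefore struct {
	Capacity    string
	AccessModes []string
	PVCName     string
}

// Claim stands in for api.PersistentVolumeClaim in this sketch.
type Claim struct {
	Name      string
	Namespace string
	// ... spec with capacity, access modes, annotations, etc.
}

// After: the whole claim is passed, so a provisioner such as the Gluster
// plugin can read the namespace (or anything else) directly.
type volumeOptionsAfter struct {
	PVC *Claim
}

func provision(opts volumeOptionsAfter) string {
	// The provisioner picks the information it is interested in.
	return opts.PVC.Namespace + "/" + opts.PVC.Name
}
```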
@rootfs @humblec
Automatic merge from submit-queue
honor SAR verb
Verbs on non-resource requests were dropped. This results in always being denied by all the authorizers I know of, so there is no unintended exposure, but it's still ugly. We should probably pick.
@liggitt I would have expected the kubelet work to get stuck on this.
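A rough sketch of the kind of fix described, using simplified stand-in types rather than the actual apiserver code: the verb from the non-resource part of the request has to be carried into the attributes handed to the authorizer rather than being dropped.

```go
package main

import "fmt"

// Simplified stand-ins for the SubjectAccessReview / authorizer types.
type NonResourceAttributes struct {
	Path string
	Verb string
}

type Attributes struct {
	User            string
	Verb            string
	Path            string
	ResourceRequest bool
}

// buildAttributes shows the intent: the verb must be copied over, otherwise
// verb-matching authorizers see an empty verb and deny everything.
func buildAttributes(user string, nra NonResourceAttributes) Attributes {
	return Attributes{
		User:            user,
		Verb:            nra.Verb, // previously dropped, leaving Verb empty
		Path:            nra.Path,
		ResourceRequest: false,
	}
}

func main() {
	attrs := buildAttributes("system:node:node-1", NonResourceAttributes{Path: "/healthz", Verb: "get"})
	fmt.Printf("%+v\n", attrs)
}
```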
Automatic merge from submit-queue
kubeadm join: wait for API endpoints
**What this PR does / why we need it**: enhance kubeadm to allow for parallel provisioning of API endpoints and slave nodes, continued from https://github.com/kubernetes/kubernetes/pull/33543
**Fixes**: https://github.com/kubernetes/kubernetes/issues/33542
**Special notes for your reviewer**:
* Introduces a concurrent retry mechanism for bootstrapping with a single API endpoint during `kubeadm join` (this was left out in https://github.com/kubernetes/kubernetes/pull/33543 so that it can be implemented in a separate PR). The polling of the discovery service API itself is yet to come.
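A sketch of the general pattern (illustrative only, not the actual kubeadm code): try each known API endpoint concurrently, retry each one with a delay until it answers, and return as soon as the first endpoint succeeds.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForAnyEndpoint polls every endpoint concurrently and returns the first
// one that responds, which lets nodes be provisioned in parallel with the
// API endpoints they will eventually join.
func waitForAnyEndpoint(endpoints []string, retryInterval, timeout time.Duration) (string, error) {
	found := make(chan string, len(endpoints))
	deadline := time.After(timeout)

	for _, ep := range endpoints {
		go func(ep string) {
			client := &http.Client{Timeout: 5 * time.Second}
			for {
				resp, err := client.Get(ep + "/healthz")
				if err == nil {
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						found <- ep
						return
					}
				}
				time.Sleep(retryInterval)
			}
		}(ep)
	}

	select {
	case ep := <-found:
		return ep, nil
	case <-deadline:
		return "", errors.New("no API endpoint became available in time")
	}
}

func main() {
	ep, err := waitForAnyEndpoint([]string{"https://10.0.0.1:6443"}, 5*time.Second, 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using endpoint:", ep)
}
```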
@errordeveloper @pires
Automatic merge from submit-queue
Increase buffer sizes in cacher for watchers interested in all/many o…
Should increase throughput of cacher in large clusters.
Automatic merge from submit-queue
Add support for admission controller based on namespace node selectors.
This work upstreams OpenShift's project-node-selector-based admission controller.
Fixes https://github.com/kubernetes/kubernetes/issues/17151
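The general idea, sketched with plain maps rather than the real admission plugin API (the merge and conflict rules here are assumptions for illustration): on pod creation, the namespace's configured node selector is merged into the pod's `nodeSelector`, and pods that explicitly conflict with the namespace selector are rejected.

```go
package main

import "fmt"

// admitPodNodeSelector merges the node selector configured on the pod's
// namespace into the pod's own nodeSelector. If the pod already sets a
// conflicting value for the same key, admission is rejected.
func admitPodNodeSelector(namespaceSelector, podSelector map[string]string) (map[string]string, error) {
	merged := map[string]string{}
	for k, v := range podSelector {
		merged[k] = v
	}
	for k, v := range namespaceSelector {
		if existing, ok := merged[k]; ok && existing != v {
			return nil, fmt.Errorf("pod node selector %s=%s conflicts with namespace node selector %s=%s", k, existing, k, v)
		}
		merged[k] = v
	}
	return merged, nil
}

func main() {
	nsSelector := map[string]string{"region": "east"}
	podSelector := map[string]string{"disktype": "ssd"}

	merged, err := admitPodNodeSelector(nsSelector, podSelector)
	if err != nil {
		fmt.Println("denied:", err)
		return
	}
	fmt.Println("admitted with nodeSelector:", merged)
	// admitted with nodeSelector: map[disktype:ssd region:east]
}
```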
Automatic merge from submit-queue
Add 'kubectl set resources'
Add "kubectl set resources" for easier updating container memory/cpu limits/requests (for pods or resources with pod templates).
**Usage**
`kubectl set resources (-f FILENAME | TYPE NAME) ([--limits=LIMITS & --requests=REQUESTS])`
**Examples**
Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi"
`kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi`
Set the limits and requests for all containers in nginx
`kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi`
Print the result (in yaml format) of updating nginx container limits locally, without hitting the server
`kubectl set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml`
Remove limits on containers in nginx
`kubectl set resources deployment nginx --limits=cpu=0,memory=0`
Ref: https://github.com/kubernetes/kubernetes/issues/21648
EDIT: removed the '--remove' flag example
Automatic merge from submit-queue
Support trust id as a scope in the OpenStack authentication logic
This patch allows the use of Kubernetes with Keystone trust delegation to avoid passing user credentials in the clear inside the config file: a specific user with delegated rights can be created and used instead.
Automatic merge from submit-queue
kubeadm: fix preflight checks
This PR fixes a couple of issues caused by some bad rebases:
* When a pre-flight check returned errors, `kubeadm` would exit with error code `1` instead of `2` as the original pre-flight PR meant. This would also cause the output of `kubeadm` to include some stuff that was not supposed to be there.
* Duplicated `k8s.io/kubernetes/cmd/kubeadm/app/util` import.
I also took the liberty of doing some output clean-up based on the input from the original pre-flight PR.
/cc @dmmcquay @dgoodwin @luxas