Search and replace for references to moved examples
Reverted find-and-replace paths on auto-generated docs
Reverting changes to changelog
Fix bugs in test-cmd.sh
Fixed path in examples README
Ran update-all successfully
Updated verify-flags exceptions to include renamed files
Automatic merge from submit-queue
Kubelet Volume Manager Wait For Attach Detach Controller and Backoff on Error
* Closes https://github.com/kubernetes/kubernetes/issues/27483
* Modified Attach/Detach controller to report `Node.Status.AttachedVolumes` on successful attach (unique volume name along with device path).
* Modified the Kubelet volume manager to wait for the Attach/Detach controller to report success before proceeding with attach.
* Closes https://github.com/kubernetes/kubernetes/issues/27492
* Implemented an exponential backoff mechanism for the volume manager and attach/detach controller to prevent operations (attach/detach/mount/unmount/wait for controller attach/etc) from executing back to back unchecked.
* Closes https://github.com/kubernetes/kubernetes/issues/26679
* Modified volume `Attacher.WaitForAttach()` methods to use the device path reported by the Attach/Detach controller in `Node.Status.AttachedVolumes` instead of calling out to cloud providers (see the sketch below).
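For illustration, a minimal sketch of what the controller-reported status might look like on a Node object after a successful attach (the volume and device names are made up; in the serialized v1 API the field appears as `volumesAttached`):
```yaml
# Hypothetical fragment of `kubectl get node <node-name> -o yaml`
status:
  volumesAttached:
  - name: kubernetes.io/gce-pd/my-data-disk            # unique volume name
    devicePath: /dev/disk/by-id/google-my-data-disk    # device path consumed by the kubelet's WaitForAttach()
```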
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if attach/detach controller
is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path so volume cleanup never blocks the
`syncPod` loop.
This PR contains Kubelet changes to enable attach/detach controller control.
* It introduces a new "enable-controller-attach-detach" kubelet flag to
enable control by the controller. Enabled by default.
* It removes all references to the "SafeToDetach" annotation from the controller.
* It adds the new VolumesInUse field to the Node Status API object.
* It modifies the controller to use VolumesInUse instead of the SafeToDetach
annotation to gate detachment (see the sketch after this list).
* There is a bug in node-problem-detector that causes VolumesInUse to
get reset every 30 seconds. Issue https://github.com/kubernetes/node-problem-detector/issues/9
has been opened to fix that.
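For illustration, a minimal sketch of how the new field might appear on a Node object once a volume is in use (the volume name is made up; the serialized field is `volumesInUse`):
```yaml
# Hypothetical fragment of `kubectl get node <node-name> -o yaml`.
# The Attach/Detach controller will not detach a volume that is still listed here.
status:
  volumesInUse:
  - kubernetes.io/gce-pd/my-data-disk
```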
Automatic merge from submit-queue
Automatically add node labels beta.kubernetes.io/{os,arch}
Proposal: #17981
As discussed in #22623:
> @davidopp: #9044 says cloud provider but can also cover platform stuff.
Adds a label `beta.kubernetes.io/platform` to `kubelet` that reports the OS/arch it is running on.
Makes it easy to specify `nodeSelectors` for different arches in multi-arch clusters.
```console
$ kubectl get no --show-labels
NAME        STATUS    AGE       LABELS
127.0.0.1   Ready     1m        beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
$ kubectl describe no
Name: 127.0.0.1
Labels: beta.kubernetes.io/platform=linux-amd64,kubernetes.io/hostname=127.0.0.1
CreationTimestamp: Thu, 31 Mar 2016 20:39:15 +0300
```
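As a sketch, a pod could pin itself to amd64 Linux nodes with a `nodeSelector` on the label shown above (the pod name and image are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-pod                            # placeholder name
spec:
  nodeSelector:
    beta.kubernetes.io/platform: linux-amd64      # label added automatically by the kubelet
  containers:
  - name: app
    image: nginx                                  # placeholder image
```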
@davidopp @vishh @fgrzadkowski @thockin @wojtek-t @ixdy @bgrant0607 @dchen1107 @preillyme
Automatic merge from submit-queue
Add subPath to mount a child dir or file of a volumeMount
Allow users to specify a subPath in Container.volumeMounts so they can use a single volume for many mounts instead of creating many volumes. For instance, a user can now use a single PersistentVolume to store both the MySQL database and the document root of an Apache server in a LAMP stack pod by mapping them to different subPaths in this single volume.
Also solves https://github.com/kubernetes/kubernetes/issues/20466.
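A minimal sketch of the LAMP-style layout described above, with one PersistentVolumeClaim shared by two containers via different subPaths (the claim name, images, and paths are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lamp-example                   # placeholder name
spec:
  containers:
  - name: mysql
    image: mysql                       # placeholder image
    volumeMounts:
    - name: site-data
      mountPath: /var/lib/mysql
      subPath: mysql                   # child directory of the volume holding the database
  - name: apache
    image: httpd                       # placeholder image
    volumeMounts:
    - name: site-data
      mountPath: /var/www/html
      subPath: html                    # a different child directory of the same volume
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: site-data-claim       # placeholder claim name
```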
Automatic merge from submit-queue
Make ThirdPartyResource a root scoped object
ThirdPartyResource (the registration of a third party type) belongs at the cluster scope. It results in resource handlers being installed in every namespace, and the same name in two namespaces collides (namespace is ignored when determining group/kind).
ThirdPartyResourceData (an actual instance of that type) is still namespace-scoped.
This PR moves ThirdPartyResource to be a root scope object. Someone previously using ThirdPartyResource definitions in alpha should be able to move them from namespace to root scope like this:
setup (run on 1.2):
```
kubectl create ns ns1
echo '{"kind":"ThirdPartyResource","apiVersion":"extensions/v1beta1","metadata":{"name":"foo.example.com"},"versions":[{"name":"v8"}]}' | kubectl create -f - --namespace=ns1
echo '{"kind":"Foo","apiVersion":"example.com/v8","metadata":{"name":"MyFoo"},"testkey":"testvalue"}' | kubectl create -f - --namespace=ns1
```
export:
```
kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
```
remove namespaced kind registrations (this shouldn't remove the data of that type, which is another possible issue):
```
kubectl delete -f tprs.yaml
```
... upgrade ...
re-register the custom types at the root scope:
```
kubectl create -f tprs.yaml
```
Additionally, pre-1.3 clients that expect to read/write ThirdPartyResource at namespace scope will not be compatible with 1.3+ servers, and 1.3+ clients that expect to read/write ThirdPartyResource at root scope will not be compatible with pre-1.3 servers.
Automatic merge from submit-queue
Remove requirement that Endpoints IPs be IPv4
Signed-off-by: André Martins <aanm90@gmail.com>
Release Note: The `Endpoints` API object now allows IPv6 addresses to be stored. Other components of the system are not ready for IPv6 yet, and many cloud providers are not IPv6 compatible, but installations that use their own controller logic can now store v6 endpoints.
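For instance, a hand-managed `Endpoints` object (the name and address are placeholders) can now carry an IPv6 address:
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service            # must match the Service it backs; placeholder name
subsets:
- addresses:
  - ip: "2001:db8::1"                  # IPv6 address, previously rejected by validation
  ports:
  - port: 80
```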