Automatic merge from submit-queue
Change default clusterCIDRs from /16 to /14 in GCE configs, allowing 1000-node clusters by default.
cc @thockin @roberthbailey @wojtek-t @zmerlynn @davidopp
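For intuition on the new size: the GCE scripts hand each node a /24 pod range, so a cluster CIDR with prefix length p supports 2^(24 − p) nodes. A minimal sketch of that arithmetic (the per-node /24 is an assumption taken from the default GCE config):

```go
package main

import "fmt"

func main() {
	// Each node receives a /24 pod CIDR by default in the GCE configs,
	// so node capacity is 2^(24 - clusterCIDR prefix length).
	const nodeCIDRBits = 24
	for _, prefix := range []int{16, 14} {
		nodes := 1 << uint(nodeCIDRBits-prefix)
		fmt.Printf("/%d cluster CIDR -> %d nodes\n", prefix, nodes)
	}
}
```

A /16 tops out at 256 nodes, while a /14 leaves room for 1024, comfortably above the 1000-node target.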
Automatic merge from submit-queue
vSphere Cloud Provider Implementation
This is the first PR toward implementing vSphere cloud provider support in Kubernetes (ref. issue #23932).
Automatic merge from submit-queue
Added JobTemplate, a preliminary step for ScheduledJob and Workflow
@sdminonne as promised, sorry it took this long 😊
@erictune FYI, though it does not have to be in for 1.2
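For a sense of the shape being added, here is a rough, self-contained sketch of a template type that a higher-level controller could stamp Jobs out of; the field types below are simplified stand-ins, not the committed API definition:

```go
package main

import "fmt"

// Simplified stand-ins for the API machinery types; the real JobTemplate
// embeds TypeMeta/ObjectMeta and the full JobSpec.
type ObjectMeta struct {
	Name   string
	Labels map[string]string
}

type JobSpec struct {
	Parallelism int32
	Completions int32
}

// JobTemplateSpec is the reusable piece: metadata plus a JobSpec that a
// ScheduledJob or Workflow controller can instantiate on demand.
type JobTemplateSpec struct {
	ObjectMeta
	Spec JobSpec
}

type JobTemplate struct {
	ObjectMeta
	Template JobTemplateSpec
}

func main() {
	t := JobTemplate{
		ObjectMeta: ObjectMeta{Name: "nightly-report"},
		Template: JobTemplateSpec{
			Spec: JobSpec{Parallelism: 1, Completions: 1},
		},
	}
	fmt.Printf("%+v\n", t)
}
```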
Automatic merge from submit-queue
OpenStack provider
Our pull request delivers a solution for creating a Kubernetes cluster on top of OpenStack. The Heat OpenStack orchestration engine describes the infrastructure for the Kubernetes cluster, and CentOS images are used for the Kubernetes host machines.
We tested our solution with DevStack and the Citycloud provider.
We believe our solution fills a gap in the market.
Automatic merge from submit-queue
Quota integration test improvements
This PR does the following:
* allow a replication manager to be created that does not record events
* improve the shutdown behavior of the replication manager and resource quota controllers to ensure their doWork funcs exit properly (see the sketch below)
* update the quota integration test to use the non-event-generating replication manager and reduce the number of pods to provision
I am hoping this combination of changes fixes the referenced flake.
Fixes https://github.com/kubernetes/kubernetes/issues/25037
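As referenced in the second bullet, the gist of the shutdown fix is that worker goroutines need an exit path instead of blocking forever on their queue. A minimal, self-contained sketch of that pattern, with hypothetical names standing in for the controllers' actual plumbing:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// doWork drains items until the work channel or stopCh closes. The key
// point is the stopCh case: without it, the goroutine would block on an
// empty channel forever and never shut down cleanly.
func doWork(work <-chan string, stopCh <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case item, ok := <-work:
			if !ok {
				return
			}
			fmt.Println("synced", item)
		case <-stopCh:
			return
		}
	}
}

func main() {
	work := make(chan string)
	stopCh := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(1)
	go doWork(work, stopCh, &wg)
	work <- "pod-a"
	time.Sleep(10 * time.Millisecond)
	close(stopCh) // shutdown: the worker returns instead of leaking
	wg.Wait()
}
```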
Automatic merge from submit-queue
Further minor edits to the devel/ tree
Addresses line-wrap issue #1488. Also cleans up other minor editing issues in the docs/devel/* tree, such as spelling errors, links, and content tables.
Signed-off-by: Mike Brown <brownwm@us.ibm.com>
Automatic merge from submit-queue
Return 'too old' errors from watch cache via watch stream
Fixes #25151
This PR updates the API server to produce the same results when a watch is attempted with a resourceVersion that is too old, regardless of whether the etcd watch cache is enabled. The expected result is a `200` HTTP status with a single watch event of type `ERROR`. Previously, the watch cache would deliver a `410` HTTP response.
This is the uncached watch implementation:
```go
// Implements storage.Interface.
func (h *etcdHelper) WatchList(ctx context.Context, key string, resourceVersion string, filter storage.FilterFunc) (watch.Interface, error) {
	if ctx == nil {
		glog.Errorf("Context is nil")
	}
	watchRV, err := storage.ParseWatchResourceVersion(resourceVersion)
	if err != nil {
		return nil, err
	}
	key = h.prefixEtcdKey(key)
	w := newEtcdWatcher(true, h.quorum, exceptKey(key), filter, h.codec, h.versioner, nil, h)
	go w.etcdWatch(ctx, h.etcdKeysAPI, key, watchRV)
	return w, nil
}
```
Once the resourceVersion parses, there is no path that returns a direct error, so all errors have to be returned as an `ERROR` event via `ResultChan()`.
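From a client's point of view, the new behavior looks like a normal watch stream that immediately delivers one `ERROR` event. The sketch below simulates that with minimal stand-in types (`Event` and `Status` here are illustrative shapes, not the real API structs):

```go
package main

import "fmt"

// Status is a stand-in for the error object delivered on the stream.
type Status struct {
	Code   int
	Reason string
}

// Event is a stand-in for a single watch event.
type Event struct {
	Type   string      // ADDED, MODIFIED, DELETED, or ERROR
	Object interface{} // the object, or a Status for ERROR events
}

func consume(ch <-chan Event) {
	for ev := range ch {
		if ev.Type == "ERROR" {
			if st, ok := ev.Object.(Status); ok && st.Code == 410 {
				fmt.Println("resourceVersion too old; re-list and re-watch")
			}
			return
		}
		fmt.Println("event:", ev.Type)
	}
}

func main() {
	// With this PR, a too-old resourceVersion yields HTTP 200 plus a
	// single ERROR event, rather than an HTTP 410 response.
	ch := make(chan Event, 1)
	ch <- Event{Type: "ERROR", Object: Status{Code: 410, Reason: "Gone"}}
	close(ch)
	consume(ch)
}
```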
Automatic merge from submit-queue
etcd3/watcher: Event.Object should have the same rev as etcd delete
### What's the problem?
When a delete is watched, its revision should be larger than any previous one to guarantee ordering. However, etcd3 currently decodes the previous revision into the returned object:
995f022808/pkg/storage/etcd3/watcher.go (L322)
This breaks, for example, the cacher's assumption here if it re-watches:
995f022808/pkg/storage/cacher.go (L579-L581)
The etcd2 implementation also uses the current ModifiedIndex to ensure the number is larger:
995f022808/pkg/storage/etcd/etcd_watcher.go (L437-L442)
### What's this PR?
It fixes the above problem by using etcd's delete revision.
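To make the distinction concrete, here is a self-contained sketch of the decode step for a DELETED event, with stand-in types for the etcd client's structures (the real change lives in pkg/storage/etcd3/watcher.go; the names below are illustrative):

```go
package main

import "fmt"

// KeyValue and DeleteEvent are stand-ins for the etcd3 client's types.
type KeyValue struct {
	Value       string
	ModRevision int64
}

type DeleteEvent struct {
	PrevKv KeyValue // state of the key before the delete
	Rev    int64    // revision at which the delete itself happened
}

type Object struct {
	Data            string
	ResourceVersion int64
}

// decodeDeleted rebuilds the deleted object from PrevKv but stamps it
// with the delete's own revision, so the emitted event's revision is
// strictly larger than every earlier event's revision.
func decodeDeleted(e DeleteEvent) Object {
	return Object{
		Data:            e.PrevKv.Value,
		ResourceVersion: e.Rev, // the fix: not e.PrevKv.ModRevision
	}
}

func main() {
	e := DeleteEvent{PrevKv: KeyValue{Value: "pod-a", ModRevision: 7}, Rev: 9}
	fmt.Printf("%+v\n", decodeDeleted(e)) // ResourceVersion:9
}
```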
Automatic merge from submit-queue
kubenet: try to retrieve the pod IP inside the pod network namespace
Kubenet currently stores pod IPs in a map, and the kubelet gets a pod's IP from kubenet during syncPod. If the kubelet restarts, all pods on the node lose their IPs in podStatus. This PR adds logic to retrieve a pod's IP from the pod's network namespace.
cc: @yujuhong
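The recovery idea is to read the address off the pod's interface rather than trusting in-memory state. A simplified sketch of that lookup using only the standard library; in the actual change this runs inside the pod's network namespace via the runtime's netns handle, and the interface name "eth0" is an assumption:

```go
package main

import (
	"fmt"
	"net"
)

// ipOfInterface returns the first IPv4 address on the named interface.
// In the real fix this would execute inside the pod's network namespace;
// here it inspects the current namespace as a stand-in.
func ipOfInterface(name string) (string, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return "", err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
			return ipnet.IP.String(), nil
		}
	}
	return "", fmt.Errorf("no IPv4 address on %s", name)
}

func main() {
	ip, err := ipOfInterface("eth0") // "eth0" is the assumed pod interface
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("pod IP:", ip)
}
```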