Automatic merge from submit-queue
Rename "gcloud-update" jobs to "daily-maintenace" and add Docker cleanup
I'm guessing Jenkins Job Builder won't delete the old job, and we'll need to do that manually?
@spxtr @fejta
Automatic merge from submit-queue
phase 2 of cassandra example overhaul
Here's the next iteration in overhauling this example, towards https://github.com/kubernetes/kubernetes/issues/20961. This removes the pod adoption part, but doesn't (yet) otherwise change any of the resources used.
It also includes some README cleanup, and removes some explicit specification of labels in the rc yaml.
This PR doesn't yet add any commentary on how we're using the seed provider (re: https://github.com/kubernetes/kubernetes/issues/20961#issuecomment-190405959 etc.). Maybe we should add that.
Also: LMK if this PR should include any changes to the links out to the docs.
cc @bgrant0607 @johndmulhausen
in e2e/volumes.go: give time to allow pod cleanup and volume unmount to happen before the volume server exits;
skip the Cinder volume test if not running with the OpenStack provider;
comment on why we pause before the containerized server is stopped in the volume e2e tests; fixes #24100
update the NFS server image to 0.6, per #22529
fix the persistent_volume e2e test: test cleanup doesn't expect the client pod; delete the PV after the test
Signed-off-by: Huamin Chen <hchen@redhat.com>
Automatic merge from submit-queue
Set metadata.google.internal IP in dockerized e2e based on /etc/hosts
Support the metadata cacher from #24131 inside dockerized e2e runs.
cc @fejta
Automatic merge from submit-queue
rkt: Fix hostnetwork.
Mount the host's /etc/hosts and /etc/resolv.conf, and set the host's hostname
when running the pod in the host's network.
Fixes #24235
cc @kubernetes/sig-node
Automatic merge from submit-queue
Restart job 5m after the previous failure.
If a job flakes at the beginning of its scripts, it will likely sit around doing nothing for 30m, blocking the merge queue. Decreasing this to 5m.
Automatic merge from submit-queue
rkt: Use rkt pod's uuid as the systemd service file's name.
Previously, the service file's name was 'k8s_${POD_UID}.service',
which means we need to `systemctl daemon-reload` if we replace
the content of the service file (e.g. when the pod is restarted).
However, this disconnects the journal of the previous pod.
This PR solves the issue by using the unique rkt uuid as the service
file's name. After the change, the service file's name will be:
'k8s_${rkt_uuid}.service'.
Fixes #23691
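As a loose illustration of the naming change (the function and package here are hypothetical, not the actual rkt runtime code), the unit name is derived from the rkt pod UUID rather than the Kubernetes pod UID, so a restarted pod gets a fresh unit and journal:

```go
package main

import "fmt"

// serviceFileName builds a systemd unit name from the rkt pod UUID instead
// of the Kubernetes pod UID, so each new rkt pod gets its own unit/journal.
func serviceFileName(rktUUID string) string {
	return fmt.Sprintf("k8s_%s.service", rktUUID)
}

func main() {
	// e.g. "k8s_01234567-89ab-cdef-0123-456789abcdef.service"
	fmt.Println(serviceFileName("01234567-89ab-cdef-0123-456789abcdef"))
}
```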
Also add helpers for collecting the events that happen during a watch
and a helper that makes it easy to start a watch from any object with
ObjectMeta.
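A rough sketch of what an event-collecting helper could look like, assuming the `watch.Interface` type from `k8s.io/apimachinery/pkg/watch` (this is illustrative only, not the helper added by this change):

```go
package watchutil

import "k8s.io/apimachinery/pkg/watch"

// collectEvents gathers every event delivered on a watch until its result
// channel is closed or the stop channel fires.
func collectEvents(w watch.Interface, stop <-chan struct{}) []watch.Event {
	var events []watch.Event
	for {
		select {
		case e, ok := <-w.ResultChan():
			if !ok {
				return events
			}
			events = append(events, e)
		case <-stop:
			w.Stop()
			return events
		}
	}
}
```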
Reduces the surface area of the API server slightly and allows
downstream components to have deletable metrics. After this change,
genericapiserver will *not* have metrics unless the caller defines them
(this allows different apiserver implementations to make that choice on
their own).
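The opt-in pattern is roughly the following (a generic sketch using the Prometheus client, not the genericapiserver API): the embedding binary decides whether to expose metrics, rather than the generic server registering them unconditionally.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	mux := http.NewServeMux()
	// ... install API routes here ...

	// Only a caller that wants metrics wires the handler in.
	mux.Handle("/metrics", promhttp.Handler())

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```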
Automatic merge from submit-queue
rkt: Update the directory path for saving auth config.
Since #23308 is merged, we now have a more stable way to determine where to store the auth configs.
cc @yujuhong @sjpotter
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
pause is built into containervm; if it's not on the machine, we should just pull
it. Nobody that I'm aware of uses kube-registry-proxy, and it makes build/deployment
more complicated and slower.
Automatic merge from submit-queue
Allow lazy binding in credential providers; don't use it in AWS yet
This is step one for cross-region ECR support and has no visible effects yet.
I'm not crazy about the name LazyProvide. Perhaps the interface method could
remain like that and the package method of the same name could become
LateBind(). I still don't understand why the credential provider has a
DockerConfigEntry that has the same fields but is distinct from
docker.AuthConfiguration. I had to write a converter now that we do that in
more than one place.
In step two, I'll add another intermediate, lazy provider for each AWS region,
whose empty LazyAuthConfiguration will have a refresh time of months or years.
Behind the scenes, it'll use an actual ecrProvider with the usual ~12 hour
credentials, which will get created (and later refreshed) only when kubelet is
attempting to pull an image. If we simply turned ecrProvider directly into a
lazy provider, we would bypass all the caching and get new credentials for
each image pulled.
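A loose sketch of the lazy-binding idea (the types and names below are hypothetical stand-ins, not the kubelet's credentialprovider API): wrap the expensive credential source and defer the real fetch until an image pull actually needs credentials, then cache the result until it expires.

```go
package lazycreds

import (
	"sync"
	"time"
)

// Credentials is a stand-in for a docker auth config entry.
type Credentials struct {
	Username string
	Password string
}

// lazyProvider wraps an expensive credential source (e.g. a per-region ECR
// lookup) and only calls it when credentials are actually requested.
type lazyProvider struct {
	mu     sync.Mutex
	fetch  func() Credentials // the real provider
	cached *Credentials
	expiry time.Time
	ttl    time.Duration // e.g. ~12h for ECR tokens
}

func (p *lazyProvider) Provide() Credentials {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.cached == nil || time.Now().After(p.expiry) {
		c := p.fetch()
		p.cached = &c
		p.expiry = time.Now().Add(p.ttl)
	}
	return *p.cached
}
```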
Automatic merge from submit-queue
Kubelet: Better-defined Container Waiting state
For issue #20478 and #21125.
This PR corrects the logic and adds a unit test for `ShouldContainerBeRestarted()`, cleans up `Waiting`-state-related code, and adds a unit test for `generateAPIPodStatus()`.
Fixes #20478. Fixes #17971.
@yujuhong
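For context, the restart decision that `ShouldContainerBeRestarted()` tests exercise boils down to something like the following simplified sketch (not the actual kubelet code; the types are local stand-ins):

```go
package kubelettest

// RestartPolicy mirrors the pod-level restart policy values; this is a
// simplified stand-in for illustration only.
type RestartPolicy string

const (
	RestartPolicyAlways    RestartPolicy = "Always"
	RestartPolicyOnFailure RestartPolicy = "OnFailure"
	RestartPolicyNever     RestartPolicy = "Never"
)

// shouldRestart captures the core decision for a container that has exited:
// Never never restarts, OnFailure restarts only on a non-zero exit code,
// and Always restarts unconditionally.
func shouldRestart(policy RestartPolicy, exitCode int) bool {
	switch policy {
	case RestartPolicyNever:
		return false
	case RestartPolicyOnFailure:
		return exitCode != 0
	default: // RestartPolicyAlways
		return true
	}
}
```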
Automatic merge from submit-queue
Do not throw creation errors for containers that fail immediately after being started
Fixes (hopefully) #23607
cc @dchen1107
Automatic merge from submit-queue
Updating go-restful to generate "type":"object" instead of "type":"any" in swagger-spec (breaks kubectl 1.1)
Updating go-restful to include https://github.com/emicklei/go-restful/pull/270 (replacing type "any" with type "object"; see https://github.com/swagger-api/swagger-codegen/issues/2347 for why we want to do that)
Ref https://github.com/kubernetes/kubernetes/issues/4700#issuecomment-194719759
First commit generated using:
```
godep restore
go get -u github.com/emicklei/go-restful
godep update github.com/emicklei/go-restful
```
Second commit generated by running:
```
./hack/update-swagger-spec.sh
```
Third commit generated by running:
```
./hack/update-api-reference-docs.sh
```
cc @kubernetes/sig-api-machinery @bgrant0607
Mount the host's /etc/hosts and /etc/resolv.conf, and set the host's hostname
when running the pod in the host's network.
Additionally, do not set the DNS flags when running in the host's network.