Implements part of #24071
I am not familiar enough with the scheduler to know what to do with the scores. Punting for now.
Missing items from the implementation plan: limitranger, rkt support, kubectl support, and user docs
Automatic merge from submit-queue
Hack update all remove dollar symbol
When not running ./hack/update in silent mode, the script fails due to an undefined ``$Updating`` variable.
Automatic merge from submit-queue
Move godeps to vendor/
This is a first-step towards glide support, maybe we don't want or need to take this, but it was easy to try.
This fails to compile; I'm not sure why:
```
# k8s.io/kubernetes/pkg/apis/extensions/v1beta1
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2703: undefined: extensions.ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2703: undefined: ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2719: undefined: extensions.ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2719: undefined: ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: undefined: extensions.ClusterAutoscalerList
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: undefined: ClusterAutoscalerList
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:3468: Convert_extensions_JobSpec_To_v1beta1_JobSpec redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:328
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:3845: Convert_extensions_ScaleStatus_To_v1beta1_ScaleStatus redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:98
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:4737: Convert_v1beta1_JobSpec_To_extensions_JobSpec redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:380
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:5186: Convert_v1beta1_ScaleStatus_To_extensions_ScaleStatus redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:120
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: too many errors
!!! Error in /home/thockin/tmp/godep-vendor/src/k8s.io/kubernetes/hack/lib/golang.sh:417
```
Automatic merge from submit-queue
cluster/images/hyperkube: create symlink for each server
Add a kubelet symlink so that the hyperkube image can appear as a kubelet image. https://github.com/kubernetes/kubernetes/issues/24510
a) it doesn't need it
b) changing CWD to a path with symlinks breaks deep within ginkgo, where it
crafts a relative path to ../../../../../../platforms/amd64/whatever which then
traverses the physical path not the symlinked one, and breaks.
Our `realpath` and `readlink -f` functions (required only because of MacOS,
thanks Steve) were poor substitutes at best. Mostly they were downright
broken. This thoroughly overhauls them and adds a test (in comments, since we
don't seem to have shell tests). For all the interesting cases I could think
of, the fakes act just like the real thing.
Then use those and canonicalize KUBE_ROOT. In order to make recursive calls of
our shell tool not additively grow `pwd` we have to essentially make the
sourcing of init.sh idempotent.
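For illustration, here is a minimal pure-bash sketch in the spirit of those helpers (not the actual hack/lib implementation; the function name is made up and symlink cycles are not handled):
```
kube::example::realpath() {
  # Walk symlinks by hand, then canonicalize the parent directory with `pwd -P`,
  # since macOS ships neither GNU realpath nor readlink -f.
  local path="$1"
  while [[ -L "${path}" ]]; do
    local link
    link="$(readlink "${path}")"
    if [[ "${link}" == /* ]]; then
      path="${link}"
    else
      path="$(dirname "${path}")/${link}"
    fi
  done
  echo "$(cd "$(dirname "${path}")" && pwd -P)/$(basename "${path}")"
}
```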
Automatic merge from submit-queue
Introduce events flag for describers
Printing events for a given object is not always needed, so this introduces ``--show-events=false`` for ``kubectl describe`` to skip printing events.
Fixes: #24239
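Example usage (the pod name here is just a placeholder):
```
# Events are still shown by default; pass --show-events=false to omit them.
kubectl describe pod my-pod --show-events=false
```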
Automatic merge from submit-queue
Reimplement 'pause' in C - smaller footprint all around
Statically links against musl. Size of amd64 binary is 3560 bytes.
I couldn't test the arm binary since I have no hardware to test it on, though I assume we want it to work on a Raspberry Pi.
This PR also adds the gcc5/musl cross compiling image used to build the binaries.
@thockin
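As a rough sketch of what such a build looks like (illustrative only, not the repo's actual build recipe; the musl-gcc wrapper and flags are assumptions):
```
# Compile a fully static pause binary against musl and strip symbols to minimize size.
musl-gcc -Os -static -o pause pause.c
strip pause
```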
Automatic merge from submit-queue
Add subPath to mount a child dir or file of a volumeMount
Allow users to specify a subPath in Container.volumeMounts so they can use a single volume for many mounts instead of creating many volumes. For instance, a user can now use a single PersistentVolume to store both the MySQL database and the document root of an Apache server in a LAMP-stack pod by mapping them to different subPaths of that single volume.
Also solves https://github.com/kubernetes/kubernetes/issues/20466.
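A sketch of the feature in use (pod, image, claim, and directory names are invented for illustration): one PVC-backed volume mounted at two paths via subPath.
```
kubectl create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "lamp"},
  "spec": {
    "volumes": [
      {"name": "site-data", "persistentVolumeClaim": {"claimName": "site-data"}}
    ],
    "containers": [
      {
        "name": "mysql",
        "image": "mysql",
        "volumeMounts": [
          {"name": "site-data", "mountPath": "/var/lib/mysql", "subPath": "mysql"}
        ]
      },
      {
        "name": "apache",
        "image": "httpd",
        "volumeMounts": [
          {"name": "site-data", "mountPath": "/usr/local/apache2/htdocs", "subPath": "html"}
        ]
      }
    ]
  }
}
EOF
```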
Automatic merge from submit-queue
Update e2e-runner.sh so it can fetch multiple types of Trusty images
Trusty beta images now work with k8s 1.2.
@spxtr, @andyzheng0831 Can you review this?
Automatic merge from submit-queue
Kubelet: Cleanup with new engine api
Finish step 2 of #23563
This PR:
1) Cleans up go-dockerclient references in the code.
2) Bumps the engine-api version.
3) Cleans up the code with the new engine-api.
Fixes #24076.
Fixes #23809.
/cc @yujuhong
Introduce DescriberSettings for Describer display options
Introduce --show-events flag and DescriberSettings in Describer methods
Introduce unit-tests
Regenerated kubectl describe docs
Add events flag tests to test-cmd.sh
Signed-off-by: dhodovsk@redhat.com
Signed-off-by: jchaloup@redhat.com
Automatic merge from submit-queue
Jenkins: Clean up even if we can't bring cluster up
We're gathering all the cluster logs, go ahead and clean up the
resources.
Automatic merge from submit-queue
Allow etcd to store protobuf objects
Split storage serialization from client negotiation, and allow API server to take flag controlling serialization.
TODO:
* [x] API server still doesn't start - range allocation object doesn't seem to round trip correctly to etcd
* [ ] Verify that third party resources are ignoring protobuf (add a test)
* [ ] Add integration tests that verify storage is correctly protobuf
* [ ] Add a global default for which storage format to prefer?
Automatic merge from submit-queue
Map secret files into dockerized e2e
Rather than copying the GCE and AWS private keys/credentials to each Jenkins VM, we can put them in credentials and map them through.
This is one half of the change; if the relevant environment variables are set, we'll mount the files in.
cc @fejta @rmmh @apelisse
The codec factory should support two distinct interfaces - negotiating
for a serializer with a client, vs reading or writing data to a storage
form (etcd, disk, etc). Make the EncodeForVersion and DecodeToVersion
methods only take Encoder and Decoder, and slight refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
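For example (controller and image names are placeholders):
```
# Roll the same tag again, forcing a re-pull so nodes pick up whatever :latest now points at.
kubectl rolling-update my-rc --image=myrepo/myapp:latest --image-pull-policy=Always
```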
Automatic merge from submit-queue
Add several arguments to boilerplate.py
This commit makes the root directory and boilerplate content directory configurable.
The defaults have remained the same, so no behavior changes should be expected.
cc @eparis
ref https://github.com/kubernetes/minikube/pull/37
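A hypothetical invocation showing the idea (the flag names here are assumptions, not necessarily the ones the PR settled on):
```
# Run the boilerplate check against another repo while reusing these templates.
./hack/boilerplate/boilerplate.py --rootdir="${PWD}" --boilerplate-dir="${PWD}/hack/boilerplate"
```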
Automatic merge from submit-queue
Use HOSTNAME in Docker build image tag hash
Fixes#24661 by including `$HOSTNAME` when generating the build image tag hash.
When running the verification checks under Docker, the `$KUBE_ROOT` will be identical across builds, so tags will collide unless we add additional uniqueness. By default, the hostname inside a Docker container is its ID, which should be unique enough for us.
I also deleted a misleading error message from the same check.
@kubernetes/sig-testing
This commit makes the root directory, boilerplate content directory and
the directories to skip configurable.
The defaults have remained the same, so no behavior changes should be expected.
Automatic merge from submit-queue
Run Kubemark builds inside Docker
Since Docker-in-Docker is tricky to get right (esp. wrt volume mounts), I'm only enabling it when necessary for a build (e.g. for kubemark).
cc @spxtr @fejta @wojtek-t
Automatic merge from submit-queue
allow kubectl subcmds to process multiple resources
~~autoscale, expose & patch~~ Many kubectl subcommands were limited to processing one resource at a time.
This PR allows those subcommands to process multiple resources.
This PR is in reference to https://github.com/kubernetes/kubernetes/pull/23116#issuecomment-202360784 by @deads2k
Automatic merge from submit-queue
Up to go 1.6.2 for build and test.
~~1.6.1 contains some security fixes. 1.6.2 should be out soon.~~ 1.6.2 is out :D
Images aren't pushed yet.
Automatic merge from submit-queue
Framework support for node e2e.
This should let us port existing e2e tests to the node e2e suite, if the tests are node specific.
Automatic merge from submit-queue
Provide flags to use etcd3 backed storage
ref: #24405
What's in this PR?
- Add a new flag "storage-backend" to choose "etcd2" or "etcd3". By default (i.e. empty), it's "etcd2".
- Move the etcd config code into a standalone package and let it create an etcd2 or etcd3 storage backend based on user input.
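For example (the etcd endpoint is illustrative, and other required apiserver flags are omitted):
```
# Opt in to etcd3-backed storage; leaving --storage-backend unset keeps the etcd2 default.
kube-apiserver --storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379
```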
Automatic merge from submit-queue
Add kubelet flags for eviction threshold configuration
This PR just adds the flags for kubelet eviction and the associated generated code.
I am happy to tweak the text, but at this point in the release we can also do that later.
Since this causes codegen, I wanted to stage this first.
/cc @vishh @kubernetes/sig-node
Automatic merge from submit-queue
Initial kube-up support for VMware's Photon Controller
This is for: https://github.com/kubernetes/kubernetes/issues/24121
Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/
This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons
It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.
Internally we have implemented continuous integration testing and will
run it multiple times per day against the Kubernetes master branch
once this is integrated so we can quickly react to problems.
A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown
Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.
We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some
users get started more quickly. It will be documented.
We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
Automatic merge from submit-queue
Federation apiobject cluster
add federation api group
add cluster api object and registry
~~generate cluster client~~ moved to #24117
update scripts to generate files for /federation
#19313 #23653 #23554
@nikhiljindal @quinton-hoole, @deepak-vij, @XiaoningDing, @alfred-huangjian @mfanjie @huangyuqi @colhom
Automatic merge from submit-queue
update codegen before update codecgen
Currently, if I remove an API field, update-codecgen complains about generated deepcopy functions referring to invalid fields. Running update-codegen before update-codecgen solves the problem.
Automatic merge from submit-queue
kubectl: Allow []byte config fields to be set by the cli
Allows []byte config fields such as 'certificate-authority-data' to be set using `kubectl config set` commands.
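A sketch of what this enables (the cluster name is a placeholder, and the assumption that the value is supplied base64-encoded, matching kubeconfig's on-disk form, is mine):
```
# Set a []byte kubeconfig field directly from the command line.
kubectl config set clusters.my-cluster.certificate-authority-data "$(base64 < ca.crt | tr -d '\n')"
```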
Automatic merge from submit-queue
Script to cache metadata requests on the jenkins master
Fixes https://github.com/kubernetes/kubernetes/issues/23545
Create an HTTP server which caches most requests to the metadata server. Use special logic to cache access tokens such that the expires_on JSON field is correct. Add a script to simplify enabling/disabling the cache by editing `/etc/hosts`.
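Conceptually, enabling the cache boils down to something like this (a sketch; the real script manages the hosts entry and the cache's listen address for you):
```
# Point metadata.google.internal at the local caching proxy instead of the real metadata server.
echo "127.0.0.1 metadata.google.internal metadata" | sudo tee -a /etc/hosts
```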
Automatic merge from submit-queue
Removing KUBE_API_VERSIONS from our test scripts
We don't need to specify them.
It's an unnecessary extra change that people have to make while adding a group.
We also need this change for Ubernetes.
cc @caesarxuchao @jianhuiz
Automatic merge from submit-queue
Fix relative working directory of KUBE_ROOT
Fix the relative working directory handling of `KUBE_ROOT`, so we do not need to change to `KUBE_ROOT` in the first place.
Signed-off-by: Crazykev <zcq8989@gmail.com>
Automatic merge from submit-queue
jenkins: Allow configuration of release bucket
This allows others to leverage the existing E2E code to test some
patched kube binary by simply overriding the bucket and reusing many of
the existing scripts
Automatic merge from submit-queue
disable linkcheck jenkins job
I don't have time to fix linkcheck anytime soon, so temporarily disable the job.
ref #23162
Automatic merge from submit-queue
First pass at a GKE large cluster Jenkins job
Runs a 1000 node GKE parallel e2e test. On demand only. We'll add more
tests as I see what actually works - this is going to have some
flakiness on its own.
Automatic merge from submit-queue
update hack/build-go to build federation/cmd/federated-apiserver as well
federation/cmd/federated-apiserver was added in https://github.com/kubernetes/kubernetes/pull/23509
cc @jianhuiz
Automatic merge from submit-queue
Disable heapster job which has been broken for a month
https://github.com/kubernetes/kubernetes/issues/23538
This job is no longer producing a useful signal. http://kubekins.dls.corp.google.com/ shows that the last pass was nearly two months ago. I would like to disable the job until someone has the chance to fix it, so we are not wasting Jenkins resources and contributing to system instability.
Automatic merge from submit-queue
Disable flannel job until it works
https://github.com/kubernetes/kubernetes/issues/24520
See the bug: this job fails every time and has done so for two months. Until someone has time to investigate and fix it, disable the job on Jenkins so we're not wasting resources and reducing system stability.
Automatic merge from submit-queue
Enable protobuf compilation by default
Enables protobuf compilation, build verification checks, and generates all initial code.
kubectl is now 47M on OSX; build time from clean on a 2014 MBP (4 core) on Go 1.6 is ~150s.
@wojtek-t
Automatic merge from submit-queue
hack: change update-swagger-spec.sh apiserver defaults
Removing the explicit list of KUBE_API_VERSIONS will cause the apiserver
to enable all APIs by default. This change reduces the amount of script
hacking needed to add new API groups in the future.
Automatic merge from submit-queue
Incremental improvements to kubelet e2e tests
- Add keep-alive to ssh connection
- Don't try to stop services on image-based runs
- Increase Jenkins CI timeout to 90 minutes to accommodate unpredictable go build times
- Remove spammy log statement
Automatic merge from submit-queue
Add some more info to the Jenkins README.
This is a bit of a work-in-progress, and I'd appreciate feedback on what to add or remove. I'm not sure that I need to say so much about the GCS format, and I should probably say some more about JJB.
@kubernetes/sig-testing
Automatic merge from submit-queue
Removing call to update-swagger-spec.sh from update-generated-swagger-docs.sh
Fixes https://github.com/kubernetes/kubernetes/issues/24233
Right now `update-generated-swagger-docs.sh` calls `update-swagger-spec.sh`, but `verify-generated-swagger-docs.sh` does not verify swagger spec (that is done by `verify-swagger-spec.sh`).
Hence, `verify-swagger-spec` breaks if it is called after `verify-generated-swagger-docs`.
Fixing it by removing the call to `update-swagger-spec.sh` from `update-generated-swagger-docs.sh`.
This will require users to run both `update-swagger-spec` and `update-generated-swagger-docs` when they update api types, but they already need to run many more scripts (`update-api-reference-docs`, `update-codegen`).
People should mostly be running hack/update-all.sh directly :)
Automatic merge from submit-queue
Shorten cluster names in GKE Jenkins on Trusty
We identified an issue that the PD tests in GKE Jenkins on Trusty fail because the PD name is longer than the limit of 63 characters. The PD name embeds the "E2E_NAME" env variable exported in the Jenkins job configuration. This PR shortens that string for all GKE Jenkins on Trusty. As a result, the PD name will meet the limit requirement.
Automatic merge from submit-queue
Bump kubernetes-test-go timeout.
It looks like the run times got more inconsistent because of load on the VM. Adding another Jenkins slave improved things so we're not constantly timing out, but it still gets a little close to timing out at times.
Average runtime is ~45 mins so I went with a 100 min timeout.
Fixes #24285
Automatic merge from submit-queue
Remove soak and disruptive 1.1 Jenkins jobs.
They're both in the kubernetes-jenkins project, not their own. The disruptive one isn't a critical build, and I don't think the soak should be critical at all, since it's never green for a week anyway and I don't think we ever plan for it to be.
Automatic merge from submit-queue
Bump upgrade test timeout to 10 hours
@spxtr is it reasonable to expect that running the v1.2 tests in serial would take longer than ~5 hours (assuming the upgrade beforehand takes < 1 hour)?
Automatic merge from submit-queue
Run test-go less often on release branches.
I made 1.2 run every 3 hours and 1.1 run every 6 hours. They'll still run right away once a build completes.
I'm going to have to lower the number of executors on the Jenkins slaves that run test-go jobs, since running 3 at a time makes them use up all the CPU and flake.
Automatic merge from submit-queue
Replace tab with eight spaces
This file only uses spaces for indentation, and my text editor highlighted the one tab.
Automatic merge from submit-queue
Make etcd cache size configurable
Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.
I'm not sure what a sensible default is here. I'm still experimenting on my own clusters. 50 gives me a 270MB max footprint. 50K caused my apiserver to run out of memory as it exceeded >2GB. I believe that number is far too large for most people's use cases.
There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- 1.2 etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists on the node status.
This is provided as a fix for #23323
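If I read the change correctly, the knob surfaces as an apiserver flag along these lines (the flag name is my assumption):
```
# Shrink the deserialization cache from the old hard-coded 50K entries.
kube-apiserver --deserialization-cache-size=50
```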
Automatic merge from submit-queue
hack: specify --advertise-address in hack/local-up-cluster.sh
This fixes the bug where the script fails to launch an apiserver on a
machine without active networking (issue #24272).
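For reference, the flag being passed through (the address is illustrative):
```
# Advertise on loopback so the apiserver can start without an externally routable interface.
kube-apiserver --advertise-address=127.0.0.1
```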
Automatic merge from submit-queue
Fix spacing in usage_from_stdin and info_from_stdin (issue #24186).
If "a" is a bash array, then the syntax to append the contents of $line as a
new element to the array is a+=("$line"), not messages+=$line
Using the former syntax just seems to append to the first element, creating a
long string and thus losing newline information.
Fixing this allows us to drop some empty lines from invocations of
usage_from_stdin.
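A minimal illustration of the difference (variable names mirror the description):
```
messages=()
while IFS= read -r line; do
  messages+=("$line")      # appends $line as a new array element
  # messages+=$line        # wrong: concatenates onto element 0, losing newlines
done < <(printf 'first\nsecond\n')
printf '%s\n' "${messages[@]}"
```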
Automatic merge from submit-queue
Rename "gcloud-update" jobs to "daily-maintenace" and add Docker cleanup
I'm guessing Jenkins Job Builder won't delete the old job, and we'll need to do that manually?
@spxtr @fejta
Automatic merge from submit-queue
phase 2 of cassandra example overhaul
Here's the next iteration in overhauling this example, towards https://github.com/kubernetes/kubernetes/issues/20961. This removes the pod adoption part, but doesn't (yet) otherwise change any of the resources used.
It also includes some README cleanup, and removes some explicit specification of labels in the rc yaml.
This PR doesn't yet add any commentary on how we're using the seed provider (re: https://github.com/kubernetes/kubernetes/issues/20961#issuecomment-190405959 etc.). Maybe we should add that.
Also: LMK if this PR should include any changes to the links out to the docs.
cc @bgrant0607 @johndmulhausen
Automatic merge from submit-queue
Set metadata.google.internal IP in dockerized e2e based on /etc/hosts
Support the metadata cacher from #24131 inside dockerized e2e runs.
cc @fejta
Automatic merge from submit-queue
Restart job 5m after the previous failure.
If a job flakes at the beginning of its scripts, it will likely sit around doing nothing for 30m, blocking the merge queue. Decreasing this to 5m.
This makes it easier to determine which tests cause particular suites to
fail.
All static HTML pages are now generated by one invocation of gen_html.py.
- make index include good/flake/fail numbers for each link
- consistently use % for string interpolation
Automatic merge from submit-queue
Update hack/test-cmd.sh to use tagged, gcr.io images
Migrate hack/test-cmd.sh and related test data to use tagged, gcr.io versions of the images for #13288 and #20836
Automatic merge from submit-queue
add jenkins project for kubenet
Added a Jenkins project for GCE using kubenet as the network provider.
The project `k8s-jkns-e2e-gce-kubenet` has been created and configured.
Automatic merge from submit-queue
Migrate gke-trusty test jobs to 1.2
Following up #23100 and #23139, #23319, migrate all gke-trusty jobs to the
`release-1.2` branch, add parallel and subnet test jobs, and bump timeouts
accordingly.
Tested with `jenkins-jobs test`. Manually diff'ed gke-trusty jobs against their equivalent gke jobs. For example,
```
# diff /tmp/jobs0324/kubernetes-e2e-gke-test /tmp/jobs0324/kubernetes-e2e-gke-trusty-test
4c4
< <description>Run E2E tests on GKE test endpoint. Test owner: GKE on-call.<!-- Managed by Jenkins Job Builder --></description>
---
> <description>Run E2E tests on GKE test endpoint. Test owner: wonderfly@google.com.<!-- Managed by Jenkins Job Builder --></description>
49c49
< export PROJECT="k8s-jkns-e2e-gke-test"
---
> export PROJECT="kubekins-e2e-gke-trusty-test"
51a52
> export E2E_NAME="jkns-gke-e2e-test-trusty"
228c229
< <recipientList>$DEFAULT_RECIPIENTS</recipientList>
---
> <recipientList>wonderfly@google.com,qzheng@google.com</recipientList>
```
@spxtr @roberthbailey @ihmccreery Can you review this?
cc/ @andyzheng0831
Automatic merge from submit-queue
Add support for 3rd party objects to kubectl
@deads2k @jlowdermilk
Instructions for playing around with this:
Run an apiserver with third party resources turned on (`--runtime-config=extensions/v1beta1=true,extensions/v1beta1/thirdpartyresources=true`)
Then you should be able to:
```
kubectl create -f rsrc.json
```
```json
{
  "metadata": {
    "name": "foo.company.com"
  },
  "apiVersion": "extensions/v1beta1",
  "kind": "ThirdPartyResource",
  "versions": [
    {
      "apiGroup": "group",
      "name": "v1"
    },
    {
      "apiGroup": "group",
      "name": "v2"
    }
  ]
}
```
Once that is done, you should be able to:
```
curl http://<server>/apis/company.com/v1/foos
```
```
curl -X POST -d @${HOME}/foo.json http://localhost:8080/apis/company.com/v1/namespaces/default/foos
```
```json
{
  "kind": "Foo",
  "apiVersion": "company.com/v1",
  "metadata": {
    "name": "baz"
  },
  "someField": "hello world",
  "otherField": 1
}
```
After this PR, you can do:
```
kubectl create -f foo.json
```
```
kubectl get foos
```
etc.
Automatic merge from submit-queue
Migrate to the new conversion generator - part1
This PR contains two commits:
- few more fixes to the generator
- migration of the pkg/api/v1 to use the new generator
The second commit is big, but I reviewed the changes and they contain:
- conversions between types that we weren't even generating conversions for before
- changes in how we handle maps/pointers/slices - previously we were explicitly referencing fields, now we are using "shadowing in, out" to make the code more generic
- no auto-generated method for ReplicationControllerSpec (because the types differ (*int vs int for Replicas) and a preexisting conversion already exists)
Most of the issues in the first commit (e.g. adding references to "in" and "out" for slices/maps/pointers) were discovered by our tests, so I'm pretty confident that this change is correct now.
Automatic merge from submit-queue
Retry github and godep operations in test-dockerized.sh
Closes #21887.
Attempt to mitigate go get and godep flakes by retrying a few times inside of Jenkins.
Automatic merge from submit-queue
When checking for leaks, look only at additional resources
This should help with "fake" leaks, where a run deletes stuff that was leaked in a previous one.
cc @zmerlynn @ixdy @wojtek-t
Automatic merge from submit-queue
rkt: bump rkt version to 1.2.1
With the rkt version bump, `--hostname` is supported. Also, we now get the configs from the rkt API service, so `stage1-image` is deprecated.
cc @yujuhong @Random-Liu
godeps doesn't get everything we want, so fix the problem but write it
to a parallel tree since _workspace is reserved only for godeps auto-generated
files.
After this change, jobs that use Trusty dev images will test against the
`release-1.2` branch, and use Trusty images for both the master and the nodes.
Trusty beta and stable jobs are kept in the `release-1.1` branch, and only use
Trusty images on nodes.