Automatic merge from submit-queue
Sync up all release note related docs with the latest process/procedures
@eparis we also need to get the munger in line with the latest process. I think we've stopped making changes at this point; #23743 is coming up, but it's an enhancement to the base procedures here.
cc @bgrant0607
Automatic merge from submit-queue
Client-gen: show the command used to generate the package in doc.go
#22928 adds a comment to every generated file to show the arguments supplied to client-gen. I received a comment (https://github.com/kubernetes/kubernetes/pull/22928#issuecomment-201078363) that it generates too many one-line changes every time the command-line arguments change. To address this problem, this PR only generates that line in doc.go.
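For illustration only, a generated doc.go would then carry that single line (the comment wording, package name, and client-gen arguments below are assumptions, not taken from this PR):
```
// Package release_1_2 is a generated clientset.
//
// This package is generated by client-gen with arguments: --clientset-name=release_1_2 --input=api/v1
package release_1_2
```
With this approach, changing the client-gen invocation touches only doc.go instead of every generated file.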
@jianhuiz @krousey
Automatic merge from submit-queue
Add some info about binary downloads
This should be merged before `v1.2`. Useful information.
WDYT?
@wojtek-t @fgrzadkowski @zmerlynn @mikedanese @brendandburns @thockin
Automatic merge from submit-queue
add jenkins project for kubenet
Added a Jenkins project for GCE using kubenet as the network provider.
`k8s-jkns-e2e-gce-kubenet` has been created and configured
Automatic merge from submit-queue
Up to golang 1.6
A second attempt to upgrade the Go version past `go1.4`.
Merge ASAP after you've cut the `release-1.2` branch and feel ready.
`go1.6` should perform slightly better than `go1.5`, so this time it might work.
@gmarek @wojtek-t @zmerlynn @mikedanese @brendandburns @ixdy @thockin
Automatic merge from submit-queue
Cross-build hyperkube and debian-iptables for ARM. Also add a flannel image
We need to be able to build complex Docker images on `amd64` hosts too.
Right now we can't build Dockerfiles that contain `RUN` commands when building for other architectures, e.g. ARM.
Resin has a tutorial about this here: https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/
But the syntax is a bit clumsy.
The other alternative would be running this command in a Makefile:
```
# This registers in the kernel that ARM binaries should be run by /usr/bin/qemu-{ARCH}-static
docker run --rm --privileged multiarch/qemu-user-static:register --reset
```
and adding this line to the Dockerfile:
```
ADD https://github.com/multiarch/qemu-user-static/releases/download/v2.5.0/x86_64_qemu-arm-static.tar.xz /usr/bin
```
Then the kernel will be able to distinguish ARM binaries from amd64 ones. When it finds an ARM binary, it will invoke `/usr/bin/qemu-arm-static` first and let `qemu` translate the ARM syscalls to amd64 ones.
Some code here: https://github.com/multiarch
WDYT is the best approach? If registering `binfmt_misc` in the kernels of the machines is OK, then I think we should go with that.
Otherwise, we'll have to wait for Resin's patch to be merged into mainline qemu before we can use the code I have here now.
@fgrzadkowski @david-mcmahon @brendandburns @zmerlynn @ixdy @ihmccreery @thockin
Automatic merge from submit-queue
Add a timeout to the sshDialer to prevent indefinite hangs.
Prevents the SSH dialer from hanging forever, fixing a problem where SSH tunnels get stuck trying to open.
Addresses #23835.
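The general idea, as a minimal sketch (assuming golang.org/x/crypto/ssh; the `dialWithTimeout` helper and its exact shape are illustrative, not the code in this PR): bound the underlying TCP dial with a deadline instead of letting the dialer block indefinitely.
```
package sshutil

import (
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithTimeout bounds the TCP connection setup so an unreachable host
// can't make the dial hang forever. (The SSH handshake itself could also be
// bounded, e.g. with a connection deadline, but that is omitted here.)
func dialWithTimeout(network, addr string, config *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	conn, err := net.DialTimeout(network, addr, timeout)
	if err != nil {
		return nil, err
	}
	sshConn, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
	if err != nil {
		conn.Close()
		return nil, err
	}
	return ssh.NewClient(sshConn, chans, reqs), nil
}
```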
Automatic merge from submit-queue
Ensure object returned by volume getCloudProvider incorporates cloud config
This PR addresses https://github.com/kubernetes/kubernetes/issues/23517.
**Problem**
The existing GCE PD and AWS EBS volume plugin code was fetching the cloud provider without specifying a cloud config: `cloudprovider.GetCloudProvider("gce", nil)`
This caused the cloud provider to use the default auth mechanism, which is not acceptable for the provisioning controller running on the GKE master.
**Fix**
This PR does the following:
* Modifies the GCE PD and AWS EBS volume plugin code to use the cloud provider object pre-constructed by the binary with a cloud config (see the sketch below).
* Enables the provisioning E2E test for GKE (to catch future issues).
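A minimal sketch of the idea (the `getGCECloud` helper, the type assertion, and the import paths are assumptions for illustration, not copied from the diff): the plugin reuses the provider handed to it instead of calling `cloudprovider.GetCloudProvider("gce", nil)` itself.
```
package example

import (
	"fmt"

	"k8s.io/kubernetes/pkg/cloudprovider"
	gce "k8s.io/kubernetes/pkg/cloudprovider/providers/gce"
)

// getGCECloud reuses the cloud provider the binary already constructed from
// its cloud config, rather than building a new one with default auth.
func getGCECloud(cloud cloudprovider.Interface) (*gce.GCECloud, error) {
	gceCloud, ok := cloud.(*gce.GCECloud)
	if !ok || gceCloud == nil {
		return nil, fmt.Errorf("expected a GCE cloud provider, got %T", cloud)
	}
	return gceCloud, nil
}
```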
Thanks to @cjcullen for debugging and finding the root cause! 👍
This should be cherry-picked into the v1.2 branch for the next release.
Automatic merge from submit-queue
support NETWORK_PROVIDER=cni for KUBERNETES_PROVIDER=vagrant
While trying to develop CNI plugins for Kubernetes, I found that the docs reference support for `--network-plugin=cni` in the kubelet, but this wasn't surfaced via Salt to support the env var NETWORK_PROVIDER=cni for a kube-up deployment.
This PR is my attempt at adding CNI support to the kube-up happy path, following a lot of the similar work that already exists for NETWORK_PROVIDER=kubenet.
Also, I've added the ability to consume CNI plugins (binaries) and configuration files from the local cluster/network-plugins directory and place them in the necessary locations, as referenced here for CNI:
http://kubernetes.io/docs/admin/network-plugins
This allows a local developer to easily work on CNI plugin development while following the existing kube-up.sh docs and process.
In general, I've struggled to find any authoritative information or answers to my questions in Slack regarding CNI progress / correct integration, so comments are encouraged here!
Files are taken from cluster/network-plugins/{bin,conf} and consumed within a Vagrant kube-up.sh environment.
The paths used for configuration files and the 'cni' name of the network provider all come from the Kubernetes documentation: NETWORK_PROVIDER=cni is documented as usable (as well as its effects on the runtime args of the kubelet), but the actual implementation in the Salt automation doesn't seem to exist.
This change attempts to fix that for the Vagrant use case.