This was broken when we moved to the build container, but no one
noticed. We also likely have another bug, which is that protobuf should
hard fail when we have fields that aren't assigned a tag.
Automatic merge from submit-queue
Improve the multiarch situation; armel => armhf; re-enable ppc64le; remove the patched golang
**What this PR does / why we need it**:
- Improves the multiarch situation as described in #38067
- Tries to bump to go1.8 for arm (and later enable ppc64le)
- GOARM 6 => GOARM 7
- Remove the golang 1.7 patch
- armel => armhf
- Bump QEMU version to v2.7.0
**Release note**:
```release-note
Improve the ARM builds and make hyperkube on ARM work again by upgrading the Go version for ARM to go1.8beta2
```
@kubernetes/sig-testing-misc @jessfraz @ixdy @jbeda @david-mcmahon @pwittrock
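For reference, the GOARM bump only changes how the arm binaries are cross-compiled; a rough sketch (flags and paths here are illustrative, not the exact wiring in the build scripts):

```sh
# Illustrative only: with the bump, arm binaries target ARMv7/armhf instead of
# ARMv6/armel when cross-compiling.
GOOS=linux GOARCH=arm GOARM=7 go build -o kubectl-arm ./cmd/kubectl
```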
In etcd.sh, split the start process into a validate function and a start function so that the validate piece can be reused elsewhere. The up-cluster script has been changed to drop its duplicate docker logic in favor of the one used in build-tools/common.sh, and the etcd validate function is now used there.
Moved the docker daemon check function to util.sh and made function name and upstream changes.
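A rough sketch of the shape of that split (function names here are illustrative, not necessarily the ones used in etcd.sh):

```sh
# Illustrative sketch: the validate step stands alone so other scripts can
# call it without also starting etcd.
kube::etcd::validate() {
  # Check that the etcd binary exists (the real validate also checks its version).
  command -v etcd >/dev/null 2>&1 || {
    echo "etcd must be in your PATH" >&2
    return 1
  }
}

kube::etcd::start() {
  kube::etcd::validate || return 1
  etcd --data-dir "${ETCD_DIR:-/tmp/etcd}" >/dev/null 2>&1 &
}
```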
If you delete a source file, we want to reflect that in the build container. We
only use --delete in that direction, as we don't want to accidentally delete
files in the user's source tree.
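In other words, deletions only propagate from the source tree into the container, never back out. Roughly (ports, paths, and flags here are illustrative, not the exact invocation in the build scripts):

```sh
# Host -> container: --delete, so removed source files disappear in the container too.
rsync -a --delete ./ rsync://localhost:8730/k8s/
# Container -> host: no --delete, so nothing in the user's source tree is removed.
rsync -a rsync://localhost:8730/k8s/_output/ ./_output/
```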
Automatic merge from submit-queue
Check for rsync and give friendlier message
Fixes #34300.
Not sure if #34309 is the same issue; hopefully it is.
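For context, the friendlier check amounts to failing fast when rsync is missing; a minimal sketch (the message text and placement are illustrative, not the exact change):

```sh
# Illustrative: bail out early with a clear message if rsync is not installed,
# instead of dying later with a cryptic sync failure.
if ! command -v rsync >/dev/null 2>&1; then
  echo "Can't find 'rsync' in PATH, please fix and retry." >&2
  echo "See build/README.md for more details." >&2
  exit 1
fi
```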
Automatic merge from submit-queue
Make sure rsync.sh is executable inside the build image
I kept having the build fail:
```console
$ make quick-release
+++ [1006 18:13:44] Verifying Prerequisites....
+++ [1006 18:13:44] Building Docker image kube-build:build-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:54] Creating data container kube-build-data-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:55] Syncing sources to container
!!! [1006 18:16:01] Could not connect to rsync container. See build/README.md for setting up remote Docker engine.
make: *** [quick-release] Error 1
```
`docker ps` revealed the issue:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
75c2a3c40cb3 kube-build:build-d3c60cf83f-3-v1.6.3-9 "/rsyncd.sh" 6 seconds ago Exited (126) 5 seconds ago kube-rsync-d3c60cf83f-3-v1.6.3-9
3eb215e41f36 kube-build:build-d3c60cf83f-3-v1.6.3-9 "chown -R 85078.5000 " 8 seconds ago Exited (0) 6 seconds ago kube-build-data-d3c60cf83f-3-v1.6.3-9
5a2707af2ccd 882577c54f67 "/bin/sh -c 'cd ${K8S" 7 days ago Exited (2) 7 days ago stupefied_goldberg
$ docker logs 75c2a3c40cb3
/bin/bash: /rsyncd.sh: Permission denied
```
I'm not sure why this works on Jenkins but not on my machine.
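For what it's worth, the permission error above is just a lost executable bit on `rsyncd.sh`; a fix along these lines (exact paths are illustrative) makes sure the bit is set no matter what the host checkout looks like:

```sh
# Illustrative: record the executable bit in git so every checkout has it...
git update-index --chmod=+x build/build-image/rsyncd.sh
# ...and/or force it when assembling the build image context.
chmod +x "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.sh"
```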
We were using netcat to try and figure out when the rsync container is ready. Now we instead use rsync itself. I suspect that there was a race condition with some versions of Docker where it would accept connections and then close them during container start.
This fixes #34214 (I think).
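Roughly, the new readiness check asks the rsync daemon itself for a module listing until it answers; a hedged sketch (port, retry count, and variable names are illustrative):

```sh
# Illustrative: poll the rsync daemon until it answers a module listing,
# rather than trusting a bare TCP connect (as netcat did).
tries=0
until rsync "rsync://localhost:${KUBE_RSYNC_PORT:-8730}/" >/dev/null 2>&1; do
  tries=$((tries + 1))
  if [[ ${tries} -gt 60 ]]; then
    echo "rsync container never became ready" >&2
    exit 1
  fi
  sleep 0.5
done
```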
make-generated-{protobuf,runtime}.sh was doing some really nasty stuff with how
the build container was managed in order to copy results out. Since we have
more flexibility to grab results out of the build container, we can now avoid
all of this. Ideally we wouldn't have `hack` calling `build` at all, but we
aren't there yet.
We also add a "version" to all docker images and containers.
This version is to be incremented manually when we change the shape of the build
image (like changing the golang version or the set of volumes in the data
container). When the version changes, all older versions of the images and
containers are deleted.
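A rough sketch of how a hand-bumped version can drive that cleanup (the variable names and tag layout here are illustrative, not necessarily what build/common.sh does):

```sh
# Illustrative only: a hand-bumped version baked into the image tag...
KUBE_BUILD_IMAGE_VERSION=9
KUBE_BUILD_IMAGE="kube-build:build-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION}"

# ...and old kube-build images whose tag carries a different version get removed.
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep '^kube-build:' \
  | grep -v -- "-${KUBE_BUILD_IMAGE_VERSION}\$" \
  | xargs -r docker rmi
```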
Automatic merge from submit-queue
Separate federation build.sh into development and deployment scripts.
The idea behind this separation is that it provides a clear distinction
between the dev environment and the prod environment. The
deploy/deploy.sh script will be shipped to the users, but
develop/develop.sh will be purely for development purposes and won't
be part of a release distribution.
Purely for developer convenience, all the deployment functionality is
made available through the develop/develop.sh script.
This change also copies deploy/* files into the release distribution.
cc @kubernetes/sig-cluster-federation @colhom
```release-note
Federation can now be deployed using the `federation/deploy/deploy.sh` script. This script does not depend on any of the development environment shell library/scripts. This is an alternative to the current `federation-up.sh`/`federation-down.sh` scripts. Both the scripts are going to co-exist in this release, but the `federation-up.sh`/`federation-down.sh` scripts might be removed in a future release in favor of `federation/deploy/deploy.sh` script.
```
Also build the hyperkube docker image on-the-fly.
This is only a temporary fix until the proposal in issue
https://github.com/kubernetes/kubernetes/issues/28630 is implemented.
Also, the new build/deployment method completely obviates this step.
We use debian image instead of busybox and do not build hyperkube as a
static binary yet. Wait until PR
https://github.com/kubernetes/kubernetes/pull/26028 is merged to build
static hyperkube binaries.
Automatic merge from submit-queue
Disable linux/ppc64le compilation by default
Work-around for #30384.
I'm still testing this locally to see if it actually works. The build is slow. (PR Jenkins won't tell us whether this fixes ppc.)
cc @Random-Liu @spxtr @david-mcmahon @luxas
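Mechanically this just means dropping linux/ppc64le from the default platform list unless the caller opts in; a sketch, with variable names that are only illustrative:

```sh
# Illustrative: build linux/ppc64le only when explicitly requested.
if [[ "${KUBE_BUILD_PPC64LE:-n}" =~ ^[yY]$ ]]; then
  KUBE_SERVER_PLATFORMS+=("linux/ppc64le")
fi
```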
Repro case:
```console
$ make clean generated_files
$ hack/update-generated-protobuf.sh
```
This would complain about not finding `fmt`, and it pointed at the wrong
GOROOT. The problem was that the first step built binaries for generating
code, which *embeds* the value of GOROOT into the binary. The whole tree was
bind-mounted into the build container and then JUST the dockerized dir was
mounted over it. The in-container build tried to use the existing binaries,
but their embedded GOROOT was wrong.
This change whites out the whole _output dir.
I first made just an anonymous volume for _output, but docker creates that as
root, which means I can't write to it from our non-root build. So I just put
it in the data container. This seems to work. The biggest change this makes
is that the $GOPATH/bin/ and $GOPATH/pkg/ dirs will persist across dockerized
builds.
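The resulting mount layout looks roughly like this (container and path names are illustrative):

```sh
# Illustrative: the data container owns the _output volume, so it shadows the
# _output directory of the bind-mounted source tree, and the $GOPATH/bin and
# $GOPATH/pkg volumes persist across dockerized builds.
docker run \
  --volumes-from "${KUBE_BUILD_DATA_CONTAINER_NAME:-kube-build-data}" \
  -v "${PWD}:/go/src/k8s.io/kubernetes" \
  kube-build:latest make all
```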
This commit removes the part of the common.sh script that copied
contrib/ sources for enabled contribs, which resulted in
duplicated files inside the tarball.
Fixes #30150
Automatic merge from submit-queue
Install go-bindata in cross-build image
Another follow-up to #25584.
We need `go-bindata` to create `test/e2e/generated`, and downloading it with `go get` at build time is painful for a variety of reasons. We can just include it in the cross-build image and not worry about it, especially as it updates very infrequently.
This fixes `hack/update-generated-protobuf.sh` as well.
cc @jayunit100 @soltysh
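Baking it in amounts to a single install step while the cross-build image is built; something like the following, using the import path that was common at the time (treat it as an example, not the exact Dockerfile change):

```sh
# Illustrative: install go-bindata once while building the cross-build image,
# so build-time 'go get' network fetches are no longer needed.
go get -u github.com/jteeuwen/go-bindata/go-bindata
```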
Automatic merge from submit-queue
build: fixed ${KUBE_ROOT} prefix for build scripts
Running the `./make-build-image.sh` command inside the `build` directory doesn't work:
```sh
$ cd build
$ ./make-build-image.sh
./../build/common.sh: line 32: hack/lib/init.sh: No such file or directory
```
This PR adds the `${KUBE_ROOT}` prefix to the paths passed to the `source` bash builtin. I also added braces to unify the code style.
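The fix boils down to anchoring the sourced path at the repository root rather than at the caller's working directory, e.g.:

```sh
# Before: resolved relative to the caller's cwd, so it breaks when run from build/.
# source hack/lib/init.sh
# After: always resolved from the repository root.
source "${KUBE_ROOT}/hack/lib/init.sh"
```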
If the docker-machine certificates get in a bad state, the current behavior
causes an infinite loop waiting for `docker-machine env` to return. Now it will
echo the certificate error and prompt the user to regenerate.
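A sketch of the new behavior (machine name, wording, and structure are illustrative):

```sh
# Illustrative: instead of looping forever, surface the certificate error from
# docker-machine and tell the user how to recover.
if ! docker-machine env "${DOCKER_MACHINE_NAME:-kube-dev}" >/dev/null 2>&1; then
  docker-machine env "${DOCKER_MACHINE_NAME:-kube-dev}" 2>&1 | sed 's/^/    /' >&2
  echo "Try 'docker-machine regenerate-certs ${DOCKER_MACHINE_NAME:-kube-dev}' and retry." >&2
  exit 1
fi
```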
This logs a false "error" message, so it's time for it to go. It was needed to ensure
nobody has stale build images lying around, but that was quite a while ago, so
it's probably safe now.
Automatic merge from submit-queue
Substitute federation_domain_map parameter with its value in node bootstrap scripts.
This PR also removes the substitution code we added to the build scripts.
**Release Note**
```release-note
If you use one of the kube-dns replication controller manifests in `cluster/saltbase/salt/kube-dns`, i.e. `cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}`, either substitute one of `__PILLAR__FEDERATIONS__DOMAIN__MAP__` or `{{ pillar['federations_domain_map'] }}` with the corresponding federation name to domain name value, or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for `{{ pillar['federations_domain_map'] }}`
pillar['federations_domain_map'] = "- --federations=myfederation=federation.test"
where `myfederation` is the name of the federation and `federation.test` is the domain name registered for the federation.
```
cc @erictune @kubernetes/sig-cluster-federation @MikeSpreitzer @luxas
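For example, a one-off substitution of the placeholder could look like this, reusing the federation name and domain from the note above (illustrative only):

```sh
# Illustrative: substitute the underscore-style placeholder with a concrete
# --federations flag. (For skydns-rc.yaml.in, the
# {{ pillar['federations_domain_map'] }} form would be substituted instead.)
sed -i 's/__PILLAR__FEDERATIONS__DOMAIN__MAP__/- --federations=myfederation=federation.test/' \
  cluster/saltbase/salt/kube-dns/skydns-rc.yaml.base
```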
Automatic merge from submit-queue
Add upgrade Docker VM
Add an error message to upgrade your Docker VM if needed. Example output:
```bash
+++ [0622 13:19:48] No docker host is set. Checking options for setting one...
+++ [0622 13:19:49] docker-machine was found.
+++ [0622 13:19:49] A Docker host using docker-machine named 'kube-dev' is ready to go!
Can't connect to 'docker' daemon. please fix and retry.
Possible causes:
- On Mac OS X, DOCKER_HOST hasn't been set. You may need to:
- Create and start your VM using docker-machine or boot2docker:
- docker-machine create -d virtualbox --virtualbox-memory 4096 --virtualbox-cpu-count -1 kube-dev
- boot2docker init && boot2docker start
- Set your environment variables using:
- eval $(docker-machine env kube-dev)
- $(boot2docker shellinit)
- On Linux, user isn't in 'docker' group. Add and relogin.
- Something like 'sudo usermod -a -G docker jscheuermann'
- RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
- On Linux, Docker daemon hasn't been started or has crashed.
!!! Error in hack/../hack/update-generated-protobuf.sh:53
'return 1' exited with status 1
Call stack:
1: hack/../hack/update-generated-protobuf.sh:53 main(...)
Exiting with status 1
Updating generated-protobuf FAILED
$ docker info
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
```
After running `docker-machine upgrade kube-dev` everything is fine again. So we should add a hint in the error message that this can also happen.
Automatic merge from submit-queue
Enable all ppc64le builds, except for hyperkube
Partially fixes: #25886
Talked to @Pensu, and all other binaries seem to work fine
@david-mcmahon @ixdy @Pensu @smarterclayton
Auto-generated docs are **NO LONGER CHECKED IN**, only placeholders.
To generate them, e.g. before exporting docs, run hack/generate-docs.sh.
hack/verify-generated-docs.sh ensures that generated docs are merely the
placeholder text.
hack/update-generated-docs.sh puts the placeholder text in the proper
places.
The old munge behavior is moved into hack/{update|verify}-munge-docs.sh.
Automatic merge from submit-queue
Switch DNS addons from skydns to kubedns
Change GCI and trusty cluster-helper scripts to use kubedns instead of skydns.
Unified the skydns templates using a simple underscore-based template and
added sed transform scripts to generate the salt and sed yaml
templates
Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns
Automatic merge from submit-queue
Support for cluster autoscaler in GCE Trusty and GCI images
Fixes: #26346
Ref: #26197
cc: @fgrzadkowski @vulpecula @piosz @jszczepkowski
Our `realpath` and `readlink -f` functions (required only because of MacOS,
thanks Steve) were poor substitutes at best. Mostly they were downright
broken. This thoroughly overhauls them and adds a test (in comments, since we
don't seem to have shell tests). For all the interesting cases I could think
of, the fakes act just like the real thing.
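For context, the fakes boil down to canonicalizing paths by hand, since stock MacOS ships neither GNU `realpath` nor `readlink -f`; a much-simplified sketch of the idea (not the actual helpers):

```sh
# Illustrative, much simplified: canonicalize a path the way 'readlink -f'
# would, by resolving the directory portion with 'pwd -P' and re-attaching the
# basename. (The real helpers also follow a symlinked final component and
# handle non-existent paths; that's omitted here.)
kube::example::realpath() {
  local path="$1"
  if [[ -d "${path}" ]]; then
    (cd "${path}" && pwd -P)
  else
    local dir
    dir="$(cd "$(dirname "${path}")" && pwd -P)"
    echo "${dir}/$(basename "${path}")"
  fi
}
```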
Then use those and canonicalize KUBE_ROOT. In order to make recursive calls of
our shell tool not additively grow `pwd`, we have to essentially make the
sourcing of init.sh idempotent.
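The idempotence can be as simple as a guard at the top of init.sh so that repeated sourcing from recursive invocations is a no-op; a minimal sketch, with a guard variable name of my own invention:

```sh
# Illustrative: sourcing init.sh a second time becomes a no-op, so recursive
# calls of the shell tools cannot keep re-appending to pwd-derived state.
if [[ -n "${KUBE_INIT_SH_SOURCED:-}" ]]; then
  return 0
fi
KUBE_INIT_SH_SOURCED=true
```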