KUBE_ROOT_HASH depends only on the host name and directory path, so when
working with branches it can be confusing: the hash stays the same even
when you switch from branch to branch. Use the git branch information
when computing the short hash.
Fixes #1801
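A minimal sketch of the idea, with illustrative names only (the real computation lives in the build scripts and may differ):
```bash
# Illustrative sketch: fold the current git branch into the short hash so that
# switching branches yields a different build-container identity.
short_hash() {
  local branch
  branch=$(git -C "${KUBE_ROOT}" symbolic-ref --short -q HEAD || echo "detached")
  echo "${HOSTNAME}:${KUBE_ROOT}:${branch}" | md5sum | cut -c1-10
}
```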
This was broken when we moved to the build container, but no one
noticed. We also likely have another bug: protobuf generation should
hard-fail when we have fields that aren't assigned a tag.
Automatic merge from submit-queue
Improve the multiarch situation; armel => armhf; re-enable ppc64le; remove the patched golang
**What this PR does / why we need it**:
- Improves the multiarch situation as described in #38067
- Tries to bump to go1.8 for arm (and later enable ppc64le)
- GOARM 6 => GOARM 7 (see the cross-compile sketch after this list)
- Remove the golang 1.7 patch
- armel => armhf
- Bump QEMU version to v2.7.0
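As a rough illustration of the GOARM bump referenced above (the actual build scripts wire these variables through the build container; the output path is a placeholder):
```bash
# Illustrative: cross-compile a 32-bit ARM (armhf-style) binary with GOARM=7.
export GOOS=linux GOARCH=arm GOARM=7 CGO_ENABLED=0
go build -o _output/local/bin/linux/arm/kubectl ./cmd/kubectl
```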
**Release note**:
```release-note
Improve the ARM builds and make hyperkube on ARM work again by upgrading the Go version for ARM to go1.8beta2
```
@kubernetes/sig-testing-misc @jessfraz @ixdy @jbeda @david-mcmahon @pwittrock
In etcd.sh, split the start process into a validate function plus a start function so that the validation piece can be reused elsewhere. The up-cluster script has been changed to drop its duplicate Docker logic in favor of the logic used in build-tools/common.sh, and the etcd validate function is now used there.
Moved the Docker daemon check function to util.sh and made the corresponding function name and upstream changes.
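A hedged sketch of the shape of that split; the function names here are assumptions based on the description rather than the exact etcd.sh contents:
```bash
# Illustrative: keep validation in its own function so other scripts can reuse it.
kube::etcd::validate() {
  command -v etcd >/dev/null 2>&1 || { echo "etcd not found in PATH" >&2; return 1; }
  etcd --version | head -n1
}

kube::etcd::start() {
  kube::etcd::validate || return 1
  etcd --data-dir "${ETCD_DIR:-/tmp/etcd}" >/dev/null 2>&1 &
  ETCD_PID=$!
}
```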
If you delete a source file, we want to reflect that in the build container. We
only use --delete in that one direction, as we don't want to accidentally delete
files in the user's source tree.
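Roughly, the asymmetry looks like this; the daemon address, module name, and paths are illustrative:
```bash
# Into the container: propagate deletions so stale sources don't linger.
rsync -a --delete "${KUBE_ROOT}/" "rsync://localhost:${RSYNC_PORT}/k8s/"

# Out of the container: no --delete, so nothing in the user's tree gets removed.
rsync -a "rsync://localhost:${RSYNC_PORT}/k8s/_output/" "${KUBE_ROOT}/_output/"
```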
Automatic merge from submit-queue
Check for rsync and give a friendlier message
Fixes #34300.
Not sure if #34309 is the same issue; hopefully it is.
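The guard itself is simple; a sketch with an assumed function name and message:
```bash
# Illustrative: fail early with a friendly message instead of a cryptic error later.
ensure_rsync() {
  if ! command -v rsync >/dev/null 2>&1; then
    echo "Can't find 'rsync' in PATH, please install it and retry." >&2
    return 1
  fi
}
```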
Automatic merge from submit-queue
Make sure rsync.sh is executable inside the build image
I kept having the build fail:
```console
$ make quick-release
+++ [1006 18:13:44] Verifying Prerequisites....
+++ [1006 18:13:44] Building Docker image kube-build:build-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:54] Creating data container kube-build-data-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:55] Syncing sources to container
!!! [1006 18:16:01] Could not connect to rsync container. See build/README.md for setting up remote Docker engine.
make: *** [quick-release] Error 1
```
`docker ps` revealed the issue:
```console
$ docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS                       PORTS   NAMES
$ docker ps -a
CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS                       PORTS   NAMES
75c2a3c40cb3   kube-build:build-d3c60cf83f-3-v1.6.3-9   "/rsyncd.sh"             6 seconds ago   Exited (126) 5 seconds ago           kube-rsync-d3c60cf83f-3-v1.6.3-9
3eb215e41f36   kube-build:build-d3c60cf83f-3-v1.6.3-9   "chown -R 85078.5000 "   8 seconds ago   Exited (0) 6 seconds ago             kube-build-data-d3c60cf83f-3-v1.6.3-9
5a2707af2ccd   882577c54f67                             "/bin/sh -c 'cd ${K8S"   7 days ago      Exited (2) 7 days ago                stupefied_goldberg
$ docker logs 75c2a3c40cb3
/bin/bash: /rsyncd.sh: Permission denied
```
I'm not sure why this works on Jenkins but not on my machine.
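The likely culprit is a lost executable bit when the script is staged into the Docker build context, so a fix along these lines (paths and variable names are assumptions) sets the bit explicitly instead of trusting the checkout's file mode:
```bash
# Illustrative: make sure rsyncd.sh is executable regardless of how the repo was checked out.
cp build/rsyncd.sh "${BUILD_CONTEXT_DIR}/rsyncd.sh"
chmod +x "${BUILD_CONTEXT_DIR}/rsyncd.sh"
```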
We were using netcat to try to figure out when the rsync container is ready. Now we use rsync itself instead. I suspect that there was a race condition with some versions of Docker where it would accept connections and then close them during container start.
This fixes #34214 (I think)
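A minimal sketch of probing readiness with rsync itself rather than netcat (the daemon address, port variable, and retry budget are assumptions):
```bash
# Illustrative: consider the rsync daemon ready only once it answers a real
# rsync request (listing its modules), not merely when the TCP port accepts.
rsync_ready() {
  rsync --contimeout=2 "rsync://localhost:${RSYNC_PORT}/" >/dev/null 2>&1
}

for _ in $(seq 1 30); do
  rsync_ready && break
  sleep 1
done
```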
make-generated-{protobuf,runtime}.sh was doing some really nasty stuff with how
the build container was managed in order to copy results out. Since we have
more flexibility to grab results out of the build container, we can now avoid
all of this. Ideally we wouldn't have `hack` calling `build` at all, but we
aren't there yet.
We also add a "version" to all Docker images and containers.
This version is to be incremented manually when we change the shape of the build
image (for example, changing the golang version or the set of volumes in the data
container). When the version differs, all older versions of the images and
containers are deleted.
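A hedged sketch of what versioned names plus cleanup could look like; the variable names and naming scheme are illustrative:
```bash
# Illustrative: bake a manually bumped version into image names and remove
# anything built from an older version of the build image.
KUBE_BUILD_IMAGE_VERSION=3   # bump by hand when the image "shape" changes
KUBE_BUILD_IMAGE="kube-build:build-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION}"

docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep '^kube-build:' \
  | grep -v -- "-${KUBE_BUILD_IMAGE_VERSION}$" \
  | xargs -r docker rmi
```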
Automatic merge from submit-queue
Separate federation build.sh into development and deployment scripts.
The idea behind this separation is that it provides a clear distinction
between the dev environment and the prod environment. The
deploy/deploy.sh script will be shipped to the users, but
develop/develop.sh will be purely for development purposes and won't
be part of a release distribution.
Purely for developer convenience, all the deployment functionality is
made available through the develop/develop.sh script.
This change also copies deploy/* files into the release distribution.
cc @kubernetes/sig-cluster-federation @colhom
```release-note
Federation can now be deployed using the `federation/deploy/deploy.sh` script. This script does not depend on any of the development environment shell libraries/scripts and is an alternative to the current `federation-up.sh`/`federation-down.sh` scripts. Both sets of scripts will co-exist in this release, but the `federation-up.sh`/`federation-down.sh` scripts might be removed in a future release in favor of the `federation/deploy/deploy.sh` script.
```
Also build the hyperkube Docker image on the fly.
This is only a temporary fix until the proposal in issue
https://github.com/kubernetes/kubernetes/issues/28630 is implemented.
Also, the new build/deployment method completely obviates this step.
We use a Debian image instead of busybox and do not build hyperkube as a
static binary yet. Wait until PR
https://github.com/kubernetes/kubernetes/pull/26028 is merged to build
static hyperkube binaries.
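A hedged sketch of building such an image on the fly; the make target, context directory, and tag are placeholders rather than the exact commands the script runs:
```bash
# Illustrative: compile hyperkube (not a static binary yet) and wrap it in a
# Debian-based image.
make WHAT=cmd/hyperkube
docker build -t hyperkube-amd64:dev "${HYPERKUBE_CONTEXT_DIR:-cluster/images/hyperkube}"
```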
Automatic merge from submit-queue
Disable linux/ppc64le compilation by default
Work-around for #30384.
I'm still testing this locally to see if it actually works. The build is slow. (PR Jenkins won't tell us whether this fixes ppc.)
cc @Random-Liu @spxtr @david-mcmahon @luxas
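For context, a hedged sketch of what dropping the platform from the default list looks like; the variable name and the remaining entries are assumptions:
```bash
# Illustrative: linux/ppc64le is no longer built by default; it can still be
# built explicitly by anyone who needs it.
KUBE_SERVER_PLATFORMS=(
  linux/amd64
  linux/arm
  linux/arm64
  # linux/ppc64le  # disabled by default; work-around for #30384
)
```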