Use rsync to get source into build container

We also add "version" to all docker images and containers

This version must be incremented manually whenever we change the shape of the build
image (like changing the golang version or the set of volumes in the data
container). When the version differs, all older versions of images and
containers are deleted.
Joe Beda 2016-08-17 10:45:47 -07:00
parent 22b7d90034
commit dc586ea8f7
12 changed files with 528 additions and 275 deletions

.gitignore vendored

@@ -62,6 +62,9 @@ network_closure.sh
.tags*

# Version file for dockerized build
.dockerized-kube-version-defs

# Web UI
/www/master/node_modules/
/www/master/npm-debug.log

build/BUILD_IMAGE_VERSION

@@ -0,0 +1 @@
3

build/README.md

@@ -4,12 +4,12 @@ Building Kubernetes is easy if you take advantage of the containerized build env

## Requirements

1. Docker, using one of the following configurations:
   1. **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/installation/mac/).
      **Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852)).
   2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
   3. **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier so look at the section later on.
2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)

You must install and configure Google Cloud SDK if you want to upload your release to Google Cloud Storage and may safely omit this otherwise.
@@ -17,8 +17,6 @@ You must install and configure Google Cloud SDK if you want to upload your release

While it is possible to build Kubernetes using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial set up and provides for a very consistent build and test environment.
There is also early support for building Docker "run" containers
## Key scripts

The following scripts are found in the `build/` directory. Note that all scripts must be run from the Kubernetes root directory.

@@ -29,10 +27,68 @@ The following scripts are found in the `build/` directory. Note that all scripts

* `build/run.sh make test`: Run all unit tests
* `build/run.sh make test-integration`: Run integration test
* `build/run.sh make test-cmd`: Run CLI tests
* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from the Docker container to the local `_output/dockerized/bin`. It will also copy out specific file patterns that are generated as part of the build process. This is run automatically as part of `build/run.sh`.
* `build/make-clean.sh`: Clean out the contents of `_output`, remove any locally built container images, and remove the data container.
* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
* `build/release.sh`: Build everything, test it, and (optionally) upload the results to a GCS bucket.
## Basic Flow
The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. These scripts will both ensure that the right data is cached from run to run for incremental builds and will copy the results back out of the container.
The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
There are 3 different container instances that are run from this image. The first is a "data" container that stores all data that needs to persist across runs to support incremental builds. Next there is an "rsync" container that is used to transfer data into and out of the data container. Lastly, there is a "build" container that is used for actually running build actions. The data container persists across runs while the rsync and build containers are deleted after each use.

`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. This will use an ephemeral port picked by Docker. You can modify this by setting the `KUBE_RSYNC_PORT` env variable.

All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to be changed and signal to CI systems that old artifacts need to be deleted.
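As a rough sketch of that naming scheme: every image and container name is `<base>-<hash>-<version>`. The hash function below (`md5sum`, first 10 characters) is an illustrative stand-in rather than the exact implementation, and the version string and paths are hypothetical.

```shell
#!/bin/bash
# Sketch: derive the suffixed Docker names the build scripts use.
set -o errexit

KUBE_ROOT="/home/user/go/src/k8s.io/kubernetes"   # hypothetical checkout path
KUBE_BUILD_IMAGE_VERSION="3-v1.6.3-2"             # hypothetical version string

# Illustrative short-hash helper; the real helper may hash differently.
short_hash() {
  echo -n "$1" | md5sum | cut -c1-10
}

KUBE_ROOT_HASH=$(short_hash "${HOSTNAME:-}:${KUBE_ROOT}")
KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}-${KUBE_BUILD_IMAGE_VERSION}"

echo "${KUBE_BUILD_CONTAINER_NAME}"
echo "${KUBE_DATA_CONTAINER_NAME}"
```

Because the hash covers the hostname and the repo path, two checkouts on the same CI machine get distinct names and never collide.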
## Proxy Settings
If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for the Kubernetes build. The following environment variables should be defined:
```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses that should bypass the proxy, for example:
```
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are building with `sudo` (for example, `sudo make quick-release`), run `sudo -E make quick-release` so these environment variables are passed through.
## Really Remote Docker Engine
It is possible to use a Docker Engine that is running remotely (under your desk or in the cloud). Docker must be configured to connect to that machine and the local rsync port must be forwarded (via SSH or nc) from localhost to the remote machine.
To do this easily with GCE and `docker-machine`, do something like this:
```
# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
KUBE_BUILD_VM=k8s-build
KUBE_BUILD_GCE_PROJECT=<project>
docker-machine create \
--driver=google \
--google-project=${KUBE_BUILD_GCE_PROJECT} \
--google-zone=us-west1-a \
--google-machine-type=n1-standard-8 \
--google-disk-size=50 \
--google-disk-type=pd-ssd \
${KUBE_BUILD_VM}
# Set up local docker to talk to that machine
eval $(docker-machine env ${KUBE_BUILD_VM})
# Pin down the local port that rsync will be exposed on
export KUBE_RSYNC_PORT=8370

# Forward local ${KUBE_RSYNC_PORT} to the rsync port (8730) on that machine so that rsync works
docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:8730 -N &
```
Look at `docker-machine stop`, `docker-machine start` and `docker-machine rm` to manage this VM.
## Releasing

@@ -63,41 +119,6 @@ Env Variable | Default | Description

`KUBE_GCS_NO_CACHING` | `y` | Disable HTTP caching of GCS release artifacts. By default GCS will cache public objects for up to an hour. When doing "devel" releases this can cause problems.
`KUBE_GCS_DOCKER_REG_PREFIX` | `docker-reg` | *Experimental* When uploading docker images, the bucket that backs the registry.
## Basic Flow
The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. If necessary (for Mac OS X), the scripts will also copy results out.
The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
Everything in `build/build-image/` is meant to be run inside of the container. If it doesn't think it is running in the container it'll throw a warning. While you can run some of that stuff outside of the container, it wasn't built to do so.
When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
## Proxy Settings
If you are behind a proxy, you need to export proxy settings for kubernetes build, the following environment variables should be defined.
```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses of no proxy for kubernetes build, for example
```
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are using sudo to make kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.
## TODOs
These are in no particular order
* [X] Harmonize with scripts in `hack/`. How much do we support building outside of Docker and these scripts?
* [X] Deprecate/replace most of the stuff in the hack/
* [ ] Finish support for the Dockerized runtime. Issue [#19](http://issue.k8s.io/19). A key issue here is to make this fast/light enough that we can use it for development workflows.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()

build/build-image/Dockerfile

@@ -15,6 +15,8 @@
# This file creates a standard build environment for building Kubernetes
FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
ADD localtime /etc/localtime
# Mark this as a kube-build container
RUN touch /kube-build-image

@@ -25,19 +27,13 @@ RUN chmod -R a+rwx /usr/local/go/pkg ${K8S_PATCHED_GOROOT}/pkg

# of operations.
ENV HOME /go/src/k8s.io/kubernetes
WORKDIR ${HOME}
# We have to mkdir the dirs we need, or else Docker will create them when we
# mount volumes, and it will create them with root-only permissions. The
# explicit chmod of _output is required, but I can't really explain why.
RUN mkdir -p ${HOME} ${HOME}/_output \
&& chmod -R a+rwx ${HOME} ${HOME}/_output
# Propagate the git tree version into the build image
ADD kube-version-defs /kube-version-defs
RUN chmod a+r /kube-version-defs
ENV KUBE_GIT_VERSION_FILE /kube-version-defs
# Make output from the dockerized build go someplace else
ENV KUBE_OUTPUT_SUBPATH _output/dockerized

# Pick up version stuff here as we don't copy our .git over.
ENV KUBE_GIT_VERSION_FILE ${HOME}/.dockerized-kube-version-defs
ADD rsyncd.password /
RUN chmod a+r /rsyncd.password
ADD rsyncd.sh /

build/build-image/rsyncd.sh (executable file)

@@ -0,0 +1,83 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script will set up and run rsyncd to allow data to move into and out of
# our dockerized build system. This is used for syncing sources and changes of
# sources into the docker-build-container. It is also used to transfer built binaries
# and generated files back out.
#
# When run as root (rare) it'll preserve the file ids as sent from the client.
# Usually it'll be run as non-dockerized UID/GID and end up translating all file
# ownership to that.
set -o errexit
set -o nounset
set -o pipefail
# The directory that gets sync'd
VOLUME=${HOME}
# Assume that this is running in Docker on a bridge. Allow connections from
# anything on the local subnet.
ALLOW=$(ip route | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
CONFDIR="/tmp/rsync.k8s"
PIDFILE="${CONFDIR}/rsyncd.pid"
CONFFILE="${CONFDIR}/rsyncd.conf"
SECRETS="${CONFDIR}/rsyncd.secrets"
mkdir -p "${CONFDIR}"
if [[ -f "${PIDFILE}" ]]; then
PID=$(cat "${PIDFILE}")
echo "Cleaning up old PID file: ${PIDFILE}"
kill $PID &> /dev/null || true
rm "${PIDFILE}"
fi
PASSWORD=$(</rsyncd.password)
cat <<EOF >"${SECRETS}"
k8s:${PASSWORD}
EOF
chmod go= "${SECRETS}"
USER_CONFIG=
if [[ "$(id -u)" == "0" ]]; then
USER_CONFIG=" uid = 0"$'\n'" gid = 0"
fi
cat <<EOF >"${CONFFILE}"
pid file = ${PIDFILE}
use chroot = no
log file = /dev/stdout
reverse lookup = no
munge symlinks = no
port = 8730
[k8s]
numeric ids = true
$USER_CONFIG
hosts deny = *
hosts allow = ${ALLOW}
auth users = k8s
secrets file = ${SECRETS}
read only = false
path = ${VOLUME}
filter = - /.make/ - /.git/ - /_tmp/
EOF
exec /usr/bin/rsync --no-detach --daemon --config="${CONFFILE}" "$@"
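The `ALLOW` computation above can be exercised in isolation: given a routing table, the awk program captures the subnet attached to the default route's interface. A quick sketch, using a hypothetical `ip route` output as it might appear inside a container on the default Docker bridge:

```shell
# Run the same awk program rsyncd.sh uses, against canned `ip route` output.
# The addresses below are illustrative, not taken from a real host.
sample_routes='default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2'

# First rule remembers the default route's interface (eth0); second rule
# prints the prefix of any route on that interface.
ALLOW=$(echo "${sample_routes}" | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
echo "${ALLOW}"
```

With that input the program yields `172.17.0.0/16`, so only peers on the local bridge subnet can reach the daemon.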

build/common.sh

@@ -55,7 +55,15 @@ readonly KUBE_BUILD_PPC64LE="${KUBE_BUILD_PPC64LE:-n}"
# Constants
readonly KUBE_BUILD_IMAGE_REPO=kube-build
readonly KUBE_BUILD_IMAGE_CROSS_TAG="$(cat ${KUBE_ROOT}/build/build-image/cross/VERSION)"
# KUBE_BUILD_DATA_CONTAINER_NAME=kube-build-data-<hash>
# This version number is used to cause everyone to rebuild their data containers
# and build image. This is especially useful for automated build systems like
# Jenkins.
#
# Increment/change this number if you change the build image (anything under
# build/build-image) or change the set of volumes in the data container.
readonly KUBE_BUILD_IMAGE_VERSION_BASE="$(cat ${KUBE_ROOT}/build/BUILD_IMAGE_VERSION)"
readonly KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
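The composed version string is just the two pieces joined with a dash, so a bump to either `build/BUILD_IMAGE_VERSION` or the cross-image tag changes every derived image and container name. A sketch with hypothetical values:

```shell
# Compose the build-image version the same way the lines above do, using
# hypothetical stand-ins for the two inputs.
KUBE_BUILD_IMAGE_VERSION_BASE="3"        # would come from build/BUILD_IMAGE_VERSION
KUBE_BUILD_IMAGE_CROSS_TAG="v1.6.3-2"    # hypothetical cross-image tag

KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
echo "${KUBE_BUILD_IMAGE_VERSION}"
```

Since this suffix appears in every image tag and container name, old artifacts stop matching the current names and become eligible for cleanup.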
# Here we map the output directories across both the local and remote _output
# directories:

@@ -76,17 +84,19 @@ readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"

# This is a symlink to binaries for "this platform" (e.g. build tools).
readonly THIS_PLATFORM_BIN="${LOCAL_OUTPUT_ROOT}/bin"
readonly REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"
readonly REMOTE_OUTPUT_ROOT="${REMOTE_ROOT}/_output"
readonly REMOTE_OUTPUT_SUBPATH="${REMOTE_OUTPUT_ROOT}/dockerized"
readonly REMOTE_OUTPUT_BINPATH="${REMOTE_OUTPUT_SUBPATH}/bin"
readonly REMOTE_OUTPUT_GOPATH="${REMOTE_OUTPUT_SUBPATH}/go"

# This is the port on the workstation host to expose RSYNC on. Set this if you
# are doing something fancy with ssh tunneling.
readonly KUBE_RSYNC_PORT="${KUBE_RSYNC_PORT:-}"

# This is the port that rsync is running on *inside* the container. This may be
# mapped to KUBE_RSYNC_PORT via docker networking.
readonly KUBE_CONTAINER_RSYNC_PORT=8730
# This is where the final release artifacts are created locally
readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"

@@ -141,10 +151,15 @@ kube::build::get_docker_wrapped_binaries() {
#
# Vars set:
#   KUBE_ROOT_HASH
#   KUBE_BUILD_IMAGE_TAG_BASE
#   KUBE_BUILD_IMAGE_TAG
#   KUBE_BUILD_IMAGE
#   KUBE_BUILD_CONTAINER_NAME_BASE
#   KUBE_BUILD_CONTAINER_NAME
#   KUBE_DATA_CONTAINER_NAME_BASE
#   KUBE_DATA_CONTAINER_NAME
#   KUBE_RSYNC_CONTAINER_NAME_BASE
#   KUBE_RSYNC_CONTAINER_NAME
#   DOCKER_MOUNT_ARGS
#   LOCAL_OUTPUT_BUILD_CONTEXT
function kube::build::verify_prereqs() {
@@ -156,13 +171,26 @@ function kube::build::verify_prereqs() {

  fi
  kube::build::ensure_docker_daemon_connectivity || return 1
if (( ${KUBE_VERBOSE} > 6 )); then
kube::log::status "Docker Version:"
"${DOCKER[@]}" version | kube::log::info_from_stdin
fi
  KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}")
  KUBE_BUILD_IMAGE_TAG_BASE="build-${KUBE_ROOT_HASH}"
  KUBE_BUILD_IMAGE_TAG="${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
  KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
  KUBE_BUILD_CONTAINER_NAME_BASE="kube-build-${KUBE_ROOT_HASH}"
  KUBE_BUILD_CONTAINER_NAME="${KUBE_BUILD_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
  KUBE_RSYNC_CONTAINER_NAME_BASE="kube-rsync-${KUBE_ROOT_HASH}"
  KUBE_RSYNC_CONTAINER_NAME="${KUBE_RSYNC_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
  KUBE_DATA_CONTAINER_NAME_BASE="kube-build-data-${KUBE_ROOT_HASH}"
  KUBE_DATA_CONTAINER_NAME="${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
  DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}")
  LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"

  kube::version::get_version_vars
  kube::version::save_version_vars "${KUBE_ROOT}/.dockerized-kube-version-defs"
}
# ---------------------------------------------------------------------------

@@ -177,8 +205,8 @@ function kube::build::docker_available_on_osx() {

    kube::log::status "No docker host is set. Checking options for setting one..."
    if [[ -z "$(which docker-machine)" ]]; then
      kube::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
      kube::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
      return 1
    elif [[ -n "$(which docker-machine)" ]]; then
      kube::build::prepare_docker_machine
@@ -244,25 +272,23 @@ function kube::build::ensure_docker_in_path() {

function kube::build::ensure_docker_daemon_connectivity {
  if ! "${DOCKER[@]}" info > /dev/null 2>&1 ; then
    cat <<'EOF' >&2
Can't connect to 'docker' daemon. Please fix and retry.

Possible causes:
  - Docker Daemon not started
    - Linux: confirm via your init system
    - macOS w/ docker-machine: run `docker-machine ls` and `docker-machine start <name>`
    - macOS w/ Docker for Mac: Check the menu bar and start the Docker application
  - DOCKER_HOST hasn't been set or is set incorrectly
    - Linux: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}`
    - macOS w/ docker-machine: run `eval "$(docker-machine env <name>)"`
    - macOS w/ Docker for Mac: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}`
  - Other things to check:
    - Linux: User isn't in 'docker' group. Add and relogin.
      - Something like 'sudo usermod -a -G docker ${USER}'
      - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
EOF
    return 1
  fi
}
@@ -288,57 +314,6 @@ function kube::build::ensure_tar() {

  fi
}
function kube::build::clean_output() {
# Clean out the output directory if it exists.
if kube::build::has_docker ; then
if kube::build::build_image_built ; then
kube::log::status "Cleaning out _output/dockerized/bin/ via docker build image"
kube::build::run_build_command bash -c "rm -rf '${REMOTE_OUTPUT_BINPATH}'/*"
else
kube::log::error "Build image not built. Cannot clean via docker build image."
fi
kube::log::status "Removing data container ${KUBE_BUILD_DATA_CONTAINER_NAME}"
"${DOCKER[@]}" rm -v "${KUBE_BUILD_DATA_CONTAINER_NAME}" >/dev/null 2>&1 || true
fi
kube::log::status "Removing _output directory"
rm -rf "${LOCAL_OUTPUT_ROOT}"
}
# Make sure the _output directory is created and mountable by docker
function kube::build::prepare_output() {
# See auto-creation of host mounts: https://github.com/docker/docker/pull/21666
# if selinux is enabled, docker run -v /foo:/foo:Z will not autocreate the host dir
mkdir -p "${LOCAL_OUTPUT_SUBPATH}"
mkdir -p "${LOCAL_OUTPUT_BINPATH}"
# On RHEL/Fedora SELinux is enabled by default and currently breaks docker
# volume mounts. We can work around this by explicitly adding a security
# context to the _output directory.
# Details: http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/
if which selinuxenabled &>/dev/null && \
selinuxenabled && \
which chcon >/dev/null ; then
if [[ ! $(ls -Zd "${LOCAL_OUTPUT_ROOT}") =~ svirt_sandbox_file_t ]] ; then
kube::log::status "Applying SELinux policy to '_output' directory."
if ! chcon -Rt svirt_sandbox_file_t "${LOCAL_OUTPUT_ROOT}"; then
echo " ***Failed***. This may be because you have root owned files under _output."
echo " Continuing, but this build may fail later if SELinux prevents access."
fi
fi
number=${#DOCKER_MOUNT_ARGS[@]}
for (( i=0; i<number; i++ )); do
if [[ "${DOCKER_MOUNT_ARGS[i]}" =~ "${KUBE_ROOT}" ]]; then
## Ensure we don't label the argument multiple times
if [[ ! "${DOCKER_MOUNT_ARGS[i]}" == *:Z ]]; then
DOCKER_MOUNT_ARGS[i]="${DOCKER_MOUNT_ARGS[i]}:Z"
fi
fi
done
fi
}
function kube::build::has_docker() {
  which docker &> /dev/null
}
@@ -353,9 +328,51 @@ function kube::build::docker_image_exists() {

    exit 2
  }
  [[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
}
# Delete all images that match a tag prefix except for the "current" version
#
# $1: The image repo/name
# $2: The tag base. We consider any image that matches $2*
# $3: The current image not to delete if provided
function kube::build::docker_delete_old_images() {
# In Docker 1.12, we can replace this with
# docker images "$1" --format "{{.Tag}}"
for tag in $("${DOCKER[@]}" images ${1} | tail -n +2 | awk '{print $2}') ; do
if [[ "${tag}" != "${2}"* ]] ; then
V=6 kube::log::status "Keeping image ${1}:${tag}"
continue
fi
if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
V=2 kube::log::status "Deleting image ${1}:${tag}"
"${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
else
V=6 kube::log::status "Keeping image ${1}:${tag}"
fi
done
}
# Stop and delete all containers that match a pattern
#
# $1: The base container prefix
# $2: The current container to keep, if provided
function kube::build::docker_delete_old_containers() {
# In Docker 1.12 we can replace this line with
# docker ps -a --format="{{.Names}}"
for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
if [[ "${container}" != "${1}"* ]] ; then
V=6 kube::log::status "Keeping container ${container}"
continue
fi
if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
V=2 kube::log::status "Deleting container ${container}"
kube::build::destroy_container "${container}"
else
V=6 kube::log::status "Keeping container ${container}"
fi
done
}
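Both cleanup helpers above apply the same keep/delete rule: delete anything that starts with the base prefix, except the current item. The rule in isolation, with hypothetical names:

```shell
# The keep/delete predicate shared by docker_delete_old_images and
# docker_delete_old_containers: prefix match on the base name, sparing
# the current item. Names below are illustrative only.
should_delete() {
  local name=$1 base=$2 current=$3
  [[ "${name}" == "${base}"* && "${name}" != "${current}" ]]
}

should_delete "kube-build-abc-2-v1.6" "kube-build-abc" "kube-build-abc-3-v1.6" && echo "stale: delete"
should_delete "kube-build-abc-3-v1.6" "kube-build-abc" "kube-build-abc-3-v1.6" || echo "current: keep"
```

Items that do not match the base prefix at all are also kept, which is what protects unrelated images and containers on shared CI machines.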
# Takes $1 and computes a short hash for it. Useful for unique tag generation
@@ -420,34 +437,51 @@ function kube::release::parse_and_validate_ci_version() {

# ---------------------------------------------------------------------------
# Building
function kube::build::clean() {
if kube::build::has_docker ; then
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}"
V=2 kube::log::status "Cleaning all untagged docker images"
"${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true
fi
kube::log::status "Removing _output directory"
rm -rf "${LOCAL_OUTPUT_ROOT}"
}
function kube::build::build_image_built() {
  kube::build::docker_image_exists "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG}"
}
# The set of source targets to include in the kube-build image
function kube::build::source_targets() {
local targets=(
$(find . -mindepth 1 -maxdepth 1 -not \( \
\( -path ./_\* -o -path ./.git\* \) -prune \
\))
)
echo "${targets[@]}"
}
# Set up the context directory for the kube-build image and build it.
function kube::build::build_image() {
  if ! kube::build::build_image_built; then
    mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"

    cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
    cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
    cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
    dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
    chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"

    kube::build::update_dockerfile

    kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
  fi

  # Clean up old versions of everything
  kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" "${KUBE_BUILD_CONTAINER_NAME}"
  kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" "${KUBE_RSYNC_CONTAINER_NAME}"
  kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" "${KUBE_DATA_CONTAINER_NAME}"
  kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" "${KUBE_BUILD_IMAGE_TAG}"

  kube::build::ensure_data_container
  kube::build::sync_to_container
}
# Build a docker image from a Dockerfile.

@@ -477,70 +511,101 @@ EOF
  }
}
function kube::build::clean_image() {
local -r image=$1
kube::log::status "Deleting docker image ${image}"
"${DOCKER[@]}" rmi ${image} 2> /dev/null || true
}
function kube::build::clean_images() {
kube::build::has_docker || return 0
kube::build::clean_image "${KUBE_BUILD_IMAGE}"
kube::log::status "Cleaning all other untagged docker images"
"${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true
}
function kube::build::ensure_data_container() {
  # If the data container exists AND exited successfully, we can use it.
  # Otherwise nuke it and start over.
  local ret=0
  local code=$(docker inspect \
      -f '{{.State.ExitCode}}' \
      "${KUBE_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?)
  if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
    kube::build::destroy_container "${KUBE_DATA_CONTAINER_NAME}"
    ret=1
  fi
  if [[ "${ret}" != 0 ]]; then
    kube::log::status "Creating data container ${KUBE_DATA_CONTAINER_NAME}"
    # We have to ensure the directory exists, or else the docker run will
    # create it as root.
    mkdir -p "${LOCAL_OUTPUT_GOPATH}"
    # We want this to run as root to be able to chown, so non-root users can
    # later use the result as a data container.  This run both creates the data
    # container and chowns the GOPATH.
    #
    # The data container creates volumes for all of the directories that store
    # intermediates for the Go build.  This enables incremental builds across
    # Docker sessions.  The *_cgo paths are re-compiled versions of the Go std
    # libraries for true static building.
    local -ra docker_cmd=(
      "${DOCKER[@]}" run
      --volume "${REMOTE_ROOT}"   # white-out the whole output dir
      --volume /usr/local/go/pkg/linux_386_cgo
      --volume /usr/local/go/pkg/linux_amd64_cgo
      --volume /usr/local/go/pkg/linux_arm_cgo
      --volume /usr/local/go/pkg/linux_arm64_cgo
      --volume /usr/local/go/pkg/linux_ppc64le_cgo
      --volume /usr/local/go/pkg/darwin_amd64_cgo
      --volume /usr/local/go/pkg/darwin_386_cgo
      --volume /usr/local/go/pkg/windows_amd64_cgo
      --volume /usr/local/go/pkg/windows_386_cgo
      --name "${KUBE_DATA_CONTAINER_NAME}"
      --hostname "${HOSTNAME}"
      "${KUBE_BUILD_IMAGE}"
      chown -R $(id -u).$(id -g)
        "${REMOTE_ROOT}"
        /usr/local/go/pkg/
    )
    "${docker_cmd[@]}"
  fi
}
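The trailing `chown -R $(id -u).$(id -g)` hands ownership of the shared volumes back to the invoking user, so that later build containers started with `--user=$(id -u):$(id -g)` can write into them. A minimal local sketch of the dot-separated `uid.gid` argument form (GNU chown also accepts `uid:gid`; the paths here are throwaway temp dirs, not the build tree):

```shell
# chown accepts a "uid.gid" pair; chowning a directory you already own is
# a no-op here, but it is the same invocation the data container runs as
# root against the freshly created volumes.
owner="$(id -u).$(id -g)"
workdir="$(mktemp -d)"
touch "${workdir}/artifact"
chown -R "${owner}" "${workdir}"
```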
# Run a command in the kube-build image.  This assumes that the image has
# already been built.
function kube::build::run_build_command() {
  kube::log::status "Running build command..."
  kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@"
}

# Run a command in the kube-build image.  This assumes that the image has
# already been built.
#
# Arguments are in the form of
#   <container name> <extra docker args> -- <command>
function kube::build::run_build_command_ex() {
[[ $# != 0 ]] || { echo "Invalid input - please specify a container name." >&2; return 4; }
local container_name="${1}"
shift
  local -a docker_run_opts=(
    "--name=${container_name}"
    "--user=$(id -u):$(id -g)"
    "--hostname=${HOSTNAME}"
    "${DOCKER_MOUNT_ARGS[@]}"
  )
local detach=false
[[ $# != 0 ]] || { echo "Invalid input - please specify docker arguments followed by --." >&2; return 4; }
# Everything before "--" is an arg to docker
until [ -z "${1-}" ] ; do
if [[ "$1" == "--" ]]; then
shift
break
fi
docker_run_opts+=("$1")
if [[ "$1" == "-d" || "$1" == "--detach" ]] ; then
detach=true
fi
shift
done
# Everything after "--" is the command to run
[[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; }
local -a cmd=()
until [ -z "${1-}" ] ; do
cmd+=("$1")
shift
done
  docker_run_opts+=(
    --env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}"
    --env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}"
@@ -553,7 +618,7 @@ function kube::build::run_build_command() {
  # attach stderr/stdout but don't bother asking for a tty.
  if [[ -t 0 ]]; then
    docker_run_opts+=(--interactive --tty)
  elif [[ "${detach}" == false ]]; then
    docker_run_opts+=(--attach=stdout --attach=stderr)
  fi
@@ -561,73 +626,154 @@ function kube::build::run_build_command() {
"${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}") "${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")
# Clean up container from any previous run # Clean up container from any previous run
kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}" kube::build::destroy_container "${container_name}"
"${docker_cmd[@]}" "$@" "${docker_cmd[@]}" "${cmd[@]}"
kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}" if [[ "${detach}" == false ]]; then
} kube::build::destroy_container "${container_name}"
# Test if the output directory is remote (and can only be accessed through
# docker) or if it is "local" and we can access the output without going through
# docker.
function kube::build::is_output_remote() {
rm -f "${LOCAL_OUTPUT_SUBPATH}/test_for_remote"
kube::build::run_build_command touch "${REMOTE_OUTPUT_BINPATH}/test_for_remote"
[[ ! -e "${LOCAL_OUTPUT_BINPATH}/test_for_remote" ]]
}
# If the Docker server is remote, copy the results back out.
function kube::build::copy_output() {
if kube::build::is_output_remote; then
# At time of this code, docker cp does not work when copying from a volume.
# As a workaround, the binaries are first copied to a local filesystem,
# /tmp, then docker cp'd to the local binaries output directory.
# The fix for the volume bug has been accepted and once it's widely
# deployed the code below should be simplified to a simple docker cp
# Bug: https://github.com/docker/docker/pull/8509
local -a docker_run_opts=(
"--name=${KUBE_BUILD_CONTAINER_NAME}"
"--user=$(id -u):$(id -g)"
"${DOCKER_MOUNT_ARGS[@]}"
-d
)
local -ra docker_cmd=(
"${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}"
)
kube::log::status "Syncing back _output/dockerized/bin directory from remote Docker"
rm -rf "${LOCAL_OUTPUT_BINPATH}"
mkdir -p "${LOCAL_OUTPUT_BINPATH}"
rm -f "${THIS_PLATFORM_BIN}"
ln -s "${LOCAL_OUTPUT_BINPATH}" "${THIS_PLATFORM_BIN}"
kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}"
"${docker_cmd[@]}" bash -c "cp -r ${REMOTE_OUTPUT_BINPATH} /tmp/bin;touch /tmp/finished;rm /tmp/bin/test_for_remote;/bin/sleep 600" > /dev/null 2>&1
# Wait until binaries have finished coppying
count=0
while true;do
if "${DOCKER[@]}" cp "${KUBE_BUILD_CONTAINER_NAME}:/tmp/finished" "${LOCAL_OUTPUT_BINPATH}" > /dev/null 2>&1;then
"${DOCKER[@]}" cp "${KUBE_BUILD_CONTAINER_NAME}:/tmp/bin" "${LOCAL_OUTPUT_SUBPATH}"
break;
fi
let count=count+1
if [[ $count -eq 60 ]]; then
# break after 5m
kube::log::error "Timed out waiting for binaries..."
break
fi
sleep 5
done
"${DOCKER[@]}" rm -f -v "${KUBE_BUILD_CONTAINER_NAME}" >/dev/null 2>&1 || true
else
kube::log::status "Output directory is local. No need to copy results out."
fi fi
} }
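The argument convention `kube::build::run_build_command_ex` parses — docker options up to a `--` separator, then the command to run inside the container — can be sketched as a self-contained helper (the function name is ours, not the build script's):

```shell
# Split an argument list at "--": everything before it is a docker option,
# everything after it is the command to run inside the container.
split_args() {
  local -a opts=() cmd=()
  local seen=false arg
  for arg in "$@"; do
    if [[ "${seen}" == false && "${arg}" == "--" ]]; then
      seen=true
    elif [[ "${seen}" == true ]]; then
      cmd+=("${arg}")
    else
      opts+=("${arg}")
    fi
  done
  printf 'opts=%s cmd=%s\n' "${opts[*]}" "${cmd[*]}"
}

split_args -p 8080:80 -d -- make all
# → opts=-p 8080:80 -d cmd=make all
```

The real function additionally watches for `-d`/`--detach` among the docker options so it knows not to attach stdout/stderr or destroy the container afterwards.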
function kube::build::probe_address {
# Apple has an ancient version of netcat with custom timeout flags. This is
# the best way I (jbeda) could find to test for that.
local netcat
if nc 2>&1 | grep -e 'apple' >/dev/null ; then
netcat="nc -G 1"
else
netcat="nc -w 1"
fi
  # Wait until rsync is up and running.
if ! which nc >/dev/null ; then
V=6 kube::log::info "netcat not installed, waiting for 1s"
sleep 1
return 0
fi
local tries=10
while (( ${tries} > 0 )) ; do
if ${netcat} -z "$1" "$2" 2> /dev/null ; then
return 0
fi
tries=$(( ${tries} - 1))
sleep 0.1
done
return 1
}
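`probe_address` degrades to a plain one-second sleep when `nc` is absent; bash's own `/dev/tcp` redirection (a bashism) can perform a similar zero-byte connect test without netcat. The helper below is ours, offered as an alternative sketch, not part of the build scripts:

```shell
# Probe a TCP endpoint using bash's /dev/tcp support: the subshell exits
# zero only if the connection can actually be opened.
probe_tcp() {
  local host="$1" port="$2"
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}
```

For example, `probe_tcp 127.0.0.1 1` fails fast on a closed port, which is the property the retry loop above relies on.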
# Start up the rsync container in the background.  This should be explicitly
# stopped with kube::build::stop_rsyncd_container.
#
# This will set the global var KUBE_RSYNC_ADDR to the effective address and
# port that the rsync daemon can be reached on.
function kube::build::start_rsyncd_container() {
kube::build::stop_rsyncd_container
V=6 kube::log::status "Starting rsyncd container"
kube::build::run_build_command_ex \
"${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d \
-- /rsyncd.sh >/dev/null
local mapped_port
if ! mapped_port=$("${DOCKER[@]}" port "${KUBE_RSYNC_CONTAINER_NAME}" ${KUBE_CONTAINER_RSYNC_PORT} 2> /dev/null | cut -d: -f 2) ; then
    kube::log::error "Could not get effective rsync port"
return 1
fi
local container_ip
container_ip=$("${DOCKER[@]}" inspect --format '{{ .NetworkSettings.IPAddress }}' "${KUBE_RSYNC_CONTAINER_NAME}")
# Sometimes we can reach rsync through localhost and a NAT'd port. Other
# times (when we are running in another docker container on the Jenkins
# machines) we have to talk directly to the container IP. There is no one
# strategy that works in all cases so we test to figure out which situation we
# are in.
if kube::build::probe_address 127.0.0.1 ${mapped_port}; then
KUBE_RSYNC_ADDR="127.0.0.1:${mapped_port}"
sleep 0.5
return 0
elif kube::build::probe_address "${container_ip}" ${KUBE_CONTAINER_RSYNC_PORT}; then
KUBE_RSYNC_ADDR="${container_ip}:${KUBE_CONTAINER_RSYNC_PORT}"
sleep 0.5
return 0
fi
kube::log::error "Could not connect to rsync container. See build/README.md for setting up remote Docker engine."
return 1
}
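`docker port` prints an address such as `127.0.0.1:32768`, and the code above recovers the host port with `cut -d: -f2`. Standalone (the helper name is ours, and this assumes the IPv4 `addr:port` form — an IPv6 address with extra colons would need different parsing):

```shell
# Extract the host port from "addr:port" output such as docker port's.
parse_mapped_port() {
  cut -d: -f2 <<< "$1"
}

parse_mapped_port "127.0.0.1:32768"
# → 32768
```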
function kube::build::stop_rsyncd_container() {
V=6 kube::log::status "Stopping any currently running rsyncd container"
unset KUBE_RSYNC_ADDR
kube::build::destroy_container "${KUBE_RSYNC_CONTAINER_NAME}"
}
# This will launch rsyncd in a container and then sync the source tree to the
# container over the local network.
function kube::build::sync_to_container() {
kube::log::status "Syncing sources to container"
kube::build::start_rsyncd_container
local rsync_extra=""
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_extra="-iv"
fi
# rsync filters are a bit confusing. Here we are syncing everything except
# output only directories and things that are not necessary like the git
# directory. The '- /' filter prevents rsync from trying to set the
# uid/gid/perms on the root of the sync tree.
V=6 kube::log::status "Running rsync"
rsync ${rsync_extra} \
--archive \
--prune-empty-dirs \
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
--filter='- /.git/' \
--filter='- /.make/' \
--filter='- /_tmp/' \
--filter='- /_output/' \
--filter='- /' \
"${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"
kube::build::stop_rsyncd_container
}
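The exclusion filters used for the source sync are easier to see on a toy tree. This sketch assumes a local `rsync` binary and plain directory-to-directory copying (no rsync daemon or password file), with throwaway temp paths:

```shell
# Build a small source tree, then sync it while excluding the top-level
# .git and _output directories, mirroring the source-sync filters above.
src="$(mktemp -d)"; dst="$(mktemp -d)"
mkdir -p "${src}/.git" "${src}/_output" "${src}/pkg"
touch "${src}/.git/config" "${src}/_output/junk" "${src}/pkg/util.go"

rsync --archive \
  --filter='- /.git/' \
  --filter='- /_output/' \
  "${src}/" "${dst}/"
```

Because the patterns are anchored with a leading `/`, only the top-level `.git` and `_output` are excluded; a nested `pkg/_output` would still transfer.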
# Copy all build results back out.
function kube::build::copy_output() {
kube::log::status "Syncing out of container"
kube::build::start_rsyncd_container
local rsync_extra=""
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_extra="-iv"
fi
  # The filter syntax for rsync is a little obscure.  It filters on files and
  # directories.  If you don't go into a directory you won't find any files
  # there.  Rules are evaluated in order.  The last two rules are a little
  # magic.  '+ */' says to go into every directory and '- /**' says to ignore
  # any file or directory that isn't already specifically allowed.
#
# We are looking to copy out all of the built binaries along with various
# generated files.
V=6 kube::log::status "Running rsync"
rsync ${rsync_extra} \
--archive \
--prune-empty-dirs \
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
--filter='- /vendor/' \
--filter='- /_temp/' \
--filter='+ /_output/dockerized/bin/**' \
--filter='+ zz_generated.*' \
--filter='+ */' \
--filter='- /**' \
"rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/" "${KUBE_ROOT}"
kube::build::stop_rsyncd_container
}
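The allow-list combination of `+ */` and `- /**` is likewise easiest to see on a toy tree. Again this sketch assumes a local `rsync` binary and throwaway temp paths rather than the real daemon:

```shell
# Only files under _output/dockerized/bin are allowed through; '+ */'
# lets rsync descend into every directory, '- /**' drops everything not
# explicitly allowed, and --prune-empty-dirs removes the leftover shells.
src="$(mktemp -d)"; dst="$(mktemp -d)"
mkdir -p "${src}/_output/dockerized/bin" "${src}/pkg"
touch "${src}/_output/dockerized/bin/kubectl" "${src}/pkg/main.go"

rsync --archive --prune-empty-dirs \
  --filter='+ /_output/dockerized/bin/**' \
  --filter='+ */' \
  --filter='- /**' \
  "${src}/" "${dst}/"
```

Rule order is what makes this work: the specific `+` allow must appear before the catch-all `- /**`, since rsync stops at the first matching rule.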
# ---------------------------------------------------------------------------
# Build final release artifacts

function kube::release::clean_cruft() {
@@ -14,10 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Copies any built binaries (and other generated files) out of the Docker build container.
set -o errexit
set -o nounset
set -o pipefail
@@ -23,5 +23,4 @@ KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"

kube::build::verify_prereqs
kube::build::clean
@@ -18,6 +18,8 @@ set -o errexit
set -o nounset
set -o pipefail

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
@@ -35,11 +37,11 @@ function prereqs() {
  KUBE_BUILD_IMAGE_TAG="build-${KUBE_ROOT_HASH}"
  KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
  KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}"
  KUBE_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}"
  DOCKER_MOUNT_ARGS=(
    --volume "${REPO_DIR:-${KUBE_ROOT}}:/go/src/${KUBE_GO_PACKAGE}"
    --volume /etc/localtime:/etc/localtime:ro
    --volumes-from "${KUBE_DATA_CONTAINER_NAME}"
  )
  LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
}
@@ -35,11 +35,11 @@ function prereqs() {
  KUBE_BUILD_IMAGE_TAG="build-${KUBE_ROOT_HASH}"
  KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
  KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}"
  KUBE_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}"
  DOCKER_MOUNT_ARGS=(
    --volume "${REPO_DIR:-${KUBE_ROOT}}:/go/src/${KUBE_GO_PACKAGE}"
    --volume /etc/localtime:/etc/localtime:ro
    --volumes-from "${KUBE_DATA_CONTAINER_NAME}"
  )
  LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
}
@@ -50,4 +50,3 @@ cp "${KUBE_ROOT}/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile" "${LOCAL
kube::build::update_dockerfile
kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
kube::build::run_build_command hack/update-generated-runtime-dockerized.sh "$@"
@@ -34,8 +34,8 @@ trap "cleanup" EXIT SIGINT
cleanup

for APIROOT in ${APIROOTS}; do
  mkdir -p "${_tmp}/${APIROOT}"
  cp -a -T "${KUBE_ROOT}/${APIROOT}" "${_tmp}/${APIROOT}"
done

"${KUBE_ROOT}/hack/update-generated-protobuf.sh"
@@ -44,7 +44,7 @@ for APIROOT in ${APIROOTS}; do
  echo "diffing ${APIROOT} against freshly generated protobuf"
  ret=0
  diff -Naupr -I 'Auto generated by' -x 'zz_generated.*' "${KUBE_ROOT}/${APIROOT}" "${TMP_APIROOT}" || ret=$?
  cp -a -T "${TMP_APIROOT}" "${KUBE_ROOT}/${APIROOT}"
  if [[ $ret -eq 0 ]]; then
    echo "${APIROOT} up to date."
  else
@@ -42,15 +42,21 @@ DIFFROOT="${KUBE_ROOT}/pkg"
TMP_DIFFROOT="${KUBE_ROOT}/_tmp/pkg"
_tmp="${KUBE_ROOT}/_tmp"
cleanup() {
rm -rf "${_tmp}"
}
trap "cleanup" EXIT SIGINT
cleanup
mkdir -p "${_tmp}"
cp -a -T "${DIFFROOT}" "${TMP_DIFFROOT}"
"${KUBE_ROOT}/hack/update-generated-swagger-docs.sh" "${KUBE_ROOT}/hack/update-generated-swagger-docs.sh"
echo "diffing ${DIFFROOT} against freshly generated swagger type documentation" echo "diffing ${DIFFROOT} against freshly generated swagger type documentation"
ret=0 ret=0
diff -Naupr -I 'Auto generated by' "${DIFFROOT}" "${TMP_DIFFROOT}" || ret=$? diff -Naupr -I 'Auto generated by' "${DIFFROOT}" "${TMP_DIFFROOT}" || ret=$?
cp -a "${TMP_DIFFROOT}" "${KUBE_ROOT}/" cp -a -T "${TMP_DIFFROOT}" "${DIFFROOT}"
if [[ $ret -eq 0 ]] if [[ $ret -eq 0 ]]
then then
echo "${DIFFROOT} up to date." echo "${DIFFROOT} up to date."