Automatic merge from submit-queue
Update my OWNERS entries.
Not sure why I was set as a reviewer for apimachinery and apiserver stuff. Adding myself to build/.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42379, 42668, 42876, 41473, 43260)
Silence error messages from the docker rmi call we expect to fail
**What this PR does / why we need it**: when we removed `docker tag -f` in #34361 we added a bunch of `docker rmi` calls to preserve behavior for older docker versions. That step is usually a no-op, however, and results in confusing messages like
```
Tagging docker image gcr.io/google_containers/kube-proxy:c8d0b2e7a06b451117a8ac58fc3bb3d3 as gcr.io/kubernetes-release-test/kube-proxy-amd64:v1.5.4
Error response from daemon: No such image: gcr.io/kubernetes-release-test/kube-proxy-amd64:v1.5.4
```
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42665
**Special notes for your reviewer**: I could probably remove the `docker rmi` calls entirely, though I don't know if folks are still using docker < 1.10. (I think Jenkins still has 1.9.1.)
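For reference, a minimal sketch of how the expected failure can be silenced (the tag variable is a placeholder, not the exact change in this PR):
```sh
# Remove the tag if it exists; hide the "No such image" noise and never fail the build.
docker rmi "${new_tag}" >/dev/null 2>&1 || true
```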
**Release note**:
```release-note
NONE
```
cc @jessfraz
This was broken when we moved to the build container, but no one
noticed. We also likely have another bug, which is that protobuf should
hard fail when we have fields that aren't assigned a tag.
Automatic merge from submit-queue
Remove the kube-discovery binary from the tree
**What this PR does / why we need it**:
kube-discovery was a temporary solution to implementing proposal: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bootstrap-discovery.md
However, this functionality is now going to be implemented in the core for v1.6 and will fully replace kube-discovery:
- https://github.com/kubernetes/kubernetes/pull/36101
- https://github.com/kubernetes/kubernetes/pull/41281
- https://github.com/kubernetes/kubernetes/pull/41417
Since `kube-discovery` isn't used by any v1.6 code, it should be removed.
The image `gcr.io/google_containers/kube-discovery-${ARCH}:1.0` should and will continue to exist so kubeadm <= v1.5 continues to work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Remove cmd/kube-discovery from the tree since it's not necessary anymore
```
@jbeda @dgoodwin @mikedanese @dmmcquay @lukemarsden @errordeveloper @pires
Automatic merge from submit-queue (batch tested with PRs 41921, 41695, 42139, 42090, 41949)
Rebase kube-proxy and debian-iptables on debian-base
**What this PR does / why we need it**:
Slimmer images are generally preferred, but it's a minor optimization. The larger advantage to this change is the reduced attack surface from removing unnecessary packages, and easier maintenance from sharing a common base image.
Size comparison:
```
gcr.io/google-containers/debian-iptables-amd64:v6 127.9 MB
gcr.io/google-containers/debian-iptables-amd64:v7 45.1 MB
```
**Which issue this PR fixes** https://github.com/kubernetes/kubernetes/issues/40248
**Special notes for your reviewer**:
Tested by deploying to a private test cluster and running the e2es. This will fail the jenkins builds until I push the `gcr.io/google-containers/debian-iptables-amd64:v7` image, which I will do once I have an LGTM.
**Release note**:
```release-note
Clean up the kube-proxy container image by removing unnecessary packages and files.
```
/cc @luxas @ixdy
Automatic merge from submit-queue (batch tested with PRs 35408, 41915, 41992, 41964, 41925)
Standard Debian base image
**What this PR does / why we need it**:
The goal of this new image is to provide a standard base image for Kubernetes system images that require substantial external dependencies (e.g. kube-proxy and fluentd). The image is significantly reduced from the standard `debian:jessie-slim` image (40 MB vs 80 MB), and removes a number of unnecessary dependencies such as e2fsprogs, systemd, and sysv-rc. In the future we may consider further reducing it to the bare minimum to run the package manager, with the requirement that images based on it add all the dependencies they need.
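Illustrative only (package names are taken from the paragraph above; this is not the actual debian-base Dockerfile): the trimming could look roughly like this as a shell step.
```sh
# Drop packages and files a minimal base image doesn't need, then shrink apt metadata.
apt-get remove -y --auto-remove e2fsprogs
apt-get clean
rm -rf /var/lib/apt/lists/* /usr/share/doc/* /usr/share/man/*
```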
I tested this by rebasing kube-proxy on this image and running the e2e tests. I'm targeting 1.6 for rebasing kube-proxy & fluentd on this.
For the rationale behind basing on Debian, see https://github.com/kubernetes/kubernetes/issues/40248#issuecomment-280781931
Largely based off [debian-iptables](https://github.com/kubernetes/kubernetes/tree/master/build/debian-iptables/) and [ubuntu-slim](https://github.com/kubernetes/ingress/tree/master/images/ubuntu-slim).
**Which issue this PR fixes**
https://github.com/kubernetes/kubernetes/issues/40248
**Special notes for your reviewer**:
@luxas Please review the qemu cross-build logic in the Makefile. It's copied from [debian-iptables](https://github.com/kubernetes/kubernetes/blob/master/build/debian-iptables/Makefile), but I'm not sure exactly what it's doing.
/cc @jessfraz @dlorenc
Automatic merge from submit-queue (batch tested with PRs 40124, 39216, 40561, 40595, 40735)
Include a dummy src tarball unless PACKAGE_SRC=true is set
**What this PR does / why we need it**: alternative to #40546. I think this will keep the cluster startup scripts happy.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Improve the multiarch situation; armel => armhf; re-enable ppc64le; remove the patched golang
**What this PR does / why we need it**:
- Improves the multiarch situation as described in #38067
- Tries to bump to go1.8 for arm (and later enable ppc64le)
- GOARM 6 => GOARM 7
- Remove the golang 1.7 patch
- armel => armhf
- Bump QEMU version to v2.7.0
**Release note**:
```release-note
Improve the ARM builds and make hyperkube work on ARM again by upgrading the Go version for ARM to go1.8beta2
```
@kubernetes/sig-testing-misc @jessfraz @ixdy @jbeda @david-mcmahon @pwittrock
Automatic merge from submit-queue
bazel: add a config setting to control embedding kubernetes-src.tar.gz
**What this PR does / why we need it**: currently a change anywhere in the tree will cause `kubernetes-src.tar.gz` to need to be regenerated, and thus also the server and node tarballs. All of these operations are slow, so for the sake of developer productivity, only include `kubernetes-src.tar.gz` when we need it (e.g. if we were doing a real release).
I don't have metrics on how much of an effect this has, but I expect it should help incremental builds, especially those that don't affect any node/server targets.
To embed the srcs tarball with this change, you'd run
```console
bazel build //build/release-tars --define EMBED_LICENSE_TARGETS=true
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 39446, 40023, 36853)
Add SIGCHLD handler to pause container
**What this PR does / why we need it**: This allows pause to reap orphaned zombies in a shared PID namespace. (#1615)
**Special notes for your reviewer**: I plan to discuss this with SIG Node to ensure compatibility with future runtimes.
**Release note**: This will have no effect until shared PID namespace is enabled, so recommend release-note-none.
This allows pause to reap zombies in the upcoming Shared PID namespace
(#1615). Uses the better defined sigaction() instead of signal() for all
signals both for consistency (SIGCHLD handler avoids SA_RESTART) and to
avoid the implicit signal()->sigaction() translation of various libc
versions.
Also makes warnings errors and includes a tool to make orphaned zombies
for manual testing.
Automatic merge from submit-queue
Remove all MAINTAINER statements in the codebase as they are deprecated
**What this PR does / why we need it**:
ref: https://github.com/docker/docker/pull/25466
**Release note**:
```release-note
Remove all MAINTAINER statements in Dockerfiles in the codebase as they are deprecated by docker
```
@ixdy @thockin (who else should be notified?)
Automatic merge from submit-queue
create kubernetes-discovery image
Creates an image for `kubernetes-discovery` since this is the API registration, aggregation, and proxy image.
This update includes significant refactoring. It moves almost all of the
logic into bash scripts, modeled after the `gci` cluster scripts.
The primary differences between the two are the following:
1. Use of the `/opt/kubernetes` directory over `/home/kubernetes`
2. Support for rkt as a runtime
3. No use of logrotate
4. No use of `/etc/default/`
5. No logic related to noexec mounts or gci-specific firewall-stuff
From etcd.sh, split the start process into a validate function plus a start function so that the validate piece can be reused elsewhere. The up-cluster script has been changed to drop its duplicate docker logic in favor of the one used in build-tools/common.sh, and the validate etcd function is now used here.
Moved the docker daemon check function to util.sh and made function name changes and upstream changes.
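A minimal sketch of the validate/start split described above (function and variable names are assumptions, not the actual etcd.sh contents):
```sh
ETCD_HOST=${ETCD_HOST:-127.0.0.1}
ETCD_PORT=${ETCD_PORT:-2379}

kube::etcd::validate() {
  # Fail early if etcd is missing or something already listens on the port.
  command -v etcd >/dev/null || { echo "etcd not found in PATH" >&2; return 1; }
  if curl -fs "http://${ETCD_HOST}:${ETCD_PORT}/health" >/dev/null 2>&1; then
    echo "something is already serving on ${ETCD_HOST}:${ETCD_PORT}" >&2
    return 1
  fi
}

kube::etcd::start() {
  # Reuse the validation before actually launching etcd.
  kube::etcd::validate || return 1
  etcd --listen-client-urls "http://${ETCD_HOST}:${ETCD_PORT}" \
       --advertise-client-urls "http://${ETCD_HOST}:${ETCD_PORT}" >/dev/null 2>&1 &
  ETCD_PID=$!
}
```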
Automatic merge from submit-queue
[Federation][init-11.2] use USE_KUBEFED env var to choose between old and new federation deployment
This is continuation of #35961
The USE_KUBEFED variable is used for deploying the federation control plane. If it is not defined, federation will be brought up using the old method, i.e. scripts.
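A rough sketch of the switch (the kubefed invocation and script path here are placeholders, not the actual deployment code):
```sh
if [[ "${USE_KUBEFED:-}" == "true" ]]; then
  # New path: deploy the federation control plane with kubefed.
  kubefed init myfederation --host-cluster-context="${HOST_CLUSTER_CONTEXT}"
else
  # Old path: script-based deployment.
  "${KUBE_ROOT}/federation/cluster/federation-up.sh"
fi
```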
I have verified that federation comes up using the old method, using the following steps:
```
$ export FEDERATION=true
$ export E2E_ZONES="asia-east1-c"
$ export FEDERATION_PUSH_REPO_BASE=gcr.io/<my-project>
$ KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -v -build
$ build-tools/push-federation-images.sh
$ go run hack/e2e.go -v --up
```
Should merge #35961 before this PR
@madhusudancs
Automatic merge from submit-queue
Migrated fluentd addon to daemon set
fixes #23224
supersedes #23306
```release-note
Migrated fluentd addon to daemon set
```
This allows pause to reap zombies in the upcoming Shared PID namespace
(#1615). Uses the better defined sigaction() instead of signal() for all
signals both for consistency (SIGCHLD handler avoids SA_RESTART) and to
avoid the implicit signal()->sigaction() translation of various libc
versions.
Also makes warnings errors and includes a tool to make orphaned zombies
for manual testing.
Automatic merge from submit-queue
Update `gcloud docker` commands to use `gcloud docker -- ARGS`
We can then avoid the following warning:
```
WARNING: The '--' argument must be specified between gcloud specific args on the left and DOCKER_ARGS on the right. IMPORTANT: previously, commands allowed the omission of the --, and unparsed arguments were treated as implementation args. This usage is being deprecated and will be removed in March 2017.
This will be strictly enforced in March 2017. Use 'gcloud beta docker' to see new behavior.
```
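For example (the image name is a placeholder):
```sh
# Before: docker args follow gcloud args directly and trigger the warning.
gcloud docker push gcr.io/my-project/my-image:v1
# After: the explicit separator keeps gcloud args and docker args apart.
gcloud docker -- push gcr.io/my-project/my-image:v1
```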
If you delete a source file, we want to reflect that in the build container. We
only use --delete going that one way as we don't want to accidentally delete
files in the user's source tree.
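A sketch of that asymmetry (paths and the rsync module name are assumptions, not the actual build scripts):
```sh
# Host -> container: prune files the user deleted from the source tree.
rsync -a --delete "${KUBE_ROOT}/" "rsync://k8s@localhost:${RSYNC_PORT}/k8s/"
# Container -> host: copy build output back, but never delete anything locally.
rsync -a "rsync://k8s@localhost:${RSYNC_PORT}/k8s/_output/" "${KUBE_ROOT}/_output/"
```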
We can then avoid the following warning:
```
WARNING: The '--' argument must be specified between gcloud specific args on the left and DOCKER_ARGS on the right. IMPORTANT: previously, commands allowed the omission of the --, and unparsed arguments were treated as implementation args. This usage is being deprecated and will be removed in March 2017.
This will be strictly enforced in March 2017. Use 'gcloud beta docker' to see new behavior.
```
Signed-off-by: Jess Frazelle <acidburn@google.com>
Automatic merge from submit-queue
Check for rsync and give friendlier message
Fixes #34300.
Not sure if #34309 is the same issue. Hopefully it is the same issue.
Automatic merge from submit-queue
Make sure rsync.sh is executable inside the build image
I kept having the build fail:
```console
$ make quick-release
+++ [1006 18:13:44] Verifying Prerequisites....
+++ [1006 18:13:44] Building Docker image kube-build:build-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:54] Creating data container kube-build-data-d3c60cf83f-3-v1.6.3-9
+++ [1006 18:13:55] Syncing sources to container
!!! [1006 18:16:01] Could not connect to rsync container. See build/README.md for setting up remote Docker engine.
make: *** [quick-release] Error 1
```
`docker ps` revealed the issue:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
75c2a3c40cb3 kube-build:build-d3c60cf83f-3-v1.6.3-9 "/rsyncd.sh" 6 seconds ago Exited (126) 5 seconds ago kube-rsync-d3c60cf83f-3-v1.6.3-9
3eb215e41f36 kube-build:build-d3c60cf83f-3-v1.6.3-9 "chown -R 85078.5000 " 8 seconds ago Exited (0) 6 seconds ago kube-build-data-d3c60cf83f-3-v1.6.3-9
5a2707af2ccd 882577c54f67 "/bin/sh -c 'cd ${K8S" 7 days ago Exited (2) 7 days ago stupefied_goldberg
$ docker logs 75c2a3c40cb3
/bin/bash: /rsyncd.sh: Permission denied
```
I'm not sure why this works on Jenkins but not on my machine.
We were using netcat to try and figure out when the rsync container is ready. Now we instead use rsync itself. I suspect that there was a race condition with some versions of Docker where it would accept connections and then close them during container start.
This fixes #34214 (I think).
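A minimal sketch of that readiness check (assumed, not the literal script):
```sh
# Keep probing with rsync itself until the daemon actually serves requests.
until rsync "rsync://localhost:${RSYNC_PORT}/" >/dev/null 2>&1; do
  sleep 0.5
done
```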
This was broken by #30787. A stray bash `source` caused an undefined variable reference.
Apparently the federation images have a parallel and different "release" path
that isn't tested by the pre-checkin tests.
make-generated-{protobuf,runtime}.sh was doing some really nasty stuff with how
the build container was managed in order to copy results out. Since we have
more flexibility to grab results out of the build container, we can now avoid
all of this. Ideally we wouldn't have `hack` calling `build` at all, but we
aren't there yet.
We also add "version" to all docker images and containers
This version is to be incremented manually when we change the shape of the build
image (like changing the golang version or the set of volumes in the data
container). This will delete all older versions of images and containers when
the version is different.
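A sketch of the idea (names are assumptions, not the real build/common.sh):
```sh
KUBE_BUILD_IMAGE_VERSION="v1.6.3-9"
# Delete kube-build images whose tag doesn't match the current version.
docker images --format '{{.Repository}}:{{.Tag}}' kube-build \
  | grep -v -- "${KUBE_BUILD_IMAGE_VERSION}" \
  | xargs -r docker rmi
```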
Automatic merge from submit-queue
Use patched golang1.7.1 for cross-builds targeting darwin
This PR extends #32517 to use the patched go1.7.1 introduced by that PR to build all darwin targets (e.g. kubectl).
This is necessary because binaries built with earlier versions of Go regularly segfault on macOS Sierra (see #32999 and #33070).
This solution is somewhat hacky, but we intend to cherry-pick this to 1.4, and switching all of 1.4 to build with go1.7.1 is very high risk.
I haven't pushed the cross build image yet, so this will fail to build. I will test locally and update with results.
First step of fixing #33801.
cc @luxas @pwittrock @david-mcmahon @liggitt @smarterclayton @jfrazelle @Starefossen @gerred
Automatic merge from submit-queue
Bump up addon kube-dns to v20 for graceful termination
The following images have been built and pushed:
- gcr.io/google_containers/kubedns-amd64:1.8
- gcr.io/google_containers/kubedns-arm:1.8
- gcr.io/google_containers/kubedns-arm64:1.8
- gcr.io/google_containers/kubedns-ppc64le:1.8
Both kubedns and dnsmasq are bumped up in the manifest files.
@thockin @bprashanth
Automatic merge from submit-queue
Add separate build process for node test.
This PR is part of https://github.com/kubernetes/kubernetes/pull/31093. However, because currently node e2e is built on `KUBE_TEST_PLATFORMS`, which includes linux/amd64, darwin/amd64, windows/amd64 and linux/arm, it caused #32251 to fail.
In fact, node e2e is running on the same node with kubelet, and it also has built-in apiserver, etcd and namespace controller. All of them are only built on `KUBE_SERVER_PLATFORMS`, so node e2e should also only be built on those platforms.
```
KUBE_SERVER_PLATFORMS=(
linux/amd64
linux/arm
linux/arm64
)
```
This PR added a separate build process for node e2e to address this.
@vishh Do you need this for v1.4? Because this blocks your #32251. /cc @dchen1107
Automatic merge from submit-queue
Use a patched golang version for building linux/arm
Fixes: #29904
Right now, linux/arm is broken because of an internal limitation in Go.
I've filed an issue for it here: https://github.com/golang/go/issues/17028
The affected binaries of this limitation are hyperkube and kube-apiserver, which are the largest binaries.
Now that we have a patched go 1.7.1 for building "unsupported" but important architectures (ref: https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/multi-platform.md), we should also include the patch for ppc64le and start building ppc64le again.
As soon as @laboger has the patch I need up on GitHub, I'll include ppc64le in this PR and we'll merge it.
TODO:
- [ ] ~~Update the PR with patches for ppc64le at the same time @luxas~~
- [x] Push the new kube-cross image @ixdy
- [x] Run a full `make release` before to verify nothing breaks @luxas + @ixdy
- [ ] Cherrypick into the 1.4 branch @luxas + (who?)
@lavalamp @smarterclayton @ixdy @rsc @davecheney @wojtek-t @jfrazelle @bradfitz @david-mcmahon @pwittrock
Automatic merge from submit-queue
kubectl version -c has been deprecated, use --client instead
```
Flag shorthand -c has been deprecated, please use --client instead.
```
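That is:
```sh
kubectl version -c        # deprecated shorthand, prints the warning above
kubectl version --client  # preferred form
```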
Automatic merge from submit-queue
Deprecate release infrastructure and doc - moved to kubernetes/release
Part 2 of https://github.com/kubernetes/release/pull/1
This PR finalizes the split between the main kubernetes repo and the release tooling now under kubernetes/release.
ref #16529
Automatic merge from submit-queue
Update build docs to include path for scripts.
**What this PR does / why we need it**:
This fix updates the build docs (`build/README.md`) to include the path of `build/` for shell scripts (like `run.sh`, `shell.sh`).
The reason is that while trying to follow the `build/README.md` to build Kubernetes, it is not obvious that all the scripts, e.g., `run.sh make`, `shell.sh`, etc. need to be executed from the root directory (vs. executed from the `build/` directory).
In other words, the execution should be:
```
build/run.sh make
build/make-clean.sh
...
```
This fix adds `build/` so that it is easy for users to follow the steps.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Separate federation build.sh into development and deployment scripts.
The idea behind this separation is that it provides a clear distinction
between the dev environment and the prod environment. The
deploy/deploy.sh script will be shipped to the users, but
develop/develop.sh will be purely for development purposes and won't
be part of a release distribution.
Purely for developer convenience, all the deployment functionality is
made available through the develop/develop.sh script.
This change also copies deploy/* files into the release distribution.
cc @kubernetes/sig-cluster-federation @colhom
```release-note
Federation can now be deployed using the `federation/deploy/deploy.sh` script. This script does not depend on any of the development environment shell library/scripts. This is an alternative to the current `federation-up.sh`/`federation-down.sh` scripts. Both the scripts are going to co-exist in this release, but the `federation-up.sh`/`federation-down.sh` scripts might be removed in a future release in favor of `federation/deploy/deploy.sh` script.
```
Automatic merge from submit-queue
[Federation] Downsize the release binary distribution v2.
Second attempt of PR #29632.
There are two things that this PR does:
1. It removes `federation-apiserver` and `federation-controller-manager` from binaries and docker_wrapped_binaries target lists.
2. Build the docker image for `hyperkube` on-the-fly while pushing the federation images.
```release-note
Federation binaries and their corresponding docker images - `federation-apiserver` and `federation-controller-manager` are now folded in to the `hyperkube` binary. If you were using one of these binaries or docker images, please switch to using the `hyperkube` version. Please refer to the federation manifests - `federation/manifests/federation-apiserver.yaml` and `federation/manifests/federation-controller-manager-deployment.yaml` for examples.
```
cc @kubernetes/sig-cluster-federation @colhom
Fixes Issue #28633
Automatic merge from submit-queue
Build and push kube-dns for 1.4 release.
Fixes #31355.
The following docker images have been uploaded:
gcr.io/google_containers/kubedns-amd64:1.7
gcr.io/google_containers/kubedns-arm:1.7
gcr.io/google_containers/kubedns-arm64:1.7
The build for ppc64le is disabled by default, and it failed to build using:
`KUBE_BUILD_PPC64LE=y make release`
I'm still working on making the ppc64le build work. Updates will be added following this thread.
@girishkalele @thockin
Also build the hyperkube docker image on-the-fly.
This is only a temporary fix until the proposal in issue
https://github.com/kubernetes/kubernetes/issues/28630 is implemented.
Also, the new build/deployment method completely obviates this step.
We use the debian image instead of busybox and do not build hyperkube as a
static binary yet. Wait until PR
https://github.com/kubernetes/kubernetes/pull/26028 is merged to build
static hyperkube binaries.
This fix updates the build docs to include the path of `build/` for
shell scripts. The reason is that while trying to follow the `build/README.md`,
it is not obvious that all the scripts, e.g., `run.sh make`, `shell.sh`,
etc. need to be executed from the root directory (vs. executed from the
`build/` directory). In other words,
the execution should be:
```
build/run.sh make
build/make-clean.sh
...
```
This fix adds `build/` so that it is easy for users to follow the steps.
The idea behind this separation is that it provides a clear distinction
between the dev environment and the prod environment. The
deploy/deploy.sh script will be shipped to the users, but
develop/develop.sh will be purely for development purposes and won't
be part of a release distribution.
Purely for developer convenience, all the deployment functionality is
made available through the develop/develop.sh script.
This change also copies deploy/* files into the release distribution.
Automatic merge from submit-queue
Disable linux/ppc64le compilation by default
Work-around for #30384.
I'm still testing this locally to see if it actually works. The build is slow. (PR Jenkins won't tell us whether this fixes ppc.)
cc @Random-Liu @spxtr @david-mcmahon @luxas
Automatic merge from submit-queue
Fix subtle build breakage
Repro case:
$ make clean generated_files
$ hack/update-generated-protobuf.sh
This would complain about not finding `fmt`, and it was indicating the wrong
GOROOT. The problem was that the first step built binaries for generating
code, which *embeds* the value of GOROOT into the binary. The whole tree was
bind-mounted into the build container and then JUST the dockerized dir was
mounted over it. The in-container build tried to use the existing binaries,
but GOROOT is wrong.
This change whites-out the whole _output dir.
I first made just an anonymous volume for _output, but docker makes that as
root, which means I can't write to it from our non-root build. So I just put
it in the data container. This seems to work. The biggest change this makes
is that the $GOPATH/bin/ and $GOPATH/pkg/ dirs will persist across dockerized
builds.
NB: this requires a `make clean` to activate.
@lavalamp @jbeda @quinton-hoole @david-mcmahon
Repro case:
$ make clean generated_files
$ hack/update-generated-protobuf.sh
This would complain about not finding `fmt`, and it was indicating the wrong
GOROOT. The problem was that the first step built binaries for generating
code, which *embeds* the value of GOROOT into the binary. The whole tree was
bind-mounted into the build container and then JUST the dockerized dir was
mounted over it. The in-container build tried to use the existing binaries,
but GOROOT is wrong.
This change whites-out the whole _output dir.
I first made just an anonymous volume for _output, but docker makes that as
root, which means I can't write to it from our non-root build. So I just put
it in the data container. This seems to work. The biggest change this makes
is that the $GOPATH/bin/ and $GOPATH/pkg/ dirs will persist across dockerized
builds.
This commit removes the part of the common.sh script which copied
contrib/ sources for enabled contribs, which resulted in duplicated
files inside the tarball.
Fixes #30150
Automatic merge from submit-queue
Install go-bindata in cross-build image
Another follow-up to #25584.
We need `go-bindata` to create `test/e2e/generated`, and downloading it with `go get` at build time is painful for a variety of reasons. We can just include it in the cross-build image and not worry about it, especially as it updates very infrequently.
This fixes `hack/update-generated-protobuf.sh` as well.
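A sketch of baking it in at image build time (the import path is the commonly used one for go-bindata and is an assumption here, not a quote from the Dockerfile):
```sh
# Install go-bindata into the cross-build image so 'go get' is never needed during a build.
go get github.com/jteeuwen/go-bindata/go-bindata
```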
cc @jayunit100 @soltysh
This allows us to start building real dependencies into Makefile.
Leave old hack/* scripts in place but advise to use 'make'. There are a few
rules that call things like 'go run' or 'build/*' that I left as-is for now.
Automatic merge from submit-queue
build: fixed ${KUBE_ROOT} prefix for build scripts
Running the `./make-build-image.sh` command inside the `build` directory doesn't work:
```sh
$ cd build
$ ./make-build-image.sh
./../build/common.sh: line 32: hack/lib/init.sh: No such file or directory
```
This PR adds the `${KUBE_ROOT}` prefix to the path passed to the `source` bash builtin. I also added braces to unify the code style.
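The fix, sketched (the exact relative path is an assumption):
```sh
# Resolve the repo root relative to this script, then source via an absolute path,
# so the script works no matter which directory it is invoked from.
KUBE_ROOT="$(dirname "${BASH_SOURCE[0]}")/.."
source "${KUBE_ROOT}/hack/lib/init.sh"
```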
If the docker-machine certificates get in a bad state, the current behavior
causes an infinite loop waiting for `docker-machine env` to return. Now it will
echo the certificate error and prompt the user to regenerate.
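A hypothetical sketch of the friendlier behavior:
```sh
# Give up instead of looping forever, and tell the user what to do.
if ! docker-machine env "${DOCKER_MACHINE_NAME}" >/dev/null 2>&1; then
  echo "docker-machine certificates look invalid." >&2
  echo "Try: docker-machine regenerate-certs ${DOCKER_MACHINE_NAME}" >&2
  exit 1
fi
```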
This logs a false "error" message, so it's time to go. It was needed to ensure
nobody has stale build images lying around, but that was quite a while ago, so
it's probably safe now.
Automatic merge from submit-queue
Bump skydns godeps to latest
Update Godeps for github.com/skynetservices/skydns and miekg/dns.
Bump kubedns version to 1.6 with latest skynetservices/skydns code
Built kube-dns for all architectures and pushed containers to gcr.io.
Automatic merge from submit-queue
Substitute federation_domain_map parameter with its value in node bootstrap scripts.
This PR also removes the substitution code we added to the build scripts.
**Release Note**
```release-note
If you use one of the kube-dns replication controller manifests in `cluster/saltbase/salt/kube-dns`, i.e. `cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}`, either substitute one of `__PILLAR__FEDERATIONS__DOMAIN__MAP__` or `{{ pillar['federations_domain_map'] }}` with the corresponding federation name to domain name value, or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for `{{ pillar['federations_domain_map'] }}`
pillar['federations_domain_map'] = "- --federations=myfederation=federation.test"
where `myfederation` is the name of the federation and `federation.test` is the domain name registered for the federation.
```
cc @erictune @kubernetes/sig-cluster-federation @MikeSpreitzer @luxas
Automatic merge from submit-queue
Add upgrade Docker VM
Add an error message to upgrade your Docker VM if needed; example output:
```bash
+++ [0622 13:19:48] No docker host is set. Checking options for setting one...
+++ [0622 13:19:49] docker-machine was found.
+++ [0622 13:19:49] A Docker host using docker-machine named 'kube-dev' is ready to go!
Can't connect to 'docker' daemon. please fix and retry.
Possible causes:
- On Mac OS X, DOCKER_HOST hasn't been set. You may need to:
- Create and start your VM using docker-machine or boot2docker:
- docker-machine create -d virtualbox --virtualbox-memory 4096 --virtualbox-cpu-count -1 kube-dev
- boot2docker init && boot2docker start
- Set your environment variables using:
- eval $(docker-machine env kube-dev)
- $(boot2docker shellinit)
- On Linux, user isn't in 'docker' group. Add and relogin.
- Something like 'sudo usermod -a -G docker jscheuermann'
- RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
- On Linux, Docker daemon hasn't been started or has crashed.
!!! Error in hack/../hack/update-generated-protobuf.sh:53
'return 1' exited with status 1
Call stack:
1: hack/../hack/update-generated-protobuf.sh:53 main(...)
Exiting with status 1
Updating generated-protobuf FAILED
$docker info
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
```
After running `docker-machine upgrade kube-dev` everything is fine again. So we should add a hint in the error message that this can also happen.
Automatic merge from submit-queue
Add support for Docker for MacOS
With the Docker for MacOS [public beta](https://docs.docker.com/docker-for-mac) you don't need docker-machine on MacOS to build Kubernetes; instead you can use docker "natively". By default, Docker for MacOS will be installed to `/Applications/Docker.app/Contents/MacOS/Docker`, so if Docker for Mac is installed we should use the native version.
I tested it locally with `15.5.0 Darwin Kernel Version 15.5.0` and Docker version `1.12.0-rc2`
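A sketch of the detection (the path comes from the description above; the variable handling is an assumption):
```sh
# Prefer the native Docker for Mac daemon over docker-machine when it is installed.
if [[ "$(uname)" == "Darwin" && -x "/Applications/Docker.app/Contents/MacOS/Docker" ]]; then
  unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH
fi
```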
Automatic merge from submit-queue
Enable all ppc64le builds, except for hyperkube
Partially fixes: #25886
Talked to @Pensu, and all other binaries seem to work fine
@david-mcmahon @ixdy @Pensu @smarterclayton
Automatic merge from submit-queue
including federation binaries in the list of images we push during release
Ref https://github.com/kubernetes/kubernetes/issues/27382
Added `federation-apiserver` and `federation-controller-manager` to that list.
cc @kubernetes/sig-cluster-federation @colhom @david-mcmahon
Auto generated docs are **NO LONGER CHECKED IN**, only placeholders.
To generate them, e.g. before exporting docs, run hack/generate-docs.sh.
hack/verify-generated-docs.sh ensures that generated docs are merely the
placeholder text.
hack/update-generated-docs.sh puts the placeholder text in the proper
places.
The old munge behavior is moved into hack/{update|verify}-munge-docs.sh.
Automatic merge from submit-queue
Switch DNS addons from skydns to kubedns
Change GCI and trusty cluster-helper scripts to use kubedns instead of skydns.
Unified the skydns templates using a simple underscore-based template and
added transform sed scripts to transform it into the salt and sed yaml
templates.
Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns.
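A hypothetical sketch of the transform step (the sed script names are assumptions):
```sh
# One underscore-based template, expanded into the salt and sed flavors.
sed -f transforms2salt.sed skydns-rc.yaml.base > skydns-rc.yaml.in
sed -f transforms2sed.sed  skydns-rc.yaml.base > skydns-rc.yaml.sed
```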
Automatic merge from submit-queue
Support for cluster autoscaler in GCE Trusty and GCI images
Fixes: #26346
Ref: #26197
cc: @fgrzadkowski @vulpecula @piosz @jszczepkowski
Our `realpath` and `readlink -f` functions (required only because of MacOS,
thanks Steve) were poor substitutes at best. Mostly they were downright
broken. This thoroughly overhauls them and adds a test (in comments, since we
don't seem to have shell tests). For all the interesting cases I could think
of, the fakes act just like the real thing.
Then use those and canonicalize KUBE_ROOT. In order to make recursive calls of
our shell tool not additively grow `pwd` we have to essentially make the
sourcing of init.sh idempotent.
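For flavor, a minimal portable stand-in (an illustration, not the actual hack/lib implementation; it does not resolve a symlinked final component):
```sh
kube::realpath() {
  # Print an absolute, physical path for "$1" without requiring GNU coreutils.
  # Runs in a subshell so the caller's working directory is untouched.
  ( cd "$(dirname "$1")" && echo "$(pwd -P)/$(basename "$1")" )
}
```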
Automatic merge from submit-queue
Reimplement 'pause' in C - smaller footprint all around
Statically links against musl. Size of amd64 binary is 3560 bytes.
I couldn't test the arm binary since I have no hardware to test it on, though I assume we want it to work on a raspberry pi.
This PR also adds the gcc5/musl cross compiling image used to build the binaries.
@thockin
I ran build/run.sh w/o any args (by mistake) and it just said
`Invalid input.`
after several other steps. I had no idea whether I was doing something
wrong or if my env was busted. Clearly, I just forgot to include the
command that run.sh was to invoke in the Docker container. But it took
me time to go track down where this error came from and why. So to help
others I just tweaked the error message to be:
`Invalid input - please specify a command to run.`
Very minor thing, I know, but if it helps others...
Signed-off-by: Doug Davis <dug@us.ibm.com>
Automatic merge from submit-queue
Use v1.6.2-1 tag for build.
Is there any reason these don't use the VERSION file like everything else? cc @luxas @ixdy
Automatic merge from submit-queue
Up to go 1.6.2 for build and test.
~~1.6.1 contains some security fixes. 1.6.2 should be out soon.~~ 1.6.2 is out :D
Images aren't pushed yet.
- give the docker-machine VM more memory and access to all CPU cores
- make DOCKER_MACHINE_NAME not readonly because it is set by docker-machine
- redirect stderr to ignore unhelpful error messages
- unquote 'docker-machine create' argument
Automatic merge from submit-queue
jenkins: Allow configuration of release bucket
This allows others to leverage the existing E2E code to test some
patched kube binary by simply overriding the bucket and reusing many of
the existing scripts
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
pause is built into containervm. if it's not on the machine we should just pull
it. nobody that I'm aware of uses kube-registry-proxy and it makes build/deployment
more complicated and slower.
This allows others to leverage the existing E2E code to test some
patched kube binary by simply overriding the bucket and reusing many of
the existing scripts
Automatic merge from submit-queue
Up to golang 1.6
A second attempt to upgrade go version above `go1.4`
Merge ASAP after you've cut the `release-1.2` branch and feel ready.
`go1.6` should perform slightly better than `go1.5`, so this time it might work
@gmarek @wojtek-t @zmerlynn @mikedanese @brendandburns @ixdy @thockin
pause is built into containervm. if it's not on the machine we should just pull
it. nobody that I'm aware of uses kube-registry-proxy and it makes build/deployment
more complicated and slower.
Automatic merge from submit-queue
Cross-build hyperkube and debian-iptables for ARM. Also add a flannel image
We have to be able to build complex docker images on `amd64` hosts too.
Right now we can't build Dockerfiles with `RUN` commands when building for other architectures, e.g. ARM.
Resin has a tutorial about this here: https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/
But the syntax is a bit clumsy.
The other alternative would be running this command in a Makefile:
```
# This registers in the kernel that ARM binaries should be run by /usr/bin/qemu-{ARCH}-static
docker run --rm --privileged multiarch/qemu-user-static:register --reset
```
and
```
ADD https://github.com/multiarch/qemu-user-static/releases/download/v2.5.0/x86_64_qemu-arm-static.tar.xz /usr/bin
```
Then the kernel will be able to distinguish ARM binaries from amd64 ones. When it finds an ARM binary, it will invoke `/usr/bin/qemu-arm-static` first and let `qemu` translate the ARM syscalls to amd64 ones.
Some code here: https://github.com/multiarch
WDYT is the best approach? If registering `binfmt_misc` in the kernels of the machines is OK, then I think we should go with that.
Otherwise, we'll have to wait for resin's patch to be merged into mainline qemu before we may use the code I have here now.
@fgrzadkowski @david-mcmahon @brendandburns @zmerlynn @ixdy @ihmccreery @thockin
This change revises the way to provide kube-system manifests for clusters on Trusty. Originally, we maintained copies of some manifests under cluster/gce/trusty/kube-manifests, which is not scalable and hard to maintain. With this change, clusters on Trusty will use the same source of manifests as ContainerVM. This change also fixes some minor problems such as shell variables and comments to meet the style guidance better.
To meet licensing/compliance guidelines, bundle up the source. One of
the easiest ways to do this is just to grab the entire build image
directory - this makes it pretty much guaranteed that the user could
re-run the Docker build again from the exact code point if they wanted
to (they just need to poke at our scripts to figure out how).
This change supports running the kubernetes master on Ubuntu Trusty.
It uses pure cloud-config and shell scripts, and completely gets
rid of saltstack and the release salt tarball.
Use case: I have a docker-machine instance in the cloud, and an empty
DOCKER_OPTS env var. I want to `build/run.sh hack/build-go.sh`
Previously, this would invoke `docker '' cp` which was erroring out
with: '' not a command.
It cordons (marks unschedulable) the given node, and then deletes every
pod on it, optionally using a grace period. It will not delete pods that
are not managed by a ReplicationController or a DaemonSet unless --force
is used.
Also add cordon/uncordon, which just toggle node schedulability.
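Example usage (the node name is a placeholder):
```sh
kubectl cordon my-node                           # mark unschedulable
kubectl drain my-node --grace-period=60 --force  # evict pods, including unmanaged ones
kubectl uncordon my-node                         # mark schedulable again
```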
To build the python image, BUILD_PYTHON_IMAGE should be set during make.
When the addon script is running, it will check whether python is installed
on the machine; if not, it will use the python image that was built previously.
The node.yaml has some logic that will also be used by the
kubernetes-master-on-Trusty work (issue #16702). This change moves the code
shared by the master and node configuration to a separate script, and
the master and node configuration can source it to use the code.
Moreover, this change stages the script for GKE use.
This scopes down the initially ambitious PR
(https://github.com/kubernetes/kubernetes/pull/14960) to replacing just
`pause` and `fluentd-elasticsearch` so they come through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elastisearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
A small bug from #11941: When we push to GCS, we were pushing bucket
names that matched the old pattern `git describe`, e.g.:
gs://kubernetes-release/ci/v1.1.0-alpha.0-2413-g986d37d/
But after #11941, the binaries inside these actually have versions that look like:
v1.1.0-alpha.0.2413+g986d37d
to more closely match up with semver.
This pull makes the GCS directory match the binaries it's already serving.
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage, otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
Right now some of the hack/* tools use `go run` and build almost every
time. There are some which expect you to have already run `go install`.
And in all cases the pre-commit hook, which runs a full build, wouldn't
want to do either, since it just built!
This creates a new hack/after-build/ directory and has the scripts which
REQUIRE that the binary already be built. It doesn't test and complain.
It just fails miserably. Users should not be in this directory. Users
should just use hack/verify-* which will just do the build and then call
the "after-build" version. The pre-commit hook or anything which KNOWS
the binaries have been built can use the fast version.
This PR changes how we version going forward in the following ways:
* mark-new-version.sh is changed to a new policy of just splitting
branches, rather than the old backmerge policy: the release branch is
tagged vX.Y.0, and a tag for vX.(Y+1).0-alpha.0 goes back to master.
* I eliminated PRs back to master by making the version/base.go
gitVersion and gitCommit just be `export-subst`. I tested that this
works with GitHub's source export tarballs. There's no reason to
bother with forcing the version into `base.go` (especially twice). The
tarball versions outside a git tree aren't perfect (master looks like
"v0.0.0+hash", and the release branches look more accurate), but our
build contract has never required that the version be perfect in this
situation, so I think we can relax this.
* That master tag gets picked up by "git describe" on master, so e.g.
master would have immediately become v1.1.0-alpha.0
* In order to be more semVer compatible, the gitVersion field for the
master branch now looks something like 1.1.0-alpha.0.6+84c76d1142ea4d.
This is a tiny translation of the "git describe". I did this because
there are a ton of consumers out there of the "gitVersion" field
expecting it to be the full version, but it would be nice if this
field were actually semver compliant. (1.1.0-alpha.0-6-84c76d1142ea4d
is, but it's not *usefully* so.)
Fixes #11495
Some versions of docker display image listings like this:
docker.io/golang 1.4 ebd45caf377c 2 weeks ago
The regular expression used to detect presence of images
needs to be updated. It's unfortunate that we're still
screen-scraping here, due to:
https://github.com/docker/docker/issues/8048
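A sketch of the relaxed match (illustrative, not the exact expression used):
```sh
# Accept an optional registry prefix such as "docker.io/" before the image name.
docker images | grep -Eq "^([^ ]+/)?golang[[:space:]]+1\.4"
```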
This commit does 4 things:
* Adds a script which will: (a) clone from a git tag, make release,
and give you very detailed instructions as to what to do from that
point.
* Changes `push-official-release.sh` so we can't push "dirty"
releases anymore (which `build-official-release.sh` also double
checks at the end.)
* Fixes #9576 by ensuring a correct umask.
* Changes common.sh to gtar all the way through, to ensure that
bloody OS X tar never touches the release process, because I don't
want to have to understand two tar programs and how release
artifacts are created from both (c.f. #10615.)
that reverts the doc diff.
This changes mark-new-version to create a backmerge branch that will
additionally handle the revert of the doc diff that's now created for
every release.
Sidebar: I really wish I knew a good git visualizer for GitHub - this
thing is going to start creating some awesome Gordian knots of merges.
(Or more like little Omegas, since they basically just change
version/base.go.)
Fixes #10825
Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins and the images in the build
log where deletion was attempted were actually deleted, so I think
this is primarily an exit code / API issue of some sort.)
By default, tmpfs on Docker 1.6 is 64 MB, which is too
small for Go builds on the Kube project (binary size, etc).
This moves the release build to use a non-tmpfs work dir.
Also, by pre-staging and pushing all at once, and by doing the ACL
modification in parallel, this shaves the push time down anyway, despite
the extra I/O.
Along the way: Updates to longer hashes ala #6615
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because it was previously sending the whole
dir each time to the docker daemon.)
Also some minor cleanup.
Fixes #6463
This allows us to continue on branches like v0.14.1, v0.14.2, and tag on
that branch. But it will merge those tags into master so git describe
picks up the changes.
This does some git magic to make sure you do not tag a branch with
v0.13.3 unless that branch is a descendant of the release-0.13 branch
upstream.
Don't allow v0.13.4 if v0.13.3 doesn't exist.
Don't allow v0.13.3 if v0.13.3 already exists.
We keep getting tags and branches wonky. This will land
v0.14.0 as an ancestor of master
v0.14.1 will NOT be an ancestor of master
I can do the latter, it's only a touch of git gymnastics, not really hard, but
we only occasionally do that today. (0.13.1 is an ancestor of master,
but 0.13.2 is not)
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of this file in GCS, verifies that we're pushing a
newer build, and updates it if so. (i.e. it handles the case of
pushing a 0.13.1 and then later pushing a 0.12.3.) This follows the
pattern of the ci/ build, which Jenkins just updates unconditionally.
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.
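A hypothetical sketch of the "only move latest forward" check (the bucket path and the version comparison are simplified assumptions):
```sh
current="$(gsutil cat gs://kubernetes-release/release/latest.txt)"
# Only overwrite latest.txt if the version being pushed sorts after the published one.
if [[ "$(printf '%s\n%s\n' "${current}" "${NEW_VERSION}" | sort -V | tail -n1)" == "${NEW_VERSION}" ]]; then
  echo "${NEW_VERSION}" | gsutil cp - gs://kubernetes-release/release/latest.txt
fi
```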
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by
  static config.
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later, by other means (see
  https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
  should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
The build only looks for GNU tar as gtar on OS X, which is the name used when it is installed using brew. MacPorts installs it as gnutar, so check for that name if gtar is not found.
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficulty is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
Huge caveats: I've done pretty thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.
This is essentially a variant of push-ci-build.sh, but pushes it to
the current project. The defaults for gcs::release actually pick a
shorthash of the GCS project, so you end up uploading to something
like: gs://kubernetes-releases-3fda2/devel/v0.8.0-437-g7f147ed/ (where
the last part is the "git describe" of your current commit)
This pushes artifacts in a similar manner to the official release,
except that instead of release/vFOO, it goes to ci/$(git describe),
e.g.: gs://kubernetes-release/ci/v0.7.0-315-gcae5722
It also pushes a text file to gs://kubernetes-release/ci/latest.txt,
so anyone can do, for instance:
gsutil ls gs://kubernetes-release/ci/$(gsutil cat gs://kubernetes-release/ci/latest.txt)
(In a parallel change, I'm going to flip the jenkins scripts over to
use git describe, since it's shorter and a little more descriptive)
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
This commit brings two main changes, notably:
Two new options that can be set as environment variables
- DOCKER_OPTS: any arbitrary set of docker options. Example: --tlsverify
- DOCKER_NATIVE: This forces the use of the native docker available.
This is most useful if you're on OS X and do not want
to use boot2docker.
Now uses 'docker cp' instead of tar piping to transfer files. This
currently must be done by copying the binaries off of the docker volume
and into a local filesystem (/tmp) before a docker cp is done. This
workaround will no longer be necessary after bug fix
https://github.com/docker/docker/pull/8509 makes it into stable.
This was necessary because the tar | tar method was creating corrupted
archives on OS X even with the < /dev/null workaround.
apiserver becomes kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly.
Only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works....
If the failure is a problem, the build will fail later. But it is
possible that this is not a fatal issue and we should let things go
forward. (A filesystem mounted with context=something in permissive mode
would cause chcon to fail, but the build would still work.)