Our `realpath` and `readlink -f` functions (required only because of MacOS,
thanks Steve) were poor substitutes at best. Mostly they were downright
broken. This thoroughly overhauls them and adds a test (in comments, since we
don't seem to have shell tests). For all the interesting cases I could think
of, the fakes act just like the real thing.
Then use those to canonicalize KUBE_ROOT. In order to keep recursive invocations of
our shell tooling from additively growing `pwd`, we essentially have to make the
sourcing of init.sh idempotent.
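A minimal sketch of the two pieces involved, using hypothetical names (these are not the actual helpers or guard variable in init.sh):
```
# Hypothetical pure-bash stand-in for `readlink -f` on macOS: follow symlinks,
# resolving relative targets against the link's directory, then canonicalize.
fake_realpath() {
  local path="$1"
  while [[ -L "${path}" ]]; do
    local target
    target="$(readlink "${path}")"
    if [[ "${target}" == /* ]]; then
      path="${target}"
    else
      path="$(dirname "${path}")/${target}"
    fi
  done
  echo "$(cd "$(dirname "${path}")" && pwd -P)/$(basename "${path}")"
}

# Idempotent-sourcing guard: repeated `source init.sh` calls become no-ops,
# so recursive invocations can't keep growing path-derived variables.
if [[ -n "${KUBE_INIT_SOURCED:-}" ]]; then
  return 0
fi
KUBE_INIT_SOURCED=true
```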
I ran build/run.sh w/o any args (by mistake) and it just said
`Invalid input.`
after several other steps. I had no idea whether I was doing something
wrong or if my env was busted. Clearly, I had just forgotten to include the
command that run.sh was to invoke in the Docker container. But it took
me time to go track down where this error came from and why. So to help
others I just tweaked the error message to be:
`Invalid input - please specify a command to run.`
Very minor thing, I know, but if it helps others...
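For context, the check in question is roughly of this shape (a sketch, not the literal build/run.sh code):
```
# Sketch: bail out early with an actionable message when no command was given.
if [[ $# -eq 0 ]]; then
  echo "Invalid input - please specify a command to run." >&2
  exit 1
fi
```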
Signed-off-by: Doug Davis <dug@us.ibm.com>
Automatic merge from submit-queue
Use v1.6.2-1 tag for build.
Is there any reason these don't use the VERSION file like everything else? cc @luxas @ixdy
- give the docker-machine VM more memory and access to all CPU cores
- make DOCKER_MACHINE_NAME not readonly because it is set by docker-machine
- redirect stderr to ignore unhelpful error messages
- unquote 'docker-machine create' argument
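For reference, a hedged example of the kind of `docker-machine create` invocation these changes affect (machine name and values are illustrative):
```
# Illustrative: more memory and all host CPU cores for the virtualbox driver
# (-1 means "use all available cores"); stderr silenced as noted above.
docker-machine create \
  --driver virtualbox \
  --virtualbox-memory 4096 \
  --virtualbox-cpu-count -1 \
  kube-dev 2>/dev/null
```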
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
pause is built into containervm. If it's not on the machine we should just pull
it. Nobody that I'm aware of uses kube-registry-proxy, and it makes build/deployment
more complicated and slower.
Automatic merge from submit-queue
Up to golang 1.6
A second attempt to upgrade go version above `go1.4`
Merge ASAP after you've cut the `release-1.2` branch and feel ready.
`go1.6` should perform slightly better than `go1.5`, so this time it might work.
@gmarek @wojtek-t @zmerlynn @mikedanese @brendandburns @ixdy @thockin
Automatic merge from submit-queue
Cross-build hyperkube and debian-iptables for ARM. Also add a flannel image
We have to be able to build complex Docker images on `amd64` hosts, too.
Right now we can't build Dockerfiles with `RUN` commands when building for other architectures, e.g. ARM.
Resin has a tutorial about this here: https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/
But the syntax is a bit clumsy.
The other alternative would be running this command in a Makefile:
```
# This registers in the kernel that ARM binaries should be run by /usr/bin/qemu-{ARCH}-static
docker run --rm --privileged multiarch/qemu-user-static:register --reset
```
and
```
ADD https://github.com/multiarch/qemu-user-static/releases/download/v2.5.0/x86_64_qemu-arm-static.tar.xz /usr/bin
```
Then the kernel will be able to distinguish ARM binaries from amd64 ones. When it finds an ARM binary, it will invoke `/usr/bin/qemu-arm-static` first and let `qemu` translate the ARM syscalls to amd64 ones.
Some code here: https://github.com/multiarch
WDYT is the best approach? If registering `binfmt_misc` in the kernels of the machines is OK, then I think we should go with that.
Otherwise, we'll have to wait for resin's patch to be merged into mainline qemu before we may use the code I have here now.
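Putting the two snippets above together, a rough sketch of the flow on an `amd64` host (the image tag and Dockerfile name here are illustrative):
```
# 1) Register qemu handlers for foreign binaries with the host kernel (binfmt_misc).
docker run --rm --privileged multiarch/qemu-user-static:register --reset

# 2) Build the ARM image; the Dockerfile ADDs qemu-arm-static (see above), so its
#    RUN steps execute on the amd64 host through qemu.
docker build -t example/hyperkube-arm:dev -f Dockerfile.arm .
```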
@fgrzadkowski @david-mcmahon @brendandburns @zmerlynn @ixdy @ihmccreery @thockin
This change revises the way to provide kube-system manifests for clusters on Trusty. Originally, we maintained copies of some manifests under cluster/gce/trusty/kube-manifests, which is not scalable and hard to maintain. With this change, clusters on Trusty will use the same source of manifests as ContainerVM. This change also fixes some minor problems such as shell variables and comments to meet the style guidance better.
To meet licensing/compliance guidelines, bundle up the source. One of
the easiest ways to do this is just to grab the entire build image
directory - this makes it pretty much guaranteed that the user could
re-run the Docker build again from the exact code point if they wanted
to (they just need to poke at our scripts to figure out how).
This change supports running the Kubernetes master on Ubuntu Trusty.
It uses pure cloud-config and shell scripts, and completely gets
rid of saltstack or the release salt tarball.
Use case: I have a docker-machine instance in the cloud, and an empty
DOCKER_OPTS env var. I want to run `build/run.sh hack/build-go.sh`.
Previously, this would invoke `docker '' cp`, which errored out
with: '' not a command.
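A minimal sketch of the quoting problem, with illustrative variable names (not the literal build/run.sh code):
```
# Broken: quoting an empty DOCKER_OPTS yields an empty positional argument,
# i.e. the command becomes `docker '' cp ...`.
docker "${DOCKER_OPTS}" cp "${container}:/result" "${dest}"

# Safer: collect options in an array so an empty value contributes no argument.
docker_opts=(${DOCKER_OPTS:-})
docker "${docker_opts[@]}" cp "${container}:/result" "${dest}"
```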
It cordons (marks unschedulable) the given node, and then deletes every
pod on it, optionally using a grace period. It will not delete pods that
are not managed by a ReplicationController or a DaemonSet unless
--force is used.
Also add cordon/uncordon, which just toggle node schedulability.
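Example usage (node name is illustrative; flag spellings per the kubectl of that era):
```
# Cordon the node and delete its pods, giving each up to 60s to terminate.
kubectl drain my-node --grace-period=60

# Also delete pods that are not managed by a ReplicationController or DaemonSet.
kubectl drain my-node --force

# Just toggle schedulability without touching pods.
kubectl cordon my-node
kubectl uncordon my-node
```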
To build the python image, BUILD_PYTHON_IMAGE should be set during make.
When the addon script is running, it will check if python is installed
on the machine; if not, it will use the python image that was built previously.
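A hedged sketch of that fallback (the image name and wrapper are illustrative, not the actual addon script):
```
# Prefer a host python; otherwise fall back to the prebuilt python image.
if command -v python >/dev/null 2>&1; then
  PYTHON="python"
else
  PYTHON="docker run --rm -i example/python:dev python"
fi
${PYTHON} -c 'print("python is available")'
```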
The node.yaml has some logic that will also be used by the work on running
the Kubernetes master on Trusty (issue #16702). This change moves the code
shared by the master and node configuration to a separate script, and
the master and node configuration can source it to use the code.
Moreover, this change stages the script for GKE use.
This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 so that just
`pause` and `fluentd-elasticsearch` come through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage, otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
Some versions of docker display image listings like this:
docker.io/golang 1.4 ebd45caf377c 2 weeks ago
The regular expression used to detect the presence of images
needs to be updated. It's unfortunate that we're still
screen-scraping here, due to:
https://github.com/docker/docker/issues/8048
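A hedged sketch of a pattern that tolerates both `golang` and `docker.io/golang` in the listing (the actual expression in the build scripts may differ):
```
# Match the build image whether or not the registry prefix (e.g. docker.io/) is shown.
if docker images | grep -Eq "^([^[:space:]]+/)?golang[[:space:]]+1\.4[[:space:]]"; then
  echo "golang 1.4 build image present"
fi
```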
This commit does 4 things:
* Adds a script which will: (a) clone from a git tag, make release,
and give you very detailed instructions as to what to do from that
point.
* Changes `push-official-release.sh` so we can't push "dirty"
releases anymore (which `build-official-release.sh` also double
checks at the end.)
* Fixes #9576 by ensuring a correct umask.
* Changes common.sh to gtar all the way through, to ensure that
bloody OS X tar never touches the release process, because I don't
want to have to understand two tar programs and how release
artifacts are created from both (c.f. #10615.)
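A hedged sketch of the dirty-release guard and umask fix (the variable name follows the usual KUBE_GIT_VERSION convention; this is not the literal script code):
```
# Refuse to push a release built from a dirty tree (versions carry a -dirty suffix).
if [[ "${KUBE_GIT_VERSION:-}" == *-dirty* ]]; then
  echo "Refusing to push a dirty release: ${KUBE_GIT_VERSION}" >&2
  exit 1
fi

# Make artifact permissions predictable regardless of the caller's umask.
umask 0022
```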
Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins and the images in the build
log where deletion was attempted were actually deleted, so I think
this is primarily an exit code / API issue of some sort.)
Also, by pre-staging and pushing all at once, and by doing the ACL
modification in parallel, this shaves the push time down anyway, despite
the extra I/O.
Along the way: Updates to longer hashes ala #6615
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because it was previously sending the whole
dir each time to the docker daemon.)
Also some minor cleanup.
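A hedged sketch of the per-binary build-directory rework described above (paths, binary names, and image tags are illustrative):
```
# Illustrative: each binary gets its own build context with its own Dockerfile,
# so `docker build` only sends that binary to the daemon and needs no -f flag.
for binary in kube-apiserver kube-controller-manager kube-scheduler; do
  (
    builddir="_output/docker/${binary}"
    mkdir -p "${builddir}"
    cp "_output/bin/${binary}" "${builddir}/"
    cat > "${builddir}/Dockerfile" <<EOF
FROM busybox
ADD ${binary} /${binary}
EOF
    docker build -t "example/${binary}:dev" "${builddir}"
  ) &
done
wait
```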
Fixes #6463
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of this file in GCS, verifies that we're pushing a
newer build, and updates it if so. (i.e. it handles the case of
pushing a 0.13.1 and then later pushing a 0.12.3.) This follows the
pattern of the ci/ build, which Jenkins just updates unconditionally.
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.
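A hedged sketch of the "only move latest forward" check (bucket path and version are illustrative, not the real kube::release::gcs::publish_latest_official):
```
# Only rewrite latest.txt when the version being pushed sorts after the one
# already published, so a late 0.12.3 push can't clobber 0.13.1.
current="$(gsutil cat gs://example-release/release/latest.txt 2>/dev/null || true)"
new="v0.13.1"
if [[ -z "${current}" ]] || \
   [[ "$(printf '%s\n%s\n' "${current}" "${new}" | sort -V | tail -n1)" == "${new}" ]]; then
  echo "${new}" | gsutil cp - gs://example-release/release/latest.txt
fi
```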
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by static config.
- The master can't see the list of all the minions: this is fixed temporarily by static config in util.sh, but later, by other means (see https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
The build only looks for GNU tar as gtar on OS X, which is the name used when it is installed with Homebrew. MacPorts installs it as gnutar, so check for that name if gtar is not found.
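A hedged sketch of the lookup order (not the literal common.sh code):
```
# Prefer GNU tar: Homebrew installs it as gtar, MacPorts as gnutar.
if command -v gtar >/dev/null 2>&1; then
  TAR=gtar
elif command -v gnutar >/dev/null 2>&1; then
  TAR=gnutar
else
  echo "GNU tar (gtar or gnutar) is required on OS X" >&2
  exit 1
fi
"${TAR}" --version
```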