This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 to moving just
`pause` and `fluentd-elasticsearch` through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
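For illustration, the same tag can then be pulled through either endpoint (the google_containers path below is an assumption, not taken from this PR):

    docker pull beta.gcr.io/google_containers/pause:2.0   # served via the v2 protocol
    docker pull gcr.io/google_containers/pause:2.0        # same image, served via v1 compat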
A small bug from #11941: when we push to GCS, we were using directory
names that matched the old `git describe` pattern, e.g.:
gs://kubernetes-release/ci/v1.1.0-alpha.0-2413-g986d37d/
But after #11941, the binaries inside these actually have versions that look like:
v1.1.0-alpha.0.2413+g986d37d
to more closely match up with semver.
This pull makes the GCS directory match the binaries it's already serving.
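A minimal sketch of the name translation (the sed invocation is mine, not from the PR):

    # Translate the "git describe" form into the form the binaries report,
    # e.g. v1.1.0-alpha.0-2413-g986d37d -> v1.1.0-alpha.0.2413+g986d37d
    git describe | sed -E 's/-([0-9]+)-g([0-9a-f]+)$/.\1+g\2/'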
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage, otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
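As a sketch of what the per-node proxy buys us: every node can address the cluster registry at a fixed local endpoint, regardless of where the registry pod actually runs (image name illustrative):

    # On any node, via the local proxy listening on port 5000:
    docker pull localhost:5000/my-app:v1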
Right now some of the hack/* tools use `go run` and build almost every
time. There are some which expect you to have already run `go install`.
And in all cases the pre-commit hook, which runs a full build, wouldn't
want to do either, since it just built!
This creates a new hack/after-build/ directory which holds the scripts
that REQUIRE the binaries to already be built. They don't test and
complain; they just fail miserably. Users should not be in this
directory. Users should just use hack/verify-*, which will do the build
and then call the "after-build" version, as sketched below. The
pre-commit hook or anything which KNOWS the binaries have been built can
use the fast version.
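A minimal sketch of that split (verify-gofmt is a stand-in for any such check):

    #!/bin/bash
    # hack/verify-gofmt.sh (sketch): the slow-but-safe entry point for users.
    set -o errexit
    KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
    "${KUBE_ROOT}/hack/build-go.sh"                       # ensure binaries exist
    "${KUBE_ROOT}/hack/after-build/verify-gofmt.sh" "$@"  # fast path: assumes the build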
This PR changes how we version going forward in the following ways:
* mark-new-version.sh is changed to a new policy of just splitting
branches, rather than the old backmerge policy: we cut the release
branch and tag vX.Y.0 there, and push a tag for vX.(Y+1).0-alpha.0 back
to master.
* I eliminated PRs back to master by making the version/base.go
gitVersion and gitCommit just be `export-subst`. I tested that this
works with GitHub's source export tarballs. There's no reason to
bother with forcing the version into `base.go` (especially twice). The
tarball versions outside a git tree aren't perfect (master looks like
"v0.0.0+hash", and the release branches look more accurate), but our
build contract has never guaranteed a perfect version in this
situation, so I think we can relax this. (See the sketch after this
list.)
* That master tag gets picked up by "git describe" on master, so e.g.
master would have immediately become v1.1.0-alpha.0
* In order to be more SemVer compatible, the gitVersion field for the
master branch now looks something like 1.1.0-alpha.0.6+84c76d1142ea4d.
This is a tiny translation of the "git describe" output. I did this
because there are a ton of consumers out there of the "gitVersion"
field expecting it to be the full version, but it would be nice if this
field were actually SemVer compliant. (1.1.0-alpha.0-6-84c76d1142ea4d
is, but it's not *usefully* so.)
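A minimal sketch of the export-subst mechanics referenced above (paths assumed):

    # Mark the version file for export-subst...
    echo 'pkg/version/base.go export-subst' >> .gitattributes
    # ...then "git archive" (which backs GitHub's tarball exports) expands
    # any $Format:%H$ placeholder in that file to the commit hash at export
    # time, so nothing version-specific ever has to be committed:
    git archive --format=tar.gz -o kubernetes.tar.gz HEAD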
Fixes #11495
Some versions of docker display image listings like this:
docker.io/golang 1.4 ebd45caf377c 2 weeks ago
The regular expression used to detect the presence of images
needs to be updated. It's unfortunate that we're still
screen-scraping here, due to:
https://github.com/docker/docker/issues/8048
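A hedged sketch of the looser match (regex mine, not the actual patch): allow an optional registry prefix on the repository column:

    # Matches both "golang 1.4 ..." and "docker.io/golang 1.4 ..." in
    # "docker images" output.
    docker images | grep -Eq "^([^[:space:]]+/)?golang[[:space:]]+1\.4[[:space:]]"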
This commit does 4 things:
* Adds a script which will: (a) clone from a git tag, (b) make release,
and (c) give you very detailed instructions as to what to do from that
point.
* Changes `push-official-release.sh` so we can't push "dirty"
releases anymore (which `build-official-release.sh` also double-checks
at the end; see the sketch after this list).
* Fixes #9576 by ensuring a correct umask.
* Changes common.sh to gtar all the way through, to ensure that
bloody OS X tar never touches the release process, because I don't
want to have to understand two tar programs and how release
artifacts are created from both (c.f. #10615.)
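Hedged sketches of the umask fix and the dirty-tree guard above (wording mine):

    umask 0022                                  # release artifacts world-readable (#9576)
    if [[ -n "$(git status --porcelain)" ]]; then
      echo "Refusing to push a dirty release" >&2
      exit 1
    fi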
This changes mark-new-version to create a backmerge branch that will
additionally handle the revert of the doc diff that's now created for
every release.
Sidebar: I really wish I knew a good git visualizer for GitHub - this
thing is going to start creating some awesome Gordian knots of merges.
(Or more like little Omegas, since they basically just change
version/base.go.)
Fixes #10825
Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins, and the images whose
deletion was attempted in the build log were actually deleted, so I
think this is primarily an exit code / API issue of some sort.)
By default, tmpfs on Docker 1.6 is 64MB, which is too
small for Go builds on the Kube project (binary size, etc.).
This moves the release build to use a non-tmpfs work dir.
Also, by pre-staging and pushing all at once, and by doing the ACL
modify in parallel, this shaves the push time down anyway, despite
the extra I/O.
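A hedged sketch of that stage-then-push shape (bucket and stage paths illustrative):

    # Stage everything locally first, then do one recursive copy and one
    # parallel (-m) ACL pass, instead of many small per-file round trips.
    gsutil -m cp -r "${RELEASE_STAGE}/" "gs://${RELEASE_BUCKET}/${VERSION}/"
    gsutil -m acl ch -R -g AllUsers:R "gs://${RELEASE_BUCKET}/${VERSION}/"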
Along the way: updates to longer hashes, à la #6615.
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because it was previously sending the whole
dir each time to the docker daemon.)
Also some minor cleanup.
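A hedged sketch of the per-binary layout (paths and base image mine):

    # One build context per binary: the daemon receives only that binary,
    # and no "-f Dockerfile.foo" flag is needed.
    for binary in kube-apiserver kube-controller-manager kube-scheduler; do
      ctx="${RELEASE_STAGE}/${binary}"
      mkdir -p "${ctx}"
      cp "_output/release/${binary}" "${ctx}/"
      printf 'FROM busybox\nADD %s /%s\n' "${binary}" "${binary}" > "${ctx}/Dockerfile"
      docker build -t "${binary}" "${ctx}" &
    done
    wait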
Fixes #6463
This allows us to continue on branches like v0.14.1, v0.14.2, and tag on
that branch. But it will merge those tags into master so git describe
picks up the changes.
This does some git magic to make sure you do not tag a branch with
v0.13.3 unless that branch is a descendant of the release-0.13 branch
upstream.
Don't allow v0.13.4 if v0.13.3 doesn't exist.
Don't allow v0.13.3 if v0.13.3 already exists.
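A hedged sketch of those checks (remote name "upstream" assumed):

    new_tag="v0.13.4"; prev_tag="v0.13.3"   # illustrative
    git fetch upstream
    # The tag must be cut from a descendant of the upstream release branch.
    git merge-base --is-ancestor upstream/release-0.13 HEAD || exit 1
    # Don't allow v0.13.4 if v0.13.3 doesn't exist...
    git rev-parse -q --verify "${prev_tag}" >/dev/null || exit 1
    # ...and don't allow a tag that already exists.
    ! git rev-parse -q --verify "${new_tag}" >/dev/null || exit 1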
We keep getting tags and branches wonky. This will land
v0.14.0 as an ancestor of master
v0.14.1 will NOT be an ancestor of master
I can do the latter; it's only a touch of git gymnastics, not really
hard, but we only occasionally do that today. (0.13.1 is an ancestor of
master, but 0.13.2 is not.)
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of the latest-version file in GCS, verifies that we're
pushing a newer build, and updates it only if so (i.e. it handles the
case of pushing a 0.13.1 and then later pushing a 0.12.3). This follows
the pattern of the ci/ build, which Jenkins just updates unconditionally.
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.
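A hedged sketch of the version gate (path assumed; sort -V is GNU):

    latest_url="gs://kubernetes-release/release/latest.txt"
    published=$(gsutil cat "${latest_url}")
    # Update only when the new version sorts higher than what's published.
    if [[ "$(printf '%s\n%s\n' "${published}" "${KUBE_VERSION}" | sort -V | tail -n1)" == "${KUBE_VERSION}" ]]; then
      echo "${KUBE_VERSION}" | gsutil cp - "${latest_url}"
    fi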
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes, à la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this new
flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by
static config.
- The master can't see the list of all the minions: this is fixed
temporarily by static config in util.sh, but later, by other means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
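A minimal sketch of the YAML-driven environment (file name and parsing deliberately simplified; assume flat KEY: "value" pairs):

    # configure-vm.sh reads its variables from one YAML file instead of
    # having each one rewritten into the startup script.
    while IFS=': ' read -r key value; do
      [[ -n "${key}" ]] && eval "export ${key}=${value}"
    done < kube-env.yaml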
The build only looks for GNU tar as gtar on OS X, which is the name used when it is installed with Homebrew. MacPorts installs it as gnutar, so check for that name if gtar is not found.
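A hedged sketch of the lookup order:

    # Prefer gtar (Homebrew), fall back to gnutar (MacPorts), else system tar.
    if which gtar >/dev/null 2>&1; then
      TAR=gtar
    elif which gnutar >/dev/null 2>&1; then
      TAR=gnutar
    else
      TAR=tar
    fi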
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
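A minimal sketch of the slurp step referenced above (paths and client usage assumed):

    # "/etc/init.d/kube-addons start", roughly: create every object that
    # salt dropped into the addons directory.
    for obj in /etc/kubernetes/addons/*; do
      kubectl create -f "${obj}"
    done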
Huge caveats: I've done fairly thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.
This is essentially a variant of push-ci-build.sh, but pushes to
the current project. The defaults for gcs::release actually pick a
short hash of the GCP project, so you end up uploading to something
like gs://kubernetes-releases-3fda2/devel/v0.8.0-437-g7f147ed/ (where
the last part is the "git describe" of your current commit).
This pushes artifacts in a similar manner to the official release,
except that instead of release/vFOO, it goes to ci/$(git describe),
e.g.: gs://kubernetes-release/ci/v0.7.0-315-gcae5722
It also pushes a text file to gs://kubernetes-release/ci/latest.txt,
so anyone can do, for instance:
gsutil ls gs://kubernetes-release/ci/$(gsutil cat gs://kubernetes-release/ci/latest.txt)
(In a parallel change, I'm going to flip the Jenkins scripts over to
use git describe, since it's shorter and a little more descriptive.)
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
This commit brings two main changes, notably:
Two new options that can be set as environment variables
- DOCKER_OPTS: any arbitrary set of docker options. Example: --tlsverify
- DOCKER_NATIVE: This forces the use of the native docker available.
This is most useful if you're on OS X and do not want
to use boot2docker.
Now uses 'docker cp' instead of tar piping to transfer files. This
currently must be done by copying the binaries off of the docker volume
and into a local filesystem (/tmp) before a docker cp is done. This
workaround will no longer be necessary after bug fix
https://github.com/docker/docker/pull/8509 makes it into stable.
This was necessary because the tar | tar method was creating corrupted
archives on OS X even with the < /dev/null workaround.
apiserver becomes kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly.
Only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works....
If the failure is a problem, the build will fail later. But it is
possible that this is not a fatal issue and we should let things go
forward. (A filesystem mounted with context=something on a permissive
system would cause chcon to fail, but the build would still work.)
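A hedged sketch of tolerating that failure (label and path illustrative):

    # Try to relabel the output dir for the container, but don't fail the
    # build if chcon can't apply the context.
    chcon -Rt svirt_sandbox_file_t "${OUTPUT_DIR}" || true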
* Rewrite a bunch of the hack/ directory with modular reusable bash libraries.
* Have 'build/*' build on 'hack/*'. The stuff in build now just runs hack/* in a docker container.
* Use a docker data container to enable faster incremental builds.
* Standardize output to _output/{local,dockerized}/bin/OS/ARCH/*. This regularized placement makes cross compilation work.
* Move travis specific scripts under hack/travis
With new dockerized incremental builds, I can do a no-op `make quick-release` in ~30s. This is a significant improvement.
If you have two repos that are both building at the same time, we don't want them to stomp on each other as they deal with docker. Work around this by hashing the KUBE_ROOT and mixing that into the name.
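A minimal sketch of the name mixing (hashing details mine):

    # Two checkouts building simultaneously get distinct docker names.
    KUBE_ROOT_HASH=$(echo -n "${KUBE_ROOT}" | md5sum | cut -c1-8)
    KUBE_BUILD_IMAGE="kube-build:${KUBE_ROOT_HASH}"
    KUBE_BUILD_DATA_CONTAINER="kube-build-data-${KUBE_ROOT_HASH}"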
Also a bunch of script clean up. Make make-clean.sh faster and more robust.
Building the docker run images and uploading to GCS are now optional and turned off by default. Doing a release shouldn't hit the network now (I think).
Currently binaries are built using Go 1.2.2, which results
in larger binaries than those produced by newer versions of
Go. The Go source archive used for the build process is not
verified against its SHA1 hash.
Update the build-image Dockerfile to use Go 1.3 to build all
binaries, as a result binaries are now 20% - 30% smaller. The
Go source archive used for building binaries is now verified
against its SHA1 hash.
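A hedged sketch of the verified download in the build image (URL of that era; the SHA1 is a placeholder, not the real pinned value):

    GO_VERSION=1.3
    GO_SHA1="<pinned sha1 of the upstream tarball>"   # placeholder
    curl -fsSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" -o /tmp/go.tgz
    echo "${GO_SHA1}  /tmp/go.tgz" | sha1sum -c -
    tar -C /usr/local -xzf /tmp/go.tgz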
The pause image is a 240KB image that simply pauses waiting on a signal.
Use this for the net container which only needs to act as a placeholder.
Current net image is ~2.5MB. From my tests, this reduces startup time
for the net container from ~14s to ~6s.