This commit does 4 things:
* Adds a script which will: (a) clone from a git tag, (b) run make
release, and (c) give you very detailed instructions as to what to do
from that point.
* Changes `push-official-release.sh` so we can't push "dirty"
releases anymore (`build-official-release.sh` also double-checks this
at the end); see the sketch after this list.
* Fixes #9576 by ensuring a correct umask.
* Changes common.sh to use gtar all the way through, to ensure that
bloody OS X tar never touches the release process, because I don't
want to have to understand two tar programs and how release
artifacts are created from both (cf. #10615).
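For illustration, the "dirty" check amounts to something like this
(variable names are mine, not necessarily the scripts'):

    # "git describe --dirty" appends "-dirty" when the working tree
    # has uncommitted changes; refuse to push such a build.
    version=$(git describe --dirty)
    if [[ "${version}" == *-dirty ]]; then
      echo "Refusing to push a dirty release: ${version}" >&2
      exit 1
    fi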
This changes mark-new-version to create a backmerge branch that will
additionally handle the revert of the doc diff that's now created for
every release.
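Roughly, the backmerge flow is something like the following sketch
(branch names and the doc-diff commit are placeholders, not the
script's actual ones):

    # Branch off the release branch, revert the release-only doc diff,
    # then merge the result back into master.
    git checkout -b backmerge-v0.14.0 release-0.14
    git revert --no-edit <doc-diff-commit>
    git checkout master
    git merge --no-ff backmerge-v0.14.0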
Sidebar: I really wish I knew a good git visualizer for GitHub - this
thing is going to start creating some awesome Gordian knots of merges.
(Or more like little Omegas, since they basically just change
version/base.go.)
Fixes #10825
Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins, and the images whose
deletion was attempted in the build log were actually deleted, so I
think this is primarily an exit-code / API issue of some sort.)
By default, tmpfs on Docker 1.6 is 64 MB, which is too
small for Go builds on the Kube project (binary size, etc.).
This moves the release build to use a non-tmpfs work dir.
Also, by pre-staging and pushing all at once, and by doing the ACL
modification in parallel, this shaves the push time down anyway,
despite the extra I/O.
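As a rough sketch of the pre-stage/parallel-ACL pattern (bucket and
paths are illustrative):

    # Stage locally, push everything in one parallel copy, then fix
    # ACLs in parallel instead of object-by-object.
    gsutil -m cp -r "${staging_dir}"/* "gs://kubernetes-release/release/${version}/"
    gsutil -m acl ch -R -u AllUsers:R "gs://kubernetes-release/release/${version}/"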
Along the way: updates to longer hashes, à la #6615.
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because it was previously sending the whole
dir each time to the docker daemon.)
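The per-binary layout is roughly this (paths and names hypothetical):

    # One self-contained build context per binary, so "docker build"
    # only ships that binary's context to the daemon.
    for binary in kube-apiserver kube-scheduler; do
      mkdir -p "${build_root}/${binary}"
      cp "${output_dir}/${binary}" "${build_root}/${binary}/"
      sed "s/BINARY/${binary}/g" Dockerfile.in > "${build_root}/${binary}/Dockerfile"
      (cd "${build_root}/${binary}" && docker build -t "${binary}" .) &
    done
    wait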
Also some minor cleanup.
Fixes #6463
This allows us to continue on branches like v0.14.1 and v0.14.2, and
tag on that branch. But it will merge those tags into master so git
describe picks up the changes.
This does some git magic to make sure you do not tag a branch with
v0.13.3 unless that branch is a descendant of the release-0.13 branch
upstream.
Don't allow v0.13.4 if v0.13.3 doesn't exist.
Don't allow v0.13.3 if v0.13.3 already exists.
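In git terms, the checks are something like this (sketch only; the
real script's variable names may differ):

    # The branch being tagged must descend from the upstream release
    # branch.
    git merge-base --is-ancestor upstream/release-0.13 "${branch}" \
      || { echo "not a descendant of release-0.13" >&2; exit 1; }
    # v0.13.4 requires that v0.13.3 already exists as a tag.
    git rev-parse -q --verify refs/tags/v0.13.3 >/dev/null \
      || { echo "v0.13.3 does not exist yet" >&2; exit 1; }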
We keep getting tags and branches wonky. This will land
v0.14.0 as an ancestor of master;
v0.14.1 will NOT be an ancestor of master.
I can do the latter; it's only a touch of git gymnastics, not really
hard, but we only occasionally do that today. (0.13.1 is an ancestor
of master, but 0.13.2 is not.)
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of this file in GCS, verifies that we're pushing a
newer build, and updates it if so. (i.e. it handles the case of
pushing a 0.13.1 and then later pushing a 0.12.3.) This follows the
pattern of the ci/ build, which Jenkins just updates unconditionally.
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.
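Conceptually, the newer-build check is something like this (file and
variable names are assumptions following the ci/ pattern):

    # Only update latest.txt if the new version sorts after the one
    # currently published.
    current=$(gsutil cat gs://kubernetes-release/release/latest.txt)
    newest=$(printf '%s\n%s\n' "${current}" "${new_version}" | sort -V | tail -1)
    if [[ "${newest}" == "${new_version}" && "${current}" != "${new_version}" ]]; then
      echo "${new_version}" | gsutil cp - gs://kubernetes-release/release/latest.txt
    fi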
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes, à la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this
new flow.
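Per that tutorial, a masterless node boils down to something like
this sketch:

    # Point salt-minion at local state files, make sure the daemon is
    # not running, and apply states explicitly with salt-call.
    cat <<EOF >/etc/salt/minion.d/local.conf
    file_client: local
    EOF
    service salt-minion stop
    salt-call --local state.highstate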
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed
  by static config.
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later, by other means
  (see https://github.com/GoogleCloudPlatform/kubernetes/issues/156,
  which should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
The build is only looking for GNU tar as gtar on OS X, which is the
name used when installed via Homebrew. MacPorts installs it as gnutar,
so check for that name if gtar is not found.
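The detection amounts to something like:

    # Prefer gtar (Homebrew's name), fall back to gnutar (MacPorts'
    # name), and fail loudly otherwise.
    if which gtar >/dev/null 2>&1; then
      tar=gtar
    elif which gnutar >/dev/null 2>&1; then
      tar=gnutar
    else
      echo "GNU tar is required on OS X" >&2
      exit 1
    fi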
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
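The slurp loop referenced in the list above is, in spirit, just this
(assuming kubectl is how the objects get created; the real service
may differ):

    # Create every addon object dropped into the addons directory.
    for obj in /etc/kubernetes/addons/*; do
      kubectl create -f "${obj}"
    done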
Huge caveats: I've done pretty firm testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.
This is essentially a variant of push-ci-build.sh, but pushes it to
the current project. The defaults for gcs::release actually pick a
short hash of the GCS project, so you end up uploading to something
like: gs://kubernetes-releases-3fda2/devel/v0.8.0-437-g7f147ed/ (where
the last part is the "git describe" of your current commit)
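Illustratively, the destination is derived like so (the local
artifact path is an assumption; the bucket is from the example above):

    # "git describe" gives a short, descriptive version for the push.
    version=$(git describe)   # e.g. v0.8.0-437-g7f147ed
    gsutil -m cp -r _output/release-tars/* \
      "gs://kubernetes-releases-3fda2/devel/${version}/"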
This pushes artifacts in a similar manner to the official release,
except that instead of release/vFOO, it goes to ci/$(git describe),
e.g.: gs://kubernetes-release/ci/v0.7.0-315-gcae5722
It also pushes a text file to gs://kubernetes-release/ci/latest.txt,
so anyone can do, for instance:
gsutil ls gs://kubernetes-release/ci/$(gsutil cat gs://kubernetes-release/ci/latest.txt)
(In a parallel change, I'm going to flip the jenkins scripts over to
use git describe, since it's shorter and a little more descriptive)
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
This commit brings two main changes, notably:
Two new options that can be set as environment variables (usage
example after this list):
- DOCKER_OPTS: any arbitrary set of docker options. Example: --tlsverify
- DOCKER_NATIVE: This forces the use of the native docker available.
This is most useful if you're on OS X and do not want
to use boot2docker.
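For example, a hypothetical invocation (the entrypoint name is
assumed):

    # Use the native docker daemon, with TLS verification enabled.
    DOCKER_NATIVE=true DOCKER_OPTS="--tlsverify" build/release.sh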
Now uses 'docker cp' instead of tar piping to transfer files. This
currently must be done by copying the binaries off of the docker volume
and into a local filesystem (/tmp) before a docker cp is done. This
workaround will no longer be necessary after bug fix
https://github.com/docker/docker/pull/8509 makes it into stable.
This was necessary because the tar | tar method was creating corrupted
archives on OS X even with the < /dev/null workaround.
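The workaround looks roughly like this (container names and paths are
placeholders):

    # Copy binaries off the volume into a plain directory inside a
    # container, then "docker cp" that directory out; avoids tar | tar.
    docker run --volumes-from "${build_container}" --name "${copy_container}" \
      busybox sh -c 'cp -r /go/bin /tmp/out'
    docker cp "${copy_container}:/tmp/out" _output/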