To meet licensing/compliance guidelines, bundle up the source. One of
the easiest ways to do this is to grab the entire build image
directory; this pretty much guarantees that a user could re-run the
Docker build from the exact same code point if they wanted to (they
just need to poke at our scripts to figure out how).
This change supports running the kubernetes master on Ubuntu Trusty.
It uses pure cloud-config and shell scripts, and completely gets
rid of saltstack and the release salt tarball.
Use case: I have a docker-machine instance in the cloud, and an empty
DOCKER_OPTS env var. I want to `build/run.sh hack/build-go.sh`
Previously, this would invoke `docker '' cp`, which errored out
with: '' not a command.
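A minimal sketch of the failure mode and an array-based fix (variable
and path names are illustrative, not the script's actual ones):

    # Broken: with DOCKER_OPTS="" the quoted expansion becomes an empty
    # positional argument, so the command run is: docker '' cp ...
    docker "${DOCKER_OPTS}" cp "${container_id}:/out" .

    # Fixed: build the command line as an array; an empty DOCKER_OPTS
    # contributes no words at all.
    docker=(docker ${DOCKER_OPTS})
    "${docker[@]}" cp "${container_id}:/out" .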
It cordons (marks unschedulable) the given node, and then deletes every
pod on it, optionally using a grace period. It will not delete pods
that are managed by neither a ReplicationController nor a DaemonSet
unless --force is used.
Also add cordon/uncordon, which just toggle node schedulability.
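Example invocations (the node name is hypothetical):

    kubectl cordon node-1                   # mark unschedulable
    kubectl drain node-1 --grace-period=60  # cordon, then delete pods
    kubectl drain node-1 --force            # also delete unmanaged pods
    kubectl uncordon node-1                 # mark schedulable again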
To build the python image, BUILD_PYTHON_IMAGE should be set during make.
When the addon script runs, it checks whether python is installed on
the machine; if not, it uses the python image that was built previously.
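Roughly the shape of that check (the image name, mount path, and helper
script below are assumptions for illustration):

    if which python >/dev/null 2>&1; then
      PYTHON=python
    else
      # Fall back to the prebuilt python image, mounting the addons
      # directory into the container.
      PYTHON="docker run --rm -v /etc/kubernetes/addons:/etc/kubernetes/addons python-image python"
    fi
    ${PYTHON} /etc/kubernetes/addons/update-addons.py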
The node.yaml has some logic that will also be used by the work on
running the kubernetes master on Trusty (issue #16702). This change
moves the code shared by the master and node configuration into a
separate script, which both can source to use it.
Moreover, this change stages the script for GKE use.
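The sharing is plain shell sourcing; the names below are illustrative:

    # helper.sh holds the logic common to master.yaml and node.yaml.
    . /etc/kubernetes/helper.sh   # defines, e.g., install-kube-binaries
    install-kube-binaries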
This scopes down the initially ambitious PR:
https://github.com/kubernetes/kubernetes/pull/14960 to replace just
`pause` and `fluentd-elasticsearch` to come through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elasticsearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
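For example (the google_containers repository path is assumed here):

    docker pull beta.gcr.io/google_containers/pause:2.0  # served via v2
    docker pull gcr.io/google_containers/pause:2.0       # still works, served via v1 compat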
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage, otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
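From any node the registry is then reachable through the local proxy;
the image name below is hypothetical:

    docker tag my-app localhost:5000/my-app
    docker push localhost:5000/my-app   # proxied to the cluster registry
    docker pull localhost:5000/my-app   # same address works on every node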
Some versions of docker display image listings like this:
    docker.io/golang    1.4    ebd45caf377c    2 weeks ago
The regular expression used to detect presence of images
needs to be updated. It's unfortunate that we're still
screen-scraping here, due to:
https://github.com/docker/docker/issues/8048
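A sketch of the updated check, tolerating an optional registry prefix
(the actual expression in the build scripts may differ):

    # Match "golang 1.4" whether the listing shows "golang" or a
    # registry-prefixed "docker.io/golang".
    if docker images | grep -Eq "^(\S+/)?golang\s+1\.4\s"; then
      echo "image present"
    fi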
This commit does 4 things:
* Adds a script which will clone from a git tag, make release,
  and give you very detailed instructions as to what to do from that
  point.
* Changes `push-official-release.sh` so we can't push "dirty"
releases anymore (which `build-official-release.sh` also double
checks at the end.)
* Fixes #9576 by ensuring a correct umask.
* Changes common.sh to gtar all the way through (see the sketch after
  this list), to ensure that bloody OS X tar never touches the release
  process, because I don't want to have to understand two tar programs
  and how release artifacts are created from both (c.f. #10615.)
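A sketch of the gtar/umask portion (names are assumed, and 022 as the
"correct umask" is an assumption):

    umask 022   # world-readable release artifacts (#9576)
    # Prefer GNU tar so OS X's BSD tar never produces release artifacts.
    if which gtar >/dev/null 2>&1; then
      TAR=gtar
    else
      TAR=tar   # on Linux this is already GNU tar
    fi
    "${TAR}" czf kubernetes.tar.gz kubernetes/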
Slightly neuters #8955, but we haven't had a build succeed in a while
for whatever reason. (I checked on Jenkins: the images whose deletion
was attempted in the build log were actually deleted, so I think this
is primarily an exit code / API issue of some sort.)
Also, by pre-staging and pushing all at once, and by doing the ACL
modify in parallel, this shaves the push time down anyway, despite
the extra I/O.
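The parallel ACL step amounts to backgrounded gsutil calls, roughly
(bucket and artifact variables assumed):

    for artifact in "${artifacts[@]}"; do
      gsutil acl ch -u AllUsers:R "gs://${bucket}/${artifact}" &
    done
    wait   # every ACL change must finish before declaring success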
Along the way: updates to longer hashes a la #6615
Rework the parallel docker build path to create separate docker build
directories for each binary, rather than just separate files,
eliminating the use of "-f Dockerfile.foo". (I think this also shaves
a little more time off, because it was previously sending the whole
dir each time to the docker daemon.)
Also some minor cleanup.
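A sketch of the reworked layout (template and variable names are
illustrative):

    for binary in "${binaries[@]}"; do
      dir="${build_dir}/${binary}"
      mkdir -p "${dir}"
      cp "${bin_path}/${binary}" "${dir}/"
      sed "s/BINARY/${binary}/g" Dockerfile.in > "${dir}/Dockerfile"
      # Each build now ships only its own small context to the daemon,
      # and no longer needs "-f Dockerfile.${binary}".
      (cd "${dir}" && docker build -t "kube-${binary}" .) &
    done
    wait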
Fixes #6463
Adds a kube::release::gcs::publish_latest_official that checks the
current contents of this file in GCS, verifies that we're pushing a
newer build, and updates it only if so (i.e. it handles the case of
pushing 0.13.1 and then later pushing 0.12.3 without regressing the
published version). This follows the pattern of the ci/ build, which
Jenkins just updates unconditionally.
I already updated the file for 0.13.1. After this we can update the
get-k8s script, so we don't have to keep updating it.
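The newer-build check can be as simple as a version-sort comparison
(the object path and variable names are assumed):

    current=$(gsutil cat "gs://${bucket}/release/latest.txt")
    newest=$(printf '%s\n%s\n' "${current}" "${new_version}" | sort -V | tail -1)
    if [[ "${newest}" == "${new_version}" && "${current}" != "${new_version}" ]]; then
      echo "${new_version}" | gsutil cp - "gs://${bucket}/release/latest.txt"
    else
      echo "Not publishing ${new_version}; ${current} is already newer." >&2
    fi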
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes, a la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this
new flow.
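Masterless means each machine applies its state locally with
salt-call; stopping the daemon is the part called out above:

    # Make sure the salt-minion daemon is NOT left running after install.
    service salt-minion stop || true
    # Apply the highstate directly from local state files, no master needed.
    salt-call --local state.highstate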
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed
  by static config.
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later, by other means
  (see https://github.com/GoogleCloudPlatform/kubernetes/issues/156,
  which should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
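On GCE that YAML arrives as instance metadata, so configure-vm.sh can
fetch and source it rather than having the startup script rewritten per
variable. The metadata URL is standard GCE; the parsing below is only a
sketch, assuming a flat map of KEY: 'value' pairs:

    curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" \
      > /tmp/kube-env.yaml
    # Turn each KEY: 'value' line into an exported shell variable.
    eval "$(sed -n "s/^\([A-Z0-9_]*\): '\(.*\)'$/export \1='\2'/p" /tmp/kube-env.yaml)"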