Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
#!/bin/bash
|
|
|
|
|
2016-06-03 00:25:58 +00:00
|
|
|
# Copyright 2015 The Kubernetes Authors.
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
#
|
|
|
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
|
|
|
# you may not use this file except in compliance with the License.
|
|
|
|
# You may obtain a copy of the License at
|
|
|
|
#
|
|
|
|
# http://www.apache.org/licenses/LICENSE-2.0
|
|
|
|
#
|
|
|
|
# Unless required by applicable law or agreed to in writing, software
|
|
|
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
|
|
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
|
|
# See the License for the specific language governing permissions and
|
|
|
|
# limitations under the License.
|
|
|
|
|
|
|
|
set -o errexit
|
|
|
|
set -o nounset
|
|
|
|
set -o pipefail
|
|
|
|
|
|
|
|
# If we have any arguments at all, this is a push and not just setup.
|
|
|
|
is_push=$@
|
|
|
|
|
2015-05-14 02:05:58 +00:00
|
|
|
function ensure-basic-networking() {
|
|
|
|
# Deal with GCE networking bring-up race. (We rely on DNS for a lot,
|
|
|
|
# and it's just not worth doing a whole lot of startup work if this
|
|
|
|
# isn't ready yet.)
|
|
|
|
until getent hosts metadata.google.internal &>/dev/null; do
|
|
|
|
echo 'Waiting for functional DNS (trying to resolve metadata.google.internal)...'
|
|
|
|
sleep 3
|
|
|
|
done
|
2015-10-31 23:53:57 +00:00
|
|
|
until getent hosts $(hostname -f || echo _error_) &>/dev/null; do
|
2015-05-14 02:05:58 +00:00
|
|
|
echo 'Waiting for functional DNS (trying to resolve my own FQDN)...'
|
|
|
|
sleep 3
|
|
|
|
done
|
2015-10-31 23:53:57 +00:00
|
|
|
until getent hosts $(hostname -i || echo _error_) &>/dev/null; do
|
2015-05-14 02:05:58 +00:00
|
|
|
echo 'Waiting for functional DNS (trying to resolve my own IP)...'
|
|
|
|
sleep 3
|
|
|
|
done
|
|
|
|
|
|
|
|
echo "Networking functional on $(hostname) ($(hostname -i))"
|
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
# A hookpoint for installing any needed packages
|
|
|
|
ensure-packages() {
|
|
|
|
:
|
|
|
|
}
|
|
|
|
|
2017-02-17 23:06:55 +00:00
|
|
|
function create-node-pki {
|
|
|
|
echo "Creating node pki files"
|
|
|
|
|
|
|
|
local -r pki_dir="/etc/kubernetes/pki"
|
|
|
|
mkdir -p "${pki_dir}"
|
|
|
|
|
|
|
|
if [[ -z "${CA_CERT_BUNDLE:-}" ]]; then
|
|
|
|
CA_CERT_BUNDLE="${CA_CERT}"
|
|
|
|
fi
|
|
|
|
|
2017-02-27 23:06:11 +00:00
|
|
|
CA_CERT_BUNDLE_PATH="${pki_dir}/ca-certificates.crt"
|
|
|
|
echo "${CA_CERT_BUNDLE}" | base64 --decode > "${CA_CERT_BUNDLE_PATH}"
|
|
|
|
|
2017-02-17 23:06:55 +00:00
|
|
|
if [[ ! -z "${KUBELET_CERT:-}" && ! -z "${KUBELET_KEY:-}" ]]; then
|
|
|
|
KUBELET_CERT_PATH="${pki_dir}/kubelet.crt"
|
|
|
|
echo "${KUBELET_CERT}" | base64 --decode > "${KUBELET_CERT_PATH}"
|
|
|
|
|
|
|
|
KUBELET_KEY_PATH="${pki_dir}/kubelet.key"
|
|
|
|
echo "${KUBELET_KEY}" | base64 --decode > "${KUBELET_KEY_PATH}"
|
|
|
|
fi
|
2017-03-08 22:03:51 +00:00
|
|
|
|
|
|
|
# TODO(mikedanese): remove this when we don't support downgrading to versions
|
|
|
|
# < 1.6.
|
|
|
|
ln -s "${CA_CERT_BUNDLE_PATH}" /etc/kubernetes/ca.crt
|
2017-02-17 23:06:55 +00:00
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
# A hookpoint for setting up local devices
|
|
|
|
ensure-local-disks() {
|
2016-04-15 20:56:45 +00:00
|
|
|
for ssd in /dev/disk/by-id/google-local-ssd-*; do
|
|
|
|
if [ -e "$ssd" ]; then
|
|
|
|
ssdnum=`echo $ssd | sed -e 's/\/dev\/disk\/by-id\/google-local-ssd-\([0-9]*\)/\1/'`
|
2016-06-08 20:19:33 +00:00
|
|
|
echo "Formatting and mounting local SSD $ssd to /mnt/disks/ssd$ssdnum"
|
|
|
|
mkdir -p /mnt/disks/ssd$ssdnum
|
|
|
|
/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" "${ssd}" /mnt/disks/ssd$ssdnum &>/var/log/local-ssd-$ssdnum-mount.log || \
|
2016-04-15 20:56:45 +00:00
|
|
|
{ echo "Local SSD $ssdnum mount failed, review /var/log/local-ssd-$ssdnum-mount.log"; return 1; }
|
|
|
|
else
|
|
|
|
echo "No local SSD disks found."
|
|
|
|
fi
|
|
|
|
done
|
2016-02-04 14:36:30 +00:00
|
|
|
}
|
|
|
|
|
2017-02-27 22:07:07 +00:00
|
|
|
function config-ip-firewall {
|
|
|
|
echo "Configuring IP firewall rules"
|
|
|
|
|
|
|
|
iptables -N KUBE-METADATA-SERVER
|
|
|
|
iptables -A FORWARD -p tcp -d 169.254.169.254 --dport 80 -j KUBE-METADATA-SERVER
|
|
|
|
|
|
|
|
if [[ -n "${KUBE_FIREWALL_METADATA_SERVER:-}" ]]; then
|
|
|
|
iptables -A KUBE-METADATA-SERVER -j DROP
|
|
|
|
fi
|
|
|
|
}
|
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
function ensure-install-dir() {
|
|
|
|
INSTALL_DIR="/var/cache/kubernetes-install"
|
|
|
|
mkdir -p ${INSTALL_DIR}
|
|
|
|
cd ${INSTALL_DIR}
|
|
|
|
}
|
|
|
|
|
2015-05-28 01:56:05 +00:00
|
|
|
function salt-apiserver-timeout-grain() {
|
|
|
|
cat <<EOF >>/etc/salt/minion.d/grains.conf
|
|
|
|
minRequestTimeout: '$1'
|
|
|
|
EOF
|
|
|
|
}
|
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
function set-broken-motd() {
|
2016-02-04 14:36:30 +00:00
|
|
|
echo -e '\nBroken (or in progress) Kubernetes node setup! Suggested first step:\n tail /var/log/startupscript.log\n' > /etc/motd
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2016-02-12 21:58:06 +00:00
|
|
|
function reset-motd() {
|
|
|
|
# kubelet is installed both on the master and nodes, and the version is easy to parse (unlike kubectl)
|
|
|
|
local -r version="$(/usr/local/bin/kubelet --version=true | cut -f2 -d " ")"
|
|
|
|
# This logic grabs either a release tag (v1.2.1 or v1.2.1-alpha.1),
|
|
|
|
# or the git hash that's in the build info.
|
|
|
|
local gitref="$(echo "${version}" | sed -r "s/(v[0-9]+\.[0-9]+\.[0-9]+)(-[a-z]+\.[0-9]+)?.*/\1\2/g")"
|
|
|
|
local devel=""
|
|
|
|
if [[ "${gitref}" != "${version}" ]]; then
|
|
|
|
devel="
|
|
|
|
Note: This looks like a development version, which might not be present on GitHub.
|
|
|
|
If it isn't, the closest tag is at:
|
|
|
|
https://github.com/kubernetes/kubernetes/tree/${gitref}
|
|
|
|
"
|
|
|
|
gitref="${version//*+/}"
|
|
|
|
fi
|
|
|
|
cat > /etc/motd <<EOF
|
|
|
|
|
|
|
|
Welcome to Kubernetes ${version}!
|
|
|
|
|
2016-03-12 13:58:40 +00:00
|
|
|
You can find documentation for Kubernetes at:
|
|
|
|
http://docs.kubernetes.io/
|
2016-02-12 21:58:06 +00:00
|
|
|
|
2016-06-22 05:58:48 +00:00
|
|
|
The source for this release can be found at:
|
|
|
|
/usr/local/share/doc/kubernetes/kubernetes-src.tar.gz
|
|
|
|
Or you can download it at:
|
2016-02-12 21:58:06 +00:00
|
|
|
https://storage.googleapis.com/kubernetes-release/release/${version}/kubernetes-src.tar.gz
|
|
|
|
|
|
|
|
It is based on the Kubernetes source at:
|
|
|
|
https://github.com/kubernetes/kubernetes/tree/${gitref}
|
|
|
|
${devel}
|
2016-03-02 23:19:40 +00:00
|
|
|
For Kubernetes copyright and licensing information, see:
|
|
|
|
/usr/local/share/doc/kubernetes/LICENSES
|
2016-03-12 13:58:40 +00:00
|
|
|
|
2016-02-12 21:58:06 +00:00
|
|
|
EOF
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
function curl-metadata() {
|
2015-12-01 19:39:43 +00:00
|
|
|
curl --fail --retry 5 --silent -H 'Metadata-Flavor: Google' "http://metadata/computeMetadata/v1/instance/attributes/${1}"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
function set-kube-env() {
|
|
|
|
local kube_env_yaml="${INSTALL_DIR}/kube_env.yaml"
|
|
|
|
|
|
|
|
until curl-metadata kube-env > "${kube_env_yaml}"; do
|
|
|
|
echo 'Waiting for kube-env...'
|
|
|
|
sleep 3
|
|
|
|
done
|
|
|
|
|
|
|
|
# kube-env has all the environment variables we care about, in a flat yaml format
|
2015-11-18 02:13:24 +00:00
|
|
|
eval "$(python -c '
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
import pipes,sys,yaml
|
|
|
|
|
|
|
|
for k,v in yaml.load(sys.stdin).iteritems():
|
2016-01-12 01:18:41 +00:00
|
|
|
print("""readonly {var}={value}""".format(var = k, value = pipes.quote(str(v))))
|
|
|
|
print("""export {var}""".format(var = k))
|
2015-11-18 02:13:24 +00:00
|
|
|
' < """${kube_env_yaml}""")"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
function remove-docker-artifacts() {
|
2015-03-29 20:58:14 +00:00
|
|
|
echo "== Deleting docker0 =="
|
2016-02-04 14:36:30 +00:00
|
|
|
apt-get-install bridge-utils
|
2015-03-29 20:58:14 +00:00
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
# Remove docker artifacts on minion nodes, if present
|
|
|
|
iptables -t nat -F || true
|
|
|
|
ifconfig docker0 down || true
|
|
|
|
brctl delbr docker0 || true
|
2015-03-29 20:58:14 +00:00
|
|
|
echo "== Finished deleting docker0 =="
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
# Retry a download until we get it. Takes a hash and a set of URLs.
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
#
|
2016-03-01 00:23:54 +00:00
|
|
|
# $1 is the sha1 of the URL. Can be "" if the sha1 is unknown.
|
|
|
|
# $2+ are the URLs to download.
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
download-or-bust() {
|
2016-03-01 00:23:54 +00:00
|
|
|
local -r hash="$1"
|
|
|
|
shift 1
|
|
|
|
|
|
|
|
urls=( $* )
|
|
|
|
while true; do
|
|
|
|
for url in "${urls[@]}"; do
|
|
|
|
local file="${url##*/}"
|
|
|
|
rm -f "${file}"
|
2016-07-22 04:39:35 +00:00
|
|
|
if ! curl -f --ipv4 -Lo "${file}" --connect-timeout 20 --max-time 300 --retry 6 --retry-delay 10 "${url}"; then
|
2016-03-01 00:23:54 +00:00
|
|
|
echo "== Failed to download ${url}. Retrying. =="
|
|
|
|
elif [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
|
|
|
|
echo "== Hash validation of ${url} failed. Retrying. =="
|
|
|
|
else
|
|
|
|
if [[ -n "${hash}" ]]; then
|
|
|
|
echo "== Downloaded ${url} (SHA1 = ${hash}) =="
|
|
|
|
else
|
|
|
|
echo "== Downloaded ${url} =="
|
|
|
|
fi
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
done
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
done
|
|
|
|
}
|
|
|
|
|
2015-06-18 18:31:21 +00:00
|
|
|
validate-hash() {
|
|
|
|
local -r file="$1"
|
|
|
|
local -r expected="$2"
|
|
|
|
local actual
|
|
|
|
|
|
|
|
actual=$(sha1sum ${file} | awk '{ print $1 }') || true
|
|
|
|
if [[ "${actual}" != "${expected}" ]]; then
|
|
|
|
echo "== ${file} corrupted, sha1 ${actual} doesn't match expected ${expected} =="
|
|
|
|
return 1
|
|
|
|
fi
|
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
apt-get-install() {
|
2016-02-26 21:38:38 +00:00
|
|
|
local -r packages=( $@ )
|
|
|
|
installed=true
|
|
|
|
for package in "${packages[@]}"; do
|
|
|
|
if ! dpkg -s "${package}" &>/dev/null; then
|
|
|
|
installed=false
|
|
|
|
break
|
|
|
|
fi
|
|
|
|
done
|
|
|
|
if [[ "${installed}" == "true" ]]; then
|
|
|
|
echo "== ${packages[@]} already installed, skipped apt-get install ${packages[@]} =="
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
|
|
|
|
apt-get-update
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
# Forcibly install packages (options borrowed from Salt logs).
|
|
|
|
until apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install $@; do
|
|
|
|
echo "== install of packages $@ failed, retrying =="
|
|
|
|
sleep 5
|
|
|
|
done
|
|
|
|
}
|
|
|
|
|
|
|
|
apt-get-update() {
|
|
|
|
echo "== Refreshing package database =="
|
|
|
|
until apt-get update; do
|
|
|
|
echo "== apt-get update failed, retrying =="
|
2016-02-26 21:38:38 +00:00
|
|
|
sleep 5
|
2016-02-04 14:36:30 +00:00
|
|
|
done
|
|
|
|
}
|
|
|
|
|
2016-02-18 14:43:09 +00:00
|
|
|
# Restart any services that need restarting due to a library upgrade
|
|
|
|
# Uses needrestart
|
|
|
|
restart-updated-services() {
|
2016-02-28 01:43:51 +00:00
|
|
|
# We default to restarting services, because this is only done as part of an update
|
|
|
|
if [[ "${AUTO_RESTART_SERVICES:-true}" != "true" ]]; then
|
2016-02-18 14:43:09 +00:00
|
|
|
echo "Auto restart of services prevented by AUTO_RESTART_SERVICES=${AUTO_RESTART_SERVICES}"
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
echo "Restarting services with updated libraries (needrestart -r a)"
|
|
|
|
# The pipes make sure that needrestart doesn't think it is running with a TTY
|
|
|
|
# Debian bug #803249; fixed but not necessarily in package repos yet
|
|
|
|
echo "" | needrestart -r a 2>&1 | tee /dev/null
|
|
|
|
}
|
|
|
|
|
|
|
|
# Reboot the machine if /var/run/reboot-required exists
|
|
|
|
reboot-if-required() {
|
|
|
|
if [[ ! -e "/var/run/reboot-required" ]]; then
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
|
|
|
|
echo "Reboot is required (/var/run/reboot-required detected)"
|
|
|
|
if [[ -e "/var/run/reboot-required.pkgs" ]]; then
|
|
|
|
echo "Packages that triggered reboot:"
|
|
|
|
cat /var/run/reboot-required.pkgs
|
|
|
|
fi
|
|
|
|
|
2016-02-28 01:43:51 +00:00
|
|
|
# We default to rebooting the machine because this is only done as part of an update
|
|
|
|
if [[ "${AUTO_REBOOT:-true}" != "true" ]]; then
|
2016-02-18 14:43:09 +00:00
|
|
|
echo "Reboot prevented by AUTO_REBOOT=${AUTO_REBOOT}"
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
|
|
|
|
rm -f /var/run/reboot-required
|
|
|
|
rm -f /var/run/reboot-required.pkgs
|
|
|
|
echo "Triggering reboot"
|
|
|
|
init 6
|
|
|
|
}
|
|
|
|
|
|
|
|
# Install upgrades using unattended-upgrades, then reboot or restart services
|
|
|
|
auto-upgrade() {
|
2016-02-28 01:43:51 +00:00
|
|
|
# We default to not installing upgrades
|
|
|
|
if [[ "${AUTO_UPGRADE:-false}" != "true" ]]; then
|
|
|
|
echo "AUTO_UPGRADE not set to true; won't auto-upgrade"
|
|
|
|
return
|
|
|
|
fi
|
2016-02-18 14:43:09 +00:00
|
|
|
apt-get-install unattended-upgrades needrestart
|
|
|
|
unattended-upgrade --debug
|
|
|
|
reboot-if-required # We may reboot the machine right here
|
|
|
|
restart-updated-services
|
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
#
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
# Install salt from GCS. See README.md for instructions on how to update these
|
|
|
|
# debs.
|
|
|
|
install-salt() {
|
2015-05-04 23:11:40 +00:00
|
|
|
if dpkg -s salt-minion &>/dev/null; then
|
|
|
|
echo "== SaltStack already installed, skipping install step =="
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
|
2015-03-24 23:11:40 +00:00
|
|
|
echo "== Refreshing package database =="
|
|
|
|
until apt-get update; do
|
|
|
|
echo "== apt-get update failed, retrying =="
|
2016-01-18 02:29:01 +00:00
|
|
|
sleep 5
|
2015-03-24 23:11:40 +00:00
|
|
|
done
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
|
|
|
mkdir -p /var/cache/salt-install
|
|
|
|
cd /var/cache/salt-install
|
|
|
|
|
2015-03-24 23:11:40 +00:00
|
|
|
DEBS=(
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
libzmq3_3.2.3+dfsg-1~bpo70~dst+1_amd64.deb
|
|
|
|
python-zmq_13.1.0-1~bpo70~dst+1_amd64.deb
|
|
|
|
salt-common_2014.1.13+ds-1~bpo70+1_all.deb
|
|
|
|
salt-minion_2014.1.13+ds-1~bpo70+1_all.deb
|
|
|
|
)
|
|
|
|
URL_BASE="https://storage.googleapis.com/kubernetes-release/salt"
|
|
|
|
|
2015-03-24 23:11:40 +00:00
|
|
|
for deb in "${DEBS[@]}"; do
|
2015-04-17 19:05:01 +00:00
|
|
|
if [ ! -e "${deb}" ]; then
|
2016-03-02 14:54:13 +00:00
|
|
|
download-or-bust "" "${URL_BASE}/${deb}"
|
2015-04-17 19:05:01 +00:00
|
|
|
fi
|
2015-03-24 23:11:40 +00:00
|
|
|
done
|
|
|
|
|
2015-03-18 23:11:10 +00:00
|
|
|
# Based on
|
|
|
|
# https://major.io/2014/06/26/install-debian-packages-without-starting-daemons/
|
|
|
|
# We do this to prevent Salt from starting the salt-minion
|
|
|
|
# daemon. The other packages don't have relevant daemons. (If you
|
|
|
|
# add a package that needs a daemon started, add it to a different
|
|
|
|
# list.)
|
|
|
|
cat > /usr/sbin/policy-rc.d <<EOF
|
|
|
|
#!/bin/sh
|
|
|
|
echo "Salt shall not start." >&2
|
|
|
|
exit 101
|
|
|
|
EOF
|
|
|
|
chmod 0755 /usr/sbin/policy-rc.d
|
|
|
|
|
2015-03-24 23:11:40 +00:00
|
|
|
for deb in "${DEBS[@]}"; do
|
|
|
|
echo "== Installing ${deb}, ignore dependency complaints (will fix later) =="
|
2015-04-17 19:05:01 +00:00
|
|
|
dpkg --skip-same-version --force-depends -i "${deb}"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
done
|
|
|
|
|
|
|
|
# This will install any of the unmet dependencies from above.
|
2015-03-24 23:11:40 +00:00
|
|
|
echo "== Installing unmet dependencies =="
|
|
|
|
until apt-get install -f -y; do
|
|
|
|
echo "== apt-get install failed, retrying =="
|
2016-01-18 02:29:01 +00:00
|
|
|
sleep 5
|
2015-03-24 23:11:40 +00:00
|
|
|
done
|
2015-03-18 23:11:10 +00:00
|
|
|
|
|
|
|
rm /usr/sbin/policy-rc.d
|
2015-03-29 20:58:14 +00:00
|
|
|
|
|
|
|
# Log a timestamp
|
|
|
|
echo "== Finished installing Salt =="
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2015-05-04 23:11:40 +00:00
|
|
|
# Ensure salt-minion isn't running and never runs
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
stop-salt-minion() {
|
2015-05-04 23:11:40 +00:00
|
|
|
if [[ -e /etc/init/salt-minion.override ]]; then
|
|
|
|
# Assume this has already run (upgrade, or baked into containervm)
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
# This ensures it on next reboot
|
|
|
|
echo manual > /etc/init/salt-minion.override
|
2015-04-17 19:05:01 +00:00
|
|
|
update-rc.d salt-minion disable
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
2015-05-04 23:11:40 +00:00
|
|
|
while service salt-minion status >/dev/null; do
|
|
|
|
echo "salt-minion found running, stopping"
|
|
|
|
service salt-minion stop
|
|
|
|
sleep 1
|
|
|
|
done
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
# Finds the master PD device; returns it in MASTER_PD_DEVICE
|
|
|
|
find-master-pd() {
|
|
|
|
MASTER_PD_DEVICE=""
|
|
|
|
if [[ ! -e /dev/disk/by-id/google-master-pd ]]; then
|
|
|
|
return
|
|
|
|
fi
|
|
|
|
device_info=$(ls -l /dev/disk/by-id/google-master-pd)
|
|
|
|
relative_path=${device_info##* }
|
|
|
|
MASTER_PD_DEVICE="/dev/disk/by-id/${relative_path}"
|
|
|
|
}
|
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
# Create the overlay files for the salt tree. We create these in a separate
|
|
|
|
# place so that we can blow away the rest of the salt configs on a kube-push and
|
|
|
|
# re-apply these.
|
|
|
|
function create-salt-pillar() {
|
|
|
|
# Always overwrite the cluster-params.sls (even on a push, we have
|
|
|
|
# these variables)
|
|
|
|
mkdir -p /srv/salt-overlay/pillar
|
|
|
|
cat <<EOF >/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
instance_prefix: '$(echo "$INSTANCE_PREFIX" | sed -e "s/'/''/g")'
|
2016-07-18 21:20:45 +00:00
|
|
|
node_tags: '$(echo "$NODE_TAGS" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
node_instance_prefix: '$(echo "$NODE_INSTANCE_PREFIX" | sed -e "s/'/''/g")'
|
2015-05-06 21:48:45 +00:00
|
|
|
cluster_cidr: '$(echo "$CLUSTER_IP_RANGE" | sed -e "s/'/''/g")'
|
2015-04-28 15:02:45 +00:00
|
|
|
allocate_node_cidrs: '$(echo "$ALLOCATE_NODE_CIDRS" | sed -e "s/'/''/g")'
|
2016-03-11 22:38:14 +00:00
|
|
|
non_masquerade_cidr: '$(echo "$NON_MASQUERADE_CIDR" | sed -e "s/'/''/g")'
|
2015-05-24 05:17:55 +00:00
|
|
|
service_cluster_ip_range: '$(echo "$SERVICE_CLUSTER_IP_RANGE" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
enable_cluster_monitoring: '$(echo "$ENABLE_CLUSTER_MONITORING" | sed -e "s/'/''/g")'
|
|
|
|
enable_cluster_logging: '$(echo "$ENABLE_CLUSTER_LOGGING" | sed -e "s/'/''/g")'
|
2015-07-23 18:30:53 +00:00
|
|
|
enable_cluster_ui: '$(echo "$ENABLE_CLUSTER_UI" | sed -e "s/'/''/g")'
|
2016-05-20 08:09:18 +00:00
|
|
|
enable_node_problem_detector: '$(echo "$ENABLE_NODE_PROBLEM_DETECTOR" | sed -e "s/'/''/g")'
|
2015-10-23 06:11:34 +00:00
|
|
|
enable_l7_loadbalancing: '$(echo "$ENABLE_L7_LOADBALANCING" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
enable_node_logging: '$(echo "$ENABLE_NODE_LOGGING" | sed -e "s/'/''/g")'
|
2016-08-17 13:00:29 +00:00
|
|
|
enable_rescheduler: '$(echo "$ENABLE_RESCHEDULER" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
logging_destination: '$(echo "$LOGGING_DESTINATION" | sed -e "s/'/''/g")'
|
|
|
|
elasticsearch_replicas: '$(echo "$ELASTICSEARCH_LOGGING_REPLICAS" | sed -e "s/'/''/g")'
|
|
|
|
enable_cluster_dns: '$(echo "$ENABLE_CLUSTER_DNS" | sed -e "s/'/''/g")'
|
2015-07-27 18:50:31 +00:00
|
|
|
enable_cluster_registry: '$(echo "$ENABLE_CLUSTER_REGISTRY" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
dns_server: '$(echo "$DNS_SERVER_IP" | sed -e "s/'/''/g")'
|
|
|
|
dns_domain: '$(echo "$DNS_DOMAIN" | sed -e "s/'/''/g")'
|
2016-11-07 18:44:42 +00:00
|
|
|
enable_dns_horizontal_autoscaler: '$(echo "$ENABLE_DNS_HORIZONTAL_AUTOSCALER" | sed -e "s/'/''/g")'
|
2015-03-12 02:57:18 +00:00
|
|
|
admission_control: '$(echo "$ADMISSION_CONTROL" | sed -e "s/'/''/g")'
|
2015-12-28 11:23:59 +00:00
|
|
|
network_provider: '$(echo "$NETWORK_PROVIDER" | sed -e "s/'/''/g")'
|
2016-05-18 02:16:32 +00:00
|
|
|
prepull_e2e_images: '$(echo "$PREPULL_E2E_IMAGES" | sed -e "s/'/''/g")'
|
2016-02-08 18:52:30 +00:00
|
|
|
hairpin_mode: '$(echo "$HAIRPIN_MODE" | sed -e "s/'/''/g")'
|
2016-12-03 00:09:16 +00:00
|
|
|
softlockup_panic: '$(echo "$SOFTLOCKUP_PANIC" | sed -e "s/'/''/g")'
|
2015-12-28 11:23:59 +00:00
|
|
|
opencontrail_tag: '$(echo "$OPENCONTRAIL_TAG" | sed -e "s/'/''/g")'
|
2015-10-03 15:03:02 +00:00
|
|
|
opencontrail_kubernetes_tag: '$(echo "$OPENCONTRAIL_KUBERNETES_TAG")'
|
|
|
|
opencontrail_public_subnet: '$(echo "$OPENCONTRAIL_PUBLIC_SUBNET")'
|
2016-05-21 16:14:38 +00:00
|
|
|
network_policy_provider: '$(echo "$NETWORK_POLICY_PROVIDER" | sed -e "s/'/''/g")'
|
2016-04-02 02:10:26 +00:00
|
|
|
enable_manifest_url: '$(echo "${ENABLE_MANIFEST_URL:-}" | sed -e "s/'/''/g")'
|
|
|
|
manifest_url: '$(echo "${MANIFEST_URL:-}" | sed -e "s/'/''/g")'
|
|
|
|
manifest_url_header: '$(echo "${MANIFEST_URL_HEADER:-}" | sed -e "s/'/''/g")'
|
|
|
|
num_nodes: $(echo "${NUM_NODES:-}" | sed -e "s/'/''/g")
|
2015-10-29 10:03:34 +00:00
|
|
|
e2e_storage_test_environment: '$(echo "$E2E_STORAGE_TEST_ENVIRONMENT" | sed -e "s/'/''/g")'
|
2016-04-05 00:28:52 +00:00
|
|
|
kube_uid: '$(echo "${KUBE_UID}" | sed -e "s/'/''/g")'
|
2016-07-29 17:33:50 +00:00
|
|
|
initial_etcd_cluster: '$(echo "${INITIAL_ETCD_CLUSTER:-}" | sed -e "s/'/''/g")'
|
2017-02-13 15:10:47 +00:00
|
|
|
initial_etcd_cluster_state: '$(echo "${INITIAL_ETCD_CLUSTER_STATE:-}" | sed -e "s/'/''/g")'
|
2017-02-17 23:06:55 +00:00
|
|
|
ca_cert_bundle_path: '$(echo "${CA_CERT_BUNDLE_PATH:-}" | sed -e "s/'/''/g")'
|
2016-08-04 03:19:20 +00:00
|
|
|
hostname: $(hostname -s)
|
2016-11-21 09:16:29 +00:00
|
|
|
enable_default_storage_class: '$(echo "$ENABLE_DEFAULT_STORAGE_CLASS" | sed -e "s/'/''/g")'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
EOF
|
2016-11-04 12:46:01 +00:00
|
|
|
if [ -n "${STORAGE_BACKEND:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
storage_backend: '$(echo "$STORAGE_BACKEND" | sed -e "s/'/''/g")'
|
2017-02-17 12:27:52 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${STORAGE_MEDIA_TYPE:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
storage_media_type: '$(echo "$STORAGE_MEDIA_TYPE" | sed -e "s/'/''/g")'
|
2016-11-04 12:46:01 +00:00
|
|
|
EOF
|
|
|
|
fi
|
2016-08-18 20:42:57 +00:00
|
|
|
if [ -n "${ADMISSION_CONTROL:-}" ] && [ ${ADMISSION_CONTROL} == *"ImagePolicyWebhook"* ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
admission-control-config-file: /etc/admission_controller.config
|
|
|
|
EOF
|
|
|
|
fi
|
2015-11-25 08:35:51 +00:00
|
|
|
if [ -n "${KUBELET_PORT:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
kubelet_port: '$(echo "$KUBELET_PORT" | sed -e "s/'/''/g")'
|
|
|
|
EOF
|
|
|
|
fi
|
2016-10-31 09:40:41 +00:00
|
|
|
if [ -n "${ETCD_IMAGE:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
etcd_docker_tag: '$(echo "$ETCD_IMAGE" | sed -e "s/'/''/g")'
|
|
|
|
EOF
|
|
|
|
fi
|
2016-10-27 17:27:02 +00:00
|
|
|
if [ -n "${ETCD_VERSION:-}" ]; then
|
2016-08-18 07:55:16 +00:00
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
2016-10-31 09:40:41 +00:00
|
|
|
etcd_version: '$(echo "$ETCD_VERSION" | sed -e "s/'/''/g")'
|
2016-10-20 09:08:43 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [[ -n "${ETCD_CA_KEY:-}" && -n "${ETCD_CA_CERT:-}" && -n "${ETCD_PEER_KEY:-}" && -n "${ETCD_PEER_CERT:-}" ]]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
etcd_over_ssl: 'true'
|
|
|
|
EOF
|
|
|
|
else
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
etcd_over_ssl: 'false'
|
2016-12-07 14:43:47 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${ETCD_QUORUM_READ:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
etcd_quorum_read: '$(echo "${ETCD_QUORUM_READ}" | sed -e "s/'/''/g")'
|
2016-08-18 07:55:16 +00:00
|
|
|
EOF
|
|
|
|
fi
|
2016-10-31 09:40:41 +00:00
|
|
|
# Configuration changes for test clusters
|
2015-05-26 08:43:48 +00:00
|
|
|
if [ -n "${APISERVER_TEST_ARGS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
apiserver_test_args: '$(echo "$APISERVER_TEST_ARGS" | sed -e "s/'/''/g")'
|
2015-12-23 16:02:20 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${API_SERVER_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
api_server_test_log_level: '$(echo "$API_SERVER_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2015-05-26 08:43:48 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${KUBELET_TEST_ARGS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
kubelet_test_args: '$(echo "$KUBELET_TEST_ARGS" | sed -e "s/'/''/g")'
|
2015-12-23 16:02:20 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${KUBELET_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
kubelet_test_log_level: '$(echo "$KUBELET_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2016-03-04 19:47:50 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${DOCKER_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
docker_test_log_level: '$(echo "$DOCKER_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2015-05-26 08:43:48 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${CONTROLLER_MANAGER_TEST_ARGS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
controller_manager_test_args: '$(echo "$CONTROLLER_MANAGER_TEST_ARGS" | sed -e "s/'/''/g")'
|
2015-12-23 16:02:20 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${CONTROLLER_MANAGER_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
controller_manager_test_log_level: '$(echo "$CONTROLLER_MANAGER_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2015-05-26 08:43:48 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${SCHEDULER_TEST_ARGS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
scheduler_test_args: '$(echo "$SCHEDULER_TEST_ARGS" | sed -e "s/'/''/g")'
|
2015-12-23 16:02:20 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${SCHEDULER_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
scheduler_test_log_level: '$(echo "$SCHEDULER_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2015-05-26 08:43:48 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${KUBEPROXY_TEST_ARGS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
kubeproxy_test_args: '$(echo "$KUBEPROXY_TEST_ARGS" | sed -e "s/'/''/g")'
|
2015-12-23 16:02:20 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${KUBEPROXY_TEST_LOG_LEVEL:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
kubeproxy_test_log_level: '$(echo "$KUBEPROXY_TEST_LOG_LEVEL" | sed -e "s/'/''/g")'
|
2015-07-27 18:50:31 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
# TODO: Replace this with a persistent volume (and create it).
|
|
|
|
if [[ "${ENABLE_CLUSTER_REGISTRY}" == true && -n "${CLUSTER_REGISTRY_DISK}" ]]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
2015-08-19 22:47:55 +00:00
|
|
|
cluster_registry_disk_type: gce
|
2015-12-28 11:23:59 +00:00
|
|
|
cluster_registry_disk_size: $(echo $(convert-bytes-gce-kube ${CLUSTER_REGISTRY_DISK_SIZE}) | sed -e "s/'/''/g")
|
|
|
|
cluster_registry_disk_name: $(echo ${CLUSTER_REGISTRY_DISK} | sed -e "s/'/''/g")
|
2015-10-09 16:55:49 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${TERMINATED_POD_GC_THRESHOLD:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
terminated_pod_gc_threshold: '$(echo "${TERMINATED_POD_GC_THRESHOLD}" | sed -e "s/'/''/g")'
|
2016-02-07 17:26:49 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${ENABLE_CUSTOM_METRICS:-}" ]; then
|
2016-02-18 23:58:08 +00:00
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
2016-02-07 17:26:49 +00:00
|
|
|
enable_custom_metrics: '$(echo "${ENABLE_CUSTOM_METRICS}" | sed -e "s/'/''/g")'
|
2016-02-18 23:58:08 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${NODE_LABELS:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
node_labels: '$(echo "${NODE_LABELS}" | sed -e "s/'/''/g")'
|
2016-06-03 23:06:28 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${EVICTION_HARD:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
eviction_hard: '$(echo "${EVICTION_HARD}" | sed -e "s/'/''/g")'
|
2015-05-26 08:43:48 +00:00
|
|
|
EOF
|
|
|
|
fi
|
2016-06-07 20:10:17 +00:00
|
|
|
if [[ "${ENABLE_CLUSTER_AUTOSCALER:-false}" == "true" ]]; then
|
2016-05-09 14:23:00 +00:00
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
2016-06-07 20:10:17 +00:00
|
|
|
enable_cluster_autoscaler: '$(echo "${ENABLE_CLUSTER_AUTOSCALER}" | sed -e "s/'/''/g")'
|
2016-05-09 14:23:00 +00:00
|
|
|
autoscaler_mig_config: '$(echo "${AUTOSCALER_MIG_CONFIG}" | sed -e "s/'/''/g")'
|
|
|
|
EOF
|
|
|
|
fi
|
2016-06-27 21:44:32 +00:00
|
|
|
if [[ "${FEDERATION:-}" == "true" ]]; then
|
2016-07-06 17:40:11 +00:00
|
|
|
local federations_domain_map="${FEDERATIONS_DOMAIN_MAP:-}"
|
|
|
|
if [[ -z "${federations_domain_map}" && -n "${FEDERATION_NAME:-}" && -n "${DNS_ZONE_NAME:-}" ]]; then
|
|
|
|
federations_domain_map="${FEDERATION_NAME}=${DNS_ZONE_NAME}"
|
2016-06-27 21:44:32 +00:00
|
|
|
fi
|
2016-07-06 17:40:11 +00:00
|
|
|
if [[ -n "${federations_domain_map}" ]]; then
|
2016-06-27 21:44:32 +00:00
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
2016-07-06 17:40:11 +00:00
|
|
|
federations_domain_map: '$(echo "- --federations=${federations_domain_map}" | sed -e "s/'/''/g")'
|
2016-06-27 21:44:32 +00:00
|
|
|
EOF
|
|
|
|
else
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
federations_domain_map: ''
|
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
else
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
federations_domain_map: ''
|
2016-08-22 09:09:05 +00:00
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
if [ -n "${SCHEDULING_ALGORITHM_PROVIDER:-}" ]; then
|
|
|
|
cat <<EOF >>/srv/salt-overlay/pillar/cluster-params.sls
|
|
|
|
scheduling_algorithm_provider: '$(echo "${SCHEDULING_ALGORITHM_PROVIDER}" | sed -e "s/'/''/g")'
|
2016-06-27 21:44:32 +00:00
|
|
|
EOF
|
|
|
|
fi
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2015-08-20 00:19:37 +00:00
|
|
|
# The job of this function is simple, but the basic regular expression syntax makes
|
|
|
|
# this difficult to read. What we want to do is convert from [0-9]+B, KB, KiB, MB, etc
|
|
|
|
# into [0-9]+, Ki, Mi, Gi, etc.
|
|
|
|
# This is done in two steps:
|
|
|
|
# 1. Convert from [0-9]+X?i?B into [0-9]X? (X denotes the prefix, ? means the field
|
|
|
|
# is optional.
|
|
|
|
# 2. Attach an 'i' to the end of the string if we find a letter.
|
|
|
|
# The two step process is needed to handle the edge case in which we want to convert
|
|
|
|
# a raw byte count, as the result should be a simple number (e.g. 5B -> 5).
|
|
|
|
function convert-bytes-gce-kube() {
|
|
|
|
local -r storage_space=$1
|
|
|
|
echo "${storage_space}" | sed -e 's/^\([0-9]\+\)\([A-Z]\)\?i\?B$/\1\2/g' -e 's/\([A-Z]\)$/\1i/'
|
|
|
|
}
|
|
|
|
|
2015-07-17 23:13:01 +00:00
|
|
|
# This should happen both on cluster initialization and node upgrades.
|
|
|
|
#
|
2015-08-04 18:14:46 +00:00
|
|
|
# - Uses KUBELET_CA_CERT (falling back to CA_CERT), KUBELET_CERT, and
|
|
|
|
# KUBELET_KEY to generate a kubeconfig file for the kubelet to securely
|
|
|
|
# connect to the apiserver.
|
|
|
|
|
2015-07-17 23:13:01 +00:00
|
|
|
function create-salt-kubelet-auth() {
|
|
|
|
local -r kubelet_kubeconfig_file="/srv/salt-overlay/salt/kubelet/kubeconfig"
|
2015-05-12 03:44:13 +00:00
|
|
|
if [ ! -e "${kubelet_kubeconfig_file}" ]; then
|
2015-04-03 21:48:39 +00:00
|
|
|
mkdir -p /srv/salt-overlay/salt/kubelet
|
2015-07-17 23:13:01 +00:00
|
|
|
(umask 077;
|
|
|
|
cat > "${kubelet_kubeconfig_file}" <<EOF
|
2015-05-11 18:43:44 +00:00
|
|
|
apiVersion: v1
|
|
|
|
kind: Config
|
|
|
|
users:
|
|
|
|
- name: kubelet
|
|
|
|
user:
|
2017-02-17 23:06:55 +00:00
|
|
|
client-certificate: ${KUBELET_CERT_PATH}
|
|
|
|
client-key: ${KUBELET_KEY_PATH}
|
2015-05-11 18:43:44 +00:00
|
|
|
clusters:
|
|
|
|
- name: local
|
|
|
|
cluster:
|
2016-05-21 16:14:38 +00:00
|
|
|
server: https://kubernetes-master
|
2017-02-17 23:06:55 +00:00
|
|
|
certificate-authority: ${CA_CERT_BUNDLE_PATH}
|
2015-05-11 18:43:44 +00:00
|
|
|
contexts:
|
|
|
|
- context:
|
|
|
|
cluster: local
|
|
|
|
user: kubelet
|
|
|
|
name: service-account-context
|
|
|
|
current-context: service-account-context
|
|
|
|
EOF
|
|
|
|
)
|
2015-05-08 23:30:20 +00:00
|
|
|
fi
|
2015-07-17 23:13:01 +00:00
|
|
|
}
|
2015-04-03 21:48:39 +00:00
|
|
|
|
2015-07-17 23:13:01 +00:00
|
|
|
# This should happen both on cluster initialization and node upgrades.
|
|
|
|
#
|
|
|
|
# - Uses the CA_CERT and KUBE_PROXY_TOKEN to generate a kubeconfig file for
|
|
|
|
# the kube-proxy to securely connect to the apiserver.
|
|
|
|
function create-salt-kubeproxy-auth() {
|
|
|
|
local -r kube_proxy_kubeconfig_file="/srv/salt-overlay/salt/kube-proxy/kubeconfig"
|
2015-05-08 23:30:20 +00:00
|
|
|
if [ ! -e "${kube_proxy_kubeconfig_file}" ]; then
|
Generate a token for kube-proxy.
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
Probably will work on other up-to-date providers
but beware. Symptom would be that service proxying
stops working.
1. Generates a token kube-proxy in AWS, GCE, and Vagrant setup scripts.
1. Distributes the token via salt-overlay, and salt to /var/lib/kube-proxy/kubeconfig
1. Changes kube-proxy args:
- use the --kubeconfig argument
- changes --master argument from http://MASTER:7080 to https://MASTER
- http -> https
- explicit port 7080 -> implied 443
Possible ways this might break other distros:
Mitigation: there is an default empty kubeconfig file.
If the distro does not populate the salt-overlay, then
it should get the empty, which parses to an empty
object, which, combined with the --master argument,
should still work.
Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
2015-04-24 16:27:11 +00:00
|
|
|
mkdir -p /srv/salt-overlay/salt/kube-proxy
|
2015-07-17 23:13:01 +00:00
|
|
|
(umask 077;
|
2015-05-11 18:43:44 +00:00
|
|
|
cat > "${kube_proxy_kubeconfig_file}" <<EOF
|
|
|
|
apiVersion: v1
|
|
|
|
kind: Config
|
|
|
|
users:
|
|
|
|
- name: kube-proxy
|
|
|
|
user:
|
|
|
|
token: ${KUBE_PROXY_TOKEN}
|
|
|
|
clusters:
|
|
|
|
- name: local
|
|
|
|
cluster:
|
2017-02-17 23:06:55 +00:00
|
|
|
certificate-authority-data: ${CA_CERT_BUNDLE}
|
2015-05-11 18:43:44 +00:00
|
|
|
contexts:
|
|
|
|
- context:
|
|
|
|
cluster: local
|
|
|
|
user: kube-proxy
|
|
|
|
name: service-account-context
|
|
|
|
current-context: service-account-context
|
|
|
|
EOF
|
|
|
|
)
|
2015-04-03 21:48:39 +00:00
|
|
|
fi
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
function split-commas() {
|
|
|
|
echo $1 | tr "," "\n"
|
|
|
|
}
|
|
|
|
|
2015-06-18 18:31:21 +00:00
|
|
|
function try-download-release() {
|
|
|
|
# TODO(zmerlynn): Now we REALLy have no excuse not to do the reboot
|
|
|
|
# optimization.
|
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
local -r server_binary_tar_urls=( $(split-commas "${SERVER_BINARY_TAR_URL}") )
|
|
|
|
local -r server_binary_tar="${server_binary_tar_urls[0]##*/}"
|
|
|
|
if [[ -n "${SERVER_BINARY_TAR_HASH:-}" ]]; then
|
|
|
|
local -r server_binary_tar_hash="${SERVER_BINARY_TAR_HASH}"
|
|
|
|
else
|
2015-06-18 18:31:21 +00:00
|
|
|
echo "Downloading binary release sha1 (not found in env)"
|
2016-03-01 00:23:54 +00:00
|
|
|
download-or-bust "" "${server_binary_tar_urls[@]/.tar.gz/.tar.gz.sha1}"
|
|
|
|
local -r server_binary_tar_hash=$(cat "${server_binary_tar}.sha1")
|
2015-06-18 18:31:21 +00:00
|
|
|
fi
|
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
echo "Downloading binary release tar (${server_binary_tar_urls[@]})"
|
|
|
|
download-or-bust "${server_binary_tar_hash}" "${server_binary_tar_urls[@]}"
|
2015-06-18 18:31:21 +00:00
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
local -r salt_tar_urls=( $(split-commas "${SALT_TAR_URL}") )
|
|
|
|
local -r salt_tar="${salt_tar_urls[0]##*/}"
|
|
|
|
if [[ -n "${SALT_TAR_HASH:-}" ]]; then
|
|
|
|
local -r salt_tar_hash="${SALT_TAR_HASH}"
|
|
|
|
else
|
2015-06-18 18:31:21 +00:00
|
|
|
echo "Downloading Salt tar sha1 (not found in env)"
|
2016-03-01 00:23:54 +00:00
|
|
|
download-or-bust "" "${salt_tar_urls[@]/.tar.gz/.tar.gz.sha1}"
|
|
|
|
local -r salt_tar_hash=$(cat "${salt_tar}.sha1")
|
2015-06-18 18:31:21 +00:00
|
|
|
fi
|
|
|
|
|
2016-03-01 00:23:54 +00:00
|
|
|
echo "Downloading Salt tar (${salt_tar_urls[@]})"
|
|
|
|
download-or-bust "${salt_tar_hash}" "${salt_tar_urls[@]}"
|
2015-06-18 18:31:21 +00:00
|
|
|
|
|
|
|
echo "Unpacking Salt tree and checking integrity of binary release tar"
|
|
|
|
rm -rf kubernetes
|
2016-03-01 00:23:54 +00:00
|
|
|
tar xzf "${salt_tar}" && tar tzf "${server_binary_tar}" > /dev/null
|
2015-06-18 18:31:21 +00:00
|
|
|
}
|
|
|
|
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
function download-release() {
|
2015-06-18 18:31:21 +00:00
|
|
|
# In case of failure checking integrity of release, retry.
|
|
|
|
until try-download-release; do
|
2015-05-29 11:01:05 +00:00
|
|
|
sleep 15
|
2015-06-18 18:31:21 +00:00
|
|
|
echo "Couldn't download release. Retrying..."
|
2015-05-29 11:01:05 +00:00
|
|
|
done
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
|
|
|
echo "Running release install script"
|
2015-11-18 02:13:24 +00:00
|
|
|
kubernetes/saltbase/install.sh "${SERVER_BINARY_TAR_URL##*/}"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
function fix-apt-sources() {
|
|
|
|
sed -i -e "\|^deb.*http://http.debian.net/debian| s/^/#/" /etc/apt/sources.list
|
|
|
|
sed -i -e "\|^deb.*http://ftp.debian.org/debian| s/^/#/" /etc/apt/sources.list.d/backports.list
|
|
|
|
}
|
|
|
|
|
|
|
|
function salt-run-local() {
|
|
|
|
cat <<EOF >/etc/salt/minion.d/local.conf
|
|
|
|
file_client: local
|
|
|
|
file_roots:
|
|
|
|
base:
|
|
|
|
- /srv/salt
|
|
|
|
EOF
|
|
|
|
}
|
|
|
|
|
|
|
|
function salt-debug-log() {
|
|
|
|
cat <<EOF >/etc/salt/minion.d/log-level-debug.conf
|
|
|
|
log_level: debug
|
|
|
|
log_level_logfile: debug
|
|
|
|
EOF
|
|
|
|
}
|
|
|
|
|
|
|
|
function salt-node-role() {
|
|
|
|
cat <<EOF >/etc/salt/minion.d/grains.conf
|
|
|
|
grains:
|
|
|
|
roles:
|
|
|
|
- kubernetes-pool
|
|
|
|
cloud: gce
|
2015-08-04 18:14:46 +00:00
|
|
|
api_servers: '${KUBERNETES_MASTER_NAME}'
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
EOF
|
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
function env-to-grains {
|
|
|
|
local key=$1
|
|
|
|
local env_key=`echo $key | tr '[:lower:]' '[:upper:]'`
|
|
|
|
local value=${!env_key:-}
|
|
|
|
if [[ -n "${value}" ]]; then
|
|
|
|
# Note this is yaml, so indentation matters
|
|
|
|
cat <<EOF >>/etc/salt/minion.d/grains.conf
|
|
|
|
${key}: '$(echo "${value}" | sed -e "s/'/''/g")'
|
|
|
|
EOF
|
|
|
|
fi
|
|
|
|
}
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
function node-docker-opts() {
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
if [[ -n "${EXTRA_DOCKER_OPTS-}" ]]; then
|
2016-02-04 14:36:30 +00:00
|
|
|
DOCKER_OPTS="${DOCKER_OPTS:-} ${EXTRA_DOCKER_OPTS}"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
fi
|
2016-05-18 17:02:33 +00:00
|
|
|
|
|
|
|
# Decide whether to enable a docker registry mirror. This is taken from
|
|
|
|
# the "kube-env" metadata value.
|
|
|
|
if [[ -n "${DOCKER_REGISTRY_MIRROR_URL:-}" ]]; then
|
|
|
|
echo "Enable docker registry mirror at: ${DOCKER_REGISTRY_MIRROR_URL}"
|
|
|
|
DOCKER_OPTS="${DOCKER_OPTS:-} --registry-mirror=${DOCKER_REGISTRY_MIRROR_URL}"
|
|
|
|
fi
|
2016-02-04 14:36:30 +00:00
|
|
|
}
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
function salt-grains() {
|
|
|
|
env-to-grains "docker_opts"
|
|
|
|
env-to-grains "docker_root"
|
|
|
|
env-to-grains "kubelet_root"
|
2016-08-26 00:34:41 +00:00
|
|
|
env-to-grains "feature_gates"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
function configure-salt() {
|
|
|
|
mkdir -p /etc/salt/minion.d
|
|
|
|
salt-run-local
|
2017-02-17 23:07:17 +00:00
|
|
|
salt-node-role
|
|
|
|
node-docker-opts
|
2016-02-04 14:36:30 +00:00
|
|
|
salt-grains
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
install-salt
|
|
|
|
stop-salt-minion
|
|
|
|
}
|
|
|
|
|
|
|
|
function run-salt() {
|
2015-03-29 20:58:14 +00:00
|
|
|
echo "== Calling Salt =="
|
2016-10-11 00:09:01 +00:00
|
|
|
local rc=0
|
|
|
|
for i in {0..6}; do
|
|
|
|
salt-call --local state.highstate && rc=0 || rc=$?
|
|
|
|
if [[ "${rc}" == 0 ]]; then
|
|
|
|
return 0
|
|
|
|
fi
|
|
|
|
done
|
|
|
|
echo "Salt failed to run repeatedly" >&2
|
|
|
|
return "${rc}"
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
}
|
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
function run-user-script() {
|
|
|
|
if curl-metadata k8s-user-startup-script > "${INSTALL_DIR}/k8s-user-script.sh"; then
|
|
|
|
user_script=$(cat "${INSTALL_DIR}/k8s-user-script.sh")
|
|
|
|
fi
|
|
|
|
if [[ ! -z ${user_script:-} ]]; then
|
|
|
|
chmod u+x "${INSTALL_DIR}/k8s-user-script.sh"
|
|
|
|
echo "== running user startup script =="
|
|
|
|
"${INSTALL_DIR}/k8s-user-script.sh"
|
|
|
|
fi
|
|
|
|
}
|
|
|
|
|
2017-02-17 23:07:17 +00:00
|
|
|
if [[ "${KUBERNETES_MASTER:-}" == "true" ]]; then
|
|
|
|
echo "Support for debian master has been removed"
|
|
|
|
exit 1
|
|
|
|
fi
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
|
|
|
|
if [[ -z "${is_push}" ]]; then
|
|
|
|
echo "== kube-up node config starting =="
|
|
|
|
set-broken-motd
|
2017-02-27 22:07:07 +00:00
|
|
|
config-ip-firewall
|
2015-05-14 02:05:58 +00:00
|
|
|
ensure-basic-networking
|
2016-02-04 14:36:30 +00:00
|
|
|
fix-apt-sources
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
ensure-install-dir
|
2016-02-04 14:36:30 +00:00
|
|
|
ensure-packages
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
set-kube-env
|
2016-02-28 01:43:51 +00:00
|
|
|
auto-upgrade
|
2016-02-04 14:36:30 +00:00
|
|
|
ensure-local-disks
|
2017-02-17 23:06:55 +00:00
|
|
|
create-node-pki
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
create-salt-pillar
|
2017-02-17 23:07:17 +00:00
|
|
|
create-salt-kubelet-auth
|
|
|
|
create-salt-kubeproxy-auth
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
download-release
|
|
|
|
configure-salt
|
|
|
|
remove-docker-artifacts
|
|
|
|
run-salt
|
2016-02-12 21:58:06 +00:00
|
|
|
reset-motd
|
2015-12-01 19:39:43 +00:00
|
|
|
|
2016-02-04 14:36:30 +00:00
|
|
|
run-user-script
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
echo "== kube-up node config done =="
|
|
|
|
else
|
|
|
|
echo "== kube-push node config starting =="
|
2015-05-14 02:05:58 +00:00
|
|
|
ensure-basic-networking
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
ensure-install-dir
|
|
|
|
set-kube-env
|
|
|
|
create-salt-pillar
|
2015-03-13 01:04:36 +00:00
|
|
|
download-release
|
2016-02-12 21:58:06 +00:00
|
|
|
reset-motd
|
Change GCE to use standalone Saltstack config:
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes ala
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring Salt daemon is NOT running after install. Kill
Salt master install. And fix push to actually work in this new flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons: - The minions
can't use Salt to see the master: this is easily fixed by static
config. - The master can't see the list of all the minions: this is
fixed temporarily by static config in util.sh, but later, by other
means (see
https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
2015-03-02 22:38:58 +00:00
|
|
|
run-salt
|
|
|
|
echo "== kube-push node config done =="
|
|
|
|
fi
|