Initial kube-up support for VMware's Photon Controller

This is for: https://github.com/kubernetes/kubernetes/issues/24121

Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/

This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons

It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.

Internally, we have implemented continuous integration testing. Once this
is integrated, we will run it multiple times per day against the Kubernetes
master branch so we can react quickly to problems.

A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown

Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.

We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some users
get started more quickly, and it will be documented.
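
As a quick sketch of the intended invocation (the target URL and VMDK path
are placeholders; running the script with no arguments prints the full
usage message):

  ./cluster/photon-controller/setup-prereq.sh https://192.0.2.2 /path/to/kube.vmdk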

We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
Alain Roy 2016-03-08 11:25:41 -08:00
parent 82458d8f46
commit fa9d79df75
22 changed files with 1829 additions and 15 deletions


@@ -19,6 +19,9 @@ spec:
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
{% if grains['cloud'] is defined and grains['cloud'] in [ 'vsphere', 'photon-controller' ] %}
      hostNetwork: true
{% endif %}
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1


@@ -34,6 +34,8 @@
# * export KUBERNETES_PROVIDER=vagrant; wget -q -O - https://get.k8s.io | bash
# VMWare VSphere
# * export KUBERNETES_PROVIDER=vsphere; wget -q -O - https://get.k8s.io | bash
# VMWare Photon Controller
# * export KUBERNETES_PROVIDER=photon-controller; wget -q -O - https://get.k8s.io | bash
# Rackspace
# * export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash
#


@@ -0,0 +1,72 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
#
# These parameters describe objects we are using from
# Photon Controller. They are all assumed to be pre-existing.
#
# Note: if you want help in creating them, you can use
# the setup-prereq.sh script, which will create any of these
# that do not already exist.
#
##########################################################
# Pre-created tenant for Kubernetes to use
PHOTON_TENANT=kube-tenant
# Pre-created project in PHOTON_TENANT for Kubernetes to use
PHOTON_PROJECT=kube-project
# Pre-created VM flavor for the Kubernetes master to use
# Can be the same flavor as the nodes
# We recommend at least 1GB of memory
PHOTON_MASTER_FLAVOR=kube-vm
# Pre-created VM flavor for the Kubernetes nodes to use
# Can be the same flavor as the master
# We recommend at least 2GB of memory
PHOTON_NODE_FLAVOR=kube-vm
# Pre-created disk flavor for Kubernetes to use
PHOTON_DISK_FLAVOR=kube-disk
# Pre-created Debian 8 image, with the kube user, uploaded to Photon Controller
# Note: While Photon Controller allows multiple images to have the same
# name, we assume that there is exactly one image with this name.
PHOTON_IMAGE=kube
##########################################################
#
# Parameters just for the setup-prereq.sh script: not used
# elsewhere. If you create the above objects by hand, you
# do not need to edit these.
#
# Note that setup-prereq.sh also creates the objects
# above.
#
##########################################################
# The specifications for the master and node flavors
SETUP_MASTER_FLAVOR_SPEC="vm 1 COUNT, vm.cpu 1 COUNT, vm.memory 2 GB"
SETUP_NODE_FLAVOR_SPEC=${SETUP_MASTER_FLAVOR_SPEC}
# The specification for the ephemeral disk flavor.
SETUP_DISK_FLAVOR_SPEC="ephemeral-disk 1 COUNT"
# The specification for the tenant resource ticket and the project resources
SETUP_TICKET_SPEC="vm.memory 1000 GB, vm 1000 COUNT"
SETUP_PROJECT_SPEC="${SETUP_TICKET_SPEC}"
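# For reference, these spec strings are passed verbatim as the photon CLI's
# --cost/--limits arguments. For example, creating the master flavor by hand
# (equivalent to what setup-prereq.sh does) would look like:
#
#   photon -n flavor create --name "kube-vm" --kind "vm" \
#     --cost "vm 1 COUNT, vm.cpu 1 COUNT, vm.memory 2 GB"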


@@ -0,0 +1,92 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################
#
# Common parameters for Kubernetes
#
##########################################################
# Default number of nodes to make. You can change this as needed
NUM_NODES=3
# Range of IPs assigned to pods
NODE_IP_RANGES="10.244.0.0/16"
# IPs used by Kubernetes master
MASTER_IP_RANGE="${MASTER_IP_RANGE:-10.246.0.0/24}"
# Range of IPs assigned by Kubernetes to services
SERVICE_CLUSTER_IP_RANGE="10.244.240.0/20"
##########################################################
#
# Advanced parameters for Kubernetes
#
##########################################################
# The instance prefix is the beginning of the name given to each VM we create.
# If you change it, you can run multiple Kubernetes clusters in a single project.
# Note that even if you don't change it, each tenant/project can have its own
# Kubernetes cluster.
INSTANCE_PREFIX=kubernetes
# Name of the user used to configure the VM
# We use cloud-init to create the user
VM_USER=kube
# SSH options for how we connect to the Kubernetes VMs
# We set the user known-hosts file to /dev/null because we are connecting to new VMs.
# In environments with a lot of VM churn, IP addresses are reused but the host keys
# change, which would otherwise produce spurious SSH errors. Because nothing is
# saved to the known_hosts file, users still get the standard SSH security checks
# everywhere else.
SSH_OPTS="-oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR"
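# Illustrative use of these options (node_ip is a hypothetical variable here;
# util.sh resolves the real node addresses):
#   ssh ${SSH_OPTS} "${VM_USER}@${node_ip}" 'uptime'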
# Optional: Enable node logging.
# Note: currently untested
ENABLE_NODE_LOGGING=false
LOGGING_DESTINATION=elasticsearch
# Optional: When set to true, Elasticsearch and Kibana will be setup
# Note: currently untested
ENABLE_CLUSTER_LOGGING=false
ELASTICSEARCH_LOGGING_REPLICAS=1
# Optional: Cluster monitoring to setup as part of the cluster bring up:
# none - No cluster monitoring setup
# influxdb - Heapster, InfluxDB, and Grafana
# google - Heapster, Google Cloud Monitoring, and Google Cloud Logging
# Note: currently untested
ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}"
# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.244.240.240"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
# Optional: Install Kubernetes UI
ENABLE_CLUSTER_UI=true
# We need to configure subject alternate names (SANs) for the master's certificate
# that we generate. While users will connect via the external IP, pods (like the UI)
# will connect via the cluster IP, from the SERVICE_CLUSTER_IP_RANGE.
# In addition to the extra SANs here, we'll also add one for the service IP.
MASTER_EXTRA_SANS="DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.${DNS_DOMAIN}"
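# Illustrative: with SERVICE_CLUSTER_IP_RANGE=10.244.240.0/20 above, the
# service IP added to the SANs is the first address in that range,
# 10.244.240.1, so the certificate verifies for both the DNS names and the
# in-cluster service IP.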
# Optional: if set to true, kube-up will configure the cluster to run e2e tests.
E2E_STORAGE_TEST_ENVIRONMENT=${KUBE_E2E_STORAGE_TEST_ENVIRONMENT:-false}
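
Several of the optional settings above read a KUBE_-prefixed environment
variable, so they can be overridden at kube-up time without editing this
file. A sketch (values illustrative):

  KUBE_ENABLE_CLUSTER_MONITORING=none KUBE_E2E_STORAGE_TEST_ENVIRONMENT=true \
    KUBERNETES_PROVIDER=photon-controller ./cluster/kube-up.sh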


@@ -0,0 +1,20 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NUM_NODES=2
NODE_IP_RANGES="10.244.0.0/16"
MASTER_IP_RANGE="${MASTER_IP_RANGE:-10.246.0.0/24}"
SERVICE_CLUSTER_IP_RANGE="10.244.240.0/20"


@@ -0,0 +1,239 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This sets up a Photon Controller with the tenant, project, flavors
# and image that are needed to deploy Kubernetes with kube-up.
#
# This is not meant to be used in production: it creates resource tickets
# (quotas) that are arbitrary and not likely to work in your environment.
# However, it may be a quick way to get your environment set up to try out
# a Kubernetes installation.
#
# It uses the names for the tenant, project, and flavors as specified in the
# config-common.sh file.
#
# If you want to do this by hand, this script is equivalent to the following
# Photon Controller commands (assuming you haven't edited config-common.sh
# to change the names):
#
# photon target set https://192.0.2.2
# photon tenant create kube-tenant
# photon tenant set kube-tenant
# photon resource-ticket create --tenant kube-tenant --name kube-resources --limits "vm.memory 1000 GB, vm 1000 COUNT"
# photon project create --tenant kube-tenant --resource-ticket kube-resources --name kube-project --limits "vm.memory 1000 GB, vm 1000 COUNT"
# photon project set kube-project
# photon -n flavor create --name "kube-vm" --kind "vm" --cost "vm 1 COUNT, vm.cpu 1 COUNT, vm.memory 2 GB"
# photon -n flavor create --name "kube-disk" --kind "ephemeral-disk" --cost "ephemeral-disk 1 COUNT"
# photon image create kube.vmdk -n kube -i EAGER
#
# Note that the kube.vmdk can be downloaded as specified in the documentation.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE[0]}")/../..
# shellcheck source=./util.sh
source "${KUBE_ROOT}/cluster/photon-controller/util.sh"
function main {
  verify-cmd-in-path photon
  set-target
  create-tenant
  create-project
  create-vm-flavor "${PHOTON_MASTER_FLAVOR}" "${SETUP_MASTER_FLAVOR_SPEC}"
  if [[ "${PHOTON_MASTER_FLAVOR}" != "${PHOTON_NODE_FLAVOR}" ]]; then
    create-vm-flavor "${PHOTON_NODE_FLAVOR}" "${SETUP_NODE_FLAVOR_SPEC}"
  fi
  create-disk-flavor
  create-image
}
function parse-cmd-line {
  PHOTON_TARGET=${1:-""}
  PHOTON_VMDK=${2:-""}
  if [[ "${PHOTON_TARGET}" = "" || "${PHOTON_VMDK}" = "" ]]; then
    echo "Usage: setup-prereq <photon target> <path-to-kube-vmdk>"
    echo "Target should be a URL like https://192.0.2.1"
    echo ""
    echo "This will create the following, based on the configuration in config-common.sh"
    echo " * A tenant named ${PHOTON_TENANT}"
    echo " * A project named ${PHOTON_PROJECT}"
    echo " * A VM flavor named ${PHOTON_MASTER_FLAVOR}"
    echo " * A disk flavor named ${PHOTON_DISK_FLAVOR}"
    echo "It will also upload the Kube VMDK"
    echo ""
    echo "It creates the tenant with a resource ticket (quota) that may"
    echo "be inappropriate for your environment. For a production"
    echo "environment, you should configure these to match your"
    echo "environment."
    exit 1
  fi
  echo "Photon Target: ${PHOTON_TARGET}"
  echo "Photon VMDK: ${PHOTON_VMDK}"
}
function set-target {
  ${PHOTON} target set "${PHOTON_TARGET}" > /dev/null 2>&1
}
function create-tenant {
  local rc=0
  local output
  ${PHOTON} tenant list | grep -q "\t${PHOTON_TENANT}$" > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    echo "Tenant ${PHOTON_TENANT} already made, skipping"
  else
    echo "Making tenant ${PHOTON_TENANT}"
    rc=0
    output=$(${PHOTON} tenant create "${PHOTON_TENANT}" 2>&1) || {
      echo "ERROR: Could not create tenant \"${PHOTON_TENANT}\", exiting"
      echo "Output from tenant creation:"
      echo "${output}"
      exit 1
    }
  fi
  ${PHOTON} tenant set "${PHOTON_TENANT}" > /dev/null 2>&1
}
function create-project {
  local rc=0
  local output
  ${PHOTON} project list | grep -q "\t${PHOTON_PROJECT}\t" > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    echo "Project ${PHOTON_PROJECT} already made, skipping"
  else
    echo "Making project ${PHOTON_PROJECT}"
    rc=0
    output=$(${PHOTON} resource-ticket create --tenant "${PHOTON_TENANT}" --name "${PHOTON_TENANT}-resources" --limits "${SETUP_TICKET_SPEC}" 2>&1) || {
      echo "ERROR: Could not create resource ticket, exiting"
      echo "Output from resource ticket creation:"
      echo "${output}"
      exit 1
    }
    rc=0
    output=$(${PHOTON} project create --tenant "${PHOTON_TENANT}" --resource-ticket "${PHOTON_TENANT}-resources" --name "${PHOTON_PROJECT}" --limits "${SETUP_PROJECT_SPEC}" 2>&1) || {
      echo "ERROR: Could not create project \"${PHOTON_PROJECT}\", exiting"
      echo "Output from project creation:"
      echo "${output}"
      exit 1
    }
  fi
  ${PHOTON} project set "${PHOTON_PROJECT}"
}
function create-vm-flavor {
  local flavor_name=${1}
  local flavor_spec=${2}
  local rc=0
  local output
  ${PHOTON} flavor list | grep -q "\t${flavor_name}\t" > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    check-flavor-ready "${flavor_name}"
    echo "Flavor ${flavor_name} already made, skipping"
  else
    echo "Making VM flavor ${flavor_name}"
    rc=0
    output=$(${PHOTON} -n flavor create --name "${flavor_name}" --kind "vm" --cost "${flavor_spec}" 2>&1) || {
      echo "ERROR: Could not create vm flavor \"${flavor_name}\", exiting"
      echo "Output from flavor creation:"
      echo "${output}"
      exit 1
    }
  fi
}
function create-disk-flavor {
  local rc=0
  local output
  ${PHOTON} flavor list | grep -q "\t${PHOTON_DISK_FLAVOR}\t" > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    check-flavor-ready "${PHOTON_DISK_FLAVOR}"
    echo "Flavor ${PHOTON_DISK_FLAVOR} already made, skipping"
  else
    echo "Making disk flavor ${PHOTON_DISK_FLAVOR}"
    rc=0
    output=$(${PHOTON} -n flavor create --name "${PHOTON_DISK_FLAVOR}" --kind "ephemeral-disk" --cost "${SETUP_DISK_FLAVOR_SPEC}" 2>&1) || {
      echo "ERROR: Could not create disk flavor \"${PHOTON_DISK_FLAVOR}\", exiting"
      echo "Output from flavor creation:"
      echo "${output}"
      exit 1
    }
  fi
}
function check-flavor-ready {
  local flavor_name=${1}
  local rc=0
  local flavor_id
  flavor_id=$(${PHOTON} flavor list | grep "\t${flavor_name}\t" | awk '{print $1}') || {
    echo "ERROR: Found ${flavor_name} but cannot find its ID"
    exit 1
  }
  ${PHOTON} flavor show "${flavor_id}" | grep "\tREADY\$" > /dev/null 2>&1 || {
    echo "ERROR: Flavor \"${flavor_name}\" already exists but is not READY. Please delete or fix it."
    exit 1
  }
}
function create-image {
  local rc=0
  local num_images
  local output
  ${PHOTON} image list | grep "\t${PHOTON_IMAGE}\t" | grep -q ERROR > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    echo "Warning: You have at least one ${PHOTON_IMAGE} image in the ERROR state. You may want to investigate."
    echo "Images in the ERROR state will be ignored."
  fi
  rc=0
  # We don't use grep -c because it exits non-zero when there are no matches; tell shellcheck
  # shellcheck disable=SC2126
  num_images=$(${PHOTON} image list | grep "\t${PHOTON_IMAGE}\t" | grep READY | wc -l)
  if [[ "${num_images}" -gt 1 ]]; then
    echo "Warning: You have more than one good ${PHOTON_IMAGE} image. You may want to remove duplicates."
  fi
  ${PHOTON} image list | grep "\t${PHOTON_IMAGE}\t" | grep -q READY > /dev/null 2>&1 || rc=$?
  if [[ ${rc} -eq 0 ]]; then
    echo "Image ${PHOTON_VMDK} already uploaded, skipping"
  else
    echo "Uploading image ${PHOTON_VMDK}"
    rc=0
    output=$(${PHOTON} image create "${PHOTON_VMDK}" -n "${PHOTON_IMAGE}" -i EAGER 2>&1) || {
      echo "ERROR: Could not upload image, exiting"
      echo "Output from image create:"
      echo "${output}"
      exit 1
    }
  fi
}
# Disable pipefail: the grep pipelines in the functions above legitimately
# exit non-zero, and we check explicitly for the failures we care about.
set +o pipefail
parse-cmd-line "$@"
main


@@ -0,0 +1,4 @@
The scripts in this directory are not meant to be invoked
directly. Instead, they are partial scripts that are combined into full
scripts by util.sh and run on the Kubernetes nodes as part of the
setup, as sketched below.
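
For illustration, the combination step amounts to concatenating fragments
into a single start script. This is a sketch only: the real logic lives in
util.sh (whose diff is suppressed below), and every name here other than
salt-minion.sh and MY_NAME is illustrative:

  (
    echo '#!/bin/bash'
    echo "MY_NAME='${node_name}'"   # hypothetical env plumbing done by util.sh
    cat hostname.sh salt-minion.sh
  ) > node-start.sh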


@@ -0,0 +1,130 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Generate token files: each token is a 32-character alphanumeric string
KUBELET_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
KUBE_PROXY_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
known_tokens_file="/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv"
if [[ ! -f "${known_tokens_file}" ]]; then
  mkdir -p /srv/salt-overlay/salt/kube-apiserver
  known_tokens_file="/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv"
  (umask u=rw,go= ;
   echo "${KUBELET_TOKEN},kubelet,kubelet" > "${known_tokens_file}";
   echo "${KUBE_PROXY_TOKEN},kube_proxy,kube_proxy" >> "${known_tokens_file}")

  mkdir -p /srv/salt-overlay/salt/kubelet
  kubelet_auth_file="/srv/salt-overlay/salt/kubelet/kubernetes_auth"
  (umask u=rw,go= ; echo "{\"BearerToken\": \"${KUBELET_TOKEN}\", \"Insecure\": true }" > "${kubelet_auth_file}")

  kubelet_kubeconfig_file="/srv/salt-overlay/salt/kubelet/kubeconfig"
  (umask 077;
   cat > "${kubelet_kubeconfig_file}" << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context
users:
- name: kubelet
  user:
    token: ${KUBELET_TOKEN}
EOF
  )

  mkdir -p /srv/salt-overlay/salt/kube-proxy
  kube_proxy_kubeconfig_file="/srv/salt-overlay/salt/kube-proxy/kubeconfig"
  # Make a kubeconfig file with the token.
  # TODO(etune): put apiserver certs into secret too, and reference from authfile,
  # so that "Insecure" is not needed.
  (umask 077;
   cat > "${kube_proxy_kubeconfig_file}" << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
  name: local
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: service-account-context
current-context: service-account-context
users:
- name: kube-proxy
  user:
    token: ${KUBE_PROXY_TOKEN}
EOF
  )

  # Generate tokens for other "service accounts". Append to known_tokens.
  #
  # NB: If this list ever changes, this script actually has to
  # change to detect the existence of this file, kill any deleted
  # old tokens and add any new tokens (to handle the upgrade case).
  service_accounts=("system:scheduler" "system:controller_manager" "system:logging" "system:monitoring" "system:dns")
  for account in "${service_accounts[@]}"; do
    token=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    echo "${token},${account},${account}" >> "${known_tokens_file}"
  done
fi
readonly BASIC_AUTH_FILE="/srv/salt-overlay/salt/kube-apiserver/basic_auth.csv"
if [[ ! -e "${BASIC_AUTH_FILE}" ]]; then
  mkdir -p /srv/salt-overlay/salt/kube-apiserver
  (umask 077;
   echo "${KUBE_PASSWORD},${KUBE_USER},admin" > "${BASIC_AUTH_FILE}")
fi
# Create the overlay files for the salt tree. We create these in a separate
# place so that we can blow away the rest of the salt configs on a kube-push and
# re-apply these.
mkdir -p /srv/salt-overlay/pillar
cat <<EOF >/srv/salt-overlay/pillar/cluster-params.sls
instance_prefix: '$(echo "$INSTANCE_PREFIX" | sed -e "s/'/''/g")'
node_instance_prefix: $NODE_INSTANCE_PREFIX
service_cluster_ip_range: $SERVICE_CLUSTER_IP_RANGE
enable_cluster_monitoring: "${ENABLE_CLUSTER_MONITORING:-none}"
enable_cluster_logging: "${ENABLE_CLUSTER_LOGGING:-false}"
enable_cluster_ui: "${ENABLE_CLUSTER_UI:-true}"
enable_node_logging: "${ENABLE_NODE_LOGGING:-false}"
logging_destination: $LOGGING_DESTINATION
elasticsearch_replicas: $ELASTICSEARCH_LOGGING_REPLICAS
enable_cluster_dns: "${ENABLE_CLUSTER_DNS:-false}"
dns_replicas: ${DNS_REPLICAS:-1}
dns_server: $DNS_SERVER_IP
dns_domain: $DNS_DOMAIN
e2e_storage_test_environment: "${E2E_STORAGE_TEST_ENVIRONMENT:-false}"
cluster_cidr: "$NODE_IP_RANGES"
allocate_node_cidrs: "${ALLOCATE_NODE_CIDRS:-true}"
admission_control: NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
EOF
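# For illustration, with the photon-controller defaults from config-default.sh
# this heredoc renders roughly as:
#   instance_prefix: 'kubernetes'
#   service_cluster_ip_range: 10.244.240.0/20
#   enable_cluster_dns: "true"
#   dns_server: 10.244.240.240
#   dns_domain: cluster.local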
mkdir -p /srv/salt-overlay/salt/nginx
echo "${MASTER_HTPASSWD}" > /srv/salt-overlay/salt/nginx/htpasswd


@@ -0,0 +1,22 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Remove the <name>.vm entry from /etc/hosts
sed -i -e 's/\b\w\+\.vm\b//' /etc/hosts
# Update the hostname in /etc/hosts and /etc/hostname
sed -i -e "s/\\bkube\\b/${MY_NAME}/g" /etc/host{s,name}
hostname "${MY_NAME}"
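
For example (host-file contents illustrative), with MY_NAME=kubernetes-node-1
an /etc/hosts line such as

  127.0.1.1 kube.vm kube

is rewritten by the two sed commands above to

  127.0.1.1  kubernetes-node-1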


@@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script assumes that the environment variables SALT_TAR and
# SERVER_BINARY_TAR name the release tars to unpack and install. It is meant
# to be pushed to the master and run.
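# Illustrative invocation (the tar names follow the standard release
# artifacts; the real values are plumbed in by util.sh):
#   SALT_TAR=kubernetes-salt.tar.gz \
#   SERVER_BINARY_TAR=kubernetes-server-linux-amd64.tar.gz ./install-release.sh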
echo "Unpacking Salt tree"
rm -rf kubernetes
tar xzf "${SALT_TAR}"
echo "Running release install script"
sudo kubernetes/saltbase/install.sh "${SERVER_BINARY_TAR}"


@@ -0,0 +1,56 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Use other Debian mirror
sed -i -e "s/http.us.debian.org/mirrors.kernel.org/" /etc/apt/sources.list
# Prepopulate the name of the Master
mkdir -p /etc/salt/minion.d
echo "master: ${MASTER_NAME}" > /etc/salt/minion.d/master.conf
cat <<EOF >/etc/salt/minion.d/grains.conf
grains:
  roles:
    - kubernetes-master
  cbr-cidr: $MASTER_IP_RANGE
  cloud: photon-controller
  master_extra_sans: $MASTER_EXTRA_SANS
EOF
# Auto accept all keys from minions that try to join
mkdir -p /etc/salt/master.d
cat <<EOF >/etc/salt/master.d/auto-accept.conf
auto_accept: True
EOF
cat <<EOF >/etc/salt/master.d/reactor.conf
# React to new minions starting by running highstate on them.
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/highstate-new.sls
    - /srv/reactor/highstate-masters.sls
    - /srv/reactor/highstate-minions.sls
EOF
# Install Salt
#
# We specify -X to avoid a race condition that can cause the minion to fail
# to install. See https://github.com/saltstack/salt-bootstrap/issues/270
#
# -M installs the master
set +x
curl -L --connect-timeout 20 --retry 6 --retry-delay 10 https://bootstrap.saltstack.com | sh -s -- -M -X
set -x


@@ -0,0 +1,51 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Use other Debian mirror
sed -i -e "s/http.us.debian.org/mirrors.kernel.org/" /etc/apt/sources.list
# Resolve hostname of master
if ! grep -q "${KUBE_MASTER}" /etc/hosts; then
  echo "Adding host entry for ${KUBE_MASTER}"
  echo "${KUBE_MASTER_IP} ${KUBE_MASTER}" >> /etc/hosts
fi
# Prepopulate the name of the Master
mkdir -p /etc/salt/minion.d
echo "master: ${KUBE_MASTER}" > /etc/salt/minion.d/master.conf
# Turn on debugging for salt-minion
# echo "DAEMON_ARGS=\"\$DAEMON_ARGS --log-file-level=debug\"" > /etc/default/salt-minion
# Our minions will have a pool role to distinguish them from the master.
#
# Setting the "hostname_override" grain below causes the kubelet to use its
# IP for identification instead of its hostname.
#
cat <<EOF >/etc/salt/minion.d/grains.conf
grains:
  hostname_override: $(ip route get 1.1.1.1 | awk '{print $7}')
  roles:
    - kubernetes-pool
    - kubernetes-pool-photon-controller
  cloud: photon-controller
EOF
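# A note on hostname_override above: on these Debian VMs,
#   ip route get 1.1.1.1
# prints a line like
#   1.1.1.1 via 10.0.0.1 dev eth0 src 10.0.0.42
# so awk's $7 is the VM's own (source) address. This assumes a gateway route;
# the field positions differ for directly connected routes.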
# Install Salt
#
# We specify -X to avoid a race condition that can cause the minion to fail
# to install. See https://github.com/saltstack/salt-bootstrap/issues/270
curl -L --connect-timeout 20 --retry 6 --retry-delay 10 https://bootstrap.saltstack.com | sh -s -- -X

cluster/photon-controller/util.sh (new executable file, 1092 additions)

File diff suppressed because it is too large.


@@ -47,7 +47,7 @@ docker:
- pkg: docker-io
{% endif %}
{% elif grains.cloud is defined and grains.cloud == 'vsphere' and grains.os == 'Debian' and grains.osrelease_info[0] >=8 %}
{% elif grains.cloud is defined and grains.cloud in ['vsphere', 'photon-controller'] and grains.os == 'Debian' and grains.osrelease_info[0] >=8 %}
{% if pillar.get('is_systemd') %}
@@ -69,6 +69,7 @@ docker:
environment_file: {{ environment_file }}
- require:
- file: /opt/kubernetes/helpers/docker-prestart
- pkg: docker-engine
# The docker service.running block below doesn't work reliably
# Instead we run our script which e.g. does a systemd daemon-reload


@@ -6,7 +6,7 @@
{% if grains.cloud == 'aws' %}
{% set cert_ip='_use_aws_external_ip_' %}
{% endif %}
{% if grains.cloud == 'vsphere' %}
{% if grains.cloud == 'vsphere' or grains.cloud == 'photon-controller' %}
{% set cert_ip=grains.ip_interfaces.eth0[0] %}
{% endif %}
{% endif %}


@@ -1,5 +1,5 @@
{% if grains.cloud is defined %}
{% if grains.cloud in ['aws', 'gce', 'vagrant', 'vsphere'] %}
{% if grains.cloud in ['aws', 'gce', 'vagrant', 'vsphere', 'photon-controller'] %}
# TODO: generate and distribute tokens on other cloud providers.
/srv/kubernetes/known_tokens.csv:
file.managed:
@@ -12,7 +12,7 @@
{% endif %}
{% endif %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant' ,'vsphere'] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant' ,'vsphere', 'photon-controller'] %}
/srv/kubernetes/basic_auth.csv:
file.managed:
- source: salt://kube-apiserver/basic_auth.csv


@@ -14,7 +14,7 @@
{% set srv_sshproxy_path = "/srv/sshproxy" -%}
{% if grains.cloud is defined -%}
{% if grains.cloud not in ['vagrant', 'vsphere'] -%}
{% if grains.cloud not in ['vagrant', 'vsphere', 'photon-controller'] -%}
{% set cloud_provider = "--cloud-provider=" + grains.cloud -%}
{% endif -%}
@@ -58,7 +58,7 @@
{% set client_ca_file = "" -%}
{% set secure_port = "6443" -%}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere' ] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] %}
{% set secure_port = "443" -%}
{% set client_ca_file = "--client-ca-file=/srv/kubernetes/ca.crt" -%}
{% endif -%}
@@ -72,12 +72,12 @@
{% endif -%}
{% if grains.cloud is defined -%}
{% if grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere' ] -%}
{% if grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] -%}
{% set token_auth_file = "--token-auth-file=/srv/kubernetes/known_tokens.csv" -%}
{% endif -%}
{% endif -%}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere'] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] %}
{% set basic_auth_file = "--basic-auth-file=/srv/kubernetes/basic_auth.csv" -%}
{% endif -%}


@@ -32,7 +32,7 @@
{% set srv_kube_path = "/srv/kubernetes" -%}
{% if grains.cloud is defined -%}
{% if grains.cloud not in ['vagrant', 'vsphere'] -%}
{% if grains.cloud not in ['vagrant', 'vsphere', 'photon-controller'] -%}
{% set cloud_provider = "--cloud-provider=" + grains.cloud -%}
{% endif -%}
{% set service_account_key = "--service-account-private-key-file=/srv/kubernetes/server.key" -%}
@@ -46,7 +46,7 @@
{% set root_ca_file = "" -%}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere' ] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] %}
{% set root_ca_file = "--root-ca-file=/srv/kubernetes/ca.crt" -%}
{% endif -%}


@@ -5,7 +5,7 @@
{% set ips = salt['mine.get']('roles:kubernetes-master', 'network.ip_addrs', 'grain').values() -%}
{% set api_servers = "--master=https://" + ips[0][0] -%}
{% endif -%}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere' ] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] %}
{% set api_servers_with_port = api_servers -%}
{% else -%}
{% set api_servers_with_port = api_servers + ":6443" -%}


@@ -16,7 +16,7 @@
{% endif -%}
# TODO: remove nginx for other cloud providers.
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere' ] %}
{% if grains['cloud'] is defined and grains.cloud in [ 'aws', 'gce', 'vagrant', 'vsphere', 'photon-controller' ] %}
{% set api_servers_with_port = api_servers -%}
{% else -%}
{% set api_servers_with_port = api_servers + ":6443" -%}
@@ -28,7 +28,7 @@
{% set reconcile_cidr_args = "" -%}
{% if grains['roles'][0] == 'kubernetes-master' -%}
{% if grains.cloud in ['aws', 'gce', 'vagrant', 'vsphere'] -%}
{% if grains.cloud in ['aws', 'gce', 'vagrant', 'vsphere', 'photon-controller'] -%}
# Unless given a specific directive, disable registration for the kubelet
# running on the master.
@@ -48,7 +48,7 @@
{% endif -%}
{% set cloud_provider = "" -%}
{% if grains.cloud is defined and grains.cloud not in ['vagrant', 'vsphere'] -%}
{% if grains.cloud is defined and grains.cloud not in ['vagrant', 'vsphere', 'photon-controller'] -%}
{% set cloud_provider = "--cloud-provider=" + grains.cloud -%}
{% endif -%}


@@ -72,7 +72,7 @@ base:
- logrotate
{% endif %}
- kube-addons
{% if grains['cloud'] is defined and grains['cloud'] in [ 'vagrant', 'gce', 'aws', 'vsphere' ] %}
{% if grains['cloud'] is defined and grains['cloud'] in [ 'vagrant', 'gce', 'aws', 'vsphere', 'photon-controller' ] %}
- docker
- kubelet
{% endif %}


@@ -26,6 +26,10 @@ cluster/log-dump.sh: for node_name in "${NODE_NAMES[@]}"; do
cluster/log-dump.sh: local -r node_name="${1}"
cluster/log-dump.sh:readonly report_dir="${1:-_artifacts}"
cluster/mesos/docker/km/build.sh: km_path=$(find-binary km darwin/amd64)
cluster/photon-controller/templates/salt-minion.sh: hostname_override: $(ip route get 1.1.1.1 | awk '{print $7}')
cluster/photon-controller/util.sh: node_ip=$(${PHOTON} vm networks "${node_id}" | grep -v "^-" | grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1 | awk -F'\t' '{print $3}')
cluster/photon-controller/util.sh: local cert_dir="/srv/kubernetes"
cluster/photon-controller/util.sh: node_name=${1}
cluster/rackspace/util.sh: local node_ip=$(nova show --minimal ${NODE_NAMES[$i]} \
cluster/saltbase/salt/kube-addons/kube-addons.sh:# Create admission_control objects if defined before any other addon services. If the limits
cluster/saltbase/salt/kube-admission-controls/init.sls:{% if 'LimitRanger' in pillar.get('admission_control', '') %}