Merge pull request #65242 from brandondr96/workbranch

Automatic merge from submit-queue (batch tested with PRs 62423, 66180, 66492, 66506, 65242). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adding IKS functionality to kubemark

**What this PR does / why we need it**:
This PR adds bash scripts that allow kubemark to run on IKS clusters, adding versatility to kubemark testing by covering another cloud provider and providing an example of use. Scripts to clean up kubemark after use are also included. In addition, minor changes were made to other kubemark-related files to increase cloud-provider flexibility.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
No issues will be fixed, as this is an extension to a feature.

**Special notes for your reviewer**:
I currently have the IKS scripts separate from the default ones, which are mainly based on GCE. If it is preferable, I could combine them into single scripts that prompt the user to choose which cloud provider to test. If there are any issues with the scripts or code, please let me know. Thank you!

**Release note**:

```release-note
NONE
```
Kubernetes Submit Queue 2018-07-23 12:32:17 -07:00 committed by GitHub
commit 446cf20c9f
7 changed files with 586 additions and 0 deletions


@@ -0,0 +1,35 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Cloud information
RANDGEN=$(dd if=/dev/urandom bs=64 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=16 count=1 2>/dev/null | sed 's/[A-Z]//g')
KUBE_NAMESPACE="kubemark_${RANDGEN}"
KUBEMARK_IMAGE_TAG="${KUBEMARK_IMAGE_TAG:-2}"
KUBEMARK_IMAGE_LOCATION="${KUBEMARK_IMAGE_LOCATION:-${KUBE_ROOT}/cluster/images/kubemark}"
KUBEMARK_INIT_TAG="${KUBEMARK_INIT_TAG:-${PROJECT}:${KUBEMARK_IMAGE_TAG}}"
CLUSTER_LOCATION="${CLUSTER_LOCATION:-wdc06}"
REGISTRY_LOGIN_URL="${REGISTRY_LOGIN_URL:-https://api.ng.bluemix.net}"
# User defined
NUM_NODES="${NUM_NODES:-2}"
DESIRED_NODES="${DESIRED_NODES:-10}"
ENABLE_KUBEMARK_CLUSTER_AUTOSCALER="${ENABLE_KUBEMARK_CLUSTER_AUTOSCALER:-true}"
ENABLE_KUBEMARK_KUBE_DNS="${ENABLE_KUBEMARK_KUBE_DNS:-false}"
KUBELET_TEST_LOG_LEVEL="${KUBELET_TEST_LOG_LEVEL:-"--v=2"}"
KUBEPROXY_TEST_LOG_LEVEL="${KUBEPROXY_TEST_LOG_LEVEL:-"--v=4"}"
USE_REAL_PROXIER=${USE_REAL_PROXIER:-false}
NODE_INSTANCE_PREFIX=${NODE_INSTANCE_PREFIX:-node}
USE_EXISTING=${USE_EXISTING:-}
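The defaults above all use the `${VAR:-default}` expansion, so any of them can be overridden by exporting the variable before this config file is sourced. A minimal, self-contained sketch of the pattern (variable names from the file above, values hypothetical):

```shell
#!/usr/bin/env bash
# ${VAR:-default}: keep a value the caller exported, else fall back.
unset NUM_NODES                       # nothing exported -> default applies
NUM_NODES="${NUM_NODES:-2}"
export DESIRED_NODES=25               # caller's export wins over the default
DESIRED_NODES="${DESIRED_NODES:-10}"
echo "${NUM_NODES} ${DESIRED_NODES}"  # -> 2 25
```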


@@ -787,6 +787,8 @@ if [[ -z "${color_start-}" ]]; then
declare -r color_red="${color_start}0;31m"
declare -r color_yellow="${color_start}0;33m"
declare -r color_green="${color_start}0;32m"
declare -r color_blue="${color_start}1;34m"
declare -r color_cyan="${color_start}1;36m"
declare -r color_norm="${color_start}0m"
fi


@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Script that destroys the clusters used, namespace, and deployment.
KUBECTL=kubectl
KUBEMARK_DIRECTORY="${KUBE_ROOT}/test/kubemark"
RESOURCE_DIRECTORY="${KUBEMARK_DIRECTORY}/resources"
# Login to cloud services
complete-login
# Remove resources created for kubemark
echo -e "${color_yellow}REMOVING RESOURCES${color_norm}"
spawn-config
"${KUBECTL}" delete -f "${RESOURCE_DIRECTORY}/addons" &> /dev/null || true
"${KUBECTL}" delete -f "${RESOURCE_DIRECTORY}/hollow-node.yaml" &> /dev/null || true
"${KUBECTL}" delete -f "${RESOURCE_DIRECTORY}/kubemark-ns.json" &> /dev/null || true
rm -rf "${RESOURCE_DIRECTORY}/addons" "${RESOURCE_DIRECTORY}/hollow-node.yaml" &> /dev/null || true
# Remove clusters, namespaces, and deployments
delete-clusters
bash "${RESOURCE_DIRECTORY}/iks-namespacelist.sh"
rm -f "${RESOURCE_DIRECTORY}/iks-namespacelist.sh"
exit 0
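The deletes above are made idempotent with `&> /dev/null || true`: a resource that is already gone must not abort the rest of the teardown, which matters if the script ever runs under `set -e`. A minimal sketch of the pattern:

```shell
#!/usr/bin/env bash
set -e                                   # abort on any uncaught failure
# Removing a file that does not exist would normally fail the script;
# `|| true` discards the failure so cleanup keeps going.
rm "/tmp/nonexistent-kubemark-$$" &> /dev/null || true
echo "teardown continues"
```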


@@ -0,0 +1,294 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Script that creates a Kubemark cluster for IBM cloud.
KUBECTL="${KUBE_ROOT}/cluster/kubectl.sh"
KUBEMARK_DIRECTORY="${KUBE_ROOT}/test/kubemark"
RESOURCE_DIRECTORY="${KUBEMARK_DIRECTORY}/resources"
# Generate secret and configMap for the hollow-node pods to work, prepare
# manifests of the hollow-node and heapster replication controllers from
# templates, and finally create these resources through kubectl.
function create-kube-hollow-node-resources {
# Create kubeconfig for Kubelet.
KUBELET_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: kubelet
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
certificate-authority-data: "${CA_CERT_BASE64}"
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: kubelet
name: kubemark-context
current-context: kubemark-context")
# Create kubeconfig for Kubeproxy.
KUBEPROXY_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: kube-proxy
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
insecure-skip-tls-verify: true
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: kube-proxy
name: kubemark-context
current-context: kubemark-context")
# Create kubeconfig for Heapster.
HEAPSTER_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: heapster
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
insecure-skip-tls-verify: true
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: heapster
name: kubemark-context
current-context: kubemark-context")
# Create kubeconfig for Cluster Autoscaler.
CLUSTER_AUTOSCALER_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: cluster-autoscaler
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
insecure-skip-tls-verify: true
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: cluster-autoscaler
name: kubemark-context
current-context: kubemark-context")
# Create kubeconfig for NodeProblemDetector.
NPD_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: node-problem-detector
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
insecure-skip-tls-verify: true
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: node-problem-detector
name: kubemark-context
current-context: kubemark-context")
# Create kubeconfig for Kube DNS.
KUBE_DNS_KUBECONFIG_CONTENTS=$(echo "apiVersion: v1
kind: Config
users:
- name: kube-dns
user:
client-certificate-data: "${KUBELET_CERT_BASE64}"
client-key-data: "${KUBELET_KEY_BASE64}"
clusters:
- name: kubemark
cluster:
insecure-skip-tls-verify: true
server: https://${MASTER_IP}
contexts:
- context:
cluster: kubemark
user: kube-dns
name: kubemark-context
current-context: kubemark-context")
# Create kubemark namespace.
spawn-config
if kubectl get ns | grep -Fq "kubemark"; then
kubectl delete ns kubemark
while kubectl get ns | grep -Fq "kubemark"
do
sleep 10
done
fi
"${KUBECTL}" create -f "${RESOURCE_DIRECTORY}/kubemark-ns.json"
# Create configmap for configuring hollow- kubelet, proxy and npd.
"${KUBECTL}" create configmap "node-configmap" --namespace="kubemark" \
--from-literal=content.type="${TEST_CLUSTER_API_CONTENT_TYPE}" \
--from-file=kernel.monitor="${RESOURCE_DIRECTORY}/kernel-monitor.json"
# Create secret for passing kubeconfigs to kubelet, kubeproxy and npd.
"${KUBECTL}" create secret generic "kubeconfig" --type=Opaque --namespace="kubemark" \
--from-literal=kubelet.kubeconfig="${KUBELET_KUBECONFIG_CONTENTS}" \
--from-literal=kubeproxy.kubeconfig="${KUBEPROXY_KUBECONFIG_CONTENTS}" \
--from-literal=heapster.kubeconfig="${HEAPSTER_KUBECONFIG_CONTENTS}" \
--from-literal=cluster_autoscaler.kubeconfig="${CLUSTER_AUTOSCALER_KUBECONFIG_CONTENTS}" \
--from-literal=npd.kubeconfig="${NPD_KUBECONFIG_CONTENTS}" \
--from-literal=dns.kubeconfig="${KUBE_DNS_KUBECONFIG_CONTENTS}"
# Create addon pods.
# Heapster.
mkdir -p "${RESOURCE_DIRECTORY}/addons"
sed "s/{{MASTER_IP}}/${MASTER_IP}/g" "${RESOURCE_DIRECTORY}/heapster_template.json" > "${RESOURCE_DIRECTORY}/addons/heapster.json"
metrics_mem_per_node=4
metrics_mem=$((200 + ${metrics_mem_per_node}*${NUM_NODES}))
sed -i'' -e "s/{{METRICS_MEM}}/${metrics_mem}/g" "${RESOURCE_DIRECTORY}/addons/heapster.json"
metrics_cpu_per_node_numerator=${NUM_NODES}
metrics_cpu_per_node_denominator=2
metrics_cpu=$((80 + metrics_cpu_per_node_numerator / metrics_cpu_per_node_denominator))
sed -i'' -e "s/{{METRICS_CPU}}/${metrics_cpu}/g" "${RESOURCE_DIRECTORY}/addons/heapster.json"
eventer_mem_per_node=500
eventer_mem=$((200 * 1024 + ${eventer_mem_per_node}*${NUM_NODES}))
sed -i'' -e "s/{{EVENTER_MEM}}/${eventer_mem}/g" "${RESOURCE_DIRECTORY}/addons/heapster.json"
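The heapster sizing above scales linearly with cluster size. As a worked sanity check of the three formulas with a hypothetical `NUM_NODES=100`:

```shell
#!/usr/bin/env bash
NUM_NODES=100
metrics_mem=$((200 + 4 * NUM_NODES))           # 200 base + 4 per node   -> 600
metrics_cpu=$((80 + NUM_NODES / 2))            # 80 base + 0.5 per node  -> 130
eventer_mem=$((200 * 1024 + 500 * NUM_NODES))  # 200*1024 base + 500/node -> 254800
echo "${metrics_mem} ${metrics_cpu} ${eventer_mem}"  # -> 600 130 254800
```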
# Cluster Autoscaler.
if [[ "${ENABLE_KUBEMARK_CLUSTER_AUTOSCALER:-}" == "true" ]]; then
echo "Setting up Cluster Autoscaler"
KUBEMARK_AUTOSCALER_MIG_NAME="${KUBEMARK_AUTOSCALER_MIG_NAME:-${NODE_INSTANCE_PREFIX}-group}"
KUBEMARK_AUTOSCALER_MIN_NODES="${KUBEMARK_AUTOSCALER_MIN_NODES:-0}"
KUBEMARK_AUTOSCALER_MAX_NODES="${KUBEMARK_AUTOSCALER_MAX_NODES:-${DESIRED_NODES}}"
NUM_NODES=${KUBEMARK_AUTOSCALER_MAX_NODES}
echo "Setting maximum cluster size to ${NUM_NODES}."
KUBEMARK_MIG_CONFIG="autoscaling.k8s.io/nodegroup: ${KUBEMARK_AUTOSCALER_MIG_NAME}"
sed "s/{{master_ip}}/${MASTER_IP}/g" "${RESOURCE_DIRECTORY}/cluster-autoscaler_template.json" > "${RESOURCE_DIRECTORY}/addons/cluster-autoscaler.json"
sed -i'' -e "s/{{kubemark_autoscaler_mig_name}}/${KUBEMARK_AUTOSCALER_MIG_NAME}/g" "${RESOURCE_DIRECTORY}/addons/cluster-autoscaler.json"
sed -i'' -e "s/{{kubemark_autoscaler_min_nodes}}/${KUBEMARK_AUTOSCALER_MIN_NODES}/g" "${RESOURCE_DIRECTORY}/addons/cluster-autoscaler.json"
sed -i'' -e "s/{{kubemark_autoscaler_max_nodes}}/${KUBEMARK_AUTOSCALER_MAX_NODES}/g" "${RESOURCE_DIRECTORY}/addons/cluster-autoscaler.json"
fi
# Kube DNS.
if [[ "${ENABLE_KUBEMARK_KUBE_DNS:-}" == "true" ]]; then
echo "Setting up kube-dns"
sed "s/{{dns_domain}}/${KUBE_DNS_DOMAIN}/g" "${RESOURCE_DIRECTORY}/kube_dns_template.yaml" > "${RESOURCE_DIRECTORY}/addons/kube_dns.yaml"
fi
"${KUBECTL}" create -f "${RESOURCE_DIRECTORY}/addons" --namespace="kubemark"
set-registry-secrets
# Create the replication controller for hollow-nodes.
# We allow to override the NUM_REPLICAS when running Cluster Autoscaler.
NUM_REPLICAS=${NUM_REPLICAS:-${NUM_NODES}}
sed "s/{{numreplicas}}/${NUM_REPLICAS}/g" "${RESOURCE_DIRECTORY}/hollow-node_template.yaml" > "${RESOURCE_DIRECTORY}/hollow-node.yaml"
proxy_cpu=20
if [ "${NUM_NODES}" -gt 1000 ]; then
proxy_cpu=50
fi
proxy_mem_per_node=50
proxy_mem=$((100 * 1024 + ${proxy_mem_per_node}*${NUM_NODES}))
sed -i'' -e "s/{{HOLLOW_PROXY_CPU}}/${proxy_cpu}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{HOLLOW_PROXY_MEM}}/${proxy_mem}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s'{{kubemark_image_registry}}'${KUBEMARK_IMAGE_REGISTRY}${KUBE_NAMESPACE}'g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{kubemark_image_tag}}/${KUBEMARK_IMAGE_TAG}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{master_ip}}/${MASTER_IP}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{kubelet_verbosity_level}}/${KUBELET_TEST_LOG_LEVEL}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{kubeproxy_verbosity_level}}/${KUBEPROXY_TEST_LOG_LEVEL}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s/{{use_real_proxier}}/${USE_REAL_PROXIER}/g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
sed -i'' -e "s'{{kubemark_mig_config}}'${KUBEMARK_MIG_CONFIG:-}'g" "${RESOURCE_DIRECTORY}/hollow-node.yaml"
"${KUBECTL}" create -f "${RESOURCE_DIRECTORY}/hollow-node.yaml" --namespace="kubemark"
echo "Created secrets, configMaps, replication-controllers required for hollow-nodes."
}
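Two of the `sed` calls in the function above use `'` instead of the usual `/` as the substitution delimiter, because the replacement (a registry path) may itself contain slashes. A standalone sketch with a hypothetical registry value:

```shell
#!/usr/bin/env bash
# sed accepts any character after `s` as the delimiter; using `'` lets the
# replacement contain `/` without escaping.
registry="registry.example.test/ns"   # hypothetical value containing a slash
echo "image: {{kubemark_image_registry}}/kubemark" \
  | sed -e "s'{{kubemark_image_registry}}'${registry}'g"
# -> image: registry.example.test/ns/kubemark
```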
# Wait until all hollow-nodes are running or there is a timeout.
function wait-for-hollow-nodes-to-run-or-timeout {
echo -n "Waiting for all hollow-nodes to become Running"
start=$(date +%s)
nodes=$("${KUBECTL}" --kubeconfig="${KUBECONFIG}" get node 2> /dev/null) || true
ready=$(($(echo "${nodes}" | grep -v "NotReady" | wc -l) - 1))
until [[ "${ready}" -ge "${NUM_REPLICAS}" ]]; do
echo -n "."
sleep 1
now=$(date +%s)
# Fail it if it already took more than 30 minutes.
if [ $((now - start)) -gt 1800 ]; then
echo ""
echo -e "${color_red} Timeout waiting for all hollow-nodes to become Running. ${color_norm}"
# Try listing nodes again - if it fails it means that API server is not responding
if "${KUBECTL}" --kubeconfig="${KUBECONFIG}" get node &> /dev/null; then
echo "Found only ${ready} ready hollow-nodes while waiting for ${NUM_NODES}."
else
echo "Got error while trying to list hollow-nodes. Probably API server is down."
fi
spawn-config
pods=$("${KUBECTL}" get pods -l name=hollow-node --namespace=kubemark) || true
running=$(($(echo "${pods}" | grep "Running" | wc -l)))
echo "${running} hollow-nodes are reported as 'Running'"
not_running=$(($(echo "${pods}" | grep -v "Running" | wc -l) - 1))
echo "${not_running} hollow-nodes are reported as NOT 'Running'"
echo "${pods}" | grep -v "Running"
exit 1
fi
nodes=$("${KUBECTL}" --kubeconfig="${KUBECONFIG}" get node 2> /dev/null) || true
ready=$(($(echo "${nodes}" | grep -v "NotReady" | wc -l) - 1))
done
echo -e "${color_green} Done!${color_norm}"
}
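The wait function above follows a common poll-until-timeout shape: re-check the condition, sleep, and bail out once a deadline passes. A compressed, runnable sketch with a fake readiness counter standing in for the `kubectl get node` query:

```shell
#!/usr/bin/env bash
start=$(date +%s)
deadline=30                         # seconds; the real script uses 1800
ready=0
until [ "${ready}" -ge 3 ]; do      # fake condition: "3 nodes ready"
  now=$(date +%s)
  if [ $((now - start)) -gt "${deadline}" ]; then
    echo "timeout" >&2
    exit 1
  fi
  ready=$((ready + 1))              # the real loop re-queries the API server
done
echo "ready after ${ready} checks"
```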
############################### Main Function ########################################
# In order for the cluster autoscaler to function, the template file must be changed so that the ":443"
# is removed, because the port is already included in MASTER_IP.
# Create clusters and populate with hollow nodes
complete-login
build-kubemark-image
choose-clusters
generate-values
set-hollow-master
echo "Creating kube hollow node resources"
create-kube-hollow-node-resources
master-config
echo -e "${color_blue}EXECUTION COMPLETE${color_norm}"
# Check status of Kubemark
echo -e "${color_yellow}CHECKING STATUS${color_norm}"
wait-for-hollow-nodes-to-run-or-timeout
# Celebrate
echo ""
echo -e "${color_blue}SUCCESS${color_norm}"
clean-repo
exit 0

test/kubemark/iks/util.sh

@@ -0,0 +1,206 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
KUBE_ROOT=$(dirname "${BASH_SOURCE[0]}")/../../..
# Creates a new kube-spawn cluster
function create-clusters {
echo -e "${color_yellow}CHECKING CLUSTERS${color_norm}"
if bx cs clusters | grep -Fq 'deleting'; then
echo -n "Deleting old clusters"
fi
while bx cs clusters | grep -Fq 'deleting'
do
echo -n "."
sleep 10
done
echo ""
bx cs region-set us-east >/dev/null
bx cs vlans wdc06 >/dev/null
PRIVLAN=$(bx cs vlans wdc06 --json | jq '. | .[] | select(.type == "private") | .id' | sed -e "s/\"//g")
PUBVLAN=$(bx cs vlans wdc06 --json | jq '. | .[] | select(.type == "public") | .id' | sed -e "s/\"//g")
if ! bx cs clusters | grep -Fq 'kubeSpawnTester'; then
echo "Creating spawning cluster"
bx cs cluster-create --location ${CLUSTER_LOCATION} --public-vlan ${PUBVLAN} --private-vlan ${PRIVLAN} --workers 2 --machine-type u2c.2x4 --name kubeSpawnTester
fi
if ! bx cs clusters | grep -Fq 'kubeMasterTester'; then
echo "Creating master cluster"
bx cs cluster-create --location ${CLUSTER_LOCATION} --public-vlan ${PUBVLAN} --private-vlan ${PRIVLAN} --workers 2 --machine-type u2c.2x4 --name kubeMasterTester
fi
push-image
if ! bx cs clusters | grep 'kubeSpawnTester' | grep -Fq 'normal'; then
echo -e "${color_cyan}Warning: new clusters may take up to 60 minutes to be ready${color_norm}"
echo -n "Clusters loading"
fi
while ! bx cs clusters | grep 'kubeSpawnTester' | grep -Fq 'normal'
do
echo -n "."
sleep 5
done
while ! bx cs clusters | grep 'kubeMasterTester' | grep -Fq 'normal'
do
echo -n "."
sleep 5
done
echo -e "${color_yellow}CLUSTER CREATION COMPLETE${color_norm}"
}
# Builds and pushes image to registry
function push-image {
if [[ "${ISBUILD}" = "y" ]]; then
if ! bx cr namespaces | grep -Fq ${KUBE_NAMESPACE}; then
echo "Creating registry namespace"
bx cr namespace-add ${KUBE_NAMESPACE}
echo "bx cr namespace-rm ${KUBE_NAMESPACE}" >> ${RESOURCE_DIRECTORY}/iks-namespacelist.sh
fi
docker build -t ${KUBEMARK_INIT_TAG} ${KUBEMARK_IMAGE_LOCATION}
docker tag ${KUBEMARK_INIT_TAG} ${KUBEMARK_IMAGE_REGISTRY}${KUBE_NAMESPACE}/${PROJECT}:${KUBEMARK_IMAGE_TAG}
docker push ${KUBEMARK_IMAGE_REGISTRY}${KUBE_NAMESPACE}/${PROJECT}:${KUBEMARK_IMAGE_TAG}
echo "Image pushed"
else
KUBEMARK_IMAGE_REGISTRY="brandondr96"
KUBE_NAMESPACE=""
fi
}
# Allow user to use existing clusters if desired
function choose-clusters {
echo -n -e "Do you want to use custom clusters? [y/N]${color_cyan}>${color_norm} "
read USE_EXISTING
if [[ "${USE_EXISTING}" = "y" ]]; then
echo -e "${color_yellow}Enter path for desired hollow-node spawning cluster kubeconfig file:${color_norm}"
read CUSTOM_SPAWN_CONFIG
echo -e "${color_yellow}Enter path for desired hollow-node hosting cluster kubeconfig file:${color_norm}"
read CUSTOM_MASTER_CONFIG
push-image
elif [[ "${USE_EXISTING}" = "N" ]]; then
create-clusters
else
echo -e "${color_red}Invalid response, please try again:${color_norm}"
choose-clusters
fi
}
# Ensure secrets are correctly set
function set-registry-secrets {
spawn-config
kubectl get secret bluemix-default-secret-regional -o yaml | sed 's/default/kubemark/g' | kubectl -n kubemark create -f -
kubectl patch serviceaccount -n kubemark default -p '{"imagePullSecrets": [{"name": "bluemix-kubemark-secret"}]}'
kubectl -n kubemark get serviceaccounts default -o json | jq 'del(.metadata.resourceVersion)' | jq 'setpath(["imagePullSecrets"];[{"name":"bluemix-kubemark-secret-regional"}])' | kubectl -n kubemark replace serviceaccount default -f -
}
# Sets hollow nodes spawned under master
function set-hollow-master {
echo -e "${color_yellow}CONFIGURING MASTER${color_norm}"
master-config
MASTER_IP=$(grep server "${KUBECONFIG}" | awk -F "/" '{print $3}')
}
# Set up master cluster environment
function master-config {
if [[ "${USE_EXISTING}" = "y" ]]; then
export KUBECONFIG=${CUSTOM_MASTER_CONFIG}
else
$(bx cs cluster-config kubeMasterTester --admin | grep export)
fi
}
# Set up spawn cluster environment
function spawn-config {
if [[ "${USE_EXISTING}" = "y" ]]; then
export KUBECONFIG=${CUSTOM_SPAWN_CONFIG}
else
$(bx cs cluster-config kubeSpawnTester --admin | grep export)
fi
}
# Deletes existing clusters
function delete-clusters {
echo "DELETING CLUSTERS"
bx cs cluster-rm kubeSpawnTester
bx cs cluster-rm kubeMasterTester
while ! bx cs clusters | grep 'kubeSpawnTester' | grep -Fq 'deleting'
do
sleep 5
done
while ! bx cs clusters | grep 'kubeMasterTester' | grep -Fq 'deleting'
do
sleep 5
done
kubectl delete ns kubemark
}
# Login to cloud services
function complete-login {
echo -e "${color_yellow}LOGGING INTO CLOUD SERVICES${color_norm}"
echo -n -e "Do you have a federated IBM cloud login? [y/N]${color_cyan}>${color_norm} "
read ISFED
if [[ "${ISFED}" = "y" ]]; then
bx login --sso -a ${REGISTRY_LOGIN_URL}
elif [[ "${ISFED}" = "N" ]]; then
bx login -a ${REGISTRY_LOGIN_URL}
else
echo -e "${color_red}Invalid response, please try again:${color_norm}"
complete-login
fi
bx cr login
}
# Generate values to fill the hollow-node configuration
function generate-values {
echo "Generating values"
master-config
KUBECTL=kubectl
KUBEMARK_DIRECTORY="${KUBE_ROOT}/test/kubemark"
RESOURCE_DIRECTORY="${KUBEMARK_DIRECTORY}/resources"
TEST_CLUSTER_API_CONTENT_TYPE="bluemix" #Determine correct usage of this
CONFIGPATH=${KUBECONFIG%/*}
KUBELET_CERT_BASE64="${KUBELET_CERT_BASE64:-$(cat "${CONFIGPATH}/admin.pem" | base64 | tr -d '\r\n')}"
KUBELET_KEY_BASE64="${KUBELET_KEY_BASE64:-$(cat "${CONFIGPATH}/admin-key.pem" | base64 | tr -d '\r\n')}"
CA_CERT_BASE64="${CA_CERT_BASE64:-$(cat "$(find "${CONFIGPATH}" -name '*ca*')" | base64 | tr -d '\r\n')}"
}
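`generate-values` pipes the certificates through `tr -d '\r\n'` because GNU `base64` wraps its output at 76 columns, while the kubeconfig fields built earlier expect the data on a single line. A self-contained sketch:

```shell
#!/usr/bin/env bash
# 100 bytes encode to 136 base64 characters, which GNU base64 wraps across
# lines; `tr -d '\r\n'` flattens it back to one line for embedding in YAML.
data=$(head -c 100 /dev/zero | base64 | tr -d '\r\n')
echo "${#data} characters, single line"  # -> 136 characters, single line
```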
# Build image for kubemark
function build-kubemark-image {
echo -n -e "Do you want to build the kubemark image? [y/N]${color_cyan}>${color_norm} "
read ISBUILD
if [[ "${ISBUILD}" = "y" ]]; then
echo -e "${color_yellow}BUILDING IMAGE${color_norm}"
${KUBE_ROOT}/build/run.sh make kubemark
cp ${KUBE_ROOT}/_output/dockerized/bin/linux/amd64/kubemark ${KUBEMARK_IMAGE_LOCATION}
elif [[ "${ISBUILD}" = "N" ]]; then
echo -n ""
else
echo -e "${color_red}Invalid response, please try again:${color_norm}"
build-kubemark-image
fi
}
# Clean up repository
function clean-repo {
echo -n -e "Do you want to remove build output and binary? [y/N]${color_cyan}>${color_norm} "
read ISCLEAN
if [[ "${ISCLEAN}" = "y" ]]; then
echo -e "${color_yellow}CLEANING REPO${color_norm}"
rm -rf ${KUBE_ROOT}/_output
rm -f ${KUBEMARK_IMAGE_LOCATION}/kubemark
elif [[ "${ISCLEAN}" = "N" ]]; then
echo -n ""
else
echo -e "${color_red}Invalid response, please try again:${color_norm}"
clean-repo
fi
}


@@ -27,6 +27,11 @@ source "${KUBE_ROOT}/test/kubemark/skeleton/util.sh"
source "${KUBE_ROOT}/test/kubemark/cloud-provider-config.sh"
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/util.sh"
source "${KUBE_ROOT}/cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh"
if [[ -f "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/startup.sh" ]] ; then
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/startup.sh"
fi
source "${KUBE_ROOT}/cluster/kubemark/util.sh"
# hack/lib/init.sh will overwrite ETCD_VERSION if this is unset


@@ -22,6 +22,11 @@ source "${KUBE_ROOT}/test/kubemark/skeleton/util.sh"
source "${KUBE_ROOT}/test/kubemark/cloud-provider-config.sh"
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/util.sh"
source "${KUBE_ROOT}/cluster/kubemark/${CLOUD_PROVIDER}/config-default.sh"
if [[ -f "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/shutdown.sh" ]] ; then
source "${KUBE_ROOT}/test/kubemark/${CLOUD_PROVIDER}/shutdown.sh"
fi
source "${KUBE_ROOT}/cluster/kubemark/util.sh"
KUBECTL="${KUBE_ROOT}/cluster/kubectl.sh"