First cut at a "conformance test".

A conformance test is a test you run against a cluster that is already
set up.  We would use it to test a hosted Kubernetes service to make
sure that it meets a bar for quality.  Also, a getting-started-guide
author who has not implemented a complete set of cluster/...
scripts (that is, whose getting-started-guide has some non-automated steps)
can use this to see which e2e tests pass on a cluster.

To be done in future PRs:
- disable tests which can't possibly run in a conformance test
  because they require things like cluster ssh.
- document that when we accept a getting-started-guide,
  people should run the conformance test against their cluster
  (unless they already have cluster/... scripts).

I ran this against a GCE cluster and 22 tests passed.
pull/6/head
Eric Tune 2015-03-05 15:41:52 -08:00
parent 07c2035630
commit 72945955ae
3 changed files with 104 additions and 41 deletions


@@ -51,7 +51,7 @@ func main() {
 	util.InitFlags()
 	goruntime.GOMAXPROCS(goruntime.NumCPU())
 	if *provider == "" {
-		glog.Error("e2e needs the have the --provider flag set")
+		glog.Info("The --provider flag is not set. Treating as a conformance test. Some tests may not be run.")
 		os.Exit(1)
 	}
 	if *times <= 0 {

hack/conformance-test.sh Executable file

@@ -0,0 +1,39 @@
#!/bin/bash

# Copyright 2015 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The conformance test is provided to let users run an e2e test
# against an already-setup cluster for which there is no automated
# setup, teardown, and other cluster/... scripts.
#
# The user must export these environment variables:
#   KUBE_MASTER_IP to the IP address of the master.
#   AUTH_CONFIG to the argument of the "--auth_config=" flag.
#   If certs are required, set CERT_DIR.
#
# Example to test against a local vagrant cluster:
#   declare -x AUTH_CONFIG="$HOME/.kubernetes_vagrant_auth"
#   declare -x KUBE_MASTER_IP=10.245.1.2
#   hack/conformance-test.sh

if [[ -z "${KUBE_MASTER_IP:-}" ]]; then
  echo "Must set KUBE_MASTER_IP before running conformance test."
  exit 1
fi
if [[ -z "${AUTH_CONFIG:-}" ]]; then
  echo "Must set AUTH_CONFIG before running conformance test."
  exit 1
fi

hack/ginkgo-e2e.sh
exit $?
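The precondition checks this script performs can be exercised standalone. The following is a minimal sketch (the function name `check_conformance_env` is illustrative, not part of the commit); note that the checks must expand the variables with `"${VAR:-}"` rather than test literal strings, or they always succeed:

```shell
# Standalone sketch of the environment checks hack/conformance-test.sh runs
# before delegating to hack/ginkgo-e2e.sh. check_conformance_env is an
# illustrative name; the messages mirror the script above.
check_conformance_env() {
  if [[ -z "${KUBE_MASTER_IP:-}" ]]; then
    echo "Must set KUBE_MASTER_IP before running conformance test."
    return 1
  fi
  if [[ -z "${AUTH_CONFIG:-}" ]]; then
    echo "Must set AUTH_CONFIG before running conformance test."
    return 1
  fi
  echo "ok"
}

# With both variables set (placeholder values), the check passes.
KUBE_MASTER_IP="10.245.1.2" AUTH_CONFIG="${HOME}/.kubernetes_vagrant_auth" check_conformance_env
```

The per-invocation variable assignments scope the values to the function call, so a real cluster's exported settings are not clobbered while experimenting.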

hack/ginkgo-e2e.sh

@@ -20,18 +20,7 @@ set -o pipefail
 KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
-: ${KUBE_VERSION_ROOT:=${KUBE_ROOT}}
-: ${KUBECTL:="${KUBE_VERSION_ROOT}/cluster/kubectl.sh"}
-: ${KUBE_CONFIG_FILE:="config-test.sh"}
-
-export KUBECTL KUBE_CONFIG_FILE
-
-source "${KUBE_ROOT}/cluster/kube-env.sh"
-source "${KUBE_VERSION_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh"
-
-prepare-e2e
-
-detect-master >/dev/null
+# --- Find local test binaries.

 # Detect the OS name/arch so that we can find our binary
 case "$(uname -s)" in
@@ -77,36 +66,71 @@ locations=(
 )
 e2e=$( (ls -t "${locations[@]}" 2>/dev/null || true) | head -1 )

-if [[ "$KUBERNETES_PROVIDER" == "vagrant" ]]; then
-  # When we are using vagrant it has hard coded auth.  We repeat that here so that
-  # we don't clobber auth that might be used for a publicly facing cluster.
-  auth_config=(
-    "--auth_config=$HOME/.kubernetes_vagrant_auth"
-  )
-elif [[ "${KUBERNETES_PROVIDER}" == "gke" ]]; then
-  # With GKE, our auth and certs are in gcloud's config directory.
-  detect-project &> /dev/null
-  cfg_dir="${GCLOUD_CONFIG_DIR}/${PROJECT}.${ZONE}.${CLUSTER_NAME}"
-  auth_config=(
-    "--auth_config=${cfg_dir}/kubernetes_auth"
-    "--cert_dir=${cfg_dir}"
-  )
-elif [[ "${KUBERNETES_PROVIDER}" == "gce" ]]; then
-  auth_config=(
-    "--kubeconfig=${HOME}/.kube/.kubeconfig"
-  )
-elif [[ "${KUBERNETES_PROVIDER}" == "aws" ]]; then
-  auth_config=(
-    "--auth_config=${HOME}/.kube/${INSTANCE_PREFIX}/kubernetes_auth"
-  )
-else
-  auth_config=()
-fi
+# --- Setup some env vars.

-if [[ "$KUBERNETES_PROVIDER" == "libvirt-coreos" ]]; then
-  host="http://${KUBE_MASTER_IP-}:8080"
+: ${KUBE_VERSION_ROOT:=${KUBE_ROOT}}
+: ${KUBECTL:="${KUBE_VERSION_ROOT}/cluster/kubectl.sh"}
+: ${KUBE_CONFIG_FILE:="config-test.sh"}
+
+export KUBECTL KUBE_CONFIG_FILE
+
+source "${KUBE_ROOT}/cluster/kube-env.sh"
+
+# ---- Do cloud-provider-specific setup
+if [[ -z "${AUTH_CONFIG:-}" ]]; then
+  echo "Setting up for KUBERNETES_PROVIDER=\"${KUBERNETES_PROVIDER}\"."
+  source "${KUBE_VERSION_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh"
+  prepare-e2e
+  detect-master >/dev/null
+
+  if [[ "$KUBERNETES_PROVIDER" == "vagrant" ]]; then
+    # When we are using vagrant it has hard coded auth.  We repeat that here so that
+    # we don't clobber auth that might be used for a publicly facing cluster.
+    auth_config=(
+      "--auth_config=$HOME/.kubernetes_vagrant_auth"
+    )
+  elif [[ "${KUBERNETES_PROVIDER}" == "gke" ]]; then
+    # With GKE, our auth and certs are in gcloud's config directory.
+    detect-project &> /dev/null
+    cfg_dir="${GCLOUD_CONFIG_DIR}/${PROJECT}.${ZONE}.${CLUSTER_NAME}"
+    auth_config=(
+      "--auth_config=${cfg_dir}/kubernetes_auth"
+      "--cert_dir=${cfg_dir}"
+    )
+  elif [[ "${KUBERNETES_PROVIDER}" == "gce" ]]; then
+    auth_config=(
+      "--kubeconfig=${HOME}/.kube/.kubeconfig"
+    )
+  elif [[ "${KUBERNETES_PROVIDER}" == "aws" ]]; then
+    auth_config=(
+      "--auth_config=${HOME}/.kube/${INSTANCE_PREFIX}/kubernetes_auth"
+    )
+  elif [[ "${KUBERNETES_PROVIDER}" == "conformance_test" ]]; then
+    auth_config=(
+      "--auth_config=${KUBERNETES_CONFORMANCE_TEST_AUTH_CONFIG:-}"
+      "--cert_dir=${KUBERNETES_CONFORMANCE_TEST_CERT_DIR:-}"
+    )
+  else
+    auth_config=()
+  fi
+
+  if [[ "$KUBERNETES_PROVIDER" == "libvirt-coreos" ]]; then
+    host="http://${KUBE_MASTER_IP-}:8080"
+  else
+    host="https://${KUBE_MASTER_IP-}"
+  fi
 else
-  host="https://${KUBE_MASTER_IP-}"
+  echo "Conformance Test. No cloud-provider-specific preparation."
+  KUBERNETES_PROVIDER=""
+  auth_config=(
+    "--auth_config=${AUTH_CONFIG:-}"
+    "--cert_dir=${CERT_DIR:-}"
+  )
+  host="https://${KUBE_MASTER_IP-}"
 fi

 # Use the kubectl binary from the same directory as the e2e binary.
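The provider-to-auth-flags dispatch in this hunk can be sketched standalone. The following is a simplified illustration (the function `select_auth_config` is a hypothetical name, only three branches are shown, and the paths are the ones the diff uses):

```shell
# Simplified sketch of how ginkgo-e2e.sh chooses auth flags per provider.
# select_auth_config is an illustrative name, not part of the commit.
select_auth_config() {
  local provider="$1"
  case "${provider}" in
    vagrant)
      # Vagrant clusters use hard-coded auth in the user's home directory.
      echo "--auth_config=${HOME}/.kubernetes_vagrant_auth"
      ;;
    gce)
      # GCE reads the default kubeconfig.
      echo "--kubeconfig=${HOME}/.kube/.kubeconfig"
      ;;
    conformance_test)
      # Conformance runs take auth entirely from environment variables.
      echo "--auth_config=${KUBERNETES_CONFORMANCE_TEST_AUTH_CONFIG:-}"
      ;;
    *)
      # Unknown provider: no auth flags.
      echo ""
      ;;
  esac
}

select_auth_config gce
```

Keeping the dispatch in one place means a new provider (such as the `conformance_test` pseudo-provider added here) only needs one new branch rather than changes scattered through the script.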