Testing Standards in K3s

Testing in K3s comes in 4 forms:

  • Unit
  • Integration
  • Smoke
  • End-to-End (E2E)

This document will explain when each test should be written and how each test should be generated, formatted, and run.

Note: all shell commands given are relative to the root k3s repo directory.


Unit Tests

Unit tests should be written when a component or function of a package needs testing. Unit tests should be used for "white box" testing.

Framework

All unit tests in K3s follow a Table Driven Test style. Specifically, K3s unit tests are automatically generated using the gotests tool. This tool is built into the Go vscode extension, has documented integrations for other popular editors, and can also be run from the command line. Additionally, a set of custom templates is provided to extend the generated tests' functionality. To use these templates, call:

gotests --template_dir=<PATH_TO_K3S>/contrib/gotests_templates

Or in vscode, edit the Go extension setting Go: Generate Tests Flags
and add --template_dir=<PATH_TO_K3S>/contrib/gotests_templates as an item.
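
For reference, gotests produces tests in the standard table-driven shape sketched below. This is purely illustrative: the Add function and the case fields are invented for this example, and the K3s custom templates layer additional functionality on top of this basic form.

package example

import "testing"

// Add is a stand-in function under test, used only for illustration.
func Add(a, b int) int { return a + b }

func Test_UnitAdd(t *testing.T) {
	type args struct {
		a int
		b int
	}
	tests := []struct {
		name string
		args args
		want int
	}{
		{name: "adds two positives", args: args{a: 1, b: 2}, want: 3},
	}
	for _, tt := range tests {
		// Each table entry runs as its own named subtest.
		t.Run(tt.name, func(t *testing.T) {
			if got := Add(tt.args.a, tt.args.b); got != tt.want {
				t.Errorf("Add() = %v, want %v", got, tt.want)
			}
		})
	}
}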

To facilitate unit test creation, see tests/util/runtime.go helper functions.

Format

All unit tests should be placed within the package of the file they test.
All unit test files should be named: <FILE_UNDER_TEST>_test.go.
All unit test functions should be named: Test_Unit<FUNCTION_TO_TEST> or Test_Unit<RECEIVER>_<METHOD_TO_TEST>.
See the etcd unit test as an example.

Running

go test ./pkg/... -run Unit

Note: As unit tests call functions directly, they are the primary drivers of K3s's code coverage metric.
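
Because unit tests exercise functions directly, a coverage profile can be produced alongside them with Go's standard tooling, for example:

go test ./pkg/... -run Unit -coverprofile=cover.out
go tool cover -func=cover.out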


Integration Tests

Integration tests should be used to test a specific functionality of K3s that exists across multiple Go packages, either via exported function calls or, more often, CLI commands. Integration tests should be used for "black box" testing.

Framework

All integration tests in K3s follow a Behavior Driven Development (BDD) style. Specifically, K3s uses Ginkgo and Gomega to drive the tests.
To generate an initial test, the command ginkgo bootstrap can be used.

To facilitate K3s CLI testing, see tests/util/cmd.go helper functions.
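
For orientation, a bootstrapped suite has roughly the shape sketched below. This is a minimal illustration, not a real K3s test: the package name and spec text are invented, and the import paths assume Ginkgo v2.

package integration

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// The Go test entry point hands control to the Ginkgo suite runner.
func Test_IntegrationExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Example Suite")
}

var _ = Describe("example", func() {
	It("starts a server and sees it become ready", func() {
		// A real test would start K3s and assert on its behavior here.
		Expect(true).To(BeTrue())
	})
})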

Format

All integration tests should be placed under tests/integration/<TEST_NAME>.
All integration test files should be named: <TEST_NAME>_int_test.go.
All integration test functions should be named: Test_Integration<TEST_NAME>.
See the local storage test as an example.

Running

Integration tests can be run with no K3s cluster present; each test will spin up and tear down the K3s server it needs.
Note: Integration tests must be run as root. If running as a sudo user, prefix the commands below with sudo -E env "PATH=$PATH".

go test ./tests/integration/... -run Integration
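
For example, as a sudo user:

sudo -E env "PATH=$PATH" go test ./tests/integration/... -run Integration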

Additionally, to generate JUnit reporting for the tests, the Ginkgo CLI is used:

ginkgo --junit-report=result.xml ./tests/integration/...

Integration tests can be run on an existing single-node cluster via a compile-time flag; tests will be skipped if the server is not configured correctly.

go test -ldflags "-X 'github.com/k3s-io/k3s/tests/util.existingServer=True'" ./tests/integration/... -run Integration

Integration tests can also be run via a Sonobuoy plugin on an existing single-node cluster.

./scripts/build-tests-sonobuoy
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy run --plugin ./dist/artifacts/k3s-int-tests.yaml

Check the Sonobuoy status and retrieve the results:

sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy status
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy retrieve
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy results <TAR_FILE_FROM_RETRIEVE>

Smoke Tests

Smoke tests are defined under the tests/vagrant path at the root of this repository. The sub-directories therein contain fixtures for running simple clusters to assert correct behavior for "happy path" scenarios. These fixtures are mostly self-contained Vagrantfiles describing single-node installations that are easily spun up with Vagrant for the libvirt and virtualbox providers.

When adding new installer tests, please copy the prevailing style of the existing Vagrantfiles. Ideally, the boxes used for additional assertions will support the default virtualbox provider, which enables them to be used by our Github Actions workflows.

Framework

If you are new to Vagrant, HashiCorp has written some pretty decent introductory tutorials and docs; see the official Vagrant tutorials and documentation.

Plugins and Providers

The libvirt and vmware_desktop providers cannot be used without first installing the relevant plugins, which are vagrant-libvirt and vagrant-vmware-desktop, respectively. Much like the default virtualbox provider, these will do nothing useful without also installing the relevant server runtimes and/or client programs.
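
Assuming Vagrant itself is already installed, the plugins can be added with Vagrant's standard plugin command:

vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-vmware-desktop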

Environment Variables

These can be set on the CLI or exported before invoking Vagrant:

  • TEST_VM_CPUS (default 2)
    The number of vCPUs for the guest to use.
  • TEST_VM_MEMORY (default 2048)
    The number of megabytes of memory for the guest to use.
  • TEST_VM_BOOT_TIMEOUT (default 600)
    The time in seconds that Vagrant will wait for the machine to boot and be accessible.
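
For example, to give a guest extra resources for a single run (the values here are illustrative):

TEST_VM_CPUS=4 TEST_VM_MEMORY=4096 vagrant up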

Running

The Install Script tests can be run by changing to the fixture directory and invoking vagrant up, e.g.:

cd tests/vagrant/install/centos-8
vagrant up
# the following provisioners are optional. they do not run by default but are invoked
# explicitly by the github actions workflow to avoid certain timeout issues on slow runners
vagrant provision --provision-with=k3s-wait-for-node
vagrant provision --provision-with=k3s-wait-for-coredns
vagrant provision --provision-with=k3s-wait-for-local-storage
vagrant provision --provision-with=k3s-wait-for-metrics-server
vagrant provision --provision-with=k3s-wait-for-traefik
vagrant provision --provision-with=k3s-status
vagrant provision --provision-with=k3s-procps
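
When finished with a fixture, the VMs can be removed with Vagrant's standard teardown command:

vagrant destroy -f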

The Control Groups and Snapshotter tests require that the k3s binary is built at dist/artifacts/k3s. They are invoked similarly, i.e. vagrant up, but with different sets of named shell provisioners. Take a look at the individual Vagrantfiles and/or the Github Actions workflows that harness them to get an idea of how they can be invoked.
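
If the binary is not already present at dist/artifacts/k3s, it must first be produced by the repository's standard build (see the top-level README for prerequisites such as Docker); a typical invocation is:

make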


End-to-End (E2E) Tests

E2E tests cover multi-node K3s configuration and administration (bringup, update, teardown, etc.) across a wide range of operating systems. E2E tests are run nightly as part of K3s quality assurance (QA).

Framework

End-to-end tests utilize Ginkgo and Gomega like the integration tests, but rely on Vagrant to provide the underlying cluster configuration.

Currently tested operating systems are:

Format

All E2E tests should be placed under tests/e2e/<TEST_NAME>.
All E2E test functions should be named: Test_E2E<TEST_NAME>.
An E2E test consists of two parts:

  1. Vagrantfile: a Vagrantfile which describes and configures the VMs upon which the cluster and test will run
  2. <TEST_NAME>.go: a Go test file which calls vagrant up and controls the actual testing of the cluster

See the validate cluster test as an example.
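
To illustrate the shape of part 2, here is a minimal hypothetical sketch. It is not taken from the repository; real E2E tests use Ginkgo suites and shared helpers rather than raw exec calls as shown here.

package e2e

import (
	"os/exec"
	"testing"
)

// Test_E2EExample is an invented example of a test that drives Vagrant.
func Test_E2EExample(t *testing.T) {
	// Bring up the VMs described by the fixture's Vagrantfile.
	if out, err := exec.Command("vagrant", "up").CombinedOutput(); err != nil {
		t.Fatalf("vagrant up failed: %v\n%s", err, out)
	}
	// Tear the VMs down when the test finishes.
	defer exec.Command("vagrant", "destroy", "-f").Run()
	// Cluster assertions (node readiness, pod status, etc.) would go here.
}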

Running

Generally, E2E tests are run as a nightly Jenkins job for QA. They can still be run locally, but additional setup may be required. By default, all E2E tests are designed with libvirt as the underlying VM provider. Instructions for installing libvirt and its associated vagrant plugin, vagrant-libvirt, can be found here. VirtualBox is also supported as a backup VM provider.

Once setup is complete, all E2E tests can be run with:

go test -timeout=15m ./tests/e2e/... -run E2E

Tests can be run individually with:

go test -timeout=15m ./tests/e2e/validatecluster/... -run E2E
#or
go test -timeout=15m ./tests/e2e/... -run E2EClusterValidation

Additionally, to generate JUnit reporting for the tests, the Ginkgo CLI is used. Installation instructions can be found here.

To run all E2E tests and generate JUnit testing reports:

ginkgo --junit-report=result.xml ./tests/e2e/...

Note: The go test default timeout is 10 minutes, so the -timeout flag should be used. The Ginkgo default timeout is 1 hour, so no timeout flag is needed.

Contributing New Or Updated Tests


We gladly accept new and updated tests of all types. If you wish to create a new test or update an existing test, please submit a PR with a title that includes the words <NAME_OF_TEST> (Created/Updated).