Allow bootstrapping with kubeadm bootstrap token strings or existing
Kubelet certs. This allows agents to join the cluster using kubeadm
bootstrap tokens, as created with the `k3s token create` command.
When the token expires or is deleted, agents can successfully restart by
authenticating with their kubelet certificate via node authentication.
If the token is gone and the node is deleted from the cluster, node auth
will fail and they will be prevented from rejoining the cluster until
provided with a valid token.
Servers still must be bootstrapped with the static cluster token, as
they will need to know it to decrypt the bootstrap data.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
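A rough sketch of the agent-side fallback described above, for illustration only — the function and behavior here are hypothetical simplifications, not the actual k3s bootstrap code (the paths follow the usual k3s data-dir layout):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// chooseAgentCredential is a hypothetical illustration of the join/restart flow:
// prefer an already-issued kubelet certificate (node authentication), and only
// fall back to the bootstrap token when no certificate exists yet.
func chooseAgentCredential(tokenFile, kubeletCert string) (string, error) {
	if _, err := os.Stat(kubeletCert); err == nil {
		// Existing cert: the agent can restart even if the token expired or was deleted.
		return "cert:" + kubeletCert, nil
	}
	token, err := os.ReadFile(tokenFile)
	if err != nil {
		return "", fmt.Errorf("no kubelet cert and no bootstrap token available: %w", err)
	}
	return "token:" + strings.TrimSpace(string(token)), nil
}

func main() {
	cred, err := chooseAgentCredential(
		"/var/lib/rancher/k3s/agent/token",
		"/var/lib/rancher/k3s/agent/client-kubelet.crt",
	)
	fmt.Println(cred, err)
}
```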
* General cleanup of test-helpers functions to address CI failures
* Install awscli in test image
* Log containerd output to file even when running with --debug
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
ServiceLB now requires this module, but it will not get autoloaded by the kubelet if the host is using nftables.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Using the node external IP address for all CNI traffic is a breaking change from previous versions; we should make it opt-in for distributed clusters instead of the default behavior.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Consolidate data dir flag
* Group cluster flags together
* Reorder and group agent flags
* Add additional info around vmodule flag
* Hide deprecated flags, and add warning about their removal
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Use INVOCATION_ID to detect execution under systemd, since as of a9b5a1933f NOTIFY_SOCKET is now cleared by the server code.
* Set the unit type to notify by default for both server and agent, which is what Rancher-managed installs have done for a while.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
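A minimal sketch of the systemd detection described above, assuming only that systemd exports INVOCATION_ID to units it starts (this is not the actual k3s code):

```go
package main

import (
	"fmt"
	"os"
)

// runningUnderSystemd checks INVOCATION_ID, which systemd sets for every unit
// it starts, so supervision can be detected even after NOTIFY_SOCKET has been
// consumed or cleared.
func runningUnderSystemd() bool {
	return os.Getenv("INVOCATION_ID") != ""
}

func main() {
	if runningUnderSystemd() {
		fmt.Println("running under systemd; readiness notification will be sent")
	} else {
		fmt.Println("not running under systemd")
	}
}
```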
* Move the startup hooks WaitGroup into a runtime pointer, and check it before notifying systemd
* Switch default systemd notification to server
* Add 1 sec delay to allow etcd to write to disk
Signed-off-by: Derek Nola <derek.nola@suse.com>
The control-plane context handles requests outside the cluster and
should not be sent to the proxy.
In agent mode, we don't watch pods and just direct-dial any request for
a non-node address, which is the original behavior.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Watching pods appears to be the most reliable way to ensure that the
proxy routes and authorizes connections.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
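For illustration, a standard client-go shared-informer watch on pods looks roughly like the sketch below; this is the general pattern, not the exact k3s agent proxy code:

```go
package proxywatch

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// watchPods keeps an in-memory view of pod addresses that a proxy could use
// to route and authorize tunneled connections.
func watchPods(ctx context.Context, cfg *rest.Config) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	factory := informers.NewSharedInformerFactory(client, time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			log.Printf("pod %s/%s has IP %s", pod.Namespace, pod.Name, pod.Status.PodIP)
		},
	})
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced)
	return nil
}
```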
Allow the flannel backend to be specified as
backend=option=val,option2=val2 to select a given backend with extra options.
In particular this adds the following options to the wireguard-native
backend:
* Mode - flannel wireguard tunnel mode
* PersistentKeepaliveInterval - wireguard persistent keepalive interval
Signed-off-by: Sjoerd Simons <sjoerd@collabora.com>
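To illustrate the `backend=option=val,option2=val2` format described above, a parser might look like the sketch below; the function and the option values shown are hypothetical, not the actual k3s flag handling:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBackend splits a "backend=option=val,option2=val2" string into the
// backend name and its extra options. Illustrative only.
func parseBackend(value string) (string, map[string]string) {
	opts := map[string]string{}
	parts := strings.SplitN(value, "=", 2)
	backend := parts[0]
	if len(parts) == 1 {
		return backend, opts
	}
	for _, kv := range strings.Split(parts[1], ",") {
		if k, v, ok := strings.Cut(kv, "="); ok {
			opts[k] = v
		}
	}
	return backend, opts
}

func main() {
	// Option names follow the bullets above; the values are made up.
	backend, opts := parseBackend("wireguard-native=Mode=separate,PersistentKeepaliveInterval=25")
	fmt.Println(backend, opts)
}
```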
Also update all uses of 'go get' to 'go install', update CI tooling for
1.18 compatibility, and gofmt everything so lint passes.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Reduces code complexity a bit and ensures we don't have to handle closed watch channels on our own
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Before this change, kube-router was always assuming that IPv4 is
enabled, which is not the case in IPv6-only clusters. To enable network
policies in IPv6-only clusters, we need to explicitly let kube-router know when
to disable IPv4.
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
* Bump etcd to v3.5.4-k3s1
* Fix issue with datastore corruption on cluster-reset
* Disable unnecessary components during cluster reset
Disable control-plane components and the tunnel setup during
cluster-reset, even when not doing a restore. This reduces the amount of
log clutter during cluster reset/restore, making any errors encountered
more obvious.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
This requires a further set of gofmt -s improvements to the
code, but nothing major. golangci-lint 1.45.2 brings golang 1.18
support which might be needed in the future.
Signed-off-by: Dirk Müller <dirk@dmllr.de>
This change allows defining two cluster CIDRs for compatibility with
Kubernetes dual-stack, with the assumption that the two CIDRs are an IPv4
and an IPv6 range.
It does so by leveraging changes in our kube-router fork, via the
following downstream release:
https://github.com/k3s-io/kube-router/releases/tag/v1.3.2%2Bk3s
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
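A small sketch of that assumption — splitting a comma-separated pair of CIDRs into an IPv4 and an IPv6 range. The function name and example values are illustrative, not the actual k3s/kube-router validation code:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// splitClusterCIDRs accepts a comma-separated CIDR list and sorts it into an
// IPv4 range and an IPv6 range, mirroring the dual-stack assumption above.
func splitClusterCIDRs(flag string) (string, string, error) {
	var ipv4, ipv6 string
	for _, c := range strings.Split(flag, ",") {
		c = strings.TrimSpace(c)
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return "", "", err
		}
		if ip.To4() != nil {
			ipv4 = c
		} else {
			ipv6 = c
		}
	}
	return ipv4, ipv6, nil
}

func main() {
	v4, v6, err := splitClusterCIDRs("10.42.0.0/16,2001:cafe:42::/56")
	fmt.Println(v4, v6, err)
}
```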
Ideally we'd have fully fleshed out support for it (i.e. #5011), but
that's a potentially breaking change and taking a little while to merge.
This is a much simpler change which won't break anything, but will allow
a "Type": "wireguard" reference in the "--flannel-conf" custom config
file to work.
Signed-off-by: Euan Kemp <euank@euank.com>
Before this change, we were copying a part of the kube-router code into the
pkg/agent/netpol directory with modifications, the biggest of which was
consuming the k3s node config instead of the kube-router config.
However, that approach made it hard to follow new upstream versions.
It's possible to use kube-router as a library, so that seems like a better
way to do it.
Instead of modifying the kube-router network policy controller to consume
the k3s configuration, this change just converts the k3s node config into
a kube-router config. All functionality of kube-router except netpol
remains disabled.
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Manuel Buil <mbuil@suse.com>
Since we now start the server's agent sooner and in the background, we
may need to wait longer than 30 seconds for the apiserver to become
ready on downstream projects such as RKE2.
Since this essentially just serves as an analogue for the server's
apiReady channel, there's little danger in setting it to something
relatively high.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
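As a rough sketch of what such a readiness wait can look like with a higher timeout; the endpoint, timeout values, and TLS handling here are illustrative, not the values actually used by k3s or RKE2:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls a readiness endpoint until it returns 200 or the
// overall timeout expires, serving the same role as an apiReady channel.
func waitForAPIServer(ctx context.Context, url string) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver not ready: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	fmt.Println(waitForAPIServer(context.Background(), "https://127.0.0.1:6443/readyz"))
}
```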
* Update the default containerd config template with support for adding extra container runtimes. Add logic to discover nvidia container runtimes installed via the GPU operator or package manager.
Signed-off-by: Joe Kralicky <joe.kralicky@suse.com>
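A hedged sketch of the discovery idea: look for well-known NVIDIA runtime binaries on PATH and register whatever is found. The binary names are commonly shipped ones, but this is an illustration rather than the actual template logic:

```go
package main

import (
	"fmt"
	"os/exec"
)

// findNvidiaRuntimes returns the paths of NVIDIA runtime binaries present on
// the host, keyed by binary name.
func findNvidiaRuntimes() map[string]string {
	found := map[string]string{}
	for _, name := range []string{"nvidia-container-runtime", "nvidia-container-runtime-experimental"} {
		if path, err := exec.LookPath(name); err == nil {
			found[name] = path
		}
	}
	return found
}

func main() {
	fmt.Println(findNvidiaRuntimes())
}
```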
Also honor node-ip when adding the node address to the SAN list, instead
of hardcoding the autodetected IP address.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Reset load balancer state during restoration
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Updated the logic to handle extra args that are passed with pre-existing hyphens. The test was updated to add the additional case of pre-existing hyphens. The method name was also refactored based on previous feedback.
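A minimal sketch of that normalization, with a hypothetical function name — accept the arg with or without leading hyphens and emit a consistently prefixed flag:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeArg strips any pre-existing leading hyphens and re-adds the "--"
// prefix, so "feature-gates=..." and "--feature-gates=..." are treated alike.
func normalizeArg(arg string) string {
	return "--" + strings.TrimLeft(arg, "-")
}

func main() {
	fmt.Println(normalizeArg("feature-gates=Foo=true"))   // --feature-gates=Foo=true
	fmt.Println(normalizeArg("--feature-gates=Foo=true")) // --feature-gates=Foo=true
}
```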
* Commit of new etcd snapshot integration tests.
* Updated integration github action to not run on doc changes.
* Update Drone runner to only run unit tests
Signed-off-by: dereknola <derek.nola@suse.com>
Problem:
tcpproxy repository has been moved out of the github.com/google org to github.com/inetaf.
Solution:
Switch to the new repo.
FYI: https://godoc.org/inet.af/tcpproxy/
Signed-off-by: William Zhang <warmchang@outlook.com>
This changes the crictl template to fix issues with the socket information. It also addresses a typo in the socket address. Lastly, it tweaks configuration settings that either aren't required or had incorrect logic.
Signed-off-by: Jamie Phillips <jamie.phillips@suse.com>
spelling
* Changed containerd image leases from context lifespan to permanent. Delete any existing leases owned by k3s on server startup
Signed-off-by: dereknola <derek.nola@suse.com>
* Changed containerd image leases from 24h to permanent. Delete any existing leases on server startup
Signed-off-by: dereknola <derek.nola@suse.com>
The strings package has a dedicated function to replace all matches: strings.ReplaceAll. We should use it instead of strings.Replace(s, old, new, -1).
Signed-off-by: Manuel Buil <mbuil@suse.com>
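For reference, the two calls below are equivalent; `strings.ReplaceAll` simply reads better:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "server,agent,server"
	// Replace with n = -1 replaces every occurrence, same as ReplaceAll.
	fmt.Println(strings.Replace(s, "server", "etcd", -1))
	fmt.Println(strings.ReplaceAll(s, "server", "etcd"))
}
```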
* Move registries.yaml handling out to rancher/wharfie
* Add system-default-registry support
* Add CLI support for kubelet image credential providers
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Update Kubernetes to v1.21.0
* Update to golang v1.16.2
* Update dependent modules to track with upstream
* Switch to upstream flannel
* Track changes to upstream cloud-controller-manager and FeatureGates
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Now rootless mode can be used with cgroup v2 resource limitations.
A pod is executed in a cgroup like "/user.slice/user-1001.slice/user@1001.service/k3s-rootless.service/kubepods/podd0eb6921-c81a-4214-b36c-d3b9bb212fac/63b5a253a1fd4627da16bfce9bec58d72144cf30fe833e0ca9a6d60ebf837475".
This is accomplished by running `kubelet` in a cgroup namespace, and enabling the `cgroupfs` driver for the cgroup hierarchy delegated by systemd.
To enable cgroup v2 resource limitation, `k3s server --rootless` needs to be launched as a `systemctl --user` service.
Please see the comment lines in `k3s-rootless.service` for the usage.
Running `k3s server --rootless` via a terminal is not supported.
When it really needs to be launched via a terminal, `systemd-run --user -p Delegate=yes --tty` needs to be prepended to create a systemd scope.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Support repository regex rewrite rules when fetching image content.
Example configuration:
```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  "docker.io":
    endpoint:
      - "https://registry-1.docker.io/v2"
    rewrite:
      "^library/alpine$": "my-org/alpine"
```
This will instruct k3s containerd to fetch content for `alpine` images
from `docker.io/my-org/alpine` instead of the default
`docker.io/library/alpine` locations.
Signed-off-by: Jacob Blain Christen <jacob@rancher.com>
get() is called in a loop until client configuration is successfully
retrieved. Each iteration will try to configure the apiserver proxy,
which will in turn create a new load balancer. Skip creating a new
load balancer if we already have one.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
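A hypothetical sketch of that guard, with invented type and function names (the real k3s proxy code is structured differently): reuse the existing load balancer on later iterations of the retry loop instead of creating a new one.

```go
package main

import (
	"context"
	"fmt"
)

type loadBalancer struct{ serverURL string }

func newLoadBalancer(_ context.Context, url string) (*loadBalancer, error) {
	return &loadBalancer{serverURL: url}, nil
}

func (lb *loadBalancer) update(url string) { lb.serverURL = url }

type proxy struct{ lb *loadBalancer }

// setSupervisorURL reuses an existing load balancer if one was already
// created by a previous iteration of the get() retry loop.
func (p *proxy) setSupervisorURL(ctx context.Context, url string) error {
	if p.lb != nil {
		p.lb.update(url)
		return nil
	}
	lb, err := newLoadBalancer(ctx, url)
	if err != nil {
		return err
	}
	p.lb = lb
	return nil
}

func main() {
	p := &proxy{}
	for i := 0; i < 3; i++ {
		_ = p.setSupervisorURL(context.Background(), "https://127.0.0.1:6443")
	}
	fmt.Println(p.lb.serverURL)
}
```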
If the port wanted by the client load balancer is in TIME_WAIT, startup
will fail. Set SO_REUSEPORT so that it can be listened on again
immediately.
The configurable Listen call wants a context, so plumb that through as
well.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
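The general pattern looks like the sketch below — a `net.ListenConfig` whose Control hook sets SO_REUSEPORT before bind. The address is illustrative; this is not the exact k3s listener code:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// reusePortListen sets SO_REUSEPORT on the socket before bind, so a port
// still in TIME_WAIT can be listened on again immediately.
func reusePortListen(ctx context.Context, addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, conn syscall.RawConn) error {
			var sockErr error
			if err := conn.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(ctx, "tcp", addr)
}

func main() {
	ln, err := reusePortListen(context.Background(), "127.0.0.1:6444")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
```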
* Always use static ports for the load-balancers
This fixes an issue where RKE2 kube-proxy daemonset pods were failing to
communicate with the apiserver when RKE2 was restarted because the
load-balancer used a different port every time it started up.
This also changes the apiserver load-balancer port to be 1 below the
supervisor port instead of 1 above it. This makes the apiserver port
consistent at 6443 across servers and agents on RKE2.
Additional fixes below were required to successfully test and use this change
on etcd-only nodes.
* Actually add lb-server-port flag to CLI
* Fix nil pointer when starting server with --disable-etcd but no --server
* Don't try to use full URI as initial load-balancer endpoint
* Fix etcd load-balancer pool updates
* Update dynamiclistener to fix cert updates on etcd-only nodes
* Handle recursive initial server URL in load balancer
* Don't run the deploy controller on etcd-only nodes