* Bump etcd to v3.5.4-k3s1
* Fix issue with datastore corruption on cluster-reset
* Disable unnecessary components during cluster reset
Disable control-plane components and the tunnel setup during
cluster-reset, even when not doing a restore. This reduces the amount of
log clutter during cluster reset/restore, making any errors encountered
more obvious.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
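For reference, the invocations this affects (flags as documented for k3s; the snapshot path is a placeholder):
```bash
# Reset the etcd cluster to a single member, without restoring a snapshot:
k3s server --cluster-reset

# Or reset while restoring from a snapshot:
k3s server --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot-name>
```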
This requires a further set of gofmt -s improvements to the
code, but nothing major. golangci-lint 1.45.2 brings Go 1.18
support, which might be needed in the future.
Signed-off-by: Dirk Müller <dirk@dmllr.de>
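For reference, the commands in question (standard gofmt/golangci-lint usage; no project-specific flags implied):
```bash
# Apply the simplification rewrites that gofmt -s reports, in place:
gofmt -s -w .

# Run the upgraded linter across all packages:
golangci-lint run ./...
```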
This change allows defining two cluster CIDRs for compatibility with
Kubernetes dual-stack, with the assumption that the two CIDRs are
IPv4 and IPv6.
It does that by leveraging changes in our kube-router fork, in the
following downstream release:
https://github.com/k3s-io/kube-router/releases/tag/v1.3.2%2Bk3s
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
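As an illustration, assuming the conventional IPv4-then-IPv6 ordering (the prefixes below are examples only):
```bash
# Two comma-separated cluster CIDRs, IPv4 first, then IPv6:
k3s server \
  --cluster-cidr=10.42.0.0/16,2001:cafe:42:0::/56 \
  --service-cidr=10.43.0.0/16,2001:cafe:42:1::/112
```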
Ideally we'd have fully fleshed out support for it (i.e. #5011), but
that's a potentially breaking change and is taking a little while to merge.
This is a much simpler change which won't break anything, but will allow
a "Type": "wireguard" reference in the "--flannel-conf" custom config
file to work.
Signed-off-by: Euan Kemp <euank@euank.com>
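A minimal sketch, assuming the standard flannel net-conf JSON format; the file path is arbitrary:
```bash
# Write a custom flannel config selecting the wireguard backend,
# then point k3s at it via --flannel-conf:
cat > /etc/k3s/flannel-wireguard.json <<'EOF'
{
  "Network": "10.42.0.0/16",
  "Backend": {
    "Type": "wireguard"
  }
}
EOF

k3s server --flannel-conf=/etc/k3s/flannel-wireguard.json
```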
Before this change, we were copying a part of the kube-router code into
the pkg/agent/netpol directory with modifications, the biggest of which
was consuming the k3s node config instead of the kube-router config.
However, that approach made it hard to track new upstream versions.
It's possible to use kube-router as a library, which seems like a better
approach.
Instead of modifying the kube-router network policy controller to consume
k3s configuration, this change just converts the k3s node config into
a kube-router config. All kube-router functionality except netpol
remains disabled.
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Manuel Buil <mbuil@suse.com>
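The refactor is meant to be behavior-neutral; as a quick sanity check (illustrative, not part of the change), any stock NetworkPolicy should still be enforced by the embedded controller:
```bash
# Apply a default-deny ingress policy; kube-router's netpol controller
# implements the standard Kubernetes NetworkPolicy API.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```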
Since we now start the server's agent sooner and in the background, we
may need to wait longer than 30 seconds for the apiserver to become
ready on downstream projects such as RKE2.
Since this essentially just serves as an analogue for the server's
apiReady channel, there's little danger in setting it to something
relatively high.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
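The wait itself is internal, but its effect is roughly that of polling the apiserver readiness endpoint with a longer deadline. A rough shell analogue, with an assumed (not the actual) 300-second timeout:
```bash
# Poll the apiserver's /readyz endpoint until it reports ready or the
# deadline passes. The timeout value here is illustrative only.
timeout=300
until kubectl get --raw /readyz >/dev/null 2>&1; do
  timeout=$((timeout - 5))
  [ "$timeout" -le 0 ] && echo "apiserver not ready" && exit 1
  sleep 5
done
echo "apiserver ready"
```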
* Update the default containerd config template with support for adding extra container runtimes. Add logic to discover nvidia container runtimes installed via the gpu operator or package manager.
Signed-off-by: Joe Kralicky <joe.kralicky@suse.com>
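For illustration, once a runtime has been discovered, workloads can select it through a RuntimeClass; the `nvidia` handler name and binary check below are assumptions based on common packaging, not taken from this change:
```bash
# Verify the runtime binary installed by the gpu operator or package
# manager is on the PATH:
command -v nvidia-container-runtime

# Expose the discovered runtime to pods via a RuntimeClass whose
# handler matches the registered runtime name:
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
EOF
```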