If the user points S3 backups at a bucket containing other files, those
file names may not be valid configmap keys.
For example, RKE1 generates backup files with names like
`s3-c-zrjnb-rs-6hxpk_2022-05-05T12:05:15Z.zip`; the colons in the
timestamp portion of the name are not allowed in configmap keys.
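A minimal Go sketch of the key-format constraint that the RKE1 name above violates; the regex mirrors the documented configmap key rule (alphanumerics, '-', '_' and '.') and is not the exact validation code k3s calls:
```
package main

import (
    "fmt"
    "regexp"
)

// Configmap keys may only contain alphanumerics, '-', '_' and '.'.
// Illustrative only; not the validation code k3s actually uses.
var configMapKeyRE = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

func main() {
    for _, name := range []string{
        "etcd-snapshot-1653955200",                     // valid key
        "s3-c-zrjnb-rs-6hxpk_2022-05-05T12:05:15Z.zip", // colons make this invalid
    } {
        fmt.Printf("%-46s valid=%v\n", name, configMapKeyRE.MatchString(name))
    }
}
```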
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
From https://github.com/urfave/cli/pull/1383 :
> This removes the resulting binary dependency on cpuguy83/md2man and
> russross/blackfriday (and a few more packages imported by those),
> which saves more than 400 KB (more than 300 KB
> once stripped) from the resulting binary.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Move the startup hooks WaitGroup into a runtime pointer and check it before notifying systemd (see the sketch below)
* Switch the default systemd notification to server
* Add a 1-second delay to allow etcd to write to disk
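A rough sketch of the pattern in the first item, using hypothetical names (the real k3s runtime struct and startup flow differ); the go-systemd daemon package is the standard way to send the readiness notification:
```
package main

import (
    "sync"
    "time"

    "github.com/coreos/go-systemd/v22/daemon"
)

// Hypothetical stand-in for the k3s runtime object; the real struct differs.
type runtime struct {
    StartupHooksWg *sync.WaitGroup // nil until the startup hooks are registered
}

// notifyWhenReady only tells systemd we are ready once the hook WaitGroup
// exists and all hooks have completed.
func notifyWhenReady(rt *runtime) {
    if rt.StartupHooksWg == nil {
        return
    }
    rt.StartupHooksWg.Wait()
    time.Sleep(time.Second) // short delay so etcd can finish writing to disk
    daemon.SdNotify(false, daemon.SdNotifyReady)
}

func main() {
    notifyWhenReady(&runtime{StartupHooksWg: &sync.WaitGroup{}})
}
```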
Signed-off-by: Derek Nola <derek.nola@suse.com>
This parameter controls which namespace the klipper-lb pods will be
created in. It defaults to kube-system so that k3s does not create a new
namespace by default. It can be changed if users wish to isolate the pods
and apply some policy to them.
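Assuming the parameter is surfaced as a server flag, usage might look like `--servicelb-namespace=klipper-system`; the flag name and value here are illustrative and not taken from this change.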
Signed-off-by: Darren Shepherd <darren@acorn.io>
The baseline PodSecurity profile prevents klipper-lb pods from running.
Since klipper-lb pods are created in the same namespace as the Service,
users cannot use the PodSecurity baseline profile in combination with
the k3s servicelb.
The solution is to move all klipper-lb pods into a klipper-lb-system
namespace, where their security policy can be different and uniformly
managed.
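With the pods isolated in their own namespace, that namespace can carry a more permissive Pod Security admission label (for example `pod-security.kubernetes.io/enforce: privileged`) while application namespaces keep enforcing `baseline`.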
Signed-off-by: Darren Shepherd <darren@acorn.io>
* Add Rancher install script, taints to cp/etcd roles
* Revert to generic/ubuntu2004; libvirt networking is unreliable on openSUSE
* Add support for Alpine
* Rancher deployment script
* Refactor installType into function
* Cleanup splitserver test
Signed-off-by: Derek Nola <derek.nola@suse.com>
* New startup integration test
* Add testing section to PR template
* Move helper functions to direct k8s client calls
Signed-off-by: Derek Nola <derek.nola@suse.com>
The control-plane context handles requests that go outside the cluster;
those requests should not be sent to the proxy.
In agent mode, we don't watch pods and just direct-dial any request for
a non-node address, which is the original behavior.
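A simplified sketch of that dial decision, with hypothetical names rather than the actual k3s tunnel code:
```
package main

import (
    "context"
    "net"
)

// agentDialer illustrates the agent-mode behavior described above; it is
// not the real k3s proxy code.
type agentDialer struct {
    nodeAddrs  map[string]bool                                          // known node addresses
    tunnelDial func(ctx context.Context, addr string) (net.Conn, error) // dial via the tunnel
    direct     net.Dialer
}

// DialContext routes connections to node addresses through the tunnel and
// direct-dials everything else; control-plane traffic never reaches here.
func (d *agentDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
    host, _, err := net.SplitHostPort(addr)
    if err != nil {
        return nil, err
    }
    if d.nodeAddrs[host] {
        return d.tunnelDial(ctx, addr)
    }
    return d.direct.DialContext(ctx, network, addr)
}

func main() {
    d := &agentDialer{
        nodeAddrs: map[string]bool{"10.0.0.5": true},
        // placeholder tunnel dialer; the real one uses the websocket tunnel
        tunnelDial: func(ctx context.Context, addr string) (net.Conn, error) {
            return (&net.Dialer{}).DialContext(ctx, "tcp", addr)
        },
    }
    _, _ = d.DialContext(context.Background(), "tcp", "10.0.0.5:10250")
}
```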
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Watching pods appears to be the most reliable way to ensure that the
proxy routes and authorizes connections.
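A condensed sketch of the idea using client-go informers and hypothetical names (the real k3s code is more involved): watch pods and keep their IPs in a set that the proxy consults when authorizing a connection.
```
package main

import (
    "fmt"
    "sync"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

// podIPSet tracks the pod IPs the proxy is allowed to route to.
type podIPSet struct {
    mu  sync.Mutex
    ips map[string]bool
}

func (s *podIPSet) set(ip string, allowed bool) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if allowed {
        s.ips[ip] = true
    } else {
        delete(s.ips, ip)
    }
}

func main() {
    // Kubeconfig path is illustrative; in-process code would reuse the server's client config.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    allowed := &podIPSet{ips: map[string]bool{}}

    factory := informers.NewSharedInformerFactory(client, time.Minute)
    factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            if pod, ok := obj.(*corev1.Pod); ok && pod.Status.PodIP != "" {
                allowed.set(pod.Status.PodIP, true)
            }
        },
        UpdateFunc: func(_, obj interface{}) {
            if pod, ok := obj.(*corev1.Pod); ok && pod.Status.PodIP != "" {
                allowed.set(pod.Status.PodIP, true)
            }
        },
        DeleteFunc: func(obj interface{}) {
            if pod, ok := obj.(*corev1.Pod); ok && pod.Status.PodIP != "" {
                allowed.set(pod.Status.PodIP, false)
            }
        },
    })

    stop := make(chan struct{})
    factory.Start(stop)
    factory.WaitForCacheSync(stop)
    fmt.Println("watching pod IPs for proxy authorization")
    <-stop
}
```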
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Allow the flannel backend to be specified as
backend=option=val,option2=val2 to select a given backend with extra
options (an example follows the list).
In particular, this adds the following options to the wireguard-native
backend:
* Mode - flannel wireguard tunnel mode
* PersistentKeepaliveInterval - wireguard persistent keepalive interval
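For example, selecting the backend with both options set (the values shown are illustrative): `--flannel-backend=wireguard-native=Mode=auto,PersistentKeepaliveInterval=25`.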
Signed-off-by: Sjoerd Simons <sjoerd@collabora.com>
This reverts commit aa9065749c.
Setting dual-stack node-ip does not work when --cloud-provider is set
to anything, including 'external'. Just set node-ip to the first IP, and
let the cloud provider add the other address.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
The cloud-provider arg is deprecated and cannot be set to anything other than 'external', but it must still be set, or node addresses are not configured properly.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
This entry was not in the correct format, which resulted in errors for
some operations, such as:
```
$ go mod download
go mod download: github.com/k3s-io/etcd@v3.4.18-k3s1+incompatible: invalid version: module contains a go.mod file, so module path must match major version ("github.com/k3s-io/etcd/v3")
```
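(The `+incompatible` suffix is only valid for modules that do not ship a go.mod file; because upstream etcd does, any module path for it at v3.x must end in `/v3`.)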
`go build` did not complain, so releases still worked, but some build
processes want to fetch all dependencies up front and then compile offline.
The extra etcd entry does not appear to actually be used, so it seems safe
to delete it.
A few other diffs in the go.sum file are from a `go mod tidy`.
Signed-off-by: Euan Kemp <euank@euank.com>
* Remove objects when removed from manifests
If a user puts a file in /var/lib/rancher/k3s/server/manifests/ then the
objects contained therein are deployed to the cluster. If the objects
are later removed from that file, they are not removed from the cluster.
This change tracks the GVKs of the objects in each file and removes
objects from the cluster when they are removed from the file.
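A rough sketch of the bookkeeping with hypothetical types (the real deploy controller uses its own stores and apply machinery): remember which object keys each manifest file produced, and delete whatever disappears on the next pass.
```
package main

import "fmt"

// objectKey identifies a deployed object by GVK plus namespace/name.
type objectKey struct {
    gvk, namespace, name string
}

// manifestTracker is an illustration only; it is not the actual k3s
// deploy controller.
type manifestTracker struct {
    seen map[string]map[objectKey]bool // objects seen per manifest file on the last pass
}

// reconcile compares the objects currently in a file against the previous
// pass and returns the keys that should be deleted from the cluster.
func (t *manifestTracker) reconcile(file string, current []objectKey) []objectKey {
    next := map[objectKey]bool{}
    for _, k := range current {
        next[k] = true
    }
    var removed []objectKey
    for k := range t.seen[file] {
        if !next[k] {
            removed = append(removed, k)
        }
    }
    t.seen[file] = next
    return removed
}

func main() {
    t := &manifestTracker{seen: map[string]map[objectKey]bool{}}
    file := "/var/lib/rancher/k3s/server/manifests/example.yaml"
    t.reconcile(file, []objectKey{
        {"apps/v1/Deployment", "default", "web"},
        {"v1/Service", "default", "web"},
    })
    // The Service was dropped from the file, so it should now be removed from the cluster.
    fmt.Println(t.reconcile(file, []objectKey{{"apps/v1/Deployment", "default", "web"}}))
}
```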
Signed-off-by: Donnie Adams <donnie.adams@suse.com>