If a key ends in "+", the value is appended to the previously found
values for that key. If a value is a string instead of a slice, it is
automatically converted to a slice of one string.
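A minimal sketch of that append behavior (function and key names here are illustrative, not the actual k3s code):
```go
package main

import "fmt"

// mergeValue applies a value for key into result, appending when the key
// ends in "+" and converting lone strings to single-element slices first.
func mergeValue(result map[string]interface{}, key string, value interface{}) {
	if s, ok := value.(string); ok {
		value = []string{s} // string -> slice of one string
	}
	if len(key) > 0 && key[len(key)-1] == '+' {
		key = key[:len(key)-1]
		prev, _ := result[key].([]string)
		vs, _ := value.([]string)
		result[key] = append(prev, vs...)
		return
	}
	result[key] = value
}

func main() {
	cfg := map[string]interface{}{}
	mergeValue(cfg, "node-label", "a=1")
	mergeValue(cfg, "node-label+", "b=2")
	fmt.Println(cfg) // map[node-label:[a=1 b=2]]
}
```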
Signed-off-by: Darren Shepherd <darren@rancher.com>
Configuration will be loaded from config.yaml and then config.yaml.d/*.(yaml|yml) in
alphanumeric order. Merging is done by taking the last value found for
a key (last-in wins), so later files override earlier ones. Slices are
not merged but replaced.
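A hedged sketch of the drop-in discovery order (the helper name is illustrative, not the actual implementation):
```go
package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

// dropInFiles returns config.yaml.d/*.yaml and *.yml in alphanumeric order,
// so files sorted later override earlier ones when keys collide.
func dropInFiles(dir string) []string {
	var files []string
	for _, pattern := range []string{"*.yaml", "*.yml"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pattern))
		files = append(files, matches...)
	}
	sort.Strings(files)
	return files
}

func main() {
	fmt.Println(dropInFiles("/etc/rancher/k3s/config.yaml.d"))
}
```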
Signed-off-by: Darren Shepherd <darren@rancher.com>
* Update Kubernetes to v1.21.0
* Update to golang v1.16.2
* Update dependent modules to track with upstream
* Switch to upstream flannel
* Track changes to upstream cloud-controller-manager and FeatureGates
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Remove an early return that prevented the local retention policy from
being enforced, resulting in snapshots accumulating beyond the
configured retention count.
Signed-off-by: Brian Downs <brian.downs@gmail.com>
When `/dev/kmsg` is unreadable due to sysctl value `kernel.dmesg_restrict=1`,
bind-mount `/dev/null` into `/dev/kmsg`.
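Roughly how the workaround can be applied with golang.org/x/sys/unix (a sketch, not the exact k3s code; error handling and the readability probe are simplified):
```go
package main

import (
	"os"

	"golang.org/x/sys/unix"
)

// maskKmsg bind-mounts /dev/null over /dev/kmsg when the latter cannot be
// opened for reading (e.g. kernel.dmesg_restrict=1), so the kubelet does not
// fail trying to read kernel messages.
func maskKmsg() error {
	if f, err := os.Open("/dev/kmsg"); err == nil {
		f.Close()
		return nil // readable, nothing to do
	}
	return unix.Mount("/dev/null", "/dev/kmsg", "", unix.MS_BIND, "")
}

func main() {
	if err := maskKmsg(); err != nil {
		panic(err)
	}
}
```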
Fix issue 3011
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Now rootless mode can be used with cgroup v2 resource limitations.
A pod is executed in a cgroup like "/user.slice/user-1001.slice/user@1001.service/k3s-rootless.service/kubepods/podd0eb6921-c81a-4214-b36c-d3b9bb212fac/63b5a253a1fd4627da16bfce9bec58d72144cf30fe833e0ca9a6d60ebf837475".
This is accomplished by running `kubelet` in a cgroup namespace, and enabling `cgroupfs` driver for the cgroup hierarchy delegated by systemd.
To enable cgroup v2 resource limitation, `k3s server --rootless` needs to be launched as a `systemctl --user` service.
Please see the comment lines in `k3s-rootless.service` for the usage.
Running `k3s server --rootless` via a terminal is not supported.
If it really needs to be launched via a terminal, `systemd-run --user -p Delegate --tty` needs to be prepended to create a systemd scope.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
* remove etcd data dir when etcd is disabled
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* fix comment
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* more fixes
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* use debug instead of info logs
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Support repository regex rewrite rules when fetching image content.
Example configuration:
```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  "docker.io":
    endpoint:
      - "https://registry-1.docker.io/v2"
    rewrite:
      "^library/alpine$": "my-org/alpine"
```
This will instruct k3s containerd to fetch content for `alpine` images
from `docker.io/my-org/alpine` instead of the default
`docker.io/library/alpine` location.
Signed-off-by: Jacob Blain Christen <jacob@rancher.com>
get() is called in a loop until client configuration is successfully
retrieved. Each iteration will try to configure the apiserver proxy,
which will in turn create a new load balancer. Skip creating a new
load balancer if we already have one.
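A compact, illustrative sketch of the guard (the proxy and load balancer types here are stand-ins, not the actual k3s types):
```go
package main

import "fmt"

type loadBalancer struct{ serverURL string }

type proxy struct{ lb *loadBalancer }

// setAPIServerURL may be called on every retry of get(); it only creates a
// load balancer the first time and updates the existing one afterwards.
func (p *proxy) setAPIServerURL(url string) {
	if p.lb != nil {
		p.lb.serverURL = url // reuse the existing load balancer
		return
	}
	p.lb = &loadBalancer{serverURL: url}
}

func main() {
	p := &proxy{}
	for i := 0; i < 3; i++ {
		p.setAPIServerURL("https://127.0.0.1:6443")
	}
	fmt.Println(p.lb.serverURL) // only one load balancer was ever created
}
```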
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
If the port wanted by the client load balancer is in TIME_WAIT, startup
will fail. Set SO_REUSEPORT so that it can be listened on again
immediately.
The configurable Listen call wants a context, so plumb that through as
well.
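For reference, a minimal sketch of setting SO_REUSEPORT through `net.ListenConfig` (Linux-only; assumes golang.org/x/sys/unix and is not the exact k3s code):
```go
package main

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// listenReusable listens on addr with SO_REUSEPORT set, so the port can be
// bound again immediately even if old sockets are still in TIME_WAIT.
func listenReusable(ctx context.Context, addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var opterr error
			err := c.Control(func(fd uintptr) {
				opterr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return opterr
		},
	}
	return lc.Listen(ctx, "tcp", addr)
}

func main() {
	ln, err := listenReusable(context.Background(), "127.0.0.1:6444")
	if err != nil {
		panic(err)
	}
	ln.Close()
}
```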
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Always use static ports for the load-balancers
This fixes an issue where RKE2 kube-proxy daemonset pods were failing to
communicate with the apiserver when RKE2 was restarted because the
load-balancer used a different port every time it started up.
This also changes the apiserver load-balancer port to be 1 below the
supervisor port instead of 1 above it. This makes the apiserver port
consistent at 6443 across servers and agents on RKE2.
Additional fixes below were required to successfully test and use this change
on etcd-only nodes.
* Actually add lb-server-port flag to CLI
* Fix nil pointer when starting server with --disable-etcd but no --server
* Don't try to use full URI as initial load-balancer endpoint
* Fix etcd load-balancer pool updates
* Update dynamiclistener to fix cert updates on etcd-only nodes
* Handle recursive initial server URL in load balancer
* Don't run the deploy controller on etcd-only nodes
* Add functionality for etcd snapshot/restore to and from S3 compatible backends.
* Update etcd restore functionality to extract and write certificates and configs from snapshot.
Servers should always be upgraded before agents, but generally this
isn't required because things are compatible between versions. In this
case we're OK with failing closed if the user upgrades out of order, but
we should give a clearer message about what steps are required to fix
the issue.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
K3s handles upgrades by watching static files and manifests for changes
and triggering helm-controller on change. It seems reasonable to only
allow the traefik v1->v2 upgrade when there is no existing custom
traefik HelmChartConfig in the cluster, to avoid any incompatibility.
The CRDs are also separated out into a different chart to support CRD
upgrades.
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
It is possible that the apiserver may serve read requests but not allow
writes yet, in which case flannel will crash on startup when trying to
configure the subnet manager.
Fix this by waiting for the apiserver to become fully ready before
starting flannel and the network policy controller.
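A rough sketch of that kind of readiness wait using client-go (the endpoint and interval are assumptions, not the exact k3s code):
```go
package ready

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForAPIServerReady polls the apiserver's /readyz endpoint until it
// succeeds, so components like flannel and the network policy controller
// only start once the apiserver accepts both reads and writes.
func waitForAPIServerReady(ctx context.Context, client kubernetes.Interface) error {
	return wait.PollImmediateUntil(5*time.Second, func() (bool, error) {
		result := client.Discovery().RESTClient().Get().AbsPath("/readyz").Do(ctx)
		return result.Error() == nil, nil
	}, ctx.Done())
}
```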
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Adds support for retagging images to appear to have been sourced from
one or more additional registries as they are imported from the tarball.
This is intended to support RKE2 use cases with system-default-registry
where the images need to appear to have been pulled from a registry
other than docker.io.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Collect IPs from all pods before deciding to use internal or external addresses
@Taloth correctly noted that the code that iterates over ServiceLB pods
to collect IP addresses was failing to add additional internal IPs once
the map contained ANY entry from a previous node. This may date back to
when ServiceLB used a Deployment instead of a DaemonSet, so there was
only ever a single pod.
The new behavior is to collect all internal and external IPs, and then
construct the address list from a single type: external if there are
any, otherwise internal.
https://github.com/k3s-io/k3s/issues/1652#issuecomment-774497788
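A simplified sketch of the new selection logic (the types are stand-ins for the ServiceLB controller's actual data):
```go
package main

import "fmt"

type podAddrs struct {
	internalIPs []string
	externalIPs []string
}

// collectAddresses gathers IPs from all pods first, then returns a list of a
// single type: external addresses if any exist, otherwise internal ones.
func collectAddresses(pods []podAddrs) []string {
	var internal, external []string
	for _, p := range pods {
		internal = append(internal, p.internalIPs...)
		external = append(external, p.externalIPs...)
	}
	if len(external) > 0 {
		return external
	}
	return internal
}

func main() {
	pods := []podAddrs{
		{internalIPs: []string{"10.0.0.1"}},
		{internalIPs: []string{"10.0.0.2"}, externalIPs: []string{"203.0.113.7"}},
	}
	fmt.Println(collectAddresses(pods)) // [203.0.113.7]
}
```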
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brian Downs <brian.downs@gmail.com>
* Add tests to clientaccess/token
* Fix issues in clientaccess/token identified by tests
* Update tests to close coverage gaps
* Remove redundant check turned up by code coverage reports
* Add warnings if CA hash will not be validated
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Fix issue 900
cgroup2 support was introduced in PR 2584, but got broken in f3de60ff31
It was failing with "F1210 19:13:37.305388 4955 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true"
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Since a recent commit, rootless mode was failing with the following errors:
```
E0122 22:59:47.615567 21 kuberuntime_manager.go:755] createPodSandbox for pod "helm-install-traefik-wf8lc_kube-system(9de0a1b2-e2a2-4ea5-8fb6-22c9272a182f)" failed: rpc error: code = Unknown desc = failed to create network namespace for sandbox "285ab835609387f82d304bac1fefa5fb2a6c49a542a9921995d0c35d33c683d5": failed to setup netns: open /var/run/netns/cni-c628a228-651e-e03e-d27d-bb5e87281846: permission denied
...
E0122 23:31:34.027814 21 pod_workers.go:191] Error syncing pod 1a77d21f-ff3d-4475-9749-224229ddc31a ("coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)\" failed: rpc error: code = Unknown desc = failed to create containerd task: io.containerd.runc.v2: create new shim socket: listen unix /run/containerd/s/8f0e40e11a69738407f1ebaf31ced3f08c29bb62022058813314fb004f93c422: bind: permission denied\n: exit status 1: unknown"
```
Remove symlinks to /run/{netns,containerd} so that rootless mode can create its own /run/{netns,containerd}.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
* Replace k3s cloud provider wrangler controller with core node informer
Upstream k8s has exposed an interface for cloud providers to access the
cloud controller manager's node cache and shared informer since
Kubernetes 1.9. This is used by all the other in-tree cloud providers;
we should use it too instead of running a dedicated wrangler controller.
Doing so also appears to fix an intermittent issue with the uninitialized
taint not getting cleared on nodes in CI.
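For context, a sketch of a provider receiving the shared informer factory via the upstream `InformerUser` hook (the provider type and field are illustrative):
```go
package provider

import (
	"k8s.io/client-go/informers"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// k3sCloud stands in for the k3s cloud provider; instead of running its own
// wrangler controller it receives the cloud controller manager's shared
// informer factory and takes a node lister from it.
type k3sCloud struct {
	nodeLister corelisters.NodeLister
}

// SetInformers implements the upstream cloudprovider.InformerUser interface;
// the CCM calls it with its shared informer factory before starting controllers.
func (c *k3sCloud) SetInformers(informerFactory informers.SharedInformerFactory) {
	c.nodeLister = informerFactory.Core().V1().Nodes().Lister()
}
```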
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Problem:
While using ZFS on Debian and K3s with docker, I am unable to get k3s working because the snapshotter value is being validated and the validation fails.
Solution:
We should not validate the snapshotter value if we are using docker, as it's a no-op in that case.
Signed-off-by: Waqar Ahmed <waqarahmedjoyia@live.com>
We're not setting `--service-account-issuer` to an https URL, which causes an
error message at startup when the feature gate is enabled. From the
docs on that flag:
> If this option is not a valid URI per the OpenID Discovery 1.0 spec, the
> ServiceAccountIssuerDiscovery feature will remain disabled, even if the
> feature gate is set to true. It is highly recommended that this value
> comply with the OpenID spec:
> https://openid.net/specs/openid-connect-discovery-1_0.html. In practice,
> this means that service-account-issuer must be an https URL. It is also
> highly recommended that this URL be capable of serving OpenID discovery
> documents at {service-account-issuer}/.well-known/openid-configuration.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Solution: Set priorityClassName to system-node-critical for the traefik, metrics-server, local-storage, and coredns deployments
Signed-off-by: transhapHigsn <fet.prashantsingh@gmail.com>
Ubuntu and Debian kernels support mounting real overlayfs inside userns,
but the vanilla kernel still does not allow it.
On the other hand, fuse-overlayfs can be mounted inside userns with the vanilla kernel (>= 4.18).
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Removing the cfg.DataDir mutation in 3e4fd7b did not break anything, but
did change some paths in unwanted ways. Rather than mutating the
user-supplied command-line flags, explicitly specify the agent
subdirectory as needed.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
As per documentation, the cloud-provider flag should not be passed to
controller-manager when using cloud-controller. However, the legacy
cloud-related controllers still need to be explicitly disabled to
prevent errors from being logged.
Fixing this also prevents controller-manager from creating the
cloud-controller-manager service account that needed extra RBAC.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Related to rancher/rke2#474
Note that anyone who customizes the data-dir path will have to set
CRI_CONFIG_FILE to the correct path when using the wrapped binaries
(crictl, etc). This is better than dropping files in the incorrect
location.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Resolves warning 2 from #2471.
As per https://github.com/kubernetes/cloud-provider/issues/12 the
ClusterID requirement was never really followed through on, so the
flag is probably going to be removed in the future.
One side-effect of this is that the core k8s cloud-controller-manager
also wants to watch nodes, and needs RBAC to do so.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Related to #2455 and containerd/containerd#4684
These were not meant to be enabled by default, break images with many
layers, and will be disabled by default on the next containerd release.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
The --disable/--no-deploy flags actually turn off some built-in
controllers, in addition to preventing manifests from getting loaded.
Make it clear which controllers can still be disabled even when the
packaged components are omitted by the no_stage build tag.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* skip node delete from removed member
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* use grpc errors
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* go imports
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* exit if the node is the etcd member being removed
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* Set etcd timeouts using values from k8s instead of etcdctl
Fix for one of the warnings from #2303
* Use etcd zap logger instead of deprecated capnslog
Fix for one of the warnings from #2303
* Remove member self-promotion code paths
* Add learner promotion tracking code
* Fix RaftAppliedIndex progress check
* Remove ErrGRPCKeyNotFound check
This is not used by v3 API - it just returns a response with 0 KVs.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
The default http client does not have an overall request timeout, so
connections to misbehaving or unavailable servers can stall for an
excessive amount of time. At the moment, just attempting to join
an unavailable cluster takes 2 minutes and 40 seconds to timeout.
Resolve that by setting a reasonable request timeout.
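A minimal sketch of the idea (the 30-second value and URL are illustrative, not what k3s actually uses):
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Bounds the entire request (dial, TLS handshake, headers, body),
		// so joins against an unavailable server fail promptly instead of
		// stalling for minutes.
		Timeout: 30 * time.Second,
	}
	resp, err := client.Get("https://server.example:6443/cacerts")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
}
```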
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
According to @galal-hussein this is dead code that was probably brought
over from Kine. I certainly couldn't figure out what it is supposed to
be doing.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
We should ignore --token and --server if the managed database is initialized,
just like we ignore --cluster-init. If the user wants to join a new
cluster, or rejoin a cluster after --cluster-reset, they need to delete
the database. This is a cleaner way to prevent deadlocking on quorum loss,
and removes the requirement that the target of the --server argument
must be online before already joined nodes can start.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
This attempts to update logging statements to make them consistent
throughout the code base. It also adds additional context to messages
where possible, simplifies messages, and updates log levels where
necessary.
Since we're replacing the k3s rolebindings.yaml in rke2, we should allow
renaming this so that we can use the white-labeled name downstream.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Related to #1908
Will be fixed upstream by
https://github.com/rancher/local-path-provisioner/pull/135/ but we're
not going to update the LPP image right now since it's undergoing some
changes that we don't want to pick up at the moment.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Previously a bool flag would be rendered as `--flag false` for `flag: false`,
which is invalid and results in the opposite of what you'd expect.
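An illustrative sketch of rendering the flag unambiguously (the exact form the fix uses may differ):
```go
package main

import "fmt"

// renderBoolFlag emits the key=value form so that `flag: false` cannot be
// misparsed as a bare --flag followed by a stray "false" argument.
func renderBoolFlag(name string, value bool) string {
	return fmt.Sprintf("--%s=%t", name, value)
}

func main() {
	fmt.Println(renderBoolFlag("secrets-encryption", true))  // --secrets-encryption=true
	fmt.Println(renderBoolFlag("secrets-encryption", false)) // --secrets-encryption=false
}
```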
Signed-off-by: Darren Shepherd <darren@rancher.com>