* Wait for kubelet port to be ready before setting
* Wait for kubelet to update the Ready status before reading port
Signed-off-by: Daishan Peng <daishan@acorn.io>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brad Davidson <brad.davidson@rancher.com>
* Bump wrangler to 1.1.1
* Match golang.org/x/net with flannel version
* Match golang.org/x/sys with containerd version
* Update gax-go to 2.1.1
* Isolate terraform e2e test with separate go.mod/go.sum
* Bump containerd
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Initial drone vagrant pipeline
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Build e2e test image
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Add docker registry to E2E pipeline
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Bump libvirt image
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Add ci flag to secretsencryption
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Fix vagrant log on secretsencryption
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Remove DB parallel tests
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Reduce sonobuoy tests even further
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Add local build
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Add cron conformance pipeline
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Add string output for nodes
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Switch snapshot restore for upgrade cluster
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Fix cp
Signed-off-by: Derek Nola <derek.nola@suse.com>
---------
Signed-off-by: Derek Nola <derek.nola@suse.com>
Turns out etcd-only nodes were never running **any** of the controllers,
so allowing multiple controllers didn't really fix things.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Prevents errors when starting with fail-closed webhooks
Also, use panic instead of Fatalf so that the CloudControllerManager rescue can handle the error
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Allow bootstrapping with kubeadm bootstrap token strings or existing
kubelet certs. This allows agents to join the cluster using kubeadm
bootstrap tokens, as created with the `k3s token create` command.
When the token expires or is deleted, agents can successfully restart by
authenticating with their kubelet certificate via node authentication.
If the token is gone and the node is deleted from the cluster, node auth
will fail and they will be prevented from rejoining the cluster until
provided with a valid token.
Servers still must be bootstrapped with the static cluster token, as
they will need to know it to decrypt the bootstrap data.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
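A minimal sketch of the agent-join flow described above, assuming a server reachable at `https://server.example.com:6443`; the token value is only a placeholder in the kubeadm bootstrap-token format:

```bash
# On an existing server: create a bootstrap token for agents to join with
k3s token create

# On the new agent: join using the bootstrap token (server URL and token are placeholders)
k3s agent --server https://server.example.com:6443 --token abcdef.0123456789abcdef
```

If the token later expires or is deleted but the node object still exists, the agent can restart and re-authenticate with its existing kubelet certificate instead.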
This command must be run on a server while the service is running. After this command completes, all the servers in the cluster should be restarted to load the new CA files.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
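As a hedged illustration of the workflow above, assuming the command being described is `k3s certificate rotate-ca` (the command is not named in this message) and that k3s runs as a systemd service:

```bash
# On one server, while the k3s service is running: load the new CA files
k3s certificate rotate-ca

# Then restart the k3s service on every server so it picks up the new CA files
systemctl restart k3s
```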
The documentation is no longer part of the Rancher project but can be found in
k3s-io/docs. Fix the wording and link in the contribution docs to point
potential contributors to the proper location.
Signed-off-by: Robert Schweikert <rjschwei@suse.com>