From 17a44a319700b69ad20ec13086e219bcceb24cb5 Mon Sep 17 00:00:00 2001
From: Darren Shepherd
Date: Tue, 22 Jan 2019 16:06:42 -0700
Subject: [PATCH] Update README

---
 README.md          | 235 +++++++++++++++++++++++++++++++++++++++++++++
 docker-compose.yml |   4 +-
 2 files changed, 237 insertions(+), 2 deletions(-)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000000..6c1173c057
--- /dev/null
+++ b/README.md
@@ -0,0 +1,235 @@
+k3s - 5 less than k8s
+===============================================
+
+Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 40mb.
+
+Great for
+* Edge
+* IoT
+* CI
+* ARM
+* Situations where a PhD in k8s clusterology is infeasible
+
+What is this?
+---
+
+k3s is intended to be a fully compliant Kubernetes cluster with the following changes:
+
+1. Legacy, alpha, and non-default features are removed. Hopefully you shouldn't notice the
+   stuff that has been removed.
+2. Most in-tree plugins (cloud providers and storage plugins) are removed; they can be
+   replaced with out-of-tree addons.
+3. sqlite3 is added as the default storage mechanism. etcd3 is still available, but not the default.
+4. Everything is wrapped in a simple launcher that handles a lot of the complexity of TLS and options.
+5. Minimal to no OS dependencies (just a sane kernel and cgroup mounts needed). k3s packages the
+   required dependencies:
+   * containerd
+   * Flannel
+   * CoreDNS
+   * CNI
+   * Host utilities (iptables, socat, etc.)
+
+Quick Start
+-----------
+1. Download `k3s` from the latest [release](https://github.com/ibuildthecloud/k3s/releases/latest); x86_64, armhf, and arm64 are
+   supported.
+2. Run the server:
+
+```bash
+sudo k3s server &
+# Kubeconfig is written to /root/.kube/k3s.yaml
+sudo k3s kubectl get node
+
+# On a different node run the below.
+# NODE_TOKEN comes from /var/lib/rancher/k3s/server/node-token
+# on your server
+sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
+
+```
+
+Running Server
+--------------
+
+To run the server just do
+
+    k3s server
+
+You should see output similar to the following
+
+```
+INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
+INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
+Flag --insecure-port has been deprecated, This flag will be removed in a future version.
+INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
+INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
+Flag --port has been deprecated, see --secure-port instead.
+INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
+INFO[2019-01-22T15:16:20.278383446-07:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
+INFO[2019-01-22T15:16:20.474454524-07:00] Node token is available at /var/lib/rancher/k3s/server/node-token
+INFO[2019-01-22T15:16:20.474471391-07:00] To join node to cluster: k3s agent -s https://10.20.0.3:6443 -t ${NODE_TOKEN}
+INFO[2019-01-22T15:16:20.541027133-07:00] Wrote kubeconfig /root/.kube/k3s.yaml
+INFO[2019-01-22T15:16:20.541049100-07:00] Run: k3s kubectl
+```
+
+The output will probably be much longer as the agent will spew a lot of logs. By default the server
+will register itself as a node (run the agent). It is common and almost required these days
+that the control plane be part of the cluster. To not run the agent by default use the
+`--disable-agent` flag
+
+    k3s server --disable-agent
+
+At this point you can run the agent as a separate process or not run it on this node at all.
+
+Joining Nodes
+-------------
+
+When the server starts it creates a file `/var/lib/rancher/k3s/server/node-token`. Use the contents
+of that file as `NODE_TOKEN` and then run the agent as follows
+
+    k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
+
+That's it.
+
+Building from source
+--------------------
+
+The clone will be much faster on this repo if you do
+
+    git clone --depth 1 https://github.com/ibuildthecloud/k3s.git
+
+This repo includes all of Kubernetes history so `--depth 1` will avoid most of that.
+
+For development you just need Go 1.11 and a sane GOPATH. To compile the binaries run
+
+```bash
+go build -o k3s
+go build -o kubectl ./cmd/kubectl
+go build -o hyperkube ./vendor/k8s.io/kubernetes/cmd/hyperkube
+```
+
+This will create the main executable, but it does not include the dependencies like containerd, CNI,
+etc.
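As a convenience, the join step from the Joining Nodes section above can be wrapped in a small shell helper that reads the token file and prints the matching agent command. This is a sketch; `join_cmd` is a hypothetical name, not part of k3s:

```shell
# Hypothetical helper: read the node token written by the server and
# assemble the matching `k3s agent` join command.
join_cmd() {
  server_url="$1"
  token_file="${2:-/var/lib/rancher/k3s/server/node-token}"
  NODE_TOKEN=$(cat "$token_file") || return 1
  printf 'k3s agent --server %s --token %s\n' "$server_url" "$NODE_TOKEN"
}
```

For example, `join_cmd https://myserver:6443` prints the command to run on the joining node.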
+To run a server and agent with all the dependencies for development run the following
+helper scripts
+
+```bash
+# Server
+./scripts/dev-server.sh
+
+# Agent
+./scripts/dev-agent.sh
+```
+
+To build the full release binary run `make` and that will create `./dist/k3s`
+
+Kubernetes Source
+-----------------
+
+The source code for Kubernetes is in `vendor/` and the location from which that is copied
+is in `./vendor.conf`. Go to the referenced repo/tag and you'll find all the patches applied
+to upstream Kubernetes.
+
+Open Ports/Network Security
+---------------------------
+
+The server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach
+other nodes over UDP port 4789. This is used for flannel VXLAN. If you don't use flannel
+and provide your own custom CNI, then 4789 is not needed by k3s. The node should not listen
+on any other port. k3s uses reverse tunneling such that the nodes make outbound connections
+to the server and all kubelet traffic runs through that tunnel.
+
+IMPORTANT: The VXLAN port on nodes should not be exposed to the world, as it opens up your
+cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that
+disables access to port 4789.
+
+
+Server HA
+---------
+Just don't right now :) It's currently broken.
+
+
+Running in Docker (and docker-compose)
+-----------------
+
+I wouldn't be me if I couldn't run my cluster in Docker. `rancher/k3s` images are available
+to run the k3s server and agent from Docker. A `docker-compose.yml` is in the root of this repo
+that serves as an example of how to run k3s from Docker.
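Incidentally, the open-ports requirement above can be sanity-checked from a node with a small helper. This is a sketch, not a k3s feature; it relies on bash's `/dev/tcp` pseudo-device, so swap in `nc -z` on shells that lack it:

```shell
# Sketch: print "reachable" if a TCP connection to host:port can be
# opened, "unreachable" otherwise. Requires bash's /dev/tcp support.
check_port() {
  host="$1"; port="$2"
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
```

Running `check_port myserver 6443` from a node should print `reachable`. Note this only works for TCP; UDP 4789 (flannel VXLAN) is connectionless and can't be probed this way.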
+To run from `docker-compose` from this repo run
+
+    docker-compose up --scale node=3
+    # kubeconfig is written to current dir
+    kubectl --kubeconfig kubeconfig.yaml get node
+
+    NAME           STATUS   ROLES   AGE   VERSION
+    497278a2d6a2   Ready            11s   v1.13.2-k3s2
+    d54c8b17c055   Ready            11s   v1.13.2-k3s2
+    db7a5a5a5bdd   Ready            12s   v1.13.2-k3s2
+
+
+Hyperkube
+--------
+
+k3s is bundled in a nice wrapper to remove the majority of the headache of running k8s. If
+you don't want that wrapper and just want a smaller k8s distro, the releases include
+the `hyperkube` binary you can use. It's then up to you to know how to use `hyperkube`. If
+you want individual binaries you will need to compile them yourself from source.
+
+containerd and Docker
+----------
+
+k3s includes and defaults to containerd. Why? Because it's just plain better. If you want to
+run with Docker, first stop and think, "Really? Do I really want more headache?" If the answer
+is still yes, then you just need to run the agent with the `--docker` flag
+
+    k3s agent -u ${SERVER_URL} -t ${NODE_TOKEN} --docker &
+
+systemd
+-------
+
+If you are bound by the shackles of systemd (as most of us are), there is a sample unit file
+in the root of this repo `k3s.service` which is as follows
+
+```ini
+[Unit]
+Description=Lightweight Kubernetes
+Documentation=https://k3s.io
+After=network.target
+
+[Service]
+ExecStartPre=-/sbin/modprobe overlay
+ExecStart=/usr/local/bin/k3s server
+KillMode=process
+Delegate=yes
+LimitNOFILE=infinity
+LimitNPROC=infinity
+LimitCORE=infinity
+TasksMax=infinity
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Flannel
+-------
+
+Flannel is included by default. If you don't want flannel, run the agent with `--no-flannel` as follows
+
+    k3s agent -u ${SERVER_URL} -t ${NODE_TOKEN} --no-flannel &
+
+In this setup you will still be required to install your own CNI driver.
+More info [here](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network)
+
+CoreDNS
+-------
+
+CoreDNS is deployed when the agent starts. To disable it, add `--no-deploy coredns` to the server
+
+    k3s server --no-deploy coredns
+
+If you don't install CoreDNS you will need to install a cluster DNS provider yourself.
+
+TODO
+----
+Things that are currently broken, or that need to be done before this can be considered production quality.
+
+1. Metrics API, due to kube aggregation not being set up
+2. HA
+3. Work on e2e, sonobuoy, CNCF certification.
+4. etcd doesn't actually work because args aren't exposed
+
diff --git a/docker-compose.yml b/docker-compose.yml
index 2ba524a1af..e3f81a5d29 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,7 +1,7 @@
 version: '3'
 services:
   server:
-    image: rancher/k3s:fa08d60-dirty-amd64
+    image: rancher/k3s:v0.1.0-rc1
     command: server --disable-agent
     environment:
     - K3S_CLUSTER_SECRET=somethingtotallyrandom
@@ -15,7 +15,7 @@ services:
     - 6443:6443
   node:
-    image: rancher/k3s:fa08d60-dirty-amd64
+    image: rancher/k3s:v0.1.0-rc1
    tmpfs:
     - /run
     - /var/run