Fix markdown files according to markdownlint recommendations

The markdownlint checker flags a number of issues and quirks in the
markdown documentation files that benefit from being fixed, which this
patch does.

Change-Id: I33245825e5bb543b5ce1732204984d4a0b169668
Signed-off-by: Joakim Roubert <joakimr@axis.com>
pull/1493/head
Joakim Roubert 5 years ago
parent ceff3f58fb
commit 4286ba7163
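Several of the fixes below are mechanical; for instance, the trailing-whitespace rule (markdownlint MD009) can be applied with a one-line sed. A minimal sketch, with an illustrative file under `/tmp`:

```shell
# Create a markdown line with trailing spaces, then strip them (MD009)
printf 'a line with trailing spaces   \n' > /tmp/doc.md
sed -i 's/[[:space:]]*$//' /tmp/doc.md
cat /tmp/doc.md
```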

@ -2,18 +2,22 @@ See the [release](https://github.com/rancher/k3s/releases/latest) page for pre-b
The clone will be much faster on this repo if you do:
```bash
git clone --depth 1 https://github.com/rancher/k3s.git
```
This repo includes all of Kubernetes history so `--depth 1` will avoid most of that.
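The effect of `--depth 1` can be demonstrated with a throwaway local repo (the `/tmp` paths are illustrative):

```shell
# Create a repo with two commits, then shallow-clone it
git init -q /tmp/full-repo
git -C /tmp/full-repo -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m 'first'
git -C /tmp/full-repo -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m 'second'
git clone -q --depth 1 file:///tmp/full-repo /tmp/shallow-repo
# Only the latest commit is present in the shallow clone
git -C /tmp/shallow-repo rev-list --count HEAD
```

Should the full history later be needed, `git fetch --unshallow` retrieves it.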
To build the full release binary run `make` and that will create `./dist/artifacts/k3s`.
Optionally, to build the binaries using a local go environment without running linting or building docker images:
```bash
./scripts/download && ./scripts/build && ./scripts/package-cli
```
For development, you just need go 1.12+ and a proper GOPATH. To compile the binaries run:
```bash
go build -o k3s
go build -o kubectl ./cmd/kubectl
@ -23,6 +27,7 @@ go build -o hyperkube ./vendor/k8s.io/kubernetes/cmd/hyperkube
This will create the main executable at `./dist/artifacts`, but it does not include dependencies like containerd, CNI,
etc. To run a server and agent with all the dependencies for development, run the following
helper scripts:
```bash
# Server
./scripts/dev-server.sh
@ -37,3 +42,4 @@ Kubernetes Source
The source code for Kubernetes is in `vendor/` and the location from which that is copied
is in `./go.mod`. Go to the referenced repo/tag and you'll find all the patches applied
to upstream Kubernetes.

@ -28,7 +28,6 @@ k3s is intended to be a fully compliant Kubernetes distribution with the followi
* CNI
* Host utilities (iptables, socat, etc)
Documentation
-------------
@ -36,8 +35,10 @@ Please see [the official docs site](https://rancher.com/docs/k3s/latest/en/) for
Quick-Start - Install Script
--------------
The k3s `install.sh` script provides a convenient way of installing to systemd or openrc.
To install k3s as a service, just run:
```bash
curl -sfL https://get.k3s.io | sh -
```
@ -52,12 +53,14 @@ sudo kubectl get nodes
`K3S_TOKEN` is created at `/var/lib/rancher/k3s/server/node-token` on your server.
To install on worker nodes, pass `K3S_URL` along with the
`K3S_TOKEN` or `K3S_CLUSTER_SECRET` environment variable, for example:
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
```
Manual Download
---------------
1. Download `k3s` from the latest [release](https://github.com/rancher/k3s/releases/latest); x86_64, armhf, and arm64 are supported.
2. Run server.
@ -66,7 +69,8 @@ sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes
# On a different node run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```

@ -1,14 +1,14 @@
# Build a Kubernetes cluster using k3s via Ansible
Author: <https://github.com/itwars>
## K3s Ansible Playbook
Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install a Kubernetes cluster on machines running:
- [X] Debian
- [ ] Ubuntu
- [X] CentOS
on processor architecture:
@ -16,7 +16,7 @@ on processor architecture:
- [X] arm64
- [X] armhf
## System requirements
Deployment environment must have Ansible 2.4.0+
Master and nodes must have passwordless SSH access
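Passwordless access is typically set up with an SSH key pair; a sketch, in which the key path and node address are purely illustrative:

```shell
# Generate a key pair without a passphrase (path is illustrative)
ssh-keygen -q -t ed25519 -N '' -f /tmp/ansible_id_ed25519
ls /tmp/ansible_id_ed25519 /tmp/ansible_id_ed25519.pub
# Then install the public key on every master and node, e.g.:
#   ssh-copy-id -i /tmp/ansible_id_ed25519.pub debian@192.16.35.12
```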
@ -25,7 +25,7 @@ Master and nodes must have passwordless SSH access
Add the system information gathered above into a file called hosts.ini. For example:
```bash
[master]
192.16.35.12
@ -35,14 +35,20 @@ Add the system information gathered above into a file called hosts.ini. For exam
[k3s-cluster:children]
master
node
```
Start provisioning of the cluster using the following command:
```bash
ansible-playbook site.yml -i inventory/hosts.ini
```
## Kubeconfig
To get access to your **Kubernetes** cluster just run:
```bash
scp debian@master_pi:~/kube/config ~/.kube/config
```
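Once copied, kubectl can be pointed at that file via the standard `KUBECONFIG` environment variable (the path matches the scp destination above):

```shell
# Tell kubectl which kubeconfig to use
export KUBECONFIG="$HOME/.kube/config"
echo "$KUBECONFIG"
# kubectl get nodes   # would now talk to the k3s cluster
```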

@ -1,4 +1,5 @@
# K3S Performance Tests
---
These scripts use Terraform to automate building and testing k3s clusters on AWS. They support building normal and HA clusters with N master nodes, N worker nodes, and multiple storage backends including:
@ -14,19 +15,19 @@ The scripts divides into three sections:
- agents
- tests
## Server
The server section deploys the storage backend and then N master nodes. The scripts can be customized to use HA mode or a single-node cluster with the sqlite backend, and can also use 1 master node with an external DB; the instance type and k3s version can be specified as well. All available options are described in the variables section below.
The server section will also create one or more agent nodes specifically for the Prometheus deployment; clusterloader2 will deploy Prometheus and Grafana.
## Agents
The agents section deploys the k3s agents; it can be customized with options that control the agent node count and the instance types.
## Tests
The tests section uses a fork of the [clusterloader2](https://github.com/kubernetes/perf-tests/tree/master/clusterloader2) tool; the fork just modifies the logging and removes the etcd metrics probes.
This section will use a dockerized version of the tool, which will run the tests and save the report in `tests/<test_name>-<random-number>`.
@ -39,7 +40,7 @@ The current available tests are:
The scripts can be modified by customizing the variables in `scripts/config`; the variables include:
### Main Vars
| Name | Description |
|:----------------:|:------------------------------------------------------------------------------:|
@ -51,7 +52,7 @@ The scripts can be modified by customizing the variables in `scripts/config`, th
| PRIVATE_KEY_PATH | Private ssh key that will be used by clusterloader2 to ssh and collect metrics |
| DEBUG | Debug mode for k3s servers |
### Database Variables
| Name | Description |
|:----------------:|:---------------------------------------------------------------------------------------------------:|
@ -62,7 +63,7 @@ The scripts can be modified by customizing the variables in `scripts/config`, th
| DB_PASSWORD | Database password for the user created only for postgres and mysql |
| DB_VERSION | Database version |
### K3S Server Variables
| Name | Description |
|:--------------------:|:---------------------------------------------------------------------------------:|
@ -70,28 +71,27 @@ The scripts can be modified by customizing the variables in `scripts/config`, th
| SERVER_COUNT | k3s master node count |
| SERVER_INSTANCE_TYPE | Ec2 instance type created for k3s server(s) |
### K3S Agent Variables
| Name | Description |
|:-------------------:|:-----------------------------------------:|
| AGENT_NODE_COUNT | Number of k3s agents that will be created |
| AGENT_INSTANCE_TYPE | Ec2 instance type created for k3s agents |
### Prometheus server Variables
| Name | Description |
|:-------------------------:|:-------------------------------------------------------------------:|
| PROM_WORKER_NODE_COUNT | Number of k3s agents that will be created for prometheus deployment |
| PROM_WORKER_INSTANCE_TYPE | Ec2 instance type created for k3s prometheus agents |
## Usage
### build
The script includes a Makefile that runs the different sections. To build the master and workers, adjust the config file in `tests/perf/scripts/config` and then use the following:
```bash
cd tests/perf
make apply
```
@ -102,7 +102,7 @@ This will basically build the db, server, and agent layers, it will also deploy
To start the clusterloader2 load test you can modify `tests/perf/tests/load/config.yaml` and then run the following:
```bash
cd tests/perf
make test
```
@ -110,7 +110,9 @@ make test
### destroy
To destroy the cluster just run the following:
```bash
make destroy
make clean
```
