mirror of https://github.com/k3s-io/k3s
Merge pull request #10843 from jiangyaoguo/change-get-minions-in-docs
change "get minions" to "get nodes" in docspull/6/head
commit 8bb5c5060c

@@ -55,7 +55,7 @@ The building blocks of an easier solution:

 * **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN.
 * [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate.
 * **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out-of-band information and automatically approves/rejects based on other external factors.
-* **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself.
+* **Scoped Kubelet Accounts** These accounts are per-node and (optionally) give a node permission to register itself.
 * To start with, we'd have the kubelets generate a cert/account in the form of `kubelet:<host>`. Initially we would hard-code policy such that we give that particular account appropriate permissions. Over time, we can make the policy engine more generic.
 * [optional] **Bootstrap API endpoint** This is a helper service hosted outside of the Kubernetes cluster that helps with initial discovery of the master.

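For a concrete sense of the `kubelet:<host>` identity described above, a client certificate carrying it could be minted by hand roughly as in the sketch below. This is illustrative only; the CA file names (`ca.crt`/`ca.key`) and validity period are assumptions, not part of the proposal:

```
# Generate a key and a CSR whose CN encodes the kubelet identity, then sign it
# with the cluster CA. All paths and names here are hypothetical.
HOST=node-1
openssl genrsa -out kubelet-${HOST}.key 2048
openssl req -new -key kubelet-${HOST}.key -subj "/CN=kubelet:${HOST}" -out kubelet-${HOST}.csr
openssl x509 -req -in kubelet-${HOST}.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out kubelet-${HOST}.crt
```
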
@@ -219,7 +219,7 @@ hack/test-integration.sh

 ## End-to-End tests

-You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce".
+You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").

 ```
 cd kubernetes
 hack/e2e-test.sh

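For reference, overriding the provider the paragraph mentions looks like the sketch below (the provider value is only an example):

```
# Run the e2e suite against a non-default provider.
export KUBERNETES_PROVIDER=aws   # example value; defaults to "gce"
cd kubernetes
hack/e2e-test.sh
```
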
@@ -27,7 +27,7 @@ This is a getting started guide for CentOS. It is a manual configuration so you

 This guide will only get ONE minion working. Multiple minions require a functional [networking configuration](../../networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.

-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the minion and run kubelet, proxy, cadvisor and docker.
+The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion, will be the node and run kubelet, proxy, cadvisor and docker.

 **System Information:**

@@ -115,7 +115,7 @@ KUBE_API_PORT="--port=8080"

 # How the replication controller and scheduler find the kube-apiserver
 KUBE_MASTER="--master=http://centos-master:8080"

-# Port minions listen on
+# Port nodes listen on
 KUBELET_PORT="--kubelet_port=10250"

 # Address range to use for services

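Stitched together, the `/etc/kubernetes/config` fragment this hunk touches reads roughly as below. The last line is an assumption: the hunk cuts off after the comment, and the flag name is borrowed from the FAQ later in this diff, so treat it as illustrative only:

```
# Sketch of the assembled config fragment (final line assumed, not quoted)
KUBE_API_PORT="--port=8080"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"

# Port nodes listen on
KUBELET_PORT="--kubelet_port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
```
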
@@ -135,7 +135,7 @@ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
 done
 ```

-**Configure the kubernetes services on the minion.**
+**Configure the kubernetes services on the node.**

 ***We need to configure the kubelet and start the kubelet and proxy***

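The body of the loop named in the hunk header above is elided; a plausible reconstruction, assuming the usual systemd restart/enable/status pattern (the subcommands are an assumption, not quoted from the guide):

```
# Restart, enable at boot, and report each master service.
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
```
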
@@ -155,7 +155,7 @@ KUBELET_HOSTNAME="--hostname_override=centos-minion"
 KUBELET_ARGS=""
 ```

-* Start the appropriate services on minion (centos-minion).
+* Start the appropriate services on the node (centos-minion).

 ```
 for SERVICES in kube-proxy kubelet docker; do

@@ -167,7 +167,7 @@ done

 *You should be finished!*

-* Check to make sure the cluster can see the minion (on centos-master)
+* Check to make sure the cluster can see the node (on centos-master)

 ```
 kubectl get nodes

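Illustrative output for this check (the column headings are an assumption about kubectl of this era; the hostname comes from the guide):

```
$ kubectl get nodes
NAME            LABELS    STATUS
centos-minion   <none>    Ready
```
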
@@ -56,7 +56,7 @@ Now, all you need to do is:
 ./create-kubernetes-cluster.js
 ```

-This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, Kubernetes master and 2 nodes. The `kube-00` VM will be the master, your work loads are only to be deployed on the minion nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more bigger VMs later.
+This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, 1 kubernetes master and 2 kubernetes nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add bigger VMs later.

 ![VMs in Azure](initial_cluster.png)

@@ -648,14 +648,14 @@ List fleet machines

 fleetctl list-machines

-Check system status of services on master node:
+Check system status of services on the master:

 systemctl status kube-apiserver
 systemctl status kube-controller-manager
 systemctl status kube-scheduler
 systemctl status kube-register

-Check system status of services on a minion node:
+Check system status of services on a node:

 systemctl status kube-kubelet
 systemctl status docker.service

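A compact way to run the same master-side checks, looping over the service names listed above (the loop itself is a convenience sketch, not from the guide):

```
# Report each master-side service in turn.
for s in kube-apiserver kube-controller-manager kube-scheduler kube-register; do
  systemctl status "$s" --no-pager
done
```
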
@@ -30,20 +30,20 @@ Configuring kubernetes on Fedora via Ansible offers a simple way to quickly crea

 1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible)
 2. A Fedora 20+ or RHEL7 host to act as cluster master
-3. As many Fedora 20+ or RHEL7 hosts as you would like, that act as cluster minions
+3. As many Fedora 20+ or RHEL7 hosts as you would like, to act as cluster nodes

 The hosts can be virtual or bare metal. The only requirement to make the ansible network setup work is that all of the machines are connected via the same layer 2 network.

-Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc... This example will use one master and two minions.
+Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.

 ## Architecture of the cluster

-A Kubernetes cluster requires etcd, a master, and n minions, so we will create a cluster with three hosts, for example:
+A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:

 ```
 fed1 (master,etcd) = 192.168.121.205
-fed2 (minion) = 192.168.121.84
-fed3 (minion) = 192.168.121.116
+fed2 (node) = 192.168.121.84
+fed3 (node) = 192.168.121.116
 ```

 **Make sure your local machine**

@@ -61,7 +61,7 @@ A Kubernetes cluster requires etcd, a master, and n minions, so we will create a

 **Tell ansible about each machine and its role in your cluster.**

-Get the IP addresses from the master and minions. Add those to the `inventory` file (at the root of the repo) on the host running Ansible.
+Get the IP addresses from the master and nodes. Add those to the `inventory` file (at the root of the repo) on the host running Ansible.

 We will set the kube_ip_addr to '10.254.0.[1-3]', for now. The reason we do this is explained later... It might work for you as a default.

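A hypothetical `inventory` sketch for the three example hosts (the group names and per-host variable placement are assumptions about the kubernetes-ansible layout, not quoted from it):

```
# Inventory sketch: one master, two nodes, kube_ip_addr set per host.
[masters]
192.168.121.205

[etcd]
192.168.121.205

[minions]
192.168.121.84  kube_ip_addr=10.254.0.2
192.168.121.116 kube_ip_addr=10.254.0.3
```
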
@@ -124,7 +124,7 @@ The ansible scripts are quite hacky configuring the network, you can see the [RE

 **Configure the ip addresses which should be used to run pods on each machine**

-The IP address pool used to assign addresses to pods for each minion is the `kube_ip_addr`= option. Choose a /24 to use for each minion and add that to you inventory file.
+The IP address pool used to assign addresses to pods for each node is the `kube_ip_addr=` option. Choose a /24 to use for each node and add that to your inventory file.

 For this example, as shown earlier, we can do something like this...

@@ -181,19 +181,19 @@ ansible-playbook -i inventory setup.yml

 That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.

-**Show services running on masters and minions.**
+**Show services running on masters and nodes.**

 ```
 systemctl | grep -i kube
 ```

-**Show firewall rules on the masters and minions.**
+**Show firewall rules on the masters and nodes.**

 ```
 iptables -nvL
 ```

-**Create the following apache.json file and deploy pod to minion.**
+**Create the following apache.json file and deploy the pod to a node.**

 ```
 cat << EOF > apache.json

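After the heredoc (its contents are elided by the hunk), the file is presumably submitted with something like:

```
# Create the pod from the manifest written above, then watch for it.
kubectl create -f apache.json
kubectl get pods
```
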
@@ -229,7 +229,7 @@ EOF

 ```

-**Check where the pod was created**
+**Check where the pod was created.**

 ```
 kubectl get pods

@@ -241,14 +241,14 @@ In this example, that was the 10.254 network.

 If you see 172 in the IP fields, networking was not set up correctly, and you may want to re-run or dive deeper into the way networking is being set up by looking at the details of the networking scripts used above.

-**Check Docker status on minion.**
+**Check Docker status on the node.**

 ```
 docker ps
 docker images
 ```

-**After the pod is 'Running' Check web server access on the minion**
+**After the pod is 'Running', check web server access on the node.**

 ```
 curl http://localhost

@@ -83,7 +83,7 @@ the required predependencies to get started with Juju, additionally it will
 launch a curses based configuration utility allowing you to select your cloud
 provider and enter the proper access credentials.

-Next it will deploy the kubernetes master, etcd, 2 minions with flannel based
+Next it will deploy the kubernetes master, etcd, and 2 nodes with flannel-based
 Software Defined Networking.

@@ -157,7 +157,7 @@ Get info on the pod:

 kubectl get pods

-To test the hello app, we need to locate which minion is hosting
+To test the hello app, we need to locate which node is hosting
 the container. Better tooling for using juju to introspect containers
 is in the works, but we can use `juju run` and `juju status` to find
 our hello app.

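One way to do the hunting this paragraph describes, sketched with juju 1.x-style commands (the `docker` service name is assumed from the `add-unit` example later in this guide):

```
# Ask every docker unit which containers it is running, then inspect placement.
juju run --service docker "docker ps"
juju status docker
```
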
@@ -186,7 +186,7 @@ Finally delete the pod:

 ## Scale out cluster

-We can add minion units like so:
+We can add node units like so:

 juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2

@@ -109,7 +109,7 @@ setfacl -m g:kvm:--x ~

 ### Setup

-By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes minions. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
+By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.

 To start your local cluster, open a shell and run:

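The two commands the guide runs next both appear verbatim elsewhere in this diff; together:

```
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
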
@@ -122,7 +122,7 @@ cluster/kube-up.sh

 The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.

-The `NUM_MINIONS` environment variable may be set to specify the number of minions to start. If it is not set, the number of minions defaults to 3.
+The `NUM_MINIONS` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.

 The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are:

@@ -155,7 +155,7 @@ The VMs are running [CoreOS](https://coreos.com/).
 Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub)
 The user to use to connect to the VM is `core`.
 The IP to connect to the master is 192.168.10.1.
-The IPs to connect to the minions are 192.168.10.2 and onwards.
+The IPs to connect to the nodes are 192.168.10.2 and onwards.

 Connect to `kubernetes_master`:
 ```

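From the details above, connecting looks like this sketch (the exact command is elided by the hunk; the user and IPs are from the text):

```
ssh core@192.168.10.1   # master
ssh core@192.168.10.2   # first node
```
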
@@ -175,7 +175,7 @@ All of the following commands assume you have set `KUBERNETES_PROVIDER` appropri

 export KUBERNETES_PROVIDER=libvirt-coreos
 ```

-Bring up a libvirt-CoreOS cluster of 5 minions
+Bring up a libvirt-CoreOS cluster of 5 nodes

 ```
 NUM_MINIONS=5 cluster/kube-up.sh

@@ -63,7 +63,7 @@ hack/local-up-cluster.sh
 ```

 This will build and start a lightweight local cluster, consisting of a master
-and a single minion. Type Control-C to shut it down.
+and a single node. Type Control-C to shut it down.

 You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will
 print the commands to run to point kubectl at the local cluster.

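The resulting workflow, sketched from the two scripts the paragraph names (the `get pods` subcommand is an illustrative choice):

```
# Terminal 1: build and run the local cluster until Control-C.
hack/local-up-cluster.sh

# Terminal 2: talk to it via the bundled kubectl wrapper.
cluster/kubectl.sh get pods
```
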
@@ -127,7 +127,7 @@ change the service-cluster-ip-range flag to something else.

 #### I cannot create a replication controller with replica size greater than 1! What gives?

-You are running a single minion setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers.
+You are running a single-node setup. This has the limitation of only supporting a single replica of a given pod. If you are interested in running with larger replica sizes, we encourage you to try the local vagrant setup or one of the cloud providers.

 #### I changed Kubernetes code, how do I run it?

@@ -54,7 +54,7 @@ The current cluster design is inspired by:

 1. The kubernetes binaries will be built via the common build scripts in `build/`.
 2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files.
 3. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object.
-4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/minions nodes when booted.
+4. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to the master/nodes when booted.

 ## Cluster
 There is a specific `cluster/rackspace` directory with the scripts for the following steps:

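A sketch of kicking off that flow (the build entry point under `build/` is an assumption; only the `KUBERNETES_PROVIDER` value is quoted from the list above):

```
# Select the rackspace variant, then build and release the server tarball.
export KUBERNETES_PROVIDER=rackspace
build/release.sh
```
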
@@ -25,12 +25,12 @@ Kubernetes Deployment On Bare-metal Ubuntu Nodes

 ## Introduction

-This document describes how to deploy kubernetes on ubuntu nodes, including 1 master node and 3 minion nodes, and people uses this approach can scale to **any number of minion nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merge into this document.
+This document describes how to deploy kubernetes on ubuntu nodes, including 1 kubernetes master and 3 kubernetes nodes, and people using this approach can scale to **any number of nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin's ubuntu single node work, which has been merged into this document.

 [Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.

 ## Prerequisites
-*1 The minion nodes have installed docker version 1.2+ and bridge-utils to manipulate linux bridge*
+*1 The nodes have docker version 1.2+ and bridge-utils installed, to manipulate the linux bridge*

 *2 All machines can communicate with each other; no need to connect to the Internet (use a private docker registry in this case)*

@@ -60,9 +60,9 @@ An example cluster is listed as below:

 | IP Address|Role |
 |---------|------|
-|10.10.103.223| minion |
-|10.10.103.162| minion |
-|10.10.103.250| both master and minion|
+|10.10.103.223| node |
+|10.10.103.162| node |
+|10.10.103.250| both master and node|

 First configure the cluster information in cluster/ubuntu/config-default.sh; below is a simple sample.

@@ -82,7 +82,7 @@ export FLANNEL_NET=172.16.0.0/16

 The first variable `nodes` defines all your cluster nodes, with the MASTER node first, separated by blank spaces, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3> `

-Then the `roles ` variable defines the role of above machine in the same order, "ai" stands for machine acts as both master and minion, "a" stands for master, "i" stands for minion. So they are just defined the k8s cluster as the table above described.
+Then the `roles` variable defines the role of each machine above, in the same order: "ai" means the machine acts as both master and node, "a" stands for master, and "i" stands for node. Together these define the k8s cluster just as the table above describes.

 The `NUM_MINIONS` variable defines the total number of minions.

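Put together with the table above, a plausible `cluster/ubuntu/config-default.sh` could read as follows (the `vcap` user is a placeholder; only the IPs, roles, and `FLANNEL_NET` value come from this document):

```
# One machine acting as both master and node ("ai"), two pure nodes ("i").
export nodes="vcap@10.10.103.250 vcap@10.10.103.223 vcap@10.10.103.162"
export roles="ai i i"
export NUM_MINIONS=3
export FLANNEL_NET=172.16.0.0/16
```
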
@@ -119,7 +119,7 @@ If all things goes right, you will see the below message from console

 You can also use the `kubectl` command to see if the newly created k8s is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory. You can move it into your PATH. Then you can use the below command smoothly.

-For example, use `$ kubectl get nodes` to see if all your minion nodes are in ready status. It may take some time for the minions ready to use like below.
+For example, use `$ kubectl get nodes` to see if all your nodes are in ready status. It may take some time for the nodes to become ready, as below.

 ```

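Illustrative output for the example cluster (the column layout is an assumption about kubectl of this era; the IPs come from the table earlier):

```
$ kubectl get nodes
NAME            LABELS    STATUS
10.10.103.162   <none>    Ready
10.10.103.223   <none>    Ready
10.10.103.250   <none>    Ready
```
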