Merge pull request #5847 from aveshagarwal/kubernetes-fedora-guides-minions

Describes addition and deletion of minions (or nodes) at run time, and some other fixes.
Eric Paris 2015-03-25 12:04:59 -04:00
commit 50452994cb
1 changed file with 59 additions and 45 deletions


This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md) done outside of kubernetes, although the additional kubernetes configuration requirements should be obvious.
The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host, but this guide assumes that _etcd_ and the kubernetes master run on the same host). The remaining host, fed-node, will be the node and run the kubelet, proxy and docker.
**System Information:**
Hosts:
```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```
**Prepare the hosts:**
* Install kubernetes on all hosts - fed-{master,node}. This will also pull in etcd and docker. This guide has been tested with kubernetes-0.12.0 but should work with later versions too. A quick package check is sketched after the install command below.
```
yum -y install --enablerepo=updates-testing kubernetes
```
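To confirm what the install pulled in, you can query rpm directly. This is only a sanity check and assumes the package names implied by this guide (kubernetes, etcd, docker):

```
rpm -q kubernetes etcd docker
```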
* Add master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping; a quick check is sketched after the snippet below.
```
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
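For example, a minimal connectivity check run from fed-master (using the hostname added above; adjust if your names differ):

```
ping -c 3 fed-node
```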
* Edit /etc/kubernetes/config, which will be the same on all hosts (master and node), to contain the following (a quick sanity check is sketched after the block below):
```
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://fed-master:4001"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
```
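Since every kubernetes service on every host reads this shared file, it is worth a quick sanity check after editing. The grep below is only a sketch and assumes the path and keys shown above:

```
grep -E 'KUBE_ETCD_SERVERS|KUBE_LOG_LEVEL|KUBE_ALLOW_PRIV' /etc/kubernetes/config
```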
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```
systemctl disable iptables-services firewalld
```

* Edit /etc/kubernetes/apiserver to appear as such:

```
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
```
* Start the appropriate services on master:
```
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
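Once the loop above completes, the master components should be answering. The checks below are only a sketch; they assume etcd on its default port 4001 and the apiserver on 8080, as configured in this guide:

```
curl -s http://fed-master:4001/version   # etcd
curl -s http://fed-master:8080/version   # kube-apiserver
```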
* Addition of nodes:
* Create the following node.json file on the kubernetes master node:
```json
{
    "id": "fed-node",
    "kind": "Minion",
    "apiVersion": "v1beta1",
    "labels": {
        "name": "fed-node-label"
    }
}
```
Now create a node object internally in your kubernetes cluster by running:
```
$ kubectl create -f node.json
$ kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Unknown
```
Please note that the above only creates a representation of the node
_fed-node_ internally; it does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `id`) can be resolved and is
reachable from the kubernetes master node. How to provision a kubernetes node
(_fed-node_) is discussed below. kubectl simply POSTs this object to the
apiserver; a rough equivalent of that call is sketched next.
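For illustration only, the create above amounts to a POST of node.json to the apiserver. A sketch with curl, assuming the v1beta1 API and the apiserver address used throughout this guide:

```
curl -s -X POST -H "Content-Type: application/json" \
     -d @node.json http://fed-master:8080/api/v1beta1/minions
```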
**Configure the kubernetes services on the node.**
***We need to configure the kubelet on the node.***
* Edit /etc/kubernetes/kubelet to appear as such:
```
###
# kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=fed-minion"
KUBELET_HOSTNAME="--hostname_override=fed-node"
# location of the api-server
KUBELET_API_SERVER="--api_servers=http://fed-master:8080"

# Add your own!
#KUBELET_ARGS=""
```
* Start the appropriate services on the node (fed-node).
```
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
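As a quick check on fed-node, confirm that the services from the loop above came up (a sketch; adjust the service names if your setup differs):

```
systemctl is-active docker kubelet kube-proxy
```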
* Check to make sure the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```
kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Ready
```
* Deletion of nodes:
To delete _fed-node_ from your kubernetes cluster, one would run the following on fed-master (please do not run it now, it is shown for information only; an equivalent by-name form is sketched after the command below):
```
$ kubectl delete -f node.json
```
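If the node.json file is no longer at hand, deleting the node object by name should be equivalent (a sketch; it assumes the `nodes` resource used by `kubectl get nodes` above):

```
$ kubectl delete nodes fed-node
```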
*You should be finished!*
**The cluster should be running! Launch a test pod.**