<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.
<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/getting-started-guides/fedora/fedora_ansible_config.md).
Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home)
-------------------------------------------------------------------------------------------------------
Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
**Table of Contents**
- [Prerequisites](#prerequisites)
- [Architecture of the cluster](#architecture-of-the-cluster)
- [Configuring ssh access to the cluster](#configuring-ssh-access-to-the-cluster)
- [Configuring the internal kubernetes network](#configuring-the-internal-kubernetes-network)
- [Setting up the cluster](#setting-up-the-cluster)
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
## Prerequisites
1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible)
2. A Fedora 20+ or RHEL7 host to act as cluster master
3. As many Fedora 20+ or RHEL7 hosts as you would like, that act as cluster nodes
The hosts can be virtual or bare metal. The only requirement to make the ansible network setup work is that all of the machines are connected via the same layer 2 network.
Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc... This example will use one master and two nodes.
## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```
fed1 (master,etcd) = 192.168.121.205
fed2 (node) = 192.168.121.84
fed3 (node) = 192.168.121.116
```
**Make sure your local machine**
- has ansible
- has git
**then clone the kubernetes-ansible repository**
```
yum install -y ansible git
git clone https://github.com/eparis/kubernetes-ansible.git
cd kubernetes-ansible
```
**Tell ansible about each machine and its role in your cluster.**
Get the IP addresses from the master and nodes. Add those to the `inventory` file (at the root of the repo) on the host running Ansible.
For now, we will set the `kube_ip_addr` values to 10.254.0.1 and 10.254.0.2. The reason for this choice is explained later; it may well work for you as a default.
```
[masters]
192.168.121.205
[etcd]
192.168.121.205
[minions]
192.168.121.84 kube_ip_addr=10.254.0.1
192.168.121.116 kube_ip_addr=10.254.0.2
```
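Before going further, you can ask ansible to list the hosts it sees in the inventory; this is an optional sanity check using plain ansible, not part of the kubernetes-ansible playbooks:

```
ansible all -i inventory --list-hosts
```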
**Set up ansible access to your nodes**
If you are already running on a machine which has passwordless ssh access to the fed[1-3] nodes and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `group_vars/all.yml` to the username which you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step.
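For example, if you normally ssh to the nodes as the `fedora` user, the relevant line in `group_vars/all.yml` would look something like this:

```
ansible_ssh_user: fedora
```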
*Otherwise*, set up ssh on the machines like so (you will need to know the root password for all machines in the cluster).
edit: group_vars/all.yml
```
ansible_ssh_user: root
```
## Configuring ssh access to the cluster
If you already have ssh access to every machine using ssh public keys you may skip to [configuring the internal kubernetes network](#configuring-the-internal-kubernetes-network).
**Create a password file.**
The password file should contain the root password for every machine in the cluster. It will be used in order to lay down your ssh public key. Make sure your machines' sshd configuration allows password logins for root (a quick check is sketched below).
```
echo "password" > ~/rootpassword
```
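If you are not sure whether root password logins are permitted, one way to check is to inspect the relevant sshd settings on each machine (assuming the stock `/etc/ssh/sshd_config` location; lines that are commented out mean the compiled-in defaults apply):

```
# both settings must permit password authentication for root
grep -Ei '^(PermitRootLogin|PasswordAuthentication)' /etc/ssh/sshd_config
```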
**Agree to accept each machine's ssh host key**
After this is completed, ansible will be able to ssh into any of the machines you're configuring.
```
ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
```
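If you want to confirm that a host key was actually recorded, you can look it up in your known_hosts file afterwards, using one of the example node IPs from the inventory above:

```
ssh-keygen -F 192.168.121.84
```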
**Push your ssh public key to every machine**
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
```
ansible-playbook -i inventory keys.yml
```
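At this point it is worth verifying that ansible can reach every machine without a password prompt. An ad-hoc ping is a simple way to do that (plain ansible, not part of the playbooks):

```
ansible all -i inventory -m ping
```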
## Configuring the internal kubernetes network
If you have already configured your network and docker will use it correctly, skip to [setting up the cluster](#setting-up-the-cluster).
The ansible scripts are quite hacky when configuring the network; you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter variants of the `kube_service_addresses` range (from `group_vars/all.yml`) as `kube_ip_addr` entries for the nodes, as shown in the next section.
**Configure the ip addresses which should be used to run pods on each machine**
The `kube_ip_addr` option defines the IP address pool used to assign addresses to pods on each node. Choose a /24 to use for each node and add that to your inventory file.
For this example, as shown earlier, we can do something like this...
```
[minions]
192.168.121.84 kube_ip_addr=10.254.0.1
192.168.121.116 kube_ip_addr=10.254.0.2
```
**Run the network setup playbook**
There are two ways to do this: via flannel, or using NetworkManager.
Flannel is a cleaner mechanism to use, and is the recommended choice.
- If you are using flannel, you should check the kubernetes-ansible repository above.
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
```
ansible-playbook -i inventory flannel.yml
```
- On the other hand, if you are using the NetworkManager based setup (i.e. you do not want to use flannel),
make sure NetworkManager is installed and the "NetworkManager" service is running on EACH node, then run
the network manager playbook...
```
ansible-playbook -i inventory ./old-network-config/hack-network.yml
```
## Setting up the cluster
**Configure the IP addresses used for services**
Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. This must be done even if you do not use the network setup provided by the ansible scripts.
edit: group_vars/all.yml
```
kube_service_addresses: 10.254.0.0/16
```
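If you want to sanity-check that the chosen range does not collide with networks already routed on your hosts, a rough check is to look for overlapping routes (purely illustrative; adjust the prefix to whatever range you picked):

```
ip route | grep -F 10.254 || echo "no existing routes overlap 10.254.0.0/16"
```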
**Tell ansible to get to work!**
This will finally set up your whole kubernetes cluster for you.
```
ansible-playbook -i inventory setup.yml
```
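Once the playbook finishes, a quick way to confirm that the nodes registered with the master is to list them with kubectl on the master (assuming the playbooks installed kubectl there):

```
kubectl get nodes
```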
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
**Show services running on masters and nodes.**
```
systemctl | grep -i kube
```
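On the master you would typically expect to see the apiserver, controller-manager, and scheduler; on the nodes, the kubelet and proxy. The unit names below assume the stock Fedora kubernetes packages and may vary by version:

```
# on the master
systemctl status kube-apiserver kube-controller-manager kube-scheduler

# on each node
systemctl status kubelet kube-proxy
```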
**Show firewall rules on the masters and nodes.**
```
iptables -nvL
```
**Create the following apache.json file and deploy the pod to a node.**
```
cat << EOF > apache.json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "fedoraapache",
    "labels": {
      "name": "fedoraapache"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [
          {
            "hostPort": 80,
            "containerPort": 80
          }
        ]
      }
    ]
  }
}
EOF

/usr/bin/kubectl create -f apache.json
```
**Testing your new kube cluster**
**Check where the pod was created.**
```
kubectl get pods
```
Important: Note that the addresses in the pods' IP fields are on the network which you configured via the `kube_ip_addr` inventory entries.
In this example, that was the 10.254 network.
If you see 172.x addresses in the IP fields, networking was not set up correctly, and you may want to re-run the playbooks or dive deeper into how the networking is being set up by looking at the details of the networking scripts used above.
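If you want to see the assigned pod IP directly, `kubectl describe` prints it (using the pod name from the apache.json example above):

```
kubectl describe pod fedoraapache   # look for the IP field in the output
```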
**Check Docker status on node.**
```
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node.**
```
curl http://localhost
```
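Because the pod publishes hostPort 80, you can also reach it from the ansible host by curling the node the pod landed on, substituting the appropriate node IP from your inventory:

```
curl http://192.168.121.84/
```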
That's it!